# The Basis Invariant Flavor Puzzle

Miguel P. Bento, Joao P. Silva, Andreas Trautner

2023-07-31 · http://arxiv.org/abs/2308.00019v2
###### Abstract
The flavor puzzle of the Standard Model quark sector is formulated in a non-perturbative way, using basis invariants that are independent of the choice of quark field basis. To achieve this, we first derive the algebraic ring of 10 CP even (primary) and 1 CP odd (secondary) basis invariants, using the Hilbert series and plethystic logarithm. An orthogonal basis in the ring of basis invariants is explicitly constructed, using hermitian projection operators derived via birdtrack diagrams. The invariants constructed in this way have well defined CP transformation behavior and give the most direct access to the flavor symmetric alignments of basis covariants. We "measure" the orthogonal basis invariants from experimental data for the first time and characterize their location in the available parameter space. The experimentally observed orthogonal basis invariants take values very close to maximal and are highly correlated. Explaining the location of the invariants at close to maximal points, including the associated minuscule and highly correlated deviations, corresponds to solving the flavor puzzle in the invariant language. Once properly normalized, the orthogonal basis invariants are close to scale (RGE) invariant and, hence, provide exquisite targets for fits of both low- and high-scale (bottom-up and top-down) flavor models. Our result provides an entirely new angle on the flavor puzzle, and opens up ample opportunities for its ultimate exploration.
## 1 Introduction
Ever since the theoretical completion of the Standard Model (SM), and especially after the experimental confirmation of all of its constituents, the flavor puzzle remains its most captivating mystery [1]. The question is why there are exactly three generations of matter fermions and what determines their pattern of hierarchical masses and intergenerational interactions that also hosts the only definitively observed source of charge-parity (CP) violation in Nature.
The theoretical formulation of the SM flavor sector shows ambiguities in the Higgs Yukawa couplings to the three generations of left- and right-handed up and down-type fermions, corresponding to an unphysical choice of basis in the three-generational flavor space SU(3)\({}^{5}\). Since experimental outcomes cannot be affected by arbitrary choices of basis or parametrization, physical observables must be given by basis invariant functions that are independent of unphysical basis choices. Nonetheless, there presently exists no quantitative investigation of the flavor puzzle exclusively in terms of basis invariant quantities. The purpose of our paper is to deliver such a formulation of the flavor puzzle entirely in terms of basis invariants, and to take a first look at the lessons we can learn from it.
The current absence of an entirely basis invariant quantitative formulation of the flavor puzzle may be traced back to technical challenges regarding the questions of _how_ a set of minimal basis invariants should be practically and systematically constructed, and _when_ such a construction is complete. Best known is certainly the pioneering basis invariant characterization of CP violation in terms of the so-called Jarlskog invariant [2; 3], which has also been extended to enlarged fermion and scalar sectors [4; 5; 6; 7; 8]. The complete ring of basis invariants of the SM (and of the most common neutrino sector extensions) has also been constructed, using the Hilbert series of invariant theory [9; 10; 11].
Since any algebraic combination of basis invariants is itself a basis invariant, a common misconception is that there are no meaningfully discernible bases in the space of invariants. Perhaps because of this, little attention has been paid to the explicit and systematic construction of the basis invariants themselves, which typically leads to a more complicated treatment than necessary. We will follow the strategy outlined in [12; 13], using hermitian projection operators [14] (see also [15; 16; 17]), in order to systematically construct an orthogonal basis in the ring of basis invariants. The necessary projection operators can conveniently be constructed using birdtrack diagrams [18; 19; 20]. The thereby derived projection operators allow one to project arbitrary rank tensors onto their irreducible covariant contents, including trivial singlets, i.e. invariants, which allows us to track down the origin of a given invariant to independent covariant channels. The thus obtained orthogonal invariants and their relations are as short as possible by construction. As an additional benefit, the formulation in terms of basis invariants simplifies the analysis of renormalization group equations (RGE), RGE running and the derivation of RGE invariants [21], as has been observed both for SM fermions [8; 22; 23; 24; 25] and for extended scalar sectors [26; 27; 28], and is expected to be even further simplified if the involved invariants are orthogonal to each other.
Moreover, using basis invariants is a powerful tool to examine the violation of CP and other global symmetries, see [29; 30; 31; 32]. Symmetries manifest themselves either in the vanishing of non-trivially transforming covariants, or in alignments of covariants that correspond to specific relations among basis invariants [32]. Using orthogonal invariants, such relations become as short and transparent as possible, and this may also help to detect symmetries and their violation in the SM.
Our principal technique to construct orthogonal invariants applies to both the quark and lepton sectors. However, the different possible mechanisms of neutrino mass generation involve different covariant tensor structures and, hence, different possibilities for invariant rings of the lepton sector, see [11; 25; 33]. For this reason, we focus here entirely on the quark sector, which is free of such ambiguities and whose parameters have been experimentally determined with very high precision. The main task is to derive an orthogonal basis of flavor invariants, after which we quantitatively examine them in order to obtain the complete basis and parametrization invariant picture of the SM flavor puzzle.
The paper is structured as follows. Section 2 gives a general overview of the SM quark sector flavor covariants and the different parametrizations used to evaluate them. We also state our choice of orthogonal basis invariants, as well as the syzygy that relates our ten primary invariants to the CP-odd, secondary invariant known as the Jarlskog invariant. Subsequently, in section 3.1 we formally characterize the invariant ring of the SM quark sector using the Hilbert series and plethystic logarithm. This is followed by the construction of our orthogonal, adjoint space basis of projection operators in sec. 3.2, which have been used to construct the orthogonal invariants of sec. 2. The CP transformation behavior of basis co- and invariants is unveiled in section 3.4. Finally, we quantitatively analyze the parameter space of the orthogonal invariants and determine their experimental values and errors, as well as their renormalization group evolution, in sections 4 and 5, respectively. Sections 6 and 7 contain further discussions and comments, as well as our conclusions. In six appendices we provide: (A) a discussion of the CP-even subring of the SM; (B) useful birdtrack identities; (C) a comment about the group theoretically correct and unitary normalization factors of the projection operators; (D) plots for an alternative normalization of the invariants; (E) the Frobenius inner product used to set limits on the boundaries of the invariant parameter space; and (F) an up-to-date display of the running CKM parameters.
## 2 Quark sector flavor invariants
The Yukawa couplings of the SM quark sector are given by the Lagrangian
\[-\mathcal{L}_{\rm Yuk.}\ =\ \bar{Q}_{{\rm L},i}\,\widetilde{H}\left[Y_{u} \right]^{i}_{\ j}u^{j}_{\rm R}+\bar{Q}_{{\rm L},i}\,H\left[Y_{d}\right]^{i}_{ \ k}d^{k}_{\rm R}+{\rm h.c.}\, \tag{1}\]
where we explicitly display the flavor indices \(i,j,k=1,2,3\). The Yukawa coupling matrices \(Y_{u}\) and \(Y_{d}\) are general complex \(3\times 3\) matrices. Under general flavor space redefinitions of the quark fields, described by the group \({\rm SU}(3)_{Q_{\rm L}}\otimes{\rm SU}(3)_{u_{\rm R}}\otimes{\rm SU}(3)_{d_{ \rm R}}\), the Yukawa matrices transform covariantly as \(Y_{u}\ \hat{=}\ (\mathbf{\bar{3}},\mathbf{3},\mathbf{1})\) and \(Y_{d}\ \hat{=}\ (\mathbf{\bar{3}},\mathbf{1},\mathbf{3})\). Hence, \(Y_{u}\) and \(Y_{d}\) can be regarded as _spurions_ of flavor symmetry breaking. We define the two matrices
\[\tilde{H}_{u}:=Y_{u}Y_{u}^{\dagger}\,,\quad\tilde{H}_{d}:=Y_{d}Y_{d}^{\dagger}\,. \tag{2}\]
\(\tilde{H}_{u,d}\) are hermitian with positive eigenvalues. Taking these products, the right-handed spaces are traced out and \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\) each transform as \(\mathbf{\bar{3}}\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{8}\) in the left-handed
quark flavor space. The singlet pieces are given by \(\operatorname{Tr}\tilde{H}_{u}\) and \(\operatorname{Tr}\tilde{H}_{d}\). Hence, the octet pieces can be isolated as [34]
\[H_{u}:=\tilde{H}_{u}-\mathbbm{1}\operatorname{Tr}\frac{\tilde{H}_{u}}{3}\qquad \text{and}\qquad H_{d}:=\tilde{H}_{d}-\mathbbm{1}\operatorname{Tr}\frac{\tilde {H}_{d}}{3}\;. \tag{3}\]
\(H_{u}\) and \(H_{d}\) are traceless hermitian matrices with eight parameters each.
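As a concrete illustration (a minimal NumPy sketch, not the paper's own code), the stated properties of \(\tilde{H}_{u,d}\) and of the octet pieces \(H_{u,d}\) can be checked for random complex Yukawa matrices:

```python
import numpy as np

# Random complex 3x3 matrices standing in for Y_u, Y_d (illustrative only).
rng = np.random.default_rng(0)
Yu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Yd = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

Hu_tilde = Yu @ Yu.conj().T                          # eq. (2)
Hd_tilde = Yd @ Yd.conj().T
Hu = Hu_tilde - np.eye(3) * np.trace(Hu_tilde) / 3   # octet piece, eq. (3)
Hd = Hd_tilde - np.eye(3) * np.trace(Hd_tilde) / 3

assert np.allclose(Hu_tilde, Hu_tilde.conj().T)      # hermitian
assert np.all(np.linalg.eigvalsh(Hu_tilde) > 0)      # positive eigenvalues
assert abs(np.trace(Hu)) < 1e-12                     # traceless octet piece
```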
There are 10 physical parameters in the quark Yukawa sector (3 up-type masses, 3 down-type masses, 1 CP odd and 3 CP even mixing parameters [35]). The physical basis corresponds to diagonal kinetic terms after spontaneous symmetry breaking, and is achieved by bi-unitarily diagonalizing the Yukawa matrices as
\[V^{\dagger}_{u,\mathrm{L}}\,Y_{u}\,V_{u,\mathrm{R}} = \frac{\sqrt{2}}{v}\;\text{diag}(m_{u},m_{c},m_{t})\qquad\text{and} \tag{4}\] \[V^{\dagger}_{d,\mathrm{L}}\,Y_{d}\,V_{d,\mathrm{R}} = \frac{\sqrt{2}}{v}\;\text{diag}(m_{d},m_{s},m_{b})\;. \tag{5}\]
For clarity, we express this in terms of the masses here, such that dividing out the Higgs vacuum expectation value \(v=246\,\text{GeV}\) yields the dimensionless diagonal Yukawa couplings. In the physical basis, the covariant tensors can be expressed as
\[\tilde{H}_{u} = \text{diag}(\,y_{u}^{2}\,,\,y_{c}^{2}\,,\,y_{t}^{2}\,) \tag{6}\] \[\text{and}\qquad\tilde{H}_{d} = V_{\text{CKM}}\;\text{diag}(\,y_{d}^{2}\,,\,y_{s}^{2}\,,\,y_{b}^{ 2}\,)\;V^{\dagger}_{\text{CKM}}\;, \tag{7}\]
where \(V_{\text{CKM}}:=V^{\dagger}_{u,\mathrm{L}}V_{d,\mathrm{L}}\) is the Cabibbo-Kobayashi-Maskawa (CKM) matrix [36]. In the standard parameterization it can be written as [37]
\[V_{\text{CKM}}=\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix}\begin{pmatrix}c_{13}&0&s_{13}e^{-\mathrm{i} \delta}\\ 0&1&0\\ -s_{13}e^{\mathrm{i}\delta}&0&c_{13}\end{pmatrix}\begin{pmatrix}c_{12}&s_{12} &0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{pmatrix}\,, \tag{8}\]
where the quark mixing angles appear as \(s_{ij}=\sin\theta_{ij}\) and \(c_{ij}=\cos\theta_{ij}\), and CP violation is parametrized by the complex phase \(\delta\). For concrete applications, it is often convenient to use the Wolfenstein parametrization of the CKM matrix [38]. Since it is better suited for our purpose, we adopt the Wolfenstein-like but exactly unitary parametrization of [39], for which we obtain up-to-date values for the parameters \(\lambda=0.22481\pm 0.00059\), \(A=0.817\pm 0.018\), \(\rho=0.145\pm 0.015\) and \(\eta=0.366\pm 0.012\) by performing our own fit to PDG data [37].
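For readers following the numerics, the standard parametrization (8) is a few lines of NumPy; the angle values below are illustrative stand-ins, not the fit results quoted above, and the matrix is exactly unitary by construction:

```python
import numpy as np

s12, s13, s23, delta = 0.2250, 0.0037, 0.0418, 1.14   # illustrative values
c12, c13, c23 = (np.sqrt(1 - s**2) for s in (s12, s13, s23))

U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                [0, 1, 0],
                [-s13 * np.exp(1j * delta), 0, c13]])
U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)

V_CKM = U23 @ U13 @ U12                                 # eq. (8)
assert np.allclose(V_CKM @ V_CKM.conj().T, np.eye(3))   # exactly unitary
```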
The 10 physical parameters correspond to 10 algebraically independent\({}^{2}\) _primary_ basis invariants that must be given as functions of \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\). Various choices for the ten primary invariants are possible. We will first state our set of primary basis invariants, then subsequently motivate our choice and outline the systematic construction of our basis invariants in the following sections.
Footnote 2: Algebraic (in-)dependence of invariants is easily tested by numerically evaluating the rank of their Jacobi matrix, see e.g. [12, Appendix A].
We have already identified the two "trivial" primary invariants
\[I_{10}:=\operatorname{Tr}\tilde{H}_{u}\qquad\text{and}\qquad I_{01}:= \operatorname{Tr}\tilde{H}_{d}\;. \tag{9}\]
Hence, eight more algebraically independent basis invariants have to arise as trivial singlets in the tensor products \(\mathbf{8}_{u}^{\otimes n}\otimes\mathbf{8}_{d}^{\otimes m}\) of the \(\mathbf{8}\)-plet covariant tensors \(H_{u}\) and \(H_{d}\). We construct those eight algebraically independent primary basis invariants as
\[I_{20} :=\operatorname{Tr}\bigl{(}H_{u}^{2}\bigr{)}\,,\quad I_{02}:= \operatorname{Tr}\bigl{(}H_{d}^{2}\bigr{)}\,,\quad I_{11}:=\operatorname{Tr}(H_ {u}H_{d})\,, \tag{10}\] \[I_{30} :=\operatorname{Tr}\bigl{(}H_{u}^{3}\bigr{)}\,,\quad I_{03}:= \operatorname{Tr}\bigl{(}H_{d}^{3}\bigr{)}\,,\quad I_{21}:=\operatorname{Tr} \bigl{(}H_{u}^{2}H_{d}\bigr{)}\,,\quad I_{12}:=\operatorname{Tr}\bigl{(}H_{u} H_{d}^{2}\bigr{)}\,,\] \[I_{22} :=3\operatorname{Tr}\bigl{(}H_{u}^{2}H_{d}^{2}\bigr{)}- \operatorname{Tr}\bigl{(}H_{u}^{2}\bigr{)}\operatorname{Tr}\bigl{(}H_{d}^{2} \bigr{)}\,.\]
The above 10 basis invariants correspond to the 10 physical parameters of the SM quark Yukawa sector. It is intuitively clear that the up and down sector masses must correspond to \(I_{10}\), \(I_{20}\), \(I_{30}\), and \(I_{01}\), \(I_{02}\), \(I_{03}\), respectively. Furthermore, the (mis-)alignment of up and down sectors, i.e. CKM angles and phase, must correspond to the mixed invariants \(I_{11}\), \(I_{21}\), \(I_{12}\) and \(I_{22}\), in agreement with [40]. Explicit expressions for the masses and CKM squared elements can be obtained as a combination of invariants, see e.g. [41], but this is not the topic of the present paper where we set out to characterize the orthogonal invariants themselves.
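The correspondence between the ten invariants and the ten physical parameters can be made tangible with the Jacobian-rank test of footnote 2. The following self-contained sketch (all parameter values illustrative) evaluates eqs. (9)-(10) in the physical basis of eqs. (6)-(7) and confirms that the Jacobian with respect to the physical parameters has full rank:

```python
import numpy as np

def ckm(s12, s13, s23, d):
    c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
    U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                    [-s13 * np.exp(1j * d), 0, c13]])
    U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    return U23 @ U13 @ U12

def invariants(p):
    """Ten primaries of eqs. (9)-(10); p = (y_u, y_c, y_t, y_d, y_s, y_b,
    s12, s13, s23, delta)."""
    V = ckm(*p[6:])
    Hu = np.diag(p[0:3]**2).astype(complex)            # eq. (6)
    Hd = V @ np.diag(p[3:6]**2) @ V.conj().T           # eq. (7)
    I10, I01 = np.trace(Hu).real, np.trace(Hd).real
    Hu -= np.eye(3) * I10 / 3                          # octet pieces, eq. (3)
    Hd -= np.eye(3) * I01 / 3
    t = lambda M: np.trace(M).real
    return np.array([I10, I01, t(Hu @ Hu), t(Hd @ Hd), t(Hu @ Hd),
                     t(Hu @ Hu @ Hu), t(Hd @ Hd @ Hd),
                     t(Hu @ Hu @ Hd), t(Hu @ Hd @ Hd),
                     3 * t(Hu @ Hu @ Hd @ Hd) - t(Hu @ Hu) * t(Hd @ Hd)])

p0 = np.array([0.3, 0.7, 0.95, 0.1, 0.4, 0.2, 0.22, 0.004, 0.04, 1.2])
eps, J = 1e-6, np.zeros((10, 10))
for j in range(10):                        # finite-difference Jacobian
    dp = np.zeros(10); dp[j] = eps
    J[:, j] = (invariants(p0 + dp) - invariants(p0 - dp)) / (2 * eps)
print(np.linalg.matrix_rank(J))            # -> 10: algebraically independent
```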
All of the 10 primary basis invariants are real and CP even, as we will derive in detail in section 3.4. Hence, one may wonder how CP violation is encoded in them. It is well known that the only CP-odd basis invariant that can be constructed in the SM (excluding \(\bar{\Theta}\)) is given by the so-called Jarlskog invariant [2; 3]. The Jarlskog invariant can be defined as
\[J_{33}:=\operatorname{Tr}\bigl{(}H_{u}^{2}H_{d}^{2}H_{u}H_{d}\bigr{)}- \operatorname{Tr}\bigl{(}H_{d}^{2}H_{u}^{2}H_{d}H_{u}\bigr{)}\equiv\frac{1}{3 }\operatorname{Tr}\left[H_{u},H_{d}\right]^{3}\,. \tag{11}\]
In the physical basis, and using the standard parametrization, it is given by
\[J_{33}=\operatorname{i}J\,\frac{2^{7}}{v^{12}}\,\left(m_{t}^{2}-m_{c}^{2} \right)\,\left(m_{t}^{2}-m_{u}^{2}\right)\,\left(m_{c}^{2}-m_{u}^{2}\right)\, \left(m_{b}^{2}-m_{s}^{2}\right)\,\left(m_{b}^{2}-m_{d}^{2}\right)\,\left(m_{s }^{2}-m_{d}^{2}\right)\,, \tag{12}\]
where
\[J:=\cos\theta_{12}\,\cos^{2}\theta_{13}\,\cos\theta_{23}\,\sin\theta_{12}\, \sin\theta_{13}\,\sin\theta_{23}\,\sin\delta\approx A\,\eta\,\lambda^{6}\,. \tag{13}\]
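Both the equivalence of the two forms in eq. (11) and the closed form (12)-(13) are easy to confirm numerically. A sketch with illustrative inputs follows; note that the factor \(2^{7}/v^{12}\) in (12) reduces to an overall factor of 2 once masses are traded for Yukawa couplings \(y=\sqrt{2}\,m/v\), and only the modulus is compared since the overall sign depends on phase conventions:

```python
import numpy as np

def ckm(s12, s13, s23, d):
    c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
    return (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
            @ np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                        [-s13 * np.exp(1j * d), 0, c13]])
            @ np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex))

yu = np.array([0.1, 0.4, 0.9])           # illustrative up-type Yukawas
yd = np.array([0.05, 0.3, 0.7])          # illustrative down-type Yukawas
s12, s13, s23, delta = 0.22, 0.004, 0.04, 1.2
V = ckm(s12, s13, s23, delta)

Hu = np.diag(yu**2).astype(complex); Hu -= np.eye(3) * np.trace(Hu) / 3
Hd = V @ np.diag(yd**2) @ V.conj().T;  Hd -= np.eye(3) * np.trace(Hd) / 3

C = Hu @ Hd - Hd @ Hu
J33 = np.trace(C @ C @ C) / 3                                    # eq. (11)
assert np.isclose(J33, np.trace(Hu @ Hu @ Hd @ Hd @ Hu @ Hd)
                        - np.trace(Hd @ Hd @ Hu @ Hu @ Hd @ Hu))

J = (np.sqrt(1 - s12**2) * (1 - s13**2) * np.sqrt(1 - s23**2)
     * s12 * s13 * s23 * np.sin(delta))                          # eq. (13)
diffs = [yu[2]**2 - yu[1]**2, yu[2]**2 - yu[0]**2, yu[1]**2 - yu[0]**2,
         yd[2]**2 - yd[1]**2, yd[2]**2 - yd[0]**2, yd[1]**2 - yd[0]**2]
assert np.isclose(abs(J33), 2 * J * np.prod(diffs))              # |eq. (12)|
```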
The Jarlskog invariant is not included in the set of primary invariants above, but arises as a _secondary_ invariant in the ring of basis invariants. This implies that it is not algebraically independent of the CP-even primary invariants but fulfills a polynomial relation with them.
As we will see below, this relation is the syzygy of the ring of invariants. It is given by
\[(J_{33})^{2}= -\frac{4}{27}I_{22}^{3}+\frac{1}{9}I_{22}^{2}I_{11}^{2}+\frac{1}{9}I _{22}^{2}I_{02}I_{20}+\frac{2}{3}I_{22}I_{30}I_{03}I_{11}-\frac{2}{3}I_{22}I_{21 }I_{12}I_{11}-\frac{1}{9}I_{22}I_{11}^{2}I_{20}I_{02}\] \[+\frac{2}{3}I_{22}I_{21}^{2}I_{02}+\frac{2}{3}I_{22}I_{12}^{2}I_{2 0}-\frac{2}{3}I_{22}I_{30}I_{12}I_{02}-\frac{2}{3}I_{22}I_{03}I_{21}I_{20}\] \[-\frac{1}{3}I_{30}^{2}I_{03}^{2}+I_{21}^{2}I_{12}^{2}+2I_{30}I_{03 }I_{21}I_{12}-\frac{4}{9}I_{30}I_{03}I_{11}^{3}\] \[+\frac{1}{18}I_{30}^{2}I_{02}^{3}+\frac{1}{18}I_{03}^{2}I_{20}^{3 }-\frac{4}{3}I_{30}I_{12}^{3}-\frac{4}{3}I_{03}I_{21}^{3}\] \[-\frac{1}{3}I_{30}I_{21}I_{11}I_{02}^{2}-\frac{1}{3}I_{03}I_{12}I_ {11}I_{20}^{2}+\frac{2}{3}I_{30}I_{12}I_{11}^{2}I_{02}+\frac{2}{3}I_{03}I_{21} I_{11}^{2}I_{20}\] \[-\frac{2}{3}I_{21}I_{12}I_{20}I_{02}I_{11}-\frac{1}{108}I_{20}^{3 }I_{02}^{3}+\frac{1}{36}I_{20}^{2}I_{02}^{2}I_{11}^{2}+\frac{1}{6}I_{21}^{2}I_ {20}I_{02}^{2}+\frac{1}{6}I_{12}^{2}I_{02}I_{20}^{2}. \tag{14}\]
For our choice of primary invariants, this relation is described by a polynomial of 27 terms (out of 37 possible power products of lower lying non-trivial invariants, not involving the trivial invariants \(I_{10}\) and \(I_{01}\)). This should be compared to the result of Jenkins and Manohar [10], which involved 241 terms in order to express \(J_{33}^{2}\) in terms of their choice of CP-even invariants. The enormous simplification of the syzygy arises from our usage of _orthogonal_ basis invariants, as we will further elaborate on in section 3.2. In practice, orthogonality of invariants in the adjoint space of the SM quark flavor ring corresponds to the automatic removal of the traces in all non-linear invariants, as done manually in eq. (3). The specific choice of \(I_{22}\) in eq. (10) is also motivated by orthogonality and by the fact that, amongst all possible orthogonal quartic invariants (of which we will see there are multiple possibilities), our choice gives rise to the shortest syzygy.
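The full syzygy is unwieldy, but its qualitative content, namely that \(J_{33}\) is algebraically dependent on the ten primaries, is cheap to confirm with the Jacobian criterion of footnote 2: appending \(J_{33}\) to the list of invariants must leave the Jacobian rank at 10. A self-contained sketch with illustrative parameter values:

```python
import numpy as np

def ckm(s12, s13, s23, d):
    c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
    return (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
            @ np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                        [-s13 * np.exp(1j * d), 0, c13]])
            @ np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex))

def invariants_and_J33(p):
    V = ckm(*p[6:])
    Hu = np.diag(p[0:3]**2).astype(complex)
    Hd = V @ np.diag(p[3:6]**2) @ V.conj().T
    I10, I01 = np.trace(Hu).real, np.trace(Hd).real
    Hu -= np.eye(3) * I10 / 3; Hd -= np.eye(3) * I01 / 3
    t = lambda M: np.trace(M).real
    C = Hu @ Hd - Hd @ Hu
    return np.array([I10, I01, t(Hu @ Hu), t(Hd @ Hd), t(Hu @ Hd),
                     t(Hu @ Hu @ Hu), t(Hd @ Hd @ Hd), t(Hu @ Hu @ Hd),
                     t(Hu @ Hd @ Hd),
                     3 * t(Hu @ Hu @ Hd @ Hd) - t(Hu @ Hu) * t(Hd @ Hd),
                     (np.trace(C @ C @ C) / 3).imag])   # J_33 is purely imaginary

p0 = np.array([0.3, 0.7, 0.95, 0.1, 0.4, 0.2, 0.22, 0.004, 0.04, 1.2])
eps, J = 1e-6, np.zeros((11, 10))
for j in range(10):
    dp = np.zeros(10); dp[j] = eps
    J[:, j] = (invariants_and_J33(p0 + dp) - invariants_and_J33(p0 - dp)) / (2 * eps)
print(np.linalg.matrix_rank(J))   # still 10: J_33 adds no independent direction
```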
## 3 Construction of an orthonormal basis of flavor invariants
We will now formalize the construction of our set of orthonormal primary and secondary invariants stated in the previous section. For this, we first construct the structure of the ring of basis invariants using the Hilbert series and plethystic logarithm. Subsequently, we explicitly construct the invariants using orthogonal hermitian projection operators that are constructed using the technique of birdtrack diagrams.
### The Hilbert series
The Hilbert series (HS) of the SM quark sector is straightforwardly calculated (see [42] for a concise introduction to the HS technique). The covariantly transforming objects \(H_{u}\) and \(H_{d}\) are \(\mathbf{8}\)-plets under the left-handed \(\mathrm{SU}(3)\) flavor rotation. Hence, the HS is computed via the integral
\[H(K[V]^{G};u,d)=\int_{\mathrm{SU}(3)}d\mu_{\mathrm{SU}(3)}\operatorname{PE} \left[z_{1},z_{2};u;\mathbf{8}\right]\operatorname{PE}\left[z_{1},z_{2};d; \mathbf{8}\right]\,, \tag{15}\]
where the plethystic exponential \(\operatorname{PE}\left[z_{1},z_{2};u;\mathbf{8}\right]\) and the integral measure can be found in [43, Eq. 5.6] (see also [42, 44, 45, 46, 47]).
We are working in a parameter space with dimension \(\dim V=16\), transforming under \(G=\mathrm{SU}(3)\) with dimension \(\dim G=8\). Hence, the number of physical parameters is given by
\[N_{\mathrm{physical}}=\dim V-\dim G=8\,. \tag{19}\]
Together with the two trivial invariants this reproduces the number of 10 physical parameters. The integral in eq. (15) is straightforwardly computed to yield the multi-graded HS
\[H(K[V]^{G};u,d)=\frac{1+u^{3}d^{3}}{(1-u^{2})(1-d^{2})(1-ud)(1-u^{3})(1-d^{3})( 1-ud^{2})(1-u^{2}d)(1-u^{2}d^{2})}\,. \tag{20}\]
The ungraded form of the HS (setting the grading variables to \(u=d=t\)) is directly read off as
\[H(K[V]^{G},t)=\frac{1+t^{6}}{(1-t^{2})^{3}(1-t^{3})^{4}(1-t^{4})}\,. \tag{21}\]
The denominators in eqs. (20)-(21) determine the number and order of the primary invariants. The numerators give the secondary invariants. The multi-graded HS additionally informs us about the structure of the invariants in terms of the covariants.
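The counting encoded in the HS can be made explicit by a series expansion. A small sympy sketch of the ungraded series (21):

```python
import sympy as sp

t = sp.symbols('t')
H = (1 + t**6) / ((1 - t**2)**3 * (1 - t**3)**4 * (1 - t**4))
print(sp.series(H, t, 0, 7))
# 1 + 3*t**2 + 4*t**3 + 7*t**4 + 12*t**5 + 24*t**6 + O(t**7): e.g. the three
# quadratic invariants I_20, I_02, I_11; the four cubic ones I_30, I_03, I_21,
# I_12; and, at degree four, the six quadratic products plus the new I_22.
```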
Another function of interest is the plethystic logarithm (PL) [48; 49] (in the particle physics context it was first introduced and used in [44; 45; 46; 47; 50]), defined as
\[\mathrm{PL}\left[H(K[V]^{G};u,d)\right]:=\sum_{k=1}^{\infty}\frac{\mu(k)\,\ln H (K[V]^{G};u^{k},d^{k})}{k}\;, \tag{22}\]
where \(\mu(k)\) is the Möbius function. The PL of our ring can be computed exactly, because it terminates at order \(u^{6}d^{6}\). It is given by
\[\mathrm{PL}\left[H(K[V]^{G};u,d)\right]=u^{2}+ud+d^{2}+u^{3}+d^{3}+u^{2}d+ud^{2 }+u^{2}d^{2}+u^{3}d^{3}-u^{6}d^{6}\,. \tag{23}\]
The leading positive terms of the PL correspond to the number and structure of the generating set of (primary and secondary) invariants of the ring. The final negative term cuts off the generating set and informs us that there is a syzygy of order \(u^{6}d^{6}\) between the invariants of the generating set. For our choice of primary and secondary invariants we have already explicitly stated this syzygy in eq. (14).\({}^{3}\)
Footnote 3: For the construction of general invariant relations and syzygies we refer to the procedure outlined in [12] (that was adopted also in [25, App. C]).
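The termination of the PL is quick to check in its ungraded form (setting \(u=d=t\)); a sympy sketch with a hand-rolled Möbius function:

```python
import sympy as sp

def mobius(n):
    factors = sp.factorint(n)
    return 0 if any(e > 1 for e in factors.values()) else (-1) ** len(factors)

t = sp.symbols('t')
H = lambda x: (1 + x**6) / ((1 - x**2)**3 * (1 - x**3)**4 * (1 - x**4))

order = 14   # substitutions t -> t^k with k > 14 only contribute beyond t^14
PL = sum(sp.Rational(mobius(k), k) * sp.log(H(t**k)) for k in range(1, order + 1))
print(sp.series(PL, t, 0, order).removeO().expand())
# -> -t**12 + t**6 + t**4 + 4*t**3 + 3*t**2 (up to term ordering): the
#    ungraded version of eq. (23), with the syzygy u^6 d^6 -> t^12.
```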
Our choice of primary and secondary invariants conveniently realizes the ring in the form of a Hironaka decomposition [51; 52] (see e.g. also [53, Sec. 2.3], [54, Sec. 5.4.1]). Therefore, it is guaranteed that _any_ basis invariant quantity (any observable) \(\mathcal{O}_{\mathrm{flavor}}\) of the SM quark sector can be expressed through our generating set of invariants as
\[\mathcal{O}_{\mathrm{flavor}}=\mathbb{C}[I]+J_{33}\,\mathbb{C}[I]\,, \tag{24}\]
where \(\mathbb{C}[I]\) denote polynomials in the primary invariants \(I\) with potentially complex coefficients.
Lastly, we note that it is possible and instructive to formulate an \(\mathrm{SO}(3)\) version of the SM ring in the absence of CP violation, an exercise that we perform in appendix A.
### Construction of orthogonal invariant projection operators
Given the number and structure of primary and secondary invariants as obtained from the HS and PL, we proceed with the explicit construction of invariants using projection operators. The fact that any algebraic combination of basis invariants yields another basis invariant invites the common misconception that there are no meaningfully distinguishable bases in the space of invariants. However, if invariants are obtained using projection operators, then properties such as orthogonality of invariants can be defined based on the orthogonality of the respective projection operators. A good choice of basis should be an orthogonal basis. There can be different possible constructions of orthogonal projection operators, hence, different orthogonal bases, potentially suitable for different applications.
As a first step in the quantitative basis independent exploration of the SM flavor puzzle, we construct here an orthogonal set of SM flavor invariants in the adjoint space of flavor. This explicitly shows that the trace basis invariants of section 2 arise as an orthogonal basis of invariants in the adjoint space of left-handed quark flavor.
In left-handed quark flavor space, \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\) transform as \(\mathbf{\bar{3}}\otimes\mathbf{3}=\mathbf{8}\oplus\mathbf{1}\). Graphically, in terms of birdtrack diagrams [18; 19] (for a concise introduction, see [20]), this relation is represented via projection operators of \(\mathbf{\bar{3}}\otimes\mathbf{3}\rightarrow\mathbf{\bar{3}}\otimes\mathbf{3}\) (the corresponding birdtrack diagram is not reproduced in this extraction).
Likewise, the adjoint space components of \(H_{u}\) and \(H_{d}\) are obtained via the projections4
Footnote 4: From here on it makes no difference whether we work with \(\tilde{H}_{u,d}\) or their trace-subtracted counterparts \(H_{u,d}\), as the projection operators _automatically_ pick the orthogonal (i.e. traceless) components of \(\tilde{H}_{u,d}\).
\[\mathbf{u}^{a} = \mathrm{Tr}\Big{[}Y_{u}^{\dagger}\,t^{a}\,Y_{u}\Big{]}\,, \tag{11}\] \[\mathbf{d}^{a} = \mathrm{Tr}\Big{[}Y_{d}^{\dagger}\,t^{a}\,Y_{d}\Big{]}\,, \tag{12}\]
These \(\mathbf{8}\)-plet vectors in adjoint space are the objects entering the Hilbert series in section 3.1.
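Concretely, with Gell-Mann generators \(t^{a}=\lambda^{a}/2\) (so that \(\operatorname{Tr}t^{a}t^{b}=\delta^{ab}/2\)), one can check that \(H_{u}=2\,\mathbf{u}^{a}t^{a}\) and that, for example, \(\operatorname{Tr}(H_{u}H_{d})=2\,\mathbf{u}\cdot\mathbf{d}\). A NumPy sketch:

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([                                   # Gell-Mann matrices
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]]], dtype=complex)
T = lam / 2                                        # generators t^a

rng = np.random.default_rng(1)
Yu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Yd = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
ua = np.array([np.trace(Yu.conj().T @ T[a] @ Yu).real for a in range(8)])
da = np.array([np.trace(Yd.conj().T @ T[a] @ Yd).real for a in range(8)])

Hu = Yu @ Yu.conj().T; Hu -= np.eye(3) * np.trace(Hu) / 3
Hd = Yd @ Yd.conj().T; Hd -= np.eye(3) * np.trace(Hd) / 3
assert np.allclose(Hu, 2 * np.einsum('a,aij->ij', ua, T))
assert np.isclose(np.trace(Hu @ Hd).real, 2 * ua @ da)
```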
In order to explicitly construct the resulting invariants, we have to construct orthogonal projection operators in adjoint space for a mapping of \(k\) objects, transforming in adjoint space, onto trivial singlets,
\[\mathbf{8}^{\otimes k}\;\longrightarrow\;\mathbb{C}\;. \tag{13}\]
Performed systematically, this involves the construction of all \(\mathbf{8}^{\otimes k}\;\rightarrow\;\mathbf{8}^{\otimes k}\) projection operators, then selecting the ones that factorize to yield the trivial singlet irreducible representations. Precisely because of their necessary factorization, this procedure can be abridged if only trivial singlet projection operators are sought after. For example, the operators for \(\mathbf{8}^{\otimes 4}\rightarrow\mathbb{C}\) are readily obtained from _all_ projection operators of \(\mathbf{8}^{\otimes 2}\rightarrow\mathbf{8}^{\otimes 2}\). This includes operators that project the direct product of two eights onto the irreducible representations
\[\mathbf{8}^{\otimes 2}=\mathbf{1}\oplus\mathbf{8}_{\mathrm{S}}\oplus\mathbf{8}_{\mathrm{A}} \oplus\mathbf{10}\oplus\overline{\mathbf{10}}\oplus\mathbf{27}\;, \tag{14}\]
but also the so-called transition operators [15; 55], which here correspond to transitions \(\mathbf{8}_{\mathrm{S}}\leftrightarrow\mathbf{8}_{\mathrm{A}}\) between identically transforming irreps in the direct sum on the r.h.s. Together, these operators form a complete orthogonal basis of all \(k\)-legged tensor structures, where in the above case \(k=4\). Retrieving the singlet projection operators in \(\mathbf{8}^{\otimes 4}\rightarrow\mathbf{8}^{\otimes 4}\) from all projection operators in \(\mathbf{8}^{\otimes 2}\rightarrow\mathbf{8}^{\otimes 2}\) then merely corresponds to formally re-assigning the legs of the respective operator and re-adjusting its normalization, as we discuss in detail in appendix C. Finally, contracting each of the legs with covariantly transforming objects in adjoint space - here all possible combinations of \(\mathbf{u}^{a}\) and \(\mathbf{d}^{a}\) - then yields the individual orthogonal invariants.
The necessary adjoint space projection operators are summarized in the following; below we discuss their normalization. For \(\mathbf{8}^{\otimes 2}\rightarrow\mathbb{C}\) the only projection operator is
\[\delta^{ab}\;. \tag{3.15}\]

For \(\mathbf{8}^{\otimes 3}\rightarrow\mathbb{C}\), the two invariant structures are

\[\mathrm{i}f^{abc}\qquad\text{and}\qquad d^{abc}\;. \tag{3.16}\]
Here,
\[f^{abc}=\frac{1}{{\rm i}\,T_{\mathbf{r}}}\,{\rm Tr}\left(\left[t^{a},t^{b}\right]t^{c} \right)\qquad\text{ and }\qquad d^{abc}=\frac{1}{T_{\mathbf{r}}}\,{\rm Tr}\left(\left\{t^{a},t^{b} \right\}t^{c}\right) \tag{3.17}\]
are the anti-symmetric and symmetric, respectively, invariant tensors of \(\mathrm{SU}(N)\) where square (curly) brackets \([\cdot,\cdot]\) (\(\left\{\cdot,\cdot\right\}\)) denote the (anti-)commutator.
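With \(T_{\mathbf{r}}=1/2\) in the Gell-Mann basis, these tensors are a few lines of NumPy. The final contractions in the following sketch also illustrate why the cubic \(f\)-projection yields no invariant in the SM while the \(d\)-projection produces the cubic traces (anticipating the symmetry argument of section 3.4):

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]], [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]], [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]]], dtype=complex)
T = lam / 2

tr_abc = np.einsum('aij,bjk,cki->abc', T, T, T)     # Tr(t^a t^b t^c)
tr_bac = np.einsum('bij,ajk,cki->abc', T, T, T)     # Tr(t^b t^a t^c)
f = ((tr_abc - tr_bac) / (1j * 0.5)).real           # eq. (3.17), T_r = 1/2
d = ((tr_abc + tr_bac) / 0.5).real

assert np.allclose(f, -np.swapaxes(f, 0, 1))        # anti-symmetric (a <-> b;
assert np.allclose(d, np.swapaxes(d, 0, 1))         #  other pairs analogous)
assert np.isclose(f[0, 1, 2], 1)                    # f^{123} = 1

u, v = np.random.default_rng(2).normal(size=(2, 8)) # two random octet vectors
print(np.einsum('abc,a,b,c->', f, u, u, v))         # ~0: only two distinct octets
print(np.einsum('abc,a,b,c->', d, u, u, u))         # generically nonzero (I_30-type)
```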
For \(\mathbf{8}^{\otimes 4}\to\,\mathbb{C}\) there are eight invariant structures given by [55, 56] (see also [19, Tab. 9.4])
[birdtrack diagrams of the six projection operators onto the irreps in (3.14); not reproduced in this extraction] (3.18)

[birdtrack diagrams of the two \(\mathbf{8}_{\rm S}\leftrightarrow\mathbf{8}_{\rm A}\) transition operators; not reproduced in this extraction] (3.19)
Viewed as maps in \(\mathbf{8}^{\otimes 2}\to\mathbf{8}^{\otimes 2}\), these are the projection operators realizing the decomposition into irreps in (3.14), as well as the \(\mathbf{8}_{\rm S}\leftrightarrow\mathbf{8}_{\rm A}\) transition operators in (3.19). The explicit internal structure of the \(\mathbf{10}\), \(\overline{\mathbf{10}}\), and \(\mathbf{27}\) projection operators can be found in [55, Eq 1.23].
For \(\mathbf{8}^{\otimes 5}\to\,\mathbb{C}\) there are no new non-trivial (in the sense of non-factorizing) invariant structures in the SM, as witnessed also by the absence of the respective term in the HS.
For \(\mathbf{8}^{\otimes 6}\to\,\mathbb{C}\) several orthogonal operators exist but we do not display the complete basis of projection operators here as they are not all necessary. The non-trivial, orthogonal projection operator
[birdtrack diagram of this operator, which contains an odd number of \(f\) tensors, is not reproduced in this extraction] (3.21)
can be used to construct the Jarlskog invariant, which is the only new invariant at this level. This completes the set of necessary projection operator structures.
All of the above operators are orthogonal to each other, hence, produce orthogonal basis invariants when contracted with \(\mathbf{u}^{a}\)'s and/or \(\mathbf{d}^{a}\)'s. This is straightforward to show using the diagrammatic identities collected in appendix B. We have displayed the operators at this stage without normalization factors. Different normalizations can make sense depending on how the respective operator is interpreted. Knowing the correct normalization is not
necessary to extract the "essence" of the corresponding invariant (i.e. the invariant with correct relative prefactors of all terms but with an arbitrary global prefactor). However, the correct normalization of the operators must be known in order to obtain the correct absolute values of the invariants, including all group theoretical global prefactors.
In general, projection operators ought to be idempotent (\(P^{2}=P\)), and this fixes the normalization, with \(\operatorname{Tr}P\) being equal to the dimension of the space the operator projects onto. For the projection operators for \(\mathbf{8}^{\otimes 4}\to\mathbb{C}\), for example, this implies that our normalization must differ from the one used e.g. in [55], where the same operators are understood as projectors \(\mathbf{8}^{\otimes 2}\to\mathbf{8}^{\otimes 2}\). We explain how we obtain the correct normalization for all of our projectors and state the results in appendix C.
### Construction of the basis invariants
To finally obtain the orthogonal basis invariants, the projection operators are contracted with all possible combinations of \(H_{u}\)'s and \(H_{d}\)'s, or more precisely, \(\mathbf{u}^{a}\)'s and \(\mathbf{d}^{a}\)'s. Using all of the above projection operators, the invariants stated in (10) are derived up to global numerical, typically \(\mathcal{O}(1)\), prefactors (corresponding to the correct normalization of the projection operators) that we drop in (10) for simplicity.
The quadratic traces \(I_{20}\), \(I_{02}\), and \(I_{11}\) are obtained using (3.15) to contract combinations of \(\mathbf{u}^{a}\) and \(\mathbf{d}^{a}\). The cubic traces \(I_{30}\), \(I_{03}\), \(I_{21}\) and \(I_{12}\) are obtained by contraction with the \(d\) tensor invariant projector, eq. (3.16). All contractions with the \(f\) tensor vanish for symmetry reasons, which we discuss in more detail in the next section.
All of the quadratic and cubic trace basis invariants are the unique orthogonal invariants at the respective level of contractions in the adjoint space. This changes at the quartic level. Using the projectors (3.18) and (3.19) to derive invariants at the quartic level, multiple different invariants are obtained. Trivially, contractions with the singlet operator in (3.18) yield factorized invariants of the quadratic level, i.e. products of \(I_{20}\), \(I_{02}\), and \(I_{11}\). The same is true for non-vanishing invariants that originate from contractions with either all legs being \(\mathbf{u}^{a}\) or \(\mathbf{d}^{a}\), or with an odd number of \(\mathbf{u}^{a}\)'s and \(\mathbf{d}^{a}\)'s (uuud, dddu and permutations). Taking contractions of two \(\mathbf{u}^{a}\) and two \(\mathbf{d}^{a}\), still, multiple invariants are obtained, in different contraction channels, corresponding to the different operators in (3.18) and (3.19). Two different non-zero and non-factorized invariants arise from the \(\mathbf{8}_{\mathrm{S}}\)-projection operator, and one invariant each arises from the \(\mathbf{8}_{\mathrm{A}}\), \(\mathbf{10}\) (an identical one arises from \(\overline{\mathbf{10}}\)), and \(\mathbf{27}\)-plet projection operators. The different quartic invariants are, by construction, all orthogonal to each other, and orthogonal to all lower lying invariants, but they are _not_ algebraically independent of each other (taking into account the lower lying invariants). In consistency with the Hilbert series, there is exactly one algebraically independent quartic invariant. Amongst all orthogonal quartic invariants found, \(I_{22}\) of eq. (10) - as constructed from \(\mathbf{8}_{\mathrm{S}}\) contracted with \(H_{u}\) and \(H_{d}\) - minimizes the number of terms in the syzygy (14), which is why we have chosen to display this particular invariant and use it in the numerical analysis below. This does not mean that our expression for \(I_{22}\) has the fewest possible number of terms when spelled out in matrix elements of \(\tilde{H}_{u,d}\). We have found much shorter (fewer number of terms) orthogonal and algebraically independent quartic invariants, which, however, give rise to more complicated syzygies. The shortest invariant found is the one obtained from \({\bf 8}_{\rm A}\), which has 189 terms (as compared to \(I_{22}\), which has 294 terms).
### CP transformation of the basis invariants
Let us now derive general rules for the transformation of projection operators under charge-parity (CP) transformations. The most general physical CP transformation is given by a simultaneous complex conjugation outer automorphism of all involved symmetry groups [57]. Hence, CP also acts as a complex conjugation outer automorphism in flavor space. The most general possible CP transformation acts on quark field multiplets as\({}^{5}\) [4; 59]
Footnote 5: Here we have suppressed a possibly non-trivial action of the CP transformation in the gauge representation space of each field for the simple reason that it is always possible to rotate this transformation to an identity matrix \(U=\mathbb{1}\) for the gauge groups of the SM [58].
\[Q_{\rm L}(t,\mathbf{x}) \mapsto\ U_{\rm L}\,{\cal C}\,Q_{\rm L}^{*}(t,-\mathbf{x})\,, \tag{3.22}\] \[u_{\rm R}(t,\mathbf{x}) \mapsto\ U_{u,\rm R}\,{\cal C}\,u_{\rm R}^{*}(t,-\mathbf{x})\,,\] (3.23) \[d_{\rm R}(t,\mathbf{x}) \mapsto\ U_{d,\rm R}\,{\cal C}\,d_{\rm R}^{*}(t,-\mathbf{x})\,. \tag{3.24}\]
Here, \(U_{\rm L}\), \(U_{u,\rm R}\), and \(U_{d,\rm R}\) are general \(3\times 3\) unitary matrices acting in flavor space, while \({\cal C}\) is the charge conjugation matrix of fermions given by \({\cal C}={\rm i}\gamma_{2}\gamma_{0}\) in the chiral Weyl or Dirac basis of gamma matrices. Equivalently, the CP transformation can be viewed as acting on the Yukawa coupling matrices \(Y_{u,d}\) as
\[Y_{u} \mapsto\ U_{\rm L}^{\rm T}\,Y_{u}^{*}\,U_{u,\rm R}^{*}\,, \tag{3.25}\] \[Y_{d} \mapsto\ U_{\rm L}^{\rm T}\,Y_{d}^{*}\,U_{d,\rm R}^{*}\,. \tag{3.26}\]
From this it is straightforward to derive the transformation of the adjoint space vectors
\[\mathbf{u}^{a} \mapsto\ -R^{ab}\,\mathbf{u}^{b}\,, \tag{3.27}\] \[\mathbf{d}^{a} \mapsto\ -R^{ab}\,\mathbf{d}^{b}\,,\]
where \(R\) is the representation matrix of the CP transformation (\(\mathbb{Z}_{2}\) outer automorphism of \({\rm SU}(N)\)) in adjoint space, related to \(U_{\rm L}\) by the consistency condition [58; 60], see also [57; 61]
\[U_{\rm L}\ (-t^{a})^{\rm T}\ U_{\rm L}^{\dagger}\ =\ R^{ab}\,t^{b}\;. \tag{3.28}\]
For example, in the standard Gell-Mann basis for the \({\rm SU}(3)\) generators of the fundamental representation, \(U_{\rm L}=\mathbb{1}\) and \(R={\rm diag}(-1,+1,-1,-1,+1,-1,+1,-1)\).
The transformation of \(f\) and \(d\) tensors under the CP outer automorphism is given by
\[f^{abc} \mapsto\ R^{aa^{\prime}}\,R^{bb^{\prime}}\,R^{cc^{\prime}}\,f^{a^{ \prime}b^{\prime}c^{\prime}}\ =\ f^{abc}\,, \tag{3.29}\] \[d^{abc} \mapsto\ R^{aa^{\prime}}\,R^{bb^{\prime}}\,R^{cc^{\prime}}\,d^{a^{ \prime}b^{\prime}c^{\prime}}\ =\ -d^{abc}\,. \tag{3.30}\]
Given this, the CP transformation behavior of invariants can easily be read off from their respective projection operators, presuming that their external legs are contracted with objects that transform like (3.27). The rule for all of our invariants is:

An invariant obtained by projection is CP even (CP odd),
if the corresponding projection operator contains an even (odd) number of \(f\) tensors.
Unsurprisingly, this shows that the Jarlskog invariant constructed via the operator (3.21) is the only non-vanishing CP-odd invariant in our construction.
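These rules are straightforward to verify numerically in the Gell-Mann basis, where \(U_{\rm L}=\mathbb{1}\) and the consistency condition (3.28) gives \(R^{ab}=2\operatorname{Tr}\big[(-t^{a})^{\rm T}\,t^{b}\big]\). A minimal NumPy sketch (the Gell-Mann setup as in the earlier sketches):

```python
import numpy as np

s3 = 1 / np.sqrt(3)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]], [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]], [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]]], dtype=complex)
T = lam / 2
tr_abc = np.einsum('aij,bjk,cki->abc', T, T, T)
tr_bac = np.einsum('bij,ajk,cki->abc', T, T, T)
f = ((tr_abc - tr_bac) / (1j * 0.5)).real
d = ((tr_abc + tr_bac) / 0.5).real

# R^{ab} from the consistency condition (3.28) with U_L = 1:
R = np.array([[2 * np.trace(-T[a].T @ T[b]).real for b in range(8)]
              for a in range(8)])
assert np.allclose(R, np.diag([-1, 1, -1, -1, 1, -1, 1, -1]))

# Transformation of the invariant tensors, eqs. (3.29)-(3.30):
assert np.allclose(np.einsum('ax,by,cz,xyz->abc', R, R, R, f), f)
assert np.allclose(np.einsum('ax,by,cz,xyz->abc', R, R, R, d), -d)
```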
However, note that there is a potential CP-odd invariant lurking already at the cubic level, \(k=3\), given by the projection with the operator \(\mathrm{i}f^{abc}\). Only by accident does this invariant vanish in the SM: there are only two independent tensors, \(\mathbf{u}^{a}\) and \(\mathbf{d}^{a}\) (from \(H_{u}\) and \(H_{d}\)), while \(f^{abc}\) is totally anti-symmetric. The same argument holds for other potentially CP-odd projections at the levels \(k=4\) and \(k=5\) (e.g. the \(\mathbf{8}\)-plet transition operators shown in eq. (3.19)). The lowest order CP-odd invariant in the SM then arises only at level \(k=6\) and is, therefore, highly suppressed.
Note that in more general models, CP violation would generically arise at a lower order. For example, in SM extensions with more than two independent structures in the left-handed quark flavor space, CP violation would arise already at the cubic level. This could be the case upon taking into account higher-dimensional operators in effective field theories of the SM [62] (beyond the paradigm of minimal flavor violation [63]), or, more concretely, in models with some level of "quark-lepton unification". For example, if the left-handed charged leptons were, at some scale, unified with the left-handed quarks. In this case, the charged lepton Yukawa couplings \(H_{\ell}:=Y_{\ell}Y_{\ell}^{\dagger}\) would form a third object in the left-handed adjoint flavor space, thereby allowing the CP-odd invariant
\[\mathrm{i}f^{abc}\,\operatorname{Tr}\!\left[Y_{u}^{\dagger}\,t^{a}\,Y_{u} \right]\operatorname{Tr}\!\left[Y_{d}^{\dagger}\,t^{b}\,Y_{d}\right] \operatorname{Tr}\!\left[Y_{\ell}^{\dagger}\,t^{c}\,Y_{\ell}\right]\,. \tag{3.31}\]
This invariant would give rise to CP violation with a strength roughly given by the square root of the Jarlskog invariant, i.e. lifting the accidental suppression of CPV in the SM by a factor \(\mathcal{O}(1/\sqrt{|J_{33}|})\sim 10^{12}\). Studying the details of such an enhancement and its effect on baryogenesis is beyond the scope of the present paper. However, it is obvious that such enhancement effects must be taken into account when discussing the generation of the matter-antimatter asymmetry in SM extensions with (left-handed) quark-lepton unification, and this might vastly improve our quantitative understanding of baryogenesis.
## 4 Parameter space and experimental values of quark flavor invariants
Having the orthogonal basis invariants at hand, let us quantitatively analyze them in order to obtain a basis invariant picture of the quark flavor puzzle.
On the one hand, we can scan the allowed parameter space (preferentially with a measure that also sweeps the corners) to obtain a picture of the landscape of possibilities. On the other hand, all physical parameters of the quark sector have been experimentally determined with high accuracy, which also experimentally fixes the values of all invariants and their observational uncertainties. This is where models _like_ the SM (same field content and symmetries but different numerical values of the parameters) are differentiated from _the_ SM, as determined by observations of Nature.
Using the physical parameters collected in [37] for the CKM, and the masses renormalized at the electroweak scale \(\mu=M_{Z}\), see e.g. [64], the orthogonal invariants and their errors are evaluated in the left column of table 1. Without loss of generality, one can use the standard parametrization and basis choice of eqs. (6) and (7) for this.
To better display the invariants and their correlations, we normalize them to the corresponding power of the two largest Yukawa couplings (which we define here to be \(y_{t}\) and \(y_{b}\) without loss of generality),
\[\hat{I}_{ij}:=\frac{I_{ij}}{\left(y_{t}^{2}\right)^{i}\left(y_{b}^{2}\right)^ {j}}\,. \tag{4.1}\]

The corresponding numerical results are shown in the right column of table 1 and in figure 1.\({}^{6}\)
Footnote 6: We discuss an alternative, arguably even “more basis invariant” normalization in appendix D.
In order to map out the possible parameter space of the invariants, we perform a scan over the physical parameters. We scan the parameter space twice, once with a linear measure and once with a logarithmic measure on the physical parameters in PDG parametrization. We stress that the goal here is not to explore the likelihood of a given point or even the experimentally observed values, as this would be impossible to determine without knowing the proper measure to be used in the scan. By contrast, we seek to map out _all possible_ values of the invariants, i.e. the shape of their parameter space when the physical parameters are varied in their physically allowed ranges. We use NumPy and evaluate points within a uniform random distribution of CKM angles \(s_{12},s_{13},s_{23}\in[-1,1]\) and \(\delta\in[-\pi,\pi]\), as well as masses, either within a uniform random distribution \(y_{u,c}\in[0,1]\,y_{t}\), \(y_{d,s}\in[0,1]\,y_{b}\) ("linear") or within a uniform random distribution \((m_{u,c}/\mathrm{MeV})\in 10^{[-1,\log(m_{t}/\mathrm{MeV})]}\), \((m_{d,s}/\mathrm{MeV})\in 10^{[-1,\log(m_{b}/\mathrm{MeV})]}\) ("logarithmic"). In both
Figure 1: Experimentally determined values of the orthogonal quark sector basis invariants \(\hat{I}_{ij}\), normalized according to eq. (4.1), with \(1\sigma\) experimental errors.
cases we only keep points with the "correct" mass orderings \(m_{u}<m_{c}\) and \(m_{d}<m_{s}\), so as not to overweight regions (this is merely a question of the labeling of angles and Yukawa couplings). Altogether our plots show about \({\cal O}(10^{7})\) random points. The resulting parameter space of all non-trivial invariants is displayed in figures 2 and 3. The boundedness of the parameter space and the limiting values can be understood by using the Frobenius inner product for our invariants, as explained in detail in appendix E. We highlight several special points in the parameter space as well as the experimentally determined locations of the invariants and their errors (with error bars scaled up by a factor \(10^{3}\) for visibility).
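A stripped-down sketch of the logarithmic-measure scan (NumPy, as above; the values of \(y_t\) and \(y_b\) are illustrative and plotting is omitted), which also exhibits numerically the boundedness discussed in appendix E, here for \(\hat{I}_{11}\):

```python
import numpy as np

def ckm(s12, s13, s23, d):
    c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
    return (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
            @ np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                        [-s13 * np.exp(1j * d), 0, c13]])
            @ np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex))

rng = np.random.default_rng(0)
N = 20_000
yt, yb = 0.95, 0.016                        # illustrative heaviest Yukawas
# "logarithmic" measure: log-uniform light Yukawas below y_t (resp. y_b)
yu = 10.0 ** rng.uniform(-8, np.log10(yt), size=(N, 2))
yd = 10.0 ** rng.uniform(-8, np.log10(yb), size=(N, 2))
ang = rng.uniform(-1, 1, size=(N, 3))
delta = rng.uniform(-np.pi, np.pi, size=N)

I11_hat = []
for k in range(N):
    if not (yu[k, 0] < yu[k, 1] and yd[k, 0] < yd[k, 1]):
        continue                            # keep the "correct" mass ordering
    V = ckm(*ang[k], delta[k])
    Hu = np.diag([yu[k, 0]**2, yu[k, 1]**2, yt**2]).astype(complex)
    Hd = V @ np.diag([yd[k, 0]**2, yd[k, 1]**2, yb**2]) @ V.conj().T
    Hu -= np.eye(3) * np.trace(Hu) / 3
    Hd -= np.eye(3) * np.trace(Hd) / 3
    I11_hat.append(np.trace(Hu @ Hd).real / (yt**2 * yb**2))

print(max(I11_hat))                         # approaches, but never exceeds, 2/3
```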
Using the physical parameters of the SM as determined from experiment, the following relations turn out to hold approximately
\[\hat{I}_{11}\ \approx\ \hat{I}_{20}\ \approx\ \hat{I}_{02}\ \lesssim\ \frac{2}{3}\;, \tag{4.2}\]
\[\hat{I}_{30}\ \approx\ \hat{I}_{03}\ \approx\ \hat{I}_{21}\ \approx\ \hat{I}_{12}\ \approx\ \hat{I}_{22}\ \lesssim\ \frac{2}{9}\;. \tag{4.3}\]
The exact numerical values of the relations should not be over-interpreted as we have used an arbitrary normalization of the invariants in eq. (2.10) and not the "correct" normalization of the projection operators discussed in appendix C. In particular, the resulting
\begin{table}
\begin{tabular}{c l|c l} \hline \hline Invariant & best fit and error & Normalized invariant & best fit and error \\ \hline \(I_{10}\) & \(0.9340(83)\) & \(\hat{I}_{10}\) & \(1.00001358(^{+85}_{-88})\) \\ \(I_{01}\) & \(2.660(49)\times 10^{-4}\) & \(\hat{I}_{01}\) & \(1.000351(^{+63}_{-71})\) \\ \(I_{20}\) & \(0.582(10)\) & \(\hat{I}_{20}\) & \(0.66665761(^{+59}_{-57})\) \\ \(I_{02}\) & \(4.71(17)\times 10^{-8}\) & \(\hat{I}_{02}\) & \(0.666432(^{+47}_{-42})\) \\ \(I_{11}\) & \(1.651(45)\times 10^{-4}\) & \(\hat{I}_{11}\) & \(0.664783(^{+91}_{-87})\) \\ \(I_{30}\) & \(0.1811(48)\) & \(\hat{I}_{30}\) & \(0.22221769(^{+29}_{-28})\) \\ \(I_{03}\) & \(4.18(23)\times 10^{-12}\) & \(\hat{I}_{03}\) & \(0.222105(^{+24}_{-21})\) \\ \(I_{21}\) & \(5.14(^{+18}_{-19})\times 10^{-5}\) & \(\hat{I}_{21}\) & \(0.221593(^{+30}_{-29})\) \\ \(I_{12}\) & \(1.463(^{+65}_{-68})\times 10^{-8}\) & \(\hat{I}_{12}\) & \(0.221555(^{+38}_{-36})\) \\ \(I_{22}\) & \(1.366(^{+73}_{-76})\times 10^{-8}\) & \(\hat{I}_{22}\) & \(0.221554(^{+38}_{-36})\) \\ \hline \(J_{33}\) & \(4.47(^{+1.23}_{-1.58})\times 10^{-24}\) & \(\hat{J}_{33}\) & \(2.92(^{+0.74}_{-0.93})\times 10^{-13}\) \\ \(J\) & \(3.08(^{+0.16}_{-0.19})\times 10^{-5}\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Numerical values of the quark flavor sector basis invariants evaluated using experimental data collected by the PDG [37]. The uncertainty intervals are obtained by randomly varying the physical parameters within their \(1\sigma\) uncertainty intervals. The left column displays the orthogonal invariants of eqs. (2.10), (2.12), and (2.13). The right column displays the same invariants normalized according to eq. (4.1).
numbers would differ and the approximate equality between the cubic and the quartic invariants would not appear. Much more interesting is the fact that the experimentally found values of the SM parameters realize a situation in which the invariants sit very close to their maximal possible values, see figure 2. Furthermore, the invariants display a strong level of positive correlation, see figure 4. The correlation between the invariants shown in figure 4 becomes particularly strong for the observed (hierarchical) pattern of parameters. This can also be seen from the darker lines in figure 2, corresponding to more points along the diagonal line of positive correlation, arising from the scan of the parameter space with the logarithmic measure that prefers hierarchical parameters. The invariants are not correlated in
Figure 2: Parameter space and correlations of all non-trivial orthogonal quark sector flavor basis invariants normalized according to eq. (4.1). The experimentally determined values are shown with error bars scaled up by a factor of one thousand. Points with specific symmetries are marked by symbols according to the legend. Some of the special points overlap for some invariants; a more detailed view is shown in figure 3.
this way for anarchical patterns of parameters or points in parameter space with otherwise increased flavor symmetry, see the different symbols in figures 2 and 3.
The deviations of the normalized invariants from the exact values in eqs. (4.2) and (4.3) are statistically significant, see table 1 and figure 1. The relations become exact in the limit \(y_{c,u}\to 0\), \(y_{s,d}\to 0\), \(\lambda\to 0\) (instead of \(\lambda\), also \(A\to 0\) is sufficient). Hence, having the invariants fulfill eqs. (4.2) and (4.3) exactly corresponds to a situation with exact \(\mathrm{SU}(2)_{Q_{\mathrm{L}}}\) flavor symmetry and massless first and second generation quarks, see also appendix E. Deviations from the exact values of \(2/3\) and \(2/9\) are given by (highly correlated) leading order negative corrections of size \(\mathcal{O}(y_{c,u}^{2}/y_{t}^{2})\), \(\mathcal{O}(y_{s,d}^{2}/y_{b}^{2})\) and \(\mathcal{O}(A^{2}\lambda^{4})\). The masses of the lighter generations, their hierarchies, and the deviations from a unit CKM mixing matrix hence correspond to the deviations of the invariants from the symmetric points and the detailed correlation of
Figure 3: Parameter space of the invariants \(\hat{I}_{11}\), \(\hat{I}_{22}\), \(\hat{I}_{12}\), and \(\hat{I}_{21}\) as determined by two (overlayed) scans over the parameter space with linear and logarithmic measure. The dark diagonal lines originate from the scan with the logarithmic measure, which prefers hierarchical physical parameters and thereby leads to strong correlations of the invariants. The special locations of the experimentally determined values of the invariants are shown, including their \(1\sigma\) error bars scaled up by a factor of one thousand. We also mark other points in the parameter space corresponding to special choices of physical parameters as indicated in the legend (\(\mathrm{CKM}=\mathrm{antiD}\) means entries \(V_{13}=V_{22}=V_{31}=1\) on the anti-diagonal, and \(V_{ij}=0\) everywhere else). On the l.h.s. several of the special points overlap, according to the legend; on the r.h.s. we only have \(\bigcirc\approx\blacksquare\).
those deviations. Explaining the primary location at the symmetric points and the nature and size of the experimentally significant deviations from the symmetric points would amount to solving the flavor puzzle in this language of orthogonal basis invariants.
## 5 Renormalization group evolution of the orthogonal invariants
To display the evolution of the invariants under the renormalization group we use the one-loop renormalization group equations (RGEs) of [65] adapted to our case (see also [66; 67; 68; 69]). We use the definitions
\[\mathcal{D} :=16\pi^{2}\mu\frac{\mathrm{d}}{\mathrm{d}\mu}\;, \tag{5.1}\] \[a_{\Delta} :=-8\,g_{s}^{2}-\frac{9}{4}g^{2}-\frac{17}{12}g^{\prime 2}\;,\] (5.2) \[a_{\Gamma} :=-8\,g_{s}^{2}-\frac{9}{4}g^{2}-\frac{5}{12}g^{\prime 2}\;,\] (5.3) \[a_{\Pi} :=-\frac{9}{4}g^{2}-\frac{15}{4}g^{\prime 2}\;,\] (5.4) \[t_{udl} :=3\,\mathrm{Tr}\tilde{H}_{u}+3\,\mathrm{Tr}\tilde{H}_{d}+ \mathrm{Tr}\tilde{H}_{\ell}\;, \tag{5.5}\]
with \(g_{s}\), \(g\), and \(g^{\prime}\) being the respective gauge couplings of \(\mathrm{SU}(3)_{\mathrm{c}}\), \(\mathrm{SU}(2)_{\mathrm{L}}\), and \(U(1)_{\mathrm{Y}}\), normalized such that the Higgs doublet has hypercharge \(1/2\). The RGEs for \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\) are then given by
\[\mathcal{D}\tilde{H}_{u} =2\left(a_{\Delta}+t_{udl}\right)\,\tilde{H}_{u}+3\,\tilde{H}_{u}^ {2}-\frac{3}{2}\left(\tilde{H}_{d}\tilde{H}_{u}+\tilde{H}_{u}\tilde{H}_{d} \right)\;, \tag{5.6}\] \[\mathcal{D}\tilde{H}_{d} =2\left(a_{\Gamma}+t_{udl}\right)\,\tilde{H}_{d}+3\,\tilde{H}_{d} ^{2}-\frac{3}{2}\left(\tilde{H}_{d}\tilde{H}_{u}+\tilde{H}_{u}\tilde{H}_{d} \right)\;,\] (5.7) \[\mathcal{D}\tilde{H}_{\ell} =2\left(a_{\Pi}+t_{udl}\right)\,\tilde{H}_{\ell}+3\,\tilde{H}_{ \ell}^{2}\;, \tag{5.8}\]
while the RGEs of the gauge couplings take the standard form
\[\mathcal{D}g_{s} =-7\,g_{s}^{3}\;, \mathcal{D}g =-\frac{19}{6}g^{3}\;, \mathcal{D}g^{\prime} =\frac{41}{6}g^{\prime 3}\;. \tag{5.9}\]
Figure 4: Correlations of all orthogonal basis invariants when scanning over their entire physically allowed parameter space with linear (left) and logarithmic measure (right).
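For orientation, the system (5.6)-(5.9) can be integrated directly. A minimal SciPy sketch follows; the initial values at \(M_Z\) are illustrative round numbers with Cabibbo-only mixing, not the precise inputs used for table 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def pack(gs, g, gp, *Ms):
    return np.concatenate([[gs, g, gp]]
                          + [np.concatenate([M.real.ravel(), M.imag.ravel()])
                             for M in Ms])

def unpack(y):
    Ms = [y[3+18*i:3+18*i+9].reshape(3, 3)
          + 1j * y[3+18*i+9:3+18*i+18].reshape(3, 3) for i in range(3)]
    return y[0], y[1], y[2], Ms

def rhs(lnmu, y):
    gs, g, gp, (Hu, Hd, Hl) = unpack(y)
    aD = -8 * gs**2 - 9/4 * g**2 - 17/12 * gp**2          # eq. (5.2)
    aG = -8 * gs**2 - 9/4 * g**2 - 5/12 * gp**2           # eq. (5.3)
    aP = -9/4 * g**2 - 15/4 * gp**2                       # eq. (5.4)
    tud = (3 * np.trace(Hu) + 3 * np.trace(Hd) + np.trace(Hl)).real
    anti = Hd @ Hu + Hu @ Hd
    dHu = 2 * (aD + tud) * Hu + 3 * Hu @ Hu - 1.5 * anti  # eq. (5.6)
    dHd = 2 * (aG + tud) * Hd + 3 * Hd @ Hd - 1.5 * anti  # eq. (5.7)
    dHl = 2 * (aP + tud) * Hl + 3 * Hl @ Hl               # eq. (5.8)
    return pack(-7 * gs**3, -19/6 * g**3, 41/6 * gp**3,   # eq. (5.9)
                dHu, dHd, dHl) / (16 * np.pi**2)

th = 0.227                                                # Cabibbo-only mixing
V = np.array([[np.cos(th), np.sin(th), 0],
              [-np.sin(th), np.cos(th), 0], [0, 0, 1]])
Hu0 = np.diag([7.5e-6, 3.6e-3, 0.97])**2 + 0j             # illustrative M_Z values
Hd0 = V @ np.diag([1.6e-5, 3.1e-4, 1.6e-2])**2 @ V.T + 0j
Hl0 = np.diag([2.9e-6, 6.1e-4, 1.0e-2])**2 + 0j

sol = solve_ivp(rhs, [np.log(91.19), np.log(1e16)],
                pack(1.22, 0.65, 0.36, Hu0, Hd0, Hl0), rtol=1e-8)
for y in (sol.y[:, 0], sol.y[:, -1]):
    _, _, _, (Hu, Hd, _) = unpack(y)
    ou = Hu - np.eye(3) * np.trace(Hu) / 3
    od = Hd - np.eye(3) * np.trace(Hd) / 3
    print(np.trace(ou @ od).real)          # I_11 at M_Z and at 10^16 GeV
```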
After solving the RGEs for \(\tilde{H}_{u}\), \(\tilde{H}_{d}\), and \(\tilde{H}_{\ell}\), it is straightforward to evaluate the orthogonal invariants defined in eqs. (9), (10), and (11) at any scale. We show the result for the invariants in figure 5 (left) and for the normalized invariants in figure 5 (right). No particularly striking feature appears in the RGE evolution from the electroweak scale up to very high scales, and we display the running up to the Planck scale. At a scale \(\mu\sim 10^{41}\,\text{GeV}\) the invariants go to zero, while, running to lower scales, the invariants diverge at a scale \(\mu\sim 40\,\text{MeV}\) (some of the invariants, but not their normalized counterparts, show crossings in the RGE flow at scales below \(\sim 10\,\text{GeV}\)). We do not consider threshold effects and matching here, but note that integrating out fermions unavoidably corresponds to changing the ring and its structure. Hence, the significance of these scales should be evaluated using a precise evaluation of higher-loop-order RGEs and a proper treatment of thresholds below the electroweak scale, which is beyond the scope of this work.
As an important crosscheck, we explicitly confirm that we can use our scale dependent invariants to extract the running of the masses in agreement with the results of [64, Table 2] (within reasonable errors to be blamed on one vs. three-loop accuracy). We also confirm the correct running of CKM elements, Wolfenstein parameters, and \(J\) compared to [68, 70], as discussed in detail in appendix F.
Even though the normalized invariants do evolve very little, see r.h.s. of fig. 5, their evolution is significant as compared to the error budget of the invariants at the electroweak scale (see table 1, right). For a direct construction of RGE evolution invariants at the one loop order we refer to [21, 22]. Using the orthogonal invariants and their directly derived RGE equations (to be presented elsewhere) will also enable the future construction of higher-loop-order RGE invariant expressions, or allow one to show that they do not exist. Formulating the running of invariants directly and exactly in terms of the invariants themselves is a formidable task for future work.
Figure 5: Renormalization group running (at one-loop accuracy) of the orthogonal quark sector basis invariants (left) and of the invariants normalized according to eq. (4.1) (right).
## 6 Discussion and Comments
Let us give some remarks about the construction of the invariants, numerical results and directions for future work.
* We have constructed our invariants such that they are orthogonal in the adjoint space of left-handed quark flavor, \(\mathrm{SU}(3)_{Q_{\mathrm{L}}}\). We are aware of at least one other distinct orthogonal basis for the SM flavor basis invariants, namely from orthogonal projectors in the fundamental space of \(\mathrm{SU}(3)_{Q_{\mathrm{L}}}\). Depending on the application, it may be more appropriate to work with one orthogonal basis for the invariants or the other. The construction of the orthogonal basis in fundamental space requires the construction of orthogonal projection operators up to \((\mathbf{\bar{3}}\otimes\mathbf{3})^{\otimes 6}\to\mathbb{C}\). Those can be constructed via Young tableaux (pulling up or down one of the (anti-)fundamental indices), see [12], and require several steps of (anti-)symmetrization of up to 18 fundamental indices of \(\mathrm{SU}(3)\). On the one hand, such complicated projection operators can straightforwardly be constructed by hand using birdtrack technology, which nicely generalizes, for example, to \(\mathrm{SU}(N)\). On the other hand, evaluating the corresponding operators explicitly is an intensive task of high complexity that easily exhausts the memory capacities even of large computing clusters (this is a problem with a large number of possible permutations, and the computational effort grows roughly as the factorial of the number of indices). Hence, eventually this task should be delegated to super- or even quantum computers (and quantum analogue simulators), where the construction of invariant operators may even serve as a useful benchmark problem. Once the operators are constructed, they are small in memory size and their correctness is computationally cheap to confirm.
* In the adjoint space construction of orthogonal basis invariants, there is an ambiguity in choosing \(I_{22}\) (see the discussion in sec. 3.3). On the one hand, there could be orthogonal bases other than in the adjoint space in which there exists a unique orthogonal quartic invariant \(I_{22}\). On the other hand, the origin of distinct orthogonal quartic invariants from different covariant contraction channels in the adjoint space could be important to understand the (mis-)alignment of the \(\mathbf{8}\)-plet vectors \(\mathbf{u}^{a}\) and \(\mathbf{d}^{a}\), which is instrumental for the detection of flavor symmetries and might be very relevant in order to understand the observed flavor structure of the SM. Depending on the application, it may be more appropriate to work with one or the other choice of quartic invariant, or even with multiple of them simultaneously.
* Using orthogonal projection operators automatically tracks the origin of the invariants from specific contractions of covariants. A specific alignment of covariants is in one-to-one relation with a corresponding relation between the basis invariants [71; 32]. The importance of the relative alignment of basis covariant quantities for the detection of flavor symmetries is known from the 2HDM [72; 73; 74; 75; 76] as well as the 3HDM [29; 30; 31]. It is clear that the alignment of covariants will also play an important role in classifying and detecting all possible flavor symmetries in the parameter space of the SM.
* Thinking about the alignment of covariants in the SM also leads the way to investigating symmetries of the invariants under \(u\leftrightarrow d\) exchange. Interestingly, such "custodial flavor" transformations are usually not considered as flavor symmetries, in the sense that they are not subgroups of \(\mathrm{SU}(3)_{Q_{\mathrm{L}}}\otimes\mathrm{SU}(3)_{u_{\mathrm{R}}}\otimes \mathrm{SU}(3)_{d_{\mathrm{R}}}\) (under which the basis invariants are, by construction, invariant). The behavior of invariants under permutation of up- and down-sector structures is an analysis tool to detect flavor symmetry and "custodial flavor" symmetry violation beyond the usually considered subgroups of \(\mathrm{SU}(3)_{Q_{\mathrm{L}}}\otimes\mathrm{SU}(3)_{u_{\mathrm{R}}}\otimes \mathrm{SU}(3)_{d_{\mathrm{R}}}\). In this respect, we note that our orthogonal invariants are either symmetric under up- and down-sector exchange or are simply permuted pairwise (which allows one to form \(u\leftrightarrow d\) symmetric and anti-symmetric combinations). While the various quartic invariants are all \(u\leftrightarrow d\) even, the \(u\leftrightarrow d\) anti-symmetric combinations of invariants such as \(\hat{I}_{21}-\hat{I}_{12}\) seem particularly relevant to explore custodial flavor symmetry breaking. Also, the Jarlskog invariant is odd under \(u\leftrightarrow d\) and hence even under the combined transformation of CP and \(u\leftrightarrow d\), providing a link between CP and flavor transformations that shall be studied further; a small numerical illustration of these transformation properties is sketched after this list.
* The close-to-maximal correlation of all invariants corresponds to a close-to-minimal absolute value of the Jarlskog invariant, see fig. 2. That is, CP violation in the SM, as measured by the absolute value of \(J_{33}\), is much smaller than it could be (which is, of course, well known), but in the invariant language it is evident that larger positive values and a high correlation of the CP-even invariants (which itself corresponds to large parameter hierarchies) correspond to less CP violation. It remains to be seen whether this can help to explain the observed structure.
* It is clear that any successful approach to the flavor puzzle must explain the special values of the invariants close to their maximal values, including the small but significant deviations from the maximal values, as well as the strong correlation of the invariants. Being aware of the special locations of the orthogonal invariants and of their explicit covariant content may help to resolve the flavor puzzle. It remains to be seen whether the necessary alignment and the resulting parameters can be explained in conventional QFT and model-building approaches with spurion potentials, see e.g. [77; 78; 79; 80; 81; 82; 83], or otherwise, for example using radiative corrections (see [1] and references therein), textures (see e.g. [84]), discrete or modular flavor symmetries (see e.g. [85; 86; 87; 88] for reviews), or more exotic approaches, such as explaining the parameters of Nature by entanglement or entropy arguments [89; 90; 91; 92; 93; 94; 95]. In fact, the latter approach seems particularly attractive here, as it is conceivable that the maximal correlation of orthogonal invariants corresponds to a stationary point of a quantum information theoretic von Neumann or Shannon entropy.
* Physical observables must not depend on an unphysical choice of basis and parametrization. Hence, all physical observables must be expressible in terms of basis invariant quantities. For the SM, and for SM+4th-generation rephasing invariants, this was already discussed in [96; 40], while for typical extensions like the 2HDM it was discussed in [97; 98; 99] and, more recently, in [100; 101; 102; 103]. However, it is presently unclear how basis invariants can, in general, be related to all possible physical observables of a theory, and this should also be clarified. An important observation in this context is that basis invariants always correspond to closed-form diagrams akin to "vacuum bubble" Feynman diagrams (for an example, see e.g. [104, Figs. 3 and 4]). Hence, a conjecture is that the general relation between basis invariants and physical observables can be established via the well-known optical theorem, in the spirit of modern amplitude methods. Here, bubble diagrams correspond to forward-scattering vacuum-to-vacuum transitions, and it shall be explicitly explored whether successive Cutkosky cuts [105] of vacuum bubble diagrams are sufficient to relate the basis invariants of a model to all physical observables, such as cross sections and decay rates. Aspects of this technique have been pioneered in [106; 107; 108; 6] and should be reanalyzed with respect to orthogonal basis invariants and finally merged with modern amplitude methods.
* Regarding the running of basis invariants and the derivation of RGE invariants, it seems most promising to use conformal transformations of the invariants in order to derive their RGEs directly. By contrast, trying to anticipate the RGEs of invariants from a _truncated_ perturbative treatment of the running of physical parameters (which is trustworthy for the physical parameters) may lead to RGEs of invariants that are _not_ trustworthy, because they would automatically involve higher powers of the couplings not covered by the RGE expansion of the physical parameters. The power counting in terms of invariants is different from the power counting in terms of the physical parameters. An important crosscheck for any system of RGEs of invariants is that they do indeed reproduce the \(n\)-loop running of the physical parameters used to derive them; we emphasize that RGEs of invariants that do not pass this crosscheck are not trustworthy.
* Finally, we re-iterate that our method is intrinsically non-perturbative. It may be interesting to think about the invariants at the QCD scale. Our invariants are _exact_ to all orders in the Yukawa couplings, implying that the couplings could be arbitrarily large and we could still evaluate the invariants exactly. This means that our invariants are also exact in any kind of light or heavy flavor expansion, or when working in specific limits like isospin symmetry. Nonetheless, using such approximations may of course be helpful in analyzing the invariants, or in expressing experimental observables as functions of the invariants in their associated symmetric limits. Since all observables ought, in principle, to be basis invariant, our computations here should also facilitate the derivation of basis invariant expressions for otherwise perturbative results. A formidable first task of this kind would be to explicitly show the proportionality of all measured CP violating observables in the SM to the Jarlskog invariant.
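As referenced above, the following minimal Python sketch (our illustration, using random placeholder Yukawa matrices rather than fitted SM values) checks two of the stated properties numerically: trace invariants built from the traceless covariants are unchanged under a common \(\mathrm{SU}(3)_{Q_{\mathrm{L}}}\) rotation, and the CP-odd trace combination (proportional to the Jarlskog invariant) is odd under \(u\leftrightarrow d\) exchange.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_yukawa():
    # generic complex 3x3 Yukawa matrix (illustrative placeholder)
    return rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def traceless_H(Y):
    Ht = Y @ Y.conj().T                        # tilde{H} = Y Y^dagger
    return Ht - np.trace(Ht) / 3 * np.eye(3)   # project out the singlet trace

def random_SU3():
    # QR decomposition of a complex Gaussian gives a random unitary; fix det = 1
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    return Q / np.linalg.det(Q) ** (1 / 3)

Hu, Hd = traceless_H(random_yukawa()), traceless_H(random_yukawa())
U = random_SU3()                               # common left-handed rotation
HuR, HdR = U @ Hu @ U.conj().T, U @ Hd @ U.conj().T

I11 = lambda A, B: np.trace(A @ B).real                    # a CP-even invariant
Jodd = lambda A, B: np.trace(A @ A @ B @ B @ A @ B).imag   # CP-odd combination

print(np.isclose(I11(Hu, Hd), I11(HuR, HdR)))   # True: basis invariance
print(np.isclose(Jodd(Hu, Hd), -Jodd(Hd, Hu)))  # True: odd under u <-> d
```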
## Conclusions
This paper provides the first quantitative, entirely basis independent characterization of the Standard Model quark flavor puzzle. To achieve this, we have explicitly constructed an orthogonal basis for the ring of flavor basis invariants, using hermitian projection operators derived via birdtrack diagrams in adjoint flavor space. The virtue of constructing the invariants using orthogonal hermitian projection operators is that the invariants are as short as possible by construction, and that their covariant content and their transformation behavior under symmetries and CP are explicit. Furthermore, the orthogonal invariants give rise to the shortest syzygy known to date, which relates the CP-odd Jarlskog invariant to the set of ten CP-even primary invariants. These ten primary invariants correspond to the ten well known physical parameters of the quark sector, but provide an intrinsically non-perturbative view on the parameter space.
We have explored the full parameter space of the invariants with a scan, and have "measured" for the first time the values of the orthogonal invariants as determined by experiments. At the parameter point realized by Nature, we find that the invariants are close to maximally correlated and assume close to maximal values (apart from the Jarlskog invariant, whose absolute value is close to minimized precisely because the other invariants are maximized). The deviations from the maximal possible values of the invariants correspond to the subleading parameters of the quark flavor sector, i.e. light Yukawa couplings and small mixing angles. We have also investigated the renormalization group evolution of the invariants and find that they are almost RGE invariant up to scales much higher than the Planck mass.
Alongside the main line of the paper, we have commented on other possible orthogonal bases for the flavor invariants, the correct absolute normalization of the invariants, the accidentally vanishing order-3 CP violation in the SM, the detection of symmetries using covariant alignment and invariant relations, as well as the general relation of basis invariants to observables.
Phrased in the language of invariants, the quark flavor puzzle reads: Why are the invariants so strongly correlated, and what explains their tiny deviations from the maximal possible values? We hope that our treatment provides clarity and guidance for model building, with the ultimate goal of describing and understanding the flavor puzzle with fewer parameters than in the SM.
We would like to thank Renato Fonseca for discussions on related projects. AT is grateful to Maximilian Berbig for an insightful observation. All birdtrack diagrams of this work have been generated with JaxoDraw [109].
## Appendix A The Hilbert series of a CP conserving theory
Here we show a convenient way to describe a CP conserving subring of the SM. For this, note that under the special subgroup \(\mathrm{SO}(3)\subset\mathrm{SU}(3)\), the branching of the adjoint is \(\mathbf{8}\to\mathbf{5}\oplus\mathbf{3}\). Under the CP outer automorphism, \(\mathbf{8}\)-plets transform as (3.27), implying that for
the SO(3) irreps:
\[\text{CP}:\ \ \ \mathbf{5}\mapsto\mathbf{5}\,\qquad\text{and}\qquad\mathbf{3} \mapsto-\mathbf{3}. \tag{104}\]
Hence, we certainly obtain a CP even ring if we set the triplets to zero, \(\mathbf{3}\to 0\). This turns the hermitian matrices \(\tilde{H}_{u,d}\) into symmetric matrices. The group acting on the five-plets is then no longer the full SU(3) but only the SO(3) subgroup. Due to the well-known isomorphism of the Lie algebras \(\mathfrak{so}(3)\cong\mathfrak{su}(2)\), we can write the HS for this ring as the HS of SU(2) without loss of generality. The two trivial singlets \(\text{Tr}(\tilde{H}_{u,d})\) stay exactly the same.
The HS is computed as (we again use the dummy variables \(u\) and \(d\), but this time grading the \(\mathbf{5}\)-plets alone)
\[H(K[V]^{G};u,d)=\int_{\text{SU(2)}}d\mu_{\text{SU(2)}}\,\text{PE}\left[z_{1},z _{2};u;\mathbf{5}\right]\text{PE}\left[z_{1},z_{2};d;\mathbf{5}\right]\,. \tag{105}\]
This yields
\[H(K[V]^{G};u,d)=\frac{1+u^{2}d^{2}+u^{3}d^{3}}{(1-u^{2})(1-d^{2})(1-ud)(1-u^{3 })(1-d^{3})(1-ud^{2})(1-u^{2}d)}\,, \tag{106}\]
and its ungraded version (\(t=u=d\))
\[H(K[V]^{G};t)=\frac{1+t^{4}+t^{6}}{(1-t^{2})^{3}(1-t^{3})^{4}}\,. \tag{107}\]
We see that \(\dim K[V]^{G}=7\) (corresponding to \(5+5-3\) degrees of freedom). Thus, there is one physical parameter fewer than in the full (CP-violating) SM. This parameter would correspond to \(\delta\) in the CKM matrix. Furthermore, the order-4 primary invariant has been demoted to a secondary invariant. The plethystic logarithm is given by
\[\text{PL}\left[H(K[V]^{G};u,d)\right]=(u^{2}+ud+d^{2})+(u^{3}+d^{3}+u^{2}d+ud^ {2})+(u^{2}d^{2})-(u^{6}d^{6})\,. \tag{108}\]
We see that also here there is a syzygy at order \(u^{6}d^{6}\), and we already know how it arises. The fact that \(\left(J_{33}\right)^{2}=0\) suggests a relation among all (primary and secondary) invariants of the CP-even ring. We have explicitly confirmed that this relation is the syzygy of the CP-even ring.
There is another computation we can perform. Let us take the ratio of the two Hilbert series computed above. This yields
\[\frac{H_{1}}{H_{2}}\equiv\frac{H(K[V]^{\text{SU(3)}};u,d)}{H(K[V]^{\text{SU(2)} };u,d)}=\frac{1}{1-u^{3}d^{3}}\,. \tag{109}\]
This points to the source of CP violation in the quark sector, namely the parameter proportional to the Jarlskog invariant. Finally, because
\[\text{PL}\left[H_{1}/H_{2}\right]=\text{PL}\left[H_{1}\right]-\text{PL}\left[H _{2}\right]\,, \tag{110}\]
it is trivial to compute the plethystic logarithm of eq. (109) as
\[\text{PL}\left[\frac{H(K[V]^{\text{SU(3)}};u,d)}{H(K[V]^{\text{SU(2)}};u,d)} \right]=u^{3}d^{3}\,, \tag{111}\]
thus explicitly exposing the CP-odd invariant of the original theory. We stress that there seems to be no proof that the ratio of two Hilbert series is itself always a Hilbert series. However, we can certainly always take the difference of the plethystic logarithms of a ring and of one of its subrings in order to find elements contained in one but not in the other.
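As a quick consistency check of the counting above, the following sympy sketch (an illustration, not part of the derivation) expands the ungraded Hilbert series (107) and compares the low-order coefficients with the generator content of the plethystic logarithm (108): 3 generators at order 2, 4 at order 3, and one additional invariant at order 4 (7 = 6 products of quadratic generators plus \(u^{2}d^{2}\)).

```python
import sympy as sp

t = sp.symbols('t')
H = (1 + t**4 + t**6) / ((1 - t**2)**3 * (1 - t**3)**4)

# Taylor coefficients count linearly independent invariants at each order
coeffs = sp.Poly(sp.series(H, t, 0, 8).removeO(), t).all_coeffs()[::-1]
print(coeffs)  # [1, 0, 3, 4, 7, 12, 24, 28]
```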
## Appendix B Birdtrack identities
We mostly use the conventions of [20], with the following identities:
\[T_{\mathbf{r}}\,\delta^{ab}=\text{Tr}\big{[}t^{a}t^{b}\big{]}\,,\qquad C_{D}=\frac{N^{2}-4}{N}\,,\qquad C_{A}=2\,T_{\mathbf{r}}N\,,\qquad C_{F}=T_{\mathbf{r}}\,\frac{N^{2}-1}{N}\,. \tag{144}\]
[The birdtrack diagrams defining these contractions, eqs. (144)-(147), are not reproducible in this extraction.]
[Further birdtrack identities, including the relation that fixes the normalization \(\mathcal{N}\) of the projection operators, were rendered as diagrams and are not recoverable from this extraction.]
### Frobenius inner product
The Frobenius inner product \(\left\langle\cdot,\cdot\right\rangle_{F}:V\times V\to\mathbb{C}\) is defined as
\[\left\langle A,B\right\rangle_{F}:=\mathrm{Tr}\left(A^{\dagger}B\right)\,, \tag{100}\]
with the induced norm
\[||A||_{F}=\sqrt{\left\langle A,A\right\rangle_{F}}=\sqrt{\mathrm{Tr}\left(A^{ \dagger}A\right)}\,. \tag{101}\]
If the elements of \(V\) are hermitian matrices then,
\[\left\langle A,B\right\rangle_{F}=\mathrm{Tr}\left(AB\right)\,,\quad||A||_{F}^ {2}=\mathrm{Tr}\left(A^{2}\right)\,. \tag{102}\]
Because (100) is indeed an inner product, the usual properties apply. In particular, the Cauchy-Schwarz inequality,
\[\left|\left\langle A,B\right\rangle_{F}\right|\leq||A||_{F}\,||B||_{F}\,, \tag{103}\]
which in terms of traces of hermitian matrices is given by
\[|\mathrm{Tr}\left(AB\right)|\leq\sqrt{\mathrm{Tr}\left(A^{2}\right)}\sqrt{\mathrm{Tr}\left(B^{2}\right)}\,. \tag{100}\]
The inequality is saturated if and only if \(A\) and \(B\) are linearly dependent.
Figure 6: Triangle correlations of the invariants \(\hat{I}_{ij}^{\mathrm{alt}}\), normalized according to (100). Otherwise the same as the captions of figs. 2 and 3.
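The following short numpy sketch (illustrative, with random matrices) demonstrates the inequality and its saturation for linearly dependent hermitian matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n=3):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(), random_hermitian()
lhs = abs(np.trace(A @ B))
rhs = np.sqrt(np.trace(A @ A).real * np.trace(B @ B).real)
print(bool(lhs <= rhs))   # True: Cauchy-Schwarz for the Frobenius inner product

C = 2.7 * A               # linearly dependent with A: the bound is saturated
print(np.isclose(abs(np.trace(A @ C)),
                 np.sqrt(np.trace(A @ A).real * np.trace(C @ C).real)))
```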
### Adjoint space invariants as inner products and constraints
Using the results of the previous subsection, it is straightforward to identify our flavor invariants of eq. (10) as Frobenius inner products. We use this here to derive bounds on their possible parameter space.
A special feature of the traceless hermitian matrices is that
\[I_{20}\equiv\mathrm{Tr}\left(H_{u}^{2}\right) =\frac{1}{3}\left[(y_{t}^{2}-y_{c}^{2})^{2}+(y_{t}^{2}-y_{u}^{2}) ^{2}+(y_{c}^{2}-y_{u}^{2})^{2}\right]\,,\] \[I_{02}\equiv\mathrm{Tr}\left(H_{d}^{2}\right) =\frac{1}{3}\left[(y_{b}^{2}-y_{s}^{2})^{2}+(y_{b}^{2}-y_{d}^{2}) ^{2}+(y_{s}^{2}-y_{d}^{2})^{2}\right]\,, \tag{101}\]
where we recall that \(y_{u,c,t}^{2}\) and \(y_{d,s,b}^{2}\) are the strictly positive eigenvalues of the traceful matrices \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\). We see that \(I_{20}\) and \(I_{02}\) are effective measures of the hierarchy in the quark Yukawa couplings. Normalizing the traces to the respective largest eigenvalue \(y_{t}^{2}\) or \(y_{b}^{2}\) (without loss of generality) as in eq. (4), we get
\[\hat{I}_{20}\equiv\frac{\mathrm{Tr}\left(H_{u}^{2}\right)}{y_{t}^ {4}} =\frac{1}{3}\left[(1-r_{c})^{2}+(1-r_{u})^{2}+(r_{c}-r_{u})^{2} \right]\,,\] \[\hat{I}_{02}\equiv\frac{\mathrm{Tr}\left(H_{d}^{2}\right)}{y_{b}^ {4}} =\frac{1}{3}\left[(1-r_{s})^{2}+(1-r_{d})^{2}+(r_{s}-r_{d})^{2} \right]\,, \tag{102}\]
with \(r_{u,c}:=(y_{u,c}/y_{t})^{2}\) and \(r_{d,s}:=(y_{d,s}/y_{b})^{2}\). It is then straightforward to check that
\[0\ \leq\ \hat{I}_{20}\ \leq\ \frac{2}{3}\,,\qquad\qquad\qquad\text{ and }\qquad\qquad 0\ \leq\ \hat{I}_{02}\ \leq\ \frac{2}{3}\,. \tag{103}\]
The minimal and maximal values here correspond to hierarchical Yukawa couplings:
\[\max_{0\leq r_{u,d}\leq r_{c,s}\leq 1}\hat{I}_{20},\ \hat{I}_{02}=\frac{2}{3}\quad\Rightarrow\quad r_{u,d}=0\wedge(r_{c,s}=0\lor r_{c,s}=1)\,,\] \[\min_{0\leq r_{u,d}\leq r_{c,s}\leq 1}\hat{I}_{20},\ \hat{I}_{02}=0\quad\Rightarrow\quad r_{u,d}=1\wedge r_{c,s}=1\,. \tag{104}\]
Thus, \(\hat{I}_{20}\) and \(\hat{I}_{02}\) are maximal when the Yukawa couplings are maximally hierarchical and minimal when all masses are degenerate.
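A brief numerical scan (our illustration) confirms these bounds and the location of the extrema:

```python
import numpy as np

rng = np.random.default_rng(2)
ru, rc = rng.uniform(0, 1, 10**6), rng.uniform(0, 1, 10**6)
I20_hat = ((1 - rc)**2 + (1 - ru)**2 + (rc - ru)**2) / 3
# minimum ~0 is approached for r_u = r_c = 1 (degenerate masses),
# maximum ~2/3 for r_u = 0 and r_c in {0, 1} (maximal hierarchy)
print(I20_hat.min(), I20_hat.max())
```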
We can use eq. (100) to constrain \(\hat{I}_{11}\) (recall that all of our primary invariants obey \(I_{ij}\in\mathds{R}\)):
\[|\hat{I}_{11}|\leq\sqrt{\hat{I}_{20}}\ \sqrt{\hat{I}_{02}}\leq\frac{2}{3}\,. \tag{105}\]
The Cauchy-Schwarz inequality is saturated if and only if \(H_{u}\) and \(H_{d}\) are linearly dependent,
\[H_{d}=\lambda H_{u}\,,\quad\lambda\in\,\mathbb{R}\,, \tag{106}\]
where \(\lambda\) is real as a consequence of hermiticity.
Hence, sufficient conditions for maximality of \(\hat{I}_{11}\) are
\[y_{u,c}=0\,\wedge\,y_{d,s}=0\,\wedge\,s_{13}=s_{23}=0\,. \tag{114}\]
The fact that the SM fulfills these conditions to a good approximation corresponds to the close-to-maximal values of the experimentally determined invariants.
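This saturation can be made explicit with a small numerical sketch (with purely illustrative parameter values): setting the light Yukawas to zero and \(s_{13}=s_{23}=0\) aligns \(H_{d}\) with \(H_{u}\), and \(\hat{I}_{11}\) attains its maximal value \(2/3\) regardless of the Cabibbo-like 1-2 mixing angle.

```python
import numpy as np

yt, yb = 1.0, 0.02                        # illustrative heavy Yukawas
c12, s12 = np.cos(0.227), np.sin(0.227)   # only a 1-2 rotation: s13 = s23 = 0
V = np.array([[c12, s12, 0.0], [-s12, c12, 0.0], [0.0, 0.0, 1.0]])

def traceless(H):
    return H - np.trace(H) / 3 * np.eye(3)

Hu = traceless(np.diag([0.0, 0.0, yt**2]))            # y_u = y_c = 0
Hd = traceless(V @ np.diag([0.0, 0.0, yb**2]) @ V.T)  # y_d = y_s = 0

I11_hat = np.trace(Hu @ Hd) / (yt**2 * yb**2)
print(I11_hat)   # 2/3: the Cauchy-Schwarz bound is saturated
```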
For the pure cubic invariants one can derive
\[\hat{I}_{30} \equiv\frac{\text{Tr}\left(H_{u}^{3}\right)}{y_{t}^{6}}=\frac{1}{9}\left[(2-r_{c}-r_{u})(1+r_{u}-2r_{c})(1+r_{c}-2r_{u})\right]\,,\] \[\hat{I}_{03} \equiv\frac{\text{Tr}\left(H_{d}^{3}\right)}{y_{b}^{6}}=\frac{1}{9}\left[(2-r_{s}-r_{d})(1+r_{d}-2r_{s})(1+r_{s}-2r_{d})\right]\,, \tag{115}\]
which straightforwardly allows one to show
\[-\frac{2}{9}\ \leq\ \hat{I}_{30}\ \leq\ \frac{2}{9}\,,\qquad\qquad\text{ and }\qquad\qquad-\frac{2}{9}\ \leq\ \hat{I}_{03}\ \leq\ \frac{2}{9}\,. \tag{116}\]
An exact bound can be derived for
\[\left|\frac{\text{Tr}\big{(}H_{u}^{2}H_{d}^{2}\big{)}}{y_{t}^{4}\,y_{b}^{4}}\right|\leq\frac{1}{2}\,\hat{I}_{20}\,\hat{I}_{02}\leq\frac{2}{9}\,, \tag{117}\]
upon noting that \(\text{Tr}(H_{u,d}^{4})=\frac{1}{2}\text{Tr}(H_{u,d}^{2})^{2}\) by the Cayley-Hamilton theorem, and then applying the Cauchy-Schwarz inequality. Using these results, one also finds
\[\hat{I}_{22}\leq\frac{2}{9}\;. \tag{118}\]
For the remaining non-trivial invariants, exact bounds \(\hat{I}_{21}\leq\frac{2}{9}\) and \(\hat{I}_{12}\leq\frac{2}{9}\) can be derived from a careful treatment of the \(d\)-tensor inner product; for now, we refer to the numerical demonstration of their boundedness shown in figs. 2 and 6.
## Appendix F Running of CKM parameters
From the running of \(\tilde{H}_{u}\) and \(\tilde{H}_{d}\), it is possible to extract the running of \(|V_{ub}|\), \(|V_{cb}|\), \(|V_{td}|\) and \(J\). This appendix reproduces the running of CKM parameters derived in ref. [68] (see also [67; 70] and references therein), but using updated values for the low-energy CKM parameters. We show the results here both for completeness and as an important cross-check confirming the correctness of the running of our invariants.
At the scale \(\mu=M_{Z}\) we begin the running by choosing a basis where \(H_{u}\) is diagonal. At any scale, the CKM matrix is defined as the matrix that diagonalizes \(H_{d}\) in the basis where \(H_{u}\) is diagonal (see eq. (6)). As we run to higher values of the scale \(\mu\), \(H_{u}(\mu)\) evolves to a different, in general non-diagonal, basis. This contains the running of the physical parameters, in addition to disguising them by an inconvenient basis choice (this is one of the reasons why running directly in the invariants is superior: unphysical effects like rotations of the basis drop out by construction). At any given scale, we can extract the equations
\[H_{u}(\mu) =V_{u,\text{L}}(\mu)\,D_{u}^{2}(\mu)\,V_{u,\text{L}}^{\dagger}(\mu)\,,\] \[H_{d}(\mu) =V_{d,\text{L}}(\mu)\,D_{d}^{2}(\mu)\,V_{d,\text{L}}^{\dagger}(\mu)\,,\] (F.1)
where \(D_{u,d}^{2}=\text{diag}(y_{u,d}^{2},y_{c,s}^{2},y_{t,b}^{2})\). The corresponding CKM matrix at that scale \(\mu\) is then given as
\[V_{\text{CKM}}(\mu)=V_{u,\text{L}}^{\dagger}(\mu)\,V_{d,\text{L}}(\mu)\,.\] (F.2)
This allows us to extract the values of \(|V_{\text{CKM}}(\mu)|\) and \(J(\mu)\), here using the definition \(J=\text{Im}\left(V_{ud}V_{cs}V_{us}^{*}V_{cd}^{*}\right)\). We show their evolution in figure 7, which should be compared to figure 1 of [68]. We also confirm the running of the CKM parameter \(A\) as reported in [70]. Explicitly, we find \(A(10^{15}\,\text{GeV})\approx 0.930\), \(A(10^{19}\,\text{GeV})\approx 0.945\), and virtually no running of \(\lambda\), \(\eta\), \(\rho\), which only change at the relative order of \(10^{-4}\).
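A minimal Python sketch of the extraction in eqs. (F.1) and (F.2) (with random illustrative hermitian inputs standing in for the actual running matrices) reads:

```python
import numpy as np

rng = np.random.default_rng(3)

def herm(M):
    return (M + M.conj().T) / 2

Hu = herm(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
Hd = herm(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

def left_rotation(H):
    _, V = np.linalg.eigh(H)   # eigenvalues ascending: columns ordered light to heavy
    return V

Vu, Vd = left_rotation(Hu), left_rotation(Hd)
Vckm = Vu.conj().T @ Vd        # eq. (F.2)

# rephasing-invariant Jarlskog combination J = Im(V_ud V_cs V_us^* V_cd^*)
J = (Vckm[0, 0] * Vckm[1, 1] * np.conj(Vckm[0, 1]) * np.conj(Vckm[1, 0])).imag
print(np.abs(Vckm).round(3), J)
```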
2309.06999 | An adaptive functional regression framework for spatially heterogeneous
signals in spectroscopy | The attention towards food products characteristics, such as nutritional
properties and traceability, has risen substantially in the recent years.
Consequently, we are witnessing an increased demand for the development of
modern tools to monitor, analyse and assess food quality and authenticity.
Within this framework, an essential set of data collection techniques is
provided by vibrational spectroscopy. In fact, methods such as Fourier near
infrared and mid infrared spectroscopy have been often exploited to analyze
different foodstuffs. Nonetheless, existing statistical methods often struggle
to deal with the challenges presented by spectral data, such as their high
dimensionality, paired with strong relationships among the wavelengths.
Therefore, the definition of proper statistical procedures accounting for the
peculiarities of spectroscopy data is paramount. In this work, motivated by two
dairy science applications, we propose an adaptive functional regression
framework for spectroscopy data. The method stems from the trend filtering
literature, allowing the definition of a highly flexible and adaptive estimator
able to handle different degrees of smoothness. We provide a fast optimization
procedure that is suitable for both Gaussian and non Gaussian scalar responses,
and allows for the inclusion of scalar covariates. Moreover, we develop
inferential procedures for both the functional and the scalar component thus
enhancing not only the interpretability of the results, but also their
usability in real world scenarios. The method is applied to two sets of MIR
spectroscopy data, providing excellent results when predicting milk chemical
composition and cows' dietary treatments. Moreover, the developed inferential
routine provides relevant insights, potentially paving the way for a richer
interpretation and a better understanding of the impact of specific wavelengths
on milk features. | Federico Ferraccioli, Alessandro Casa, Marco Stefanucci | 2023-09-13T14:52:34 | http://arxiv.org/abs/2309.06999v1 | # An adaptive functional regression framework for spatially heterogeneous signals in spectroscopy
###### Abstract
The attention towards food products characteristics, such as nutritional properties and traceability, as well as towards the adherence of production systems to environmental and ethical procedures, has risen substantially in the recent years. Consequently, we are witnessing an increased demand for the development of modern tools to monitor, analyse and assess food quality, security, and authenticity. Within this framework, an essential set of data collection techniques is provided by vibrational spectroscopy. In fact, methods such as Fourier near-infrared (NIR) and mid-infrared (MIR) spectroscopy have been often exploited to analyze different foodstuffs. Nonetheless, existing statistical methods often struggle to deal with the challenges presented by spectral data, such as their high-dimensionality, paired with strong relationships among the wavelengths. Therefore, the definition of proper statistical procedures accounting for the intrinsic peculiarities of spectroscopy data is paramount.
In this work, motivated by two dairy science applications, we propose an adaptive functional regression framework for spectroscopy data. The method stems from the trend filtering literature, allowing the definition of a highly flexible and adaptive estimator able to handle different degrees of smoothness. We provide a fast optimization procedure that is suitable for both Gaussian and non-Gaussian scalar responses, and allows for the inclusion of scalar covariates. Moreover, we develop inferential procedures for both the functional and the scalar component thus enhancing not only the interpretability of the results, but also their usability in real world scenarios. The method is applied to two sets of MIR spectroscopy data, providing excellent results when predicting both milk chemical composition and cows' dietary treatments. Moreover, the developed inferential routine provides relevant insights, potentially paving the way for a richer interpretation and a better understanding of the impact of specific wavelengths on milk features.
**Keywords:** Adaptive Regression, Trend Filtering, Functional Data, Bootstrap, Spectroscopy
## 1 Introduction
In the past decades, increased consumers' attention towards food quality and security has fostered the development of new technologies to analyze different foodstuffs. More specifically, adherence to environmentally friendly procedures, product traceability, and the quantification of nutritional properties have become central topics on the agenda of both the public opinion and the scientific community. Moreover, methods to assess food authenticity are increasingly important, as expensive products are often subject to fraud and adulteration.
In this framework, commonly adopted methodologies often require lengthy and expensive laboratory extraction routines to collect data, thus jeopardizing their usefulness. As a consequence, other alternatives have recently been proposed to overcome these drawbacks, with vibrational spectroscopy techniques currently playing a pivotal role. Methods such as Fourier
transform near-infrared (NIR) and mid-infrared (MIR) spectroscopy are known to be fast, relatively cheap, and non-destructive ways to collect huge amounts of data on a plethora of materials. In fact, they have been used in different fields, ranging from medicine (Petrich, 2001; Talari et al., 2017) and astronomy (Keller et al., 2006; Tennyson, 2019), to food and animal science (Reid et al., 2006; Berzaghi and Riovanto, 2009; Porep et al., 2015).
In this work, we focus specifically on MIR spectroscopy, where the light is passed through a sample of a given material at a sequence of wavelengths in the mid-infrared region, activating the sample's chemical bonds. This leads to an absorption of the energy from the light, the amount of which, evaluated at different wavelengths, creates the spectrum of the analyzed sample. Each spectrum contains an invaluable amount of information about the sample since, according to _Beer-Lambert Law_(Beer, 1852), the absorption of energy is specific to atoms and molecules and is proportional to the concentration of the corresponding chemical substance. As a consequence, nowadays spectral data are being used to predict different characteristics of a given material. With specific reference to the dairy framework, the one considered in this work, MIR spectroscopy showed promising results in predicting traits such as milk coagulation properties (Visentin et al., 2016), fatty acids (Soyeurt et al., 2006), protein and lactose concentration (De Marchi et al., 2014), energy efficiency and intake (McParland and Berry, 2016), as well as in discriminating between different cows' dietary treatments (Frizzarin et al., 2021).
Despite being widely used, the peculiar characteristics of spectroscopy data introduce statistical challenges that need to be addressed. First, spectral data lie in high-dimensional spaces, as each single spectrum usually consists of more than 1000 absorbance values measured at different wavelengths. Moreover, the relationships among variables are rather complex, often preventing the use of standard models developed for time-dependent data. In fact, even if adjacent spectral regions tend to be highly correlated, strong correlations are also observed among distant wavelengths, since the same chemical components can have several absorption peaks in distinct spectral regions. Lastly, as pointed out by Politsch et al. (2020), the underlying signal is often spatially heterogeneous. Therefore, flat spectral regions are often followed by more irregular ones, characterized by multiple peaks, posing cumbersome issues in the modelling process.
Both with regression and classification aims in mind, these data have been often analyzed by means of latent variable models. Methods such as _Partial Least Squares_ (PLS) and _Principal Component Analysis_ (PCA) have been widely used to tackle some of the mentioned problems. With a similar underlying rationale, _Factor Analysis_ has also been considered (see e.g. Casa et al., 2022, for a recent work) since, allowing one to reduce the dimensionality of the data while focusing on a proper reconstruction of the correlation matrix, it seems particularly suitable for the framework. Recently, statistical and machine learning techniques have also been explored in order to relate spectral information to different milk traits (see e.g. Frizzarin et al., 2021).
However, these methods do not account for the peculiar structure of the spectral data or for the natural ordering among the variables, which can be exploited by resorting to approaches from the functional data analysis setting (FDA; Ramsay and Silverman, 2005). In fact, even if Alsberg (1993) suggested that spectra should be represented as continuous functions of wavelengths, functional approaches have been to some extent overlooked in this framework until relatively recently (Saeys et al., 2008). Some works worth mentioning, even if not necessarily focused on MIR spectral data, are those by Reiss and Ogden (2007); Morris et al. (2008); Zhao et al. (2012); Yang et al. (2016); Codazzi et al. (2022).
As briefly mentioned before, the varying degrees of smoothness of MIR spectroscopy data over their domain pose some challenges that need to be tackled when evaluating FDA strategies. Saeys et al. (2008) suggest adopting a basis approach with unequally spaced knots, with knot placement driven by subject-matter knowledge of the properties of the analyzed material. In this work, we take a different route, considering _trend filtering_ as the building block of our proposal (see Politsch et al., 2020, for a discussion in a similar framework).
### Trend Filtering
Trend filtering is a signal reconstruction technique initially developed by Kim et al. (2009) and further studied, among others, by Tibshirani (2014). In the context of nonparametric regression, where data \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{p})^{\top}\in\mathbb{R}^{p}\) are supposed to be generated by the model \(y_{i}=f_{0}(\omega_{i})+\varepsilon_{i}\), \(i=1,\ldots,p\), trend filtering solves the following empirical problem
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\|\mathbf{y}-\mathbf{f}\|_{2}^{2}+ \lambda\|\mathbf{D}^{(k+1)}\mathbf{f}\|_{1} \tag{1}\]
where \(\mathbf{f}=(f(\omega_{1}),\ldots,f(\omega_{p}))^{\top}\), \(\mathbf{D}^{(k+1)}\in\mathbb{R}^{(p-k-1)\times p}\) is the discrete difference matrix of order \(k+1\), and \(\lambda>0\) is a tuning parameter. The resulting discretely estimated function \(\widehat{\mathbf{f}}\) has a number of interesting properties, the most important being its adaptivity to local features of the true underlying function \(f_{0}\). More precisely, the specification of the penalty yields a solution which, even if generally not sparse, exhibits a sparse \((k+1)\)-th derivative. This behaviour resembles a spline function of degree \(k\), which possesses continuous derivatives up to order \(k-1\), while the \(k\)-th derivative is zero except for the points where the polynomials meet, also known as the _knots_ of such a spline. As shown in Tibshirani (2014), for \(k=1,2\) the resulting estimated function is indeed a spline, while for \(k>2\) it is _close_, but not exactly equal, to a spline function with unequally spaced knots. The method is quite general thanks to the choice of \(k\): with \(k=0\) one obtains a stepwise function whose first derivative is different from zero only where jumps lie. In fact, given the form of \(\mathbf{D}^{(1)}\), in this specific case the penalty becomes \(\sum_{j=1}^{p-1}|f(\omega_{j})-f(\omega_{j+1})|\) and the problem is equivalent to the Fused Lasso (Tibshirani et al., 2005). With \(k=1\) the second derivative is penalized, thus yielding an estimate that is piecewise linear. These and higher-order examples can be found in the original paper of Tibshirani (2014). A prominent instance is cubic trend filtering (\(k=3\)), which allows one to fit to the data something very similar to a cubic spline with unequally spaced knots. The relevance of this approach can also be appreciated from another point of view: in the literature, several adaptive estimation procedures have been proposed, mainly focusing on finding good sets of spline knots (see e.g., Dimatteo et al., 2001; Zhou and Shen, 2001). The trend filtering approach implicitly overcomes this problem since, by solving the minimization in (1) for a given \(\lambda\), only a number \(p_{\lambda}<p\) of knots is selected; the entire path spans a range of nested solutions without the need for a forward/backward knot search algorithm.
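As an illustration of problem (1), the following Python sketch (one possible implementation; specialized ADMM or path algorithms are faster in practice) builds the difference matrix with numpy and solves the convex program with the generic solver cvxpy.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
p, k, lam = 200, 1, 10.0                             # k = 1 penalizes second differences
omega = np.linspace(0, 1, p)
y = np.abs(omega - 0.5) + 0.05 * rng.normal(size=p)  # noisy piecewise-linear test signal

D = np.diff(np.eye(p), n=k + 1, axis=0)              # difference matrix D^(k+1)

f = cp.Variable(p)
cp.Problem(cp.Minimize(cp.sum_squares(y - f) + lam * cp.norm1(D @ f))).solve()
f_hat = f.value                                      # adaptive, piecewise-linear estimate
```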
In this paper, after a brief description of the analyzed data in Section 2, in Section 3 we extend the main trend filtering concepts to functional regression with a Gaussian scalar response, developing an estimator able to infer spatially inhomogeneous regression functions. We further investigate this modeling strategy in the case of a _partial_ functional linear model, i.e. by including a set of scalar covariates, and propose an extension intended for non-Gaussian responses, such as presence/absence or count data. Efficient estimation algorithms are presented in Section 4, followed by a simulation study in Section 5. In Section 6 we present the results of the analysis of real data, while in Section 7 we draw some final conclusions and remarks.
## 2 Mid-infrared spectroscopy data
In this study, we consider two different sets of mid-infrared spectroscopy data.
The first data set consists of a collection of 730 milk samples produced from 622 cows from different research herds in Ireland between August 2013 and August 2014. All the animals involved in the study were following a predominantly grass-based diet, with relevant heterogeneity in terms of number of parities and stage of lactation. Samples were collected during morning and evening milking and analyzed using a MilkoScan FT6000 (Foss Electronic A/S, Hillerod, Denmark). The resulting spectra consist of 1060 absorbance observations in the mid-infrared light region (see Figure 1). Furthermore, some additional information is available, such as the date and the time (am/pm) of the milkings, the breed, the number of parities and the days in milk for all the cows involved in the study. Note that some milk-related traits have been collected by means of wet chemistry techniques. These traits include both technological ones, such as rennet coagulation time and heat stability, and protein-related ones, for example \(\kappa\)-casein and \(\alpha_{S1}\)-casein. In the analyses reported in Section 6, we focus on the prediction of the \(\kappa\)-casein. Lastly, we retain only one observation per cow, thus working with 622 milk spectra. For a more complete description of the data collection process and of the data themselves, readers can refer to Visentin et al. (2015).
On the other hand, the second data set considered has been collected at the Teagasc Moorepark Dairy Research Farm (Fermoy, Co. Cork, Ireland), in an experiment designed by O'Callaghan et al. (2016b), which represents the first study of its kind in Ireland and, to the best of our knowledge, in the world. Further information on the experimental setting can be found in O'Callaghan et al. (2016a); O'Callaghan et al. (2017). The data consist of MIR spectra, comprising 1060 wavelengths in the region from 925cm\({}^{-1}\) to 5010cm\({}^{-1}\), obtained by analyzing 4320 milk samples using a Pro-Foss FT6000 series instrument. A total number of 120 Holstein-Friesian cows have been involved in the study, and milked twice daily in the morning and in the afternoon in three consecutive years (2015, 2016 and 2017). The data collection scheme has been carried out in a balanced way, both in terms of the year and of the number of cattle parities. Moreover, we restrict our attention to the samples collected from May to August, since in the summer period there is the highest prevalence of grass growth. For each of the years considered, the cattle were randomly assigned to a specific dietary treatment for the entire lactation period. The treatment diets included grass (GRS), with cows maintained outdoors on a perennial ryegrass sward only, clover (CLV), consisting of perennial ryegrass with 20% white clover sward only, and total mixed ration (TMR), where cows were maintained indoors with nutrients combined in a single mix consisting of grass and maize silage and concentrates. In this work, given the strong compositional similarities between GRS and CLV diets, these two classes have been merged to create a general pasture-based diet group. As a consequence, the final data set consists of 2931 samples from pasture-fed cattle and 1389 from TMR-fed ones. Lastly, some
Figure 1: Mid-infrared spectra in the region from \(925cm^{-1}\) to \(5010cm^{-1}\), corresponding to the first case study.
additional information on fat, protein, and lactose content has been obtained by calibrating the FT6000 against wet chemistry results and is available for the milk samples considered.
## 3 Proposed methodology
Given the data \(\mathcal{D}=\{X_{i}(\omega),y_{i}\}_{i=1}^{n}\), we assume that \(y_{1},\ldots,y_{n}\) are scalar values drawn from a Gaussian random variable \(y_{i}|X_{i}(\omega)\sim N(\mu_{i},\sigma^{2})\) and \(X_{1}(\omega),\ldots,X_{n}(\omega)\) are realizations of a functional covariate. We model the conditional expected value of \(y_{i}\) as \(\mathbb{E}(y_{i}|X_{i}(\omega))=\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega\) where \(f(\omega)\) is an unknown regression function. This leads to the functional linear model
\[y_{i}=\int X_{i}(\omega)f(\omega)d\omega+\varepsilon_{i}\,,\]
with \(\varepsilon_{i}\sim N(0,\sigma^{2})\) being an additive noise term. There are several works in the functional data analysis literature devoted to the estimation of this model, ranging from basis approaches (Ramsay and Silverman, 2005) and penalized strategies (Crambes et al., 2009) to functional principal component representations (Yao et al., 2005). A complete review of the different estimation methodologies is outside the scope of this work, and readers may refer to Morris (2015) for a general overview of such methods; however, here we comment briefly on the penalization approach, which shares some features with our proposal.
Smoothing splines are the most popular tool in the family of penalized estimators. In the context of functional regression, given the data \(\mathcal{D}\), they are obtained as the solution of the following optimization problem:
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda\lVert\mathbf{D}^{(2)}\mathbf{f}\rVert_{2}^{2}\,, \tag{2}\]
where \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\top}\) is the vector of scalar responses, \(\mathbf{X}=(X_{1}(\mathbf{\omega}),\ldots,X_{n}(\mathbf{\omega}))^{\top}\) is the matrix of functional data observed on a regular grid \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{p})\) and \(\mathbf{D}^{(2)}\) is the matrix of second order discrete differences. The approach is justified as a discrete counterpart of a certain variational problem and provides as a solution a natural cubic spline with knots at observation points, see Wahba (1990), Crambes et al. (2009) and Goldsmith et al. (2011) for details. The amount of penalization, managed by the tuning parameter \(\lambda\), represents a trade-off between two extreme solutions, one being a completely wiggly interpolating function (\(\lambda=0\)) and the other being a constant (\(\lambda=\infty\)). Despite being very intuitive and easy to implement, this estimator lacks the local adaptivity property, i.e. the resulting estimated curve is not able to capture the different levels of smoothness of the true function \(f\). Therefore, we propose to rely on a different penalization strategy and estimate the regression curve by
\[\widehat{\mathbf{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda\lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}. \tag{3}\]
This expression represents a generalization of the trend filtering loss function, with the design matrix no longer being the \(p\times p\) identity matrix (Tibshirani, 2014), but an \(n\times p\) matrix of discretely observed functional data. The key difference between optimization problems (2) and (3) is that in the latter, thanks to the \(\ell_{1}\)-penalty applied to a certain discrete derivative of \(f\), we are able to estimate the regression function while capturing its local features. Indeed, similarly to the original trend filtering estimate, the function that minimizes (3) is equal (when \(k=1,2\)) or very similar (when \(k>2\)) to a spline function with unequally spaced knots along its domain. Clearly, the smoothing splines approach, not being able to adapt to local features of the curve, misses either its smooth or its wiggly parts, depending on the selected degrees of freedom. This is a consequence of the regularization term, which produces neither spatial heterogeneity nor knot selection. This makes the approach in (3) particularly appealing for spectroscopy data, where the effect of a functional covariate (e.g. the MIR spectrum) on a scalar variable (e.g. a material trait) can be studied with particular attention to local effects.
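A sketch of the estimator in (3), under the same illustrative conventions as above (random curves standing in for actual spectra), only replaces the identity design with the matrix of discretized functional covariates:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, p, k, lam = 100, 150, 3, 1.0             # k = 3: cubic trend filtering
X = rng.normal(size=(n, p))                 # rows: discretized functional covariates
f_true = np.sin(6 * np.linspace(0, 1, p))   # arbitrary illustrative coefficient curve
y = X @ f_true + 0.5 * rng.normal(size=n)

D = np.diff(np.eye(p), n=k + 1, axis=0)
f = cp.Variable(p)
cp.Problem(cp.Minimize(cp.sum_squares(y - X @ f) + lam * cp.norm1(D @ f))).solve()
f_hat = f.value                             # spatially adaptive estimate of f
```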
### Extensions
In the current framework, when scalar covariates are available alongside the functional one, their inclusion in the modelling strategy can bring additional information useful to predict the response variable. More formally, in this setting we denote the observed data by \(\mathcal{D}=\{X_{i}(\omega),y_{i},\mathbf{z}_{i}\}_{i=1}^{n}\), with \(\mathbf{z}_{i}=(z_{i1},\ldots,z_{ir})^{\top}\) being a set of \(r\) scalar covariates corresponding to the \(i\)-th observation. We then model the conditional expected value of \(y_{i}\) as \(\mathbb{E}(y_{i}|X_{i}(\omega),\mathbf{z}_{i})=\mu_{i}=\int X_{i}(\omega)f(\omega) d\omega+\sum_{j=1}^{r}z_{ij}\gamma_{j}\), where \(\{\gamma_{j}\}_{j=1}^{r}\) are unknown regression coefficients. For such data structure, it is worth considering the partial functional linear model (Shin, 2009)
\[y_{i}=\int X_{i}(\omega)f(\omega)d\omega+\sum_{j=1}^{r}z_{ij}\gamma_{j}+\varepsilon _{i}\,.\]
Note that, if needed, the intercept can be easily included in the model as a constant scalar covariate. Following the trend filtering paradigm, we propose to estimate the function \(f\) and the vector \(\mathbf{\gamma}=(\gamma_{1},\ldots,\gamma_{r})^{\top}\) by solving the optimization problem
\[\mathbf{\tilde{\theta}}=\arg\min_{\mathbf{\theta}\in\mathbb{R}^{p+r}}\lVert\mathbf{y}- \tilde{\mathbf{X}}\mathbf{\theta}\rVert_{2}^{2}+\lambda\lVert\tilde{\mathbf{D}}^{(k+1)} \mathbf{\theta}\rVert_{1}\,, \tag{4}\]
where \(\tilde{\mathbf{X}}=[\mathbf{X}|\mathbf{Z}]\in\mathbb{R}^{n\times(p+r)}\), \(\mathbf{Z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{n})^{\top}\in\mathbb{R}^{n\times r}\), \(\mathbf{\theta}=(\mathbf{f}^{\top},\mathbf{\gamma}^{\top})^{\top}\in\mathbb{R}^{p+r}\) and \(\tilde{\mathbf{D}}^{(k+1)}=[\mathbf{D}^{(k+1)}|\mathbf{0}_{(p-k-1)\times r}]\in\mathbb{R}^ {(p-k-1)\times(p+r)}\). Note that, with this formulation, the penalty does not affect the parametric part of the model. When \(r\) is large, one can include an \(\ell_{1}\)-penalty for the vector \(\mathbf{\gamma}\) in order to achieve sparsity in the estimated coefficients; see, for instance, Kong et al. (2016). Since the application presented in Section 6 involves a small set of covariates, this potential extension has not been pursued in this work.
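A sketch of the construction of the augmented matrices in (4) (again an illustrative numpy fragment, not our actual implementation) is:

```python
import numpy as np

def augment(X, Z, k):
    # Build X~ = [X | Z] and D~ = [D^(k+1) | 0] as in problem (4),
    # so that the l1-penalty leaves the scalar coefficients unpenalized.
    n, p = X.shape
    r = Z.shape[1]
    D = np.diff(np.eye(p), n=k + 1, axis=0)            # D^(k+1)
    X_tilde = np.hstack([X, Z])                        # (n, p + r)
    D_tilde = np.hstack([D, np.zeros((p - k - 1, r))]) # (p-k-1, p + r)
    return X_tilde, D_tilde
```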
An additional generalization of our proposal is required when the assumption \(y_{i}\sim N(\mu_{i},\sigma^{2})\) is not met because the scalar responses \(y_{1},\ldots,y_{n}\) are generated by some other distribution. For example, for count data we can assume \(y_{i}\sim Poisson(\lambda_{i})\), and for presence/absence data \(y_{i}\sim Bernoulli(\pi_{i})\). In these settings, where a functional linear model is not adequate, a generalized functional linear model (James, 2002; Muller and Stadtmuller, 2005; Goldsmith et al., 2011) can be applied. In particular, we assume that \(g(\mathbb{E}(y_{i}|X_{i}(\omega)))=g(\mu_{i})=\int X_{i}(\omega)f(\omega)d\omega\), with \(g(\cdot)\) being a suitably chosen link function. The empirical minimization problem is then recast as
\[\mathbf{\hat{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}L(\mathbf{y};\mathbf{X}\mathbf{f})+\lambda \lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}, \tag{5}\]
where the loss function \(L(\mathbf{y};\mathbf{X}\mathbf{f})\) depends upon the distribution of the response variable. The objective is now a nonlinear function of the unknown parameter \(\mathbf{f}\), and its direct minimization is usually not straightforward. As a consequence, in Section 4 we present a modification of the proposed algorithm to deal with the modified loss appearing in (5). Lastly, note that in the presence of explanatory scalar covariates and a non-Gaussian response, the last two specifications can be combined by adjusting (5) as was done in equation (4) for problem (3).
Another potential extension is to combine two (or more) penalties in the optimization problem. This allows one to estimate functions that exhibit a complex behaviour, typically piecewise polynomials of different orders. The loss function in this context is
\[\mathbf{\hat{f}}=\arg\min_{\mathbf{f}\in\mathbb{R}^{p}}\lVert\mathbf{y}-\mathbf{X}\mathbf{f} \rVert_{2}^{2}+\lambda_{1}\lVert\mathbf{D}^{(k+1)}\mathbf{f}\rVert_{1}+\lambda_{2} \lVert\mathbf{D}^{(\ell+1)}\mathbf{f}\rVert_{1}, \tag{6}\]
where \(k\) and \(\ell\) are integers and \(\lambda_{1}\), \(\lambda_{2}\) are regularization parameters. This modification can also be employed when additional scalar covariates are observed and/or when the distribution of the response is not Gaussian. In Section 4 we will illustrate how to solve problem (6) with the same toolbox used for the other cases; a small sketch of the resulting objective is given below.
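For concreteness, the doubly penalized objective in (6) can be evaluated as follows (illustrative numpy sketch; function and argument names are our own):

```python
import numpy as np

def mixed_objective(f, X, y, lam1, lam2, k, ell):
    # Value of the loss in (6) for a candidate coefficient vector f.
    p = len(f)
    Dk = np.diff(np.eye(p), n=k + 1, axis=0)    # D^(k+1)
    Dl = np.diff(np.eye(p), n=ell + 1, axis=0)  # D^(ell+1)
    return (np.sum((y - X @ f) ** 2)
            + lam1 * np.sum(np.abs(Dk @ f))
            + lam2 * np.sum(np.abs(Dl @ f)))
```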
### Inference
In this section, we describe a strategy to build confidence intervals for most of the pointwise estimates introduced in the previous section. Given the complexity of functional regression models, inferential procedures have sometimes been overlooked, with the focus often being on point estimation. Nonetheless, the procedure introduced here represents a key component that improves the usability of the methodology in real-world scenarios. In the case of the trend filtering framework, the construction of confidence intervals and confidence bands can be addressed via bootstrap procedures. Standard frequentist inference is not suitable, since the distribution of the trend filtering estimator is non-Gaussian, even when the observational noise is Gaussian (Politsch et al., 2020). Here we propose a Wild bootstrap procedure (Mammen, 1993), which is particularly appropriate in high-dimensional regression models when the noise distribution is unknown. Briefly, the idea behind the Wild bootstrap is to construct an auxiliary random variable with zero mean and unit variance (and, ideally, higher moments equal to \(1\)). This random variable is then used to define a transformation of the observed residuals that gives a valid bootstrap sample (see Algorithm 1).
A classical choice for the auxiliary random variable is the two point distribution suggested in Mammen (1993), that is
\[u_{i}^{*}=\begin{cases}\hat{\epsilon}_{i}(1+\sqrt{5})/2&\text{with probability }(\sqrt{5}-1)/(2\sqrt{5}),\\ \hat{\epsilon}_{i}(1-\sqrt{5})/2&\text{with probability }(\sqrt{5}+1)/(2\sqrt{5}),\end{cases} \tag{7}\]
which ensures zero mean and unit variance. Other examples are the Rademacher distribution, which takes values \(1,-1\) with equal probability, the Uniform distribution on the interval \([-\sqrt{3},\sqrt{3}]\), or various transformations of the Gaussian distribution (Mammen, 1993). In general, since it is not possible to define a random variable that has mean \(0\), variance \(1\), and all higher moments equal to \(1\), the different choices lead to different values for the third and the fourth moments. For instance, the third and fourth moments of the Rademacher distribution are \(0\) and \(1\), respectively, while the two point distribution defined above has third and fourth moments \(1\) and \(2\), respectively. The specific choice is generally driven by considerations on the symmetry of the observed residuals.
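A minimal sketch of one Wild bootstrap replicate with the two point distribution in (7) (numpy-based; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mammen_weights(n):
    # Two point distribution of (7): mean 0, variance 1, third moment 1.
    a, b = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
    p_a = (np.sqrt(5) - 1) / (2 * np.sqrt(5))  # probability of the positive value
    return rng.choice([a, b], size=n, p=[p_a, 1 - p_a])

def wild_bootstrap_response(fitted, residuals):
    # y* = fitted values + perturbed residuals; refitting the model on
    # (X, y*) yields one bootstrap estimate f_hat^(b).
    return fitted + mammen_weights(len(residuals)) * residuals
```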
Given the full bootstrap estimate set \(\{\widehat{\mathbf{f}}^{(b)}\}_{b=1}^{B}\), for any \(\alpha\in(0,1)\), we can define a \((1-\alpha)\) quantile-based pointwise variability band as
\[V_{1-\alpha}(f(\omega_{j}))=\left(\widehat{f}_{\alpha/2}(\omega_{j}),\widehat {f}_{1-\alpha/2}(\omega_{j})\right), \tag{8}\]
where
\[\widehat{f}_{\gamma}(\omega_{j})=\inf_{g}\left\{g:\frac{1}{B}\sum_{b=1}^{B} \mathbb{I}(\widehat{f}^{(b)}(\omega_{j})\leq g)\geq\gamma\right\},\quad\text {for all}\quad j=1,\ldots,p.\]
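The band in (8) is obtained directly from the empirical quantiles of the bootstrap estimates, e.g. (illustrative sketch):

```python
import numpy as np

def pointwise_band(boot_fits, alpha=0.05):
    # boot_fits: (B, p) array whose b-th row is f_hat^(b) on the grid.
    lower = np.quantile(boot_fits, alpha / 2, axis=0)
    upper = np.quantile(boot_fits, 1 - alpha / 2, axis=0)
    return lower, upper
```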
## 4 Optimization procedure
In the literature, several algorithms for solving the original trend filtering problem have been proposed; see, among others, Kim et al. (2009), Tibshirani (2014) and Tibshirani and Taylor (2011). Some of these algorithms are not directly generalizable to our context, where the presence of the \(n\times p\) data matrix \(\mathbf{X}\) makes the optimization task more challenging. To solve problem (3), we rely on the Alternating Direction Method of Multipliers (ADMM) framework and consider an extension of the approach by Ramdas and Tibshirani (2016) where a specialized acceleration scheme is proposed.
ADMM algorithms are a wide class of algorithms particularly useful for solving constrained problems of the form
minimize \[f(\mathbf{\alpha})+g(\mathbf{\delta})\,,\] (9) subject to \[\mathbf{A}\mathbf{\alpha}+\mathbf{B}\mathbf{\delta}+\mathbf{c}=0.\]
A general ADMM algorithm proceeds by minimizing the augmented Lagrangian of (9). Since the objective function is separable, minimization can take place in an alternating fashion. ADMM approaches are largely used in penalized estimation schemes, which can often be recast as in (9), leading to a faster optimization thanks to variable splitting. Specifically, the problem in (3) can be stated as
minimize \[\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}+\lambda\|\mathbf{\delta}\|_{1}\,,\] (10) subject to \[\mathbf{D}^{(k+1)}\mathbf{\alpha}-\mathbf{\delta}=0,\]
where \(f(\mathbf{\alpha})=\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}\) is the \(\ell_{2}\)-loss and \(g(\mathbf{\delta})=\lambda\|\mathbf{\delta}\|_{1}\) is the scaled \(\ell_{1}\)-norm. As shown in Boyd et al. (2011), the updates for the parameters are straightforward. In fact, since \(f\) is quadratic, the update step for \(\mathbf{\alpha}\) has a least squares form, and the \(\mathbf{\delta}\) update amounts to soft-thresholding a given vector. Although these updating rules could be applied directly, we exploit an acceleration scheme for this problem. In fact, as demonstrated in Ramdas and Tibshirani (2016), a different parametrization of the ADMM can save computational time due to the existence of efficient algorithms for the constant-order trend filtering problem. The idea is to reformulate the problem as follows.
minimize \[\|\mathbf{y}-\mathbf{X}\mathbf{\alpha}\|_{2}^{2}+\lambda\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,,\] (11) subject to \[\mathbf{D}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0,\]
where \(\mathbf{D}^{(1)}\) is the discrete difference matrix of order 1. The reader can verify the equivalence between problem (10) and problem (11). Hereafter, we derive the specialized parameter updates needed for the generic \(t+1\) iteration:
\[\mathbf{\alpha}^{t+1} =(\mathbf{X}^{\top}\mathbf{X}+\rho(\mathbf{D}^{(k)})^{\top}\mathbf{D}^{(k)})^{-1 }(\mathbf{X}^{\top}\mathbf{y}+\rho(\mathbf{D}^{(k)})^{\top}(\mathbf{\delta}^{t}-\mathbf{u}^{t}))\,, \tag{12}\] \[\mathbf{\delta}^{t+1} =\arg\min_{\mathbf{\delta}}\|\mathbf{D}^{(k)}\mathbf{\alpha}^{t+1}+\mathbf{u}^{t}-\mathbf{\delta}\|_{2}^{2}+\lambda/\rho\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,,\] (13) \[\mathbf{u}^{t+1} =\mathbf{u}^{t}+\mathbf{D}^{(k)}\mathbf{\alpha}^{t+1}-\mathbf{\delta}^{t+1}\,. \tag{14}\]
The update for \(\mathbf{\delta}\) is much more involved than a simple soft-thresholding and requires solving a new constant-order trend filtering problem, that is, a one-dimensional fused lasso problem. However, fast solutions are available by employing the dynamic programming solver by Johnson (2013) or the proposal by Davies and Kovac (2001) based on the taut string principle. Ramdas and Tibshirani (2016) showed the superiority of this specialized ADMM formulation over the classical one in terms of convergence rates: the single operation is more expensive than the one in the usual parametrization, but convergence is achieved in fewer iterations, leading to an overall gain in terms of computational time.
The parameter \(\rho\) is sometimes made adaptive, by allowing a different value at each iteration, to speed up the learning process. With a varying \(\rho\), one has to compute \((\mathbf{X}^{\top}\mathbf{X}+\rho^{t}\mathbf{D}^{\top}\mathbf{D})^{-1}\) at each iteration of the algorithm, and this can be prohibitive even for moderate dimensions. With a fixed \(\rho\), instead, one can precompute the quantity \((\mathbf{X}^{\top}\mathbf{X}+\rho\mathbf{D}^{\top}\mathbf{D})^{-1}\), which is never modified by the updating rules, at the expense of some more iterations. In our implementation, we found this last approach faster and, in particular, we followed Ramdas and Tibshirani (2016) in setting \(\rho=\lambda\), which led to stable solutions. From a practical point of view, the entire solution path as a function of \(\lambda\) is often needed. In this case, a speed-up is obtained by considering warm starts, i.e., by starting the algorithm from the solution obtained for the previous value of the regularization parameter.
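Putting the updates together, a compact sketch of the specialized ADMM reads as follows. It assumes numpy and a generic one-dimensional fused lasso routine `fused_lasso_1d(z, g)`, a hypothetical placeholder (not a real library call) solving \(\min_{d}\|z-d\|_2^2+g\|\mathbf{D}^{(1)}d\|_1\), e.g. via dynamic programming or the taut string principle:

```python
import numpy as np

def admm_trend_filter(X, y, lam, k, fused_lasso_1d, n_iter=200):
    # Specialized ADMM of (12)-(14) with fixed rho = lambda.
    p = X.shape[1]
    D = np.diff(np.eye(p), n=k, axis=0)   # D^(k), (p - k) x p
    rho = lam
    # Precomputed once; assumes X'X + rho D'D is positive definite
    # (e.g. X has full column rank).
    L = np.linalg.cholesky(X.T @ X + rho * D.T @ D)
    delta = np.zeros(p - k)
    u = np.zeros(p - k)
    for _ in range(n_iter):
        rhs = X.T @ y + rho * D.T @ (delta - u)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # update (12)
        delta = fused_lasso_1d(D @ alpha + u, lam / rho)       # update (13)
        u = u + D @ alpha - delta                              # update (14)
    return alpha
```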
Lastly, note that slight modifications are needed in the presence of scalar covariates: the problem is stated as the minimization of \(\|\mathbf{y}-\tilde{\mathbf{X}}\mathbf{\alpha}\|_{2}^{2}+\lambda\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\) subject to \(\tilde{\mathbf{D}}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0\), and the updating rules are the same as (12)-(14), except for the substitution of \(\tilde{\mathbf{X}}\) and \(\tilde{\mathbf{D}}^{(k)}\) in place of \(\mathbf{X}\) and \(\mathbf{D}^{(k)}\).
For the generalized functional linear model, we develop an iterative reweighted penalized least squares approach based on the alternation of a Newton step and an ADMM step. Specifically, problem (5) can be written as
\[\text{minimize} L(\mathbf{y};\mathbf{X}\mathbf{\alpha})+\lambda\|\mathbf{D}^{(1)}\mathbf{\delta}\|_{1}\,,\] (15) subject to \[\mathbf{D}^{(k)}\mathbf{\alpha}-\mathbf{\delta}=0\,.\]
In the first step of the algorithm, given the current estimate \(\mathbf{\alpha}^{t}\) we approximate the generic loss function \(L(\mathbf{y};\mathbf{X}\mathbf{\alpha})\) around \(\mathbf{\alpha}^{t}\) by a quadratic loss \(\|\tilde{\mathbf{y}}^{t}-\tilde{\mathbf{X}}^{t}\mathbf{\alpha}\|_{2}^{2}\), where \(\tilde{\mathbf{y}}^{t}=(\mathbf{W}^{t})^{1/2}\mathbf{s}^{t}=(\mathbf{W}^{t})^{1/2}(\mathbf{X}\mathbf{ \alpha}^{t}+(\mathbf{V}^{t})^{-1}(\mathbf{y}-\mathbf{\mu}^{t}))\) and \(\tilde{\mathbf{X}}^{t}=(\mathbf{W}^{t})^{1/2}\mathbf{X}\), building a penalized least squares problem. This step is intended as a Fisher scoring update. The quantities \(\mathbf{\mu}=\mathbb{E}(\mathbf{y}|\mathbf{X})\), \(\mathbf{V}=V(\mathbf{\mu})\) and \(\mathbf{W}=V(\mathbf{\mu})^{-1}(g^{\prime}(\mathbf{\mu}))^{-2}\) depend on the random variable characterizing the response. For example, if \(y_{i}\sim Bernoulli(\pi_{i})\) we have \(\mu_{i}=\pi_{i}=\exp\{\int X_{i}(\omega)f(\omega)d\omega\}/(1+\exp\{\int X_{i }(\omega)f(\omega)d\omega\})\), \(V(\mu_{i})=\mu_{i}(1-\mu_{i})\), \(g^{\prime}(\mu_{i})=1/V(\mu_{i})\) and \(W_{ii}=V(\mu_{i})\) while if \(y_{i}\sim Poisson(\lambda_{i})\) we have \(\mu_{i}=\lambda_{i}=\exp\{\int X_{i}(\omega)f(\omega)d\omega\}\), \(V(\mu_{i})=\mu_{i}\), \(g^{\prime}(\mu_{i})=1/V(\mu_{i})\) and \(W_{ii}=V(\mu_{i})\).
In the second step, we solve the penalized problem by applying the ADMM updates (12)-(14) until convergence, simply replacing \(\mathbf{X}\) with \(\tilde{\mathbf{X}}^{t}\) and \(\mathbf{y}\) with \(\tilde{\mathbf{y}}^{t}\), thus obtaining \(\mathbf{\alpha}^{t+1}\). The two steps are repeated until some stopping criterion is met and the final estimator is obtained.
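As an illustration, the Fisher scoring quantities for a Bernoulli response with logit link (so that \(W=V\) and \(g'(\mu)=1/V(\mu)\)) can be formed as follows. This is a minimal numpy sketch, assuming the fitted probabilities stay away from \(0\) and \(1\); all names are illustrative:

```python
import numpy as np

def reweighted_data_logistic(X, y, alpha):
    # Working response y~ = W^{1/2}(X a + V^{-1}(y - mu)) and
    # working design X~ = W^{1/2} X, as in the first step above.
    mu = 1.0 / (1.0 + np.exp(-(X @ alpha)))  # expit of the linear predictor
    V = mu * (1.0 - mu)                      # variance function, here also W
    w_sqrt = np.sqrt(V)
    y_tilde = w_sqrt * (X @ alpha + (y - mu) / V)
    X_tilde = w_sqrt[:, None] * X
    return y_tilde, X_tilde
```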
Lastly, note that it is possible to use the same machinery presented in this section for the multiple penalty approach too, by stacking the two matrices \(\mathbf{D}^{(k+1)}\) and \(\mathbf{D}^{(\ell+1)}\) to form \(\tilde{\mathbf{D}}\), which will replace the difference matrix in the ADMM updates.
## 5 Simulation study
In this section, we assess the performance of the proposed methods by means of simulations. We first generate a sample of functional data \(\{X_{i}(\omega)\}_{i=1}^{n}\) from a B-spline basis with 10 equispaced internal knots, drawing each coefficient from a standard normal distribution. The resulting functions are evaluated on an equispaced grid of \(p=100\) points in order to form the \(n\times p\) matrix \(\mathbf{X}\), and are then kept fixed in all simulation repetitions. We define several scenarios, each of which can be addressed with one of the methods previously described.
In scenario a), given the sample of functional data \(\{X_{i}(\omega)\}_{i=1}^{n}\) we generate a sample of scalar responses from a Gaussian distribution \(y_{i}\sim N(\mu_{i},\sigma^{2})\) where the expected value depends linearly on the functional covariate, i.e. \(\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega\). We set \(n=250\) and a signal-to-noise ratio equal to 4.
In scenario b), in addition to the functional data sample, we also consider a set of scalar covariates \(z_{1i},\ldots,z_{ri}\) for each observational unit. These covariates are generated from a standard Gaussian distribution and are independent of each other. Then, a sample of scalar responses is obtained from a Gaussian distribution \(y_{i}\sim N(\mu_{i},\sigma^{2})\) where the expected value depends linearly on the functional and scalar covariates, i.e. \(\mu_{i}=\int X_{i}(\omega)f(\omega)d\omega+\sum_{j=1}^{r}z_{ji}\gamma_{j}\). We set \(n=250\), \(r=5\), \(\gamma=(2,-1,1,0,0)\) and a signal-to-noise ratio equal to 4.

Figure 2: Estimated functional coefficient for \(f_{3}(\omega)\), with different combinations of \((\lambda_{1},\lambda_{2})\) in the mixed penalty case \(k=0,l=3\). The dashed-black lines correspond to the true function, while the solid-red lines correspond to estimates. From top to bottom, the estimated functional coefficient approaches a piecewise-constant function; from left to right, it approaches a cubic function.
In scenario c), given the functional data sample, we generate scalar responses from a Bernoulli distribution \(y_{i}\sim Bernoulli(\pi_{i})\) where \(g(\pi_{i})=\text{logit}\{\pi_{i}\}\) depends linearly on the functional covariate, i.e. \(\text{logit}\{\pi_{i}\}=\int X_{i}(\omega)f(\omega)d\omega\). We set \(n=250\).
We combine each described scenario with three different specifications of the unknown regression function \(f(\omega)\). In detail, \(f_{1}(\omega)\) is a piecewise cubic function in \([0,1]\) built from a cubic B-spline basis with 3 internal knots at \(0.2\), \(0.75\) and \(0.9\), \(f_{2}(\omega)\) is the classical Mexican hat function
\[f_{2}(\omega)=(1-\omega^{2})\text{exp}\{-\omega^{2}/2\}\quad\text{for $\omega \in[-5,5]$},\]
and \(f_{3}(\omega)\) is the same function with truncated peaks
\[f_{3}(\omega)=\begin{cases}f_{2}(\omega)&\text{if $f_{2}(\omega)\in[-0.3,0.5] $}\,,\\ 0.5&\text{if $f_{2}(\omega)>0.5$}\,,\\ -0.3&\text{if $f_{2}(\omega)<-0.3$}\,.\end{cases}\]
All functions are evaluated on the same equispaced grid of \(p=100\) points used to generate \(\{X_{i}(\omega)\}_{i=1}^{n}\).
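For reference, \(f_{2}\) and \(f_{3}\) can be generated with a few lines of numpy (\(f_{3}\) is just \(f_{2}\) clipped to \([-0.3,0.5]\)):

```python
import numpy as np

def f2(omega):
    # Mexican hat function on [-5, 5].
    return (1 - omega**2) * np.exp(-(omega**2) / 2)

def f3(omega):
    # f2 with its peaks truncated to [-0.3, 0.5].
    return np.clip(f2(omega), -0.3, 0.5)

omega = np.linspace(-5, 5, 100)  # equispaced grid of p = 100 points
```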
For all the \(B=100\) synthetic samples generated, we estimate the regression parameters with the trend filtering approach, penalizing the fourth derivative (_TF-4_), the first derivative (_TF-1_) and both of them (_MTF_). For comparison purposes, we also employ the spline method (_SPL_) outlined in Goldsmith et al. (2011), penalizing the second derivative of the function. The tuning parameters for all the methods have been selected using a separate validation set. In Table 1 we present, for our methods and the spline estimator, the value of the Integrated Mean
\begin{table}
\begin{tabular}{l l l l} \hline _Function_ & \(f_{1}(\omega)\) & \(f_{2}(\omega)\) & \(f_{3}(\omega)\) \\ \hline Scenario a) & & & \\ _TF-4_ & 0.335 (0.401) & 0.123 (0.173) & 0.265 (0.073) \\ _TF-1_ & 1.969 (1.821) & 2.120 (0.680) & 0.595 (0.339) \\ _MTF_ & 0.588 (0.514) & 0.297 (0.296) & 0.184 (0.081) \\ _SPL_ & 0.669 (0.666) & 0.207 (0.860) & 0.357 (0.405) \\ Scenario b) & & & \\ _TF-4_ & 0.382 (0.434) & 0.139 (0.189) & 0.269 (0.083) \\ _TF-1_ & 1.584 (0.335) & 2.035 (0.189) & 0.561 (0.069) \\ _MTF_ & 0.513 (0.395) & 0.302 (0.299) & 0.194 (0.083) \\ _SPL_ & 1.101 (1.014) & 0.221 (0.941) & 0.368 (0.477) \\ Scenario c) & & & \\ _TF-4_ & 2.051 (1.861) & 0.822 (0.586) & 0.680 (0.472) \\ _TF-1_ & 4.334 (2.180) & 2.705 (1.091) & 0.907 (0.363) \\ _MTF_ & 3.713 (2.386) & 1.165 (0.974) & 0.579 (0.298) \\ _SPL_ & 2.698 (1.122) & 0.908 (1.676) & 0.630 (0.366) \\ \hline \end{tabular}
\end{table}
Table 1: Average Integrated Mean Squared Error (MISE) and its standard error (in parentheses) over 100 repetitions for the estimation of three regression functions (details in the text). TF-4: Trend filtering with penalization on fourth derivative only; TF-1: Trend filtering with penalization on the first derivative only; MTF: Trend filtering with penalization on both fourth and first derivative; SPL: Penalized splines as in Goldsmith et al. (2011).
Squared Error (MISE) defined as
\[\text{MISE}(\widehat{f})=\int\{f(\omega)-\widehat{f}(\omega)\}^{2}d\omega,\]
evaluated on the finite grid \(\mathbf{\omega}\), averaged over all simulation repetitions, together with its standard error (in parentheses). The proposed approach shows superior performance in all the combinations of functions and scenarios considered, when compared to the spline methodology. In fact, this latter strategy is not well suited to situations where the regression function is spatially heterogeneous. Moreover, we observe that, among the different specifications of the trend filtering, the one penalizing the fourth derivative achieves the best results in estimating \(f_{1}(\omega)\) and \(f_{2}(\omega)\), in all considered scenarios. Unfortunately, penalizing the first derivative does not lead to satisfactory results, for two main reasons: the estimated regression function is not continuous, contrary to what is commonly assumed in functional data analysis, and the estimation error is large due to the inherent smoothness of the considered functions. However, adding the first derivative penalization to the plain trend filtering of order four leads to an improved performance when the regression function is particularly complex, as in the case of \(f_{3}(\omega)\). To elucidate the behaviour of double penalization in this scenario, in Figure 2 we graphically depict the unknown function and several estimates based on different values of the parameter \(\mathbf{\lambda}=(\lambda_{1},\lambda_{2})\). Starting from the upper left corner, where the impact of regularization is the lowest, we see that increasing \(\lambda_{1}\) while keeping \(\lambda_{2}\) fixed leads to almost piecewise constant solutions. By contrast, increasing \(\lambda_{2}\) while keeping \(\lambda_{1}\) fixed leads to almost piecewise cubic functions. However, since \(f_{3}(\omega)\) exhibits both features, a better reconstruction is obtained by combining the two penalties, as can be appreciated in the lower right corner of the figure. Lastly, note that this specification automatically includes the "marginal" models with only one of the two derivatives penalized.
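For completeness, on the discrete grid the MISE reported in Table 1 reduces to a simple quadrature; a minimal sketch (assuming numpy) is:

```python
import numpy as np

def mise(f_true, f_hat, omega):
    # Trapezoidal approximation of the integrated squared error on the grid.
    return np.trapz((f_true - f_hat) ** 2, omega)
```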
## 6 Applications to milk spectral data
In the following, the proposed method is applied to the first set of data introduced in Section 2. Following suggestions from the literature, prior to running the analyses, a variable aggregation step has been performed. In fact, it has been pointed out (see e.g., Murphy et al., 2010) that the aggregation of adjacent wavelengths implies almost negligible losses in terms of information and predictive abilities. This is coherent with the idea that, when dealing with spectra, the strong correlations among wavelengths allow one to work on data with slightly lower resolution while retaining most of the informative content. Accordingly, we aggregate groups of four adjacent wavelengths to reduce the overall computational cost, resulting in a dataset with \(n=622\) milk spectra and \(p=264\) wavelengths.
As briefly mentioned in Section 2, in the regression framework the proposed method has been used to predict the \(\kappa\)-casein content in the milk samples. The actual observed values for the response variable, expressed in grams per liter of milk, were collected using reverse-phase high performance liquid chromatography (HPLC), with an adaptation of the methodology considered in Visser et al. (1991). This technology is known to be expensive and time-consuming and is not considered suitable for modern large-scale applications; therefore, the calibration of statistical tools, used in conjunction with infrared spectroscopy, can be highly beneficial for research in the dairy framework and for the dairy production systems.
\(\kappa\)-casein has been selected as the milk trait to be predicted as it is one of the major components of milk, playing an essential role in cheese production systems, affecting both cheese yield and its characteristics (Wedholm et al., 2006). Moreover, \(\kappa\)-casein is also used as a food additive and it generally represents an important economic factor whose timely and precise prediction might increase the efficiency of the dairy production chain. For these reasons, milk casein content is nowadays also considered as one of the determinants to estimate the breeding values of the animals, inspiring research lines on genetic control and selective breeding (Bittante
and Cecchinato, 2013; Bittante et al., 2022). Exploratory analysis of this variable revealed a strongly asymmetric empirical distribution. For this reason, we considered the logarithm of \(\kappa\)-casein as the response variable for our model.
In this section, the model in (6) has been considered, with \(k=3\) and \(\ell=0\), thus penalizing the fourth and the first derivative, respectively. This choice can be justified by the assumption of a regression function that is smooth in some parts of the domain and flat in some others. Note, as mentioned in Section 5, that the marginal formulations with penalization only on the fourth or on the first derivative are included as limiting models. The hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\), which control the strength of the penalty, have been selected via a cross-validation scheme. In conjunction with the spectral variables, we also consider some scalar covariates, as per the extension of the model outlined in Section 3.1; in particular, information on the season when the milk samples have been collected, the milking time (morning or afternoon milking), the number of cows' parities and the number of days an animal has been milking in the current lactation (days in milk). This implies the presence of \(r=6\) additional scalar variables, with a total of 270 covariates. The estimated regression function, together with the inferential results obtained by means of the procedure introduced in Section 3.2, are visually reported in Figure 3, while the results concerning the scalar variables are shown in Table 2.
The method shows high prediction accuracy, with a cross-validated mean square prediction error equal to 0.04986. The result has been additionally compared with a PLS-based regression approach, unarguably representing the state-of-the-art when working with infrared spectroscopy data, which resulted in a cross-validated mean square prediction error of 0.05457.
As mentioned, our proposal offers other relevant strengths besides improved predictive performance. First, it respects and preserves the functional nature of the data, without mapping them into lower-dimensional latent spaces. Consequently, it provides richer insights into the studied phenomenon and, generally speaking, an easier interpretation of the results; this potentially sheds light on the chemical factors which are the main determinants of casein content in milk. In fact, a thorough analysis of the results depicted in Figure 3 highlights some interesting behaviours. First of all, the inferential routine outlined in Section 3.2 allows us to detect some spectral regions which are considered to be uninformative for the determination of the \(\kappa\)-casein content in the milk. For instance, our method considers as uninformative the spectral regions from 1619 cm\({}^{-1}\) to 1673 cm\({}^{-1}\) and from 3069 cm\({}^{-1}\) to 3663 cm\({}^{-1}\). In the literature, these highly noisy regions are designated as water absorption areas and are usually considered uninformative, thus being removed from the data prior to the analyses (Rutten et al., 2011). Nonetheless, the determination of these regions remains controversial, as it can be influenced by spectrometer-specific characteristics. Interestingly, our method marks as uninformative additional wavelengths adjacent to these highly noisy regions; this is coherent with practitioners' experience, which often points out that water may influence larger portions of the spectra than those suggested in the literature.
Focusing on the variables regarded as significant, it has to be noted that the proposal suggests that \(\kappa\)-casein can be predicted using a relatively small portion of the infrared spectrum. This is
\begin{table}
\begin{tabular}{l r r r} \hline Covariate & Lower (0.025) & Estimate & Upper (0.975) \\ \hline Intercept & 0.257 & **0.438** & 0.627 \\ Spring & -0.222 & **-0.133** & -0.038 \\ Summer & -0.075 & -0.019 & 0.026 \\ Milk time (morning) & -0.186 & **-0.129** & -0.074 \\ Parity (2) & -0.061 & -0.019 & 0.030 \\ Parity (3) & -0.043 & 0.004 & 0.052 \\ DIM & -0.001 & -0.000 & 0.000 \\ \hline \end{tabular}
\end{table}
Table 2: Estimated coefficients, and 95% confidence intervals, for the scalar covariates.
consistent with the results obtained in Frizzarin et al. (2021), where standard predictive tools displayed good performance while exploiting fewer wavelengths than those used to predict other milk proteins and technological traits. These indications are particularly important for the dairy industry, where there is an increasing demand for cheaper and potentially portable instruments scanning only relevant portions of the spectrum.
A proper interpretation of the specific peaks shown in the estimated regression function is complex since, for composite materials such as milk, chemical constituents have absorption peaks at different wavelengths which often overlap (Soyeurt et al., 2006). Nonetheless, some interesting behaviours can be highlighted. In general, we see a strong influence of the wavelengths in the so-called _fingerprint region_, below 1400 cm\({}^{-1}\) (Hewavitharana and van Brakel, 1997); this region is often regarded as highly informative for the analysis of proteinaceous material, as here the chemical bonds related to amide groups are formed (Van Der Ven et al., 2002). Coherently, \(\kappa\)-casein being a protein, our method flags as influential, with a positive effect on the \(\kappa\)-casein concentration, those wavelengths around 1550 cm\({}^{-1}\) and 1250 cm\({}^{-1}\) which are associated with amide II and amide III bands (De Marchi et al., 2009). In the region around 1100 cm\({}^{-1}\) and between 1200 cm\({}^{-1}\) and 1300 cm\({}^{-1}\), the peaks often depend on the phosphate bands; interestingly, phosphorus in milk occurs only in casein and not in whey proteins (Hewavitharana and van Brakel, 1997), and our method seems to be able to detect the importance of these areas.
Lastly, some insights can be obtained by inspecting the results for the scalar covariates. For example, milk samples collected in spring appear to have a significant decrease in terms of \(\kappa\)-casein concentration; knowing that cows calve in the first months of the year, this is consistent with the suggestions in Sorensen et al. (2003) where it is stated that casein concentration is usually lower after calving. Moreover, Quist et al. (2008); Forsback et al. (2010) showed that casein content is higher for afternoon and evening milkings, with respect to the morning ones; as can be seen in Table 2, this is confirmed by our results.

Figure 3: Top: Sample of 25 spectra, with dark (light) colors corresponding to low (high) values of \(\kappa\)-casein. Bottom: Estimated functional coefficient with \(95\%\) bootstrap bands. The regions where the bands do not contain zero are highlighted in red in the bottom panel.
Generally speaking, the devised procedure is capable of adequately predicting the content of \(\kappa\)-casein in milk while, at the same time, paving the way for a convenient interpretation of the results, which is often more cumbersome when different predictive tools are considered. Finally, it should be noted that the interpretation must be paired with, and might be enriched by, close cooperation with experts in dairy and animal science.
### Application to cow dietary treatments
In this section, one of the extensions discussed in Section 3.1 is considered to analyze the second dataset introduced in Section 2. More specifically, assuming that the response variable arises from a Bernoulli distribution, we employ the proposed strategy to predict the cows' dietary treatment, relying only on the spectral information. Consistently with the application in the previous section, a wavelength aggregation step has been performed. Moreover, consistently with Frizzarin et al. (2021), some outlying spectra have been removed. This results in a set of data with \(n=4261\) spectra, \(2893\) from pasture-fed cows and \(1368\) from TMR-fed ones, and \(p=264\) measured wavelengths.
Hereafter, model (5) has been employed; in particular we considered \(k=0\), therefore penalizing the first derivative, with the loss function adequately chosen to accommodate the binary nature of the response variable. The hyperparameter \(\lambda\) has been selected again by cross-validation. The application of our method produces highly satisfactory performance, resulting in a cross-validated misclassification error equal to \(2.98\%\). This result has been again compared with the one obtained by means of a PLS-based discriminant analysis strategy, which produced a similar cross-validated error equal to \(2.60\%\). This provides a strong indication of the suitability of our proposal also for classification purposes.
Note that the extension to model (5) of the procedure outlined in Section 3.2 is not trivial. Nevertheless, even if it is not possible to draw formal inferential conclusions on the estimated functional coefficient, a closer inspection of the result allows us to obtain relevant insights, which can be further explored and integrated with subject-matter knowledge. Firstly, the penalization on the first derivative allows one to obtain an estimated functional coefficient that is flat and equal to zero, or of negligible magnitude, over the highly noisy spectral regions from \(1604\) cm\({}^{-1}\) to \(1712\) cm\({}^{-1}\), and from \(3039\) cm\({}^{-1}\) to \(3810\) cm\({}^{-1}\), which strongly overlap with the water absorption areas, deemed irrelevant for discrimination. Small discrepancies with the results obtained in the previous section highlight how the proposed method could represent a completely data-driven and application-oriented way to detect uninformative spectral regions. Further inspection of the most relevant wavelengths leads to coherent indications, with respect to those available in the literature (see e.g., Frizzarin et al., 2021). For example, the _fingerprint region_ is again useful to discriminate between diets. Moreover, wavelengths between \(2854\) cm\({}^{-1}\) and \(2977\) cm\({}^{-1}\) seem to have a strong impact on the feeding regimen classification, thus agreeing with the suggestions in De Marchi et al. (2011); Lefevre and Subirade (2000), where it is highlighted that this region is often used to estimate the milk fatty acid composition, which is in turn known to be highly correlated with the dietary treatment.
Concluding, the proposed classification tool, while respecting and preserving the functional nature of the data, is competitive with state-of-the-art discriminative methods. Moreover, the inspection of the estimated coefficients allows us to gain relevant insights from a chemical standpoint, which might deserve further exploration by experts in the field.
## 7 Conclusions and future directions
In this work, we presented an adaptive functional framework for spectroscopy data, which stems from the trend filtering literature. The proposed regression method is characterized by high
flexibility and adaptivity with respect to the complexity of the underlying data generating process. In particular, the method is capable of capturing different degrees of regularity of the slope function, while accounting for the high dimensionality and strong correlation among the wavelengths thanks to the \(\ell_{1}\)-regularization. The estimation is supported by a fast optimization procedure that leverages the alternating direction method of multipliers framework, with a specialized acceleration scheme which provides superior convergence rates. The method is suitable for both Gaussian and non-Gaussian responses, and allows for the inclusion of scalar covariates, whose addition is often overlooked in the spectroscopy framework even if it might lead to better predictive performance. Moreover, the estimation strategy is enriched by a newly developed inferential procedure which allows one to draw formal conclusions on both the functional and the scalar components. These are obtained with a nonparametric bootstrap approach, i.e. the Wild bootstrap, which is particularly appropriate in high-dimensional regression models where the noise distribution is unknown.
The high adaptivity and the availability of inferential procedures are key features that enhance not only the interpretability of the results, but also their usability in real-world scenarios. Indeed, spectroscopy data present peculiar statistical challenges, in particular the intrinsic high dimensionality of the inputs and the strong correlation structures among the wavelengths. It is therefore paramount, from a practical perspective, to have a viable and interpretable tool that allows one to carry out inference on specific regions of the spectrum, in order to gain relevant knowledge on specific properties of the samples (e.g. \(\kappa\)-casein content) or to highlight differences due to external factors (e.g. dietary treatments).
The proposed methodology showed satisfactory performance in simulations and, more importantly, very promising results in the two spectroscopy-based data analyses. In terms of prediction accuracy, the results were either superior or comparable to the ones obtained by means of state-of-the-art techniques. In terms of inference, the flexibility of the model allowed a correct identification of the highly noisy water absorption areas, without the necessity of removing such portions of the data prior to the analysis. Moreover, in both the regression and the classification framework, informative peaks (e.g., those in the fingerprint region) were highlighted, providing interesting insights into which spectral regions affect certain properties of milk. The inclusion of covariates has also constituted a relevant advantage, resulting in interesting observations on the effect, for example, of seasonality. It should be stressed that, even if the proposed methodology has been applied to MIR spectroscopy data, it may be extended to other data sharing similar features.
A first direction for future research might be the development of inferential procedures for the non-Gaussian response cases. This would strengthen the interpretability of the proposed methods even further, for instance in the classification framework. Moreover, this represents a particularly stimulating open problem, which might be approached via appropriate generalizations of the nonparametric bootstrap procedure. Another possible extension could be the introduction of more complex penalties, which would extend the applicability of the method to a wider range of problems.
## Acknowledgements
This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland under grant number 16/RC/3835.
Attention to the characteristics of food products, such as nutritional content and traceability, has increased substantially in recent years. Consequently, the demand for developing modern tools to measure, analyze, and evaluate the quality and authenticity of food is growing. In this framework, vibrational spectroscopy provides an essential data collection technique. Indeed, methods such as Fourier near-infrared and mid-infrared spectroscopy are frequently used to analyze a variety of foods. However, existing statistical methods struggle with the challenges specific to spectral data, such as high dimensionality and strong correlation among wavelengths. Therefore, the definition of appropriate statistical procedures accounting for the characteristics of spectral data is of great importance. In this work, motivated by two dairy science applications, we proposed an adaptive functional regression framework for spectroscopy data. The method builds on the trend filtering literature, with different degrees of smoothness |
2309.10123 | On the generalization of the Kruskal-Szekeres coordinates: a global conformal charting of the Reissner-Nordstrom spacetime | The Kruskal-Szekeres coordinates construction for the Schwarzschild spacetime could be viewed geometrically as a squeezing of the $t$-line associated with the asymptotic observer into a single point, at the event horizon $r=2M$. Starting from this point, we extend the Kruskal charting to spacetimes with two horizons, in particular the Reissner-Nordstr\"om manifold, $\mathcal{M}_{RN}$. We develop a new method for constructing Kruskal-like coordinates and find two algebraically distinct classes charting $\mathcal{M}_{RN}$. We pedagogically illustrate our method by constructing two compact, conformal, and global coordinate systems labeled $\mathcal{GK_{I}}$ and $\mathcal{GK_{II}}$ for each class respectively. In both coordinates, the metric differentiability can be promoted to $C^\infty$. The conformal metric factor can be explicitly written in terms of the original $t$ and $r$ coordinates for both charts. | Ali Fawzi, Dejan Stojkovic | 2023-09-18T19:56:43 | http://arxiv.org/abs/2309.10123v2 | On the Generalization of the Kruskal-Szekeres Coordinates: A Global Conformal Charting of the Reissner-Nordstrom Spacetime
###### Abstract
The Kruskal-Szekeres coordinates construction for the Schwarzschild spacetime could be viewed geometrically as a squeezing of the \(t\)-line associated with the asymptotic observer into a single point, at the event horizon \(r=2M\). Starting from this point, we extend the Kruskal charting to spacetimes with two horizons, in particular the Reissner-Nordstrom manifold, \(\mathcal{M}_{RN}\). We develop a new method for constructing Kruskal-like coordinates and find two algebraically distinct classes charting \(\mathcal{M}_{RN}\). We pedagogically illustrate our method by constructing two compact, conformal, and global coordinate systems labeled \(\mathcal{GK}_{\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{II}}\) for each class respectively. In both coordinates, the metric differentiability can be promoted to \(C^{\infty}\). The conformal metric factor can be explicitly written in terms of the original \(t\) and \(r\) coordinates for both charts.
## I Introduction
Reissner-Nordstrom (RN) spacetime is a unique, static, spherically symmetric and asymptotically flat solution to the coupled set of Maxwell equations and Einstein Field equations. It describes the spacetime with the mass \(M\), measured in the asymptotic region, and a static spherical electric field sourced by the charge \(Q\) in the background, with the corresponding non-zero stress-energy tensor. Spherical-like coordinates, \((t,r,\theta,\phi)\), known as the Reissner-Nordstrom coordinates, are the natural coordinates in which to represent the metric tensor \(g_{\mu\nu}\)[1; 2; 3; 4; 5]. This chart can be assigned to an asymptotic observer, named Bob, at \(r\rightarrow\infty\), equipped with a clock measuring the time \(t\). The RN metric in units (\(c=G=1\)) can be written as
\[\mathrm{d}S_{RN}^{2}=-\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)\mathrm{ d}t^{2}+\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)^{-1}\,\mathrm{d}r^{2}+r^{2 }\left(\,\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right). \tag{1}\]
This coordinate system is ill-defined at two hypersurfaces (horizons). Similar to the Schwarzschild spacetime, the coordinate singularity \(g_{tt}=0\) locates the Killing horizons of the spacetime related to the Killing vector \(\partial_{t}\).
\[\begin{gathered} g_{tt}\left(r_{\pm}\right)=0,\\ r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}.\end{gathered} \tag{2}\]
For the non-extremal case, \(M>Q\), the Reissner-Nordstrom black hole has an inner \(r_{-}\) and outer \(r_{+}\) horizon, which makes its interior somewhat similar to the interior of the Kerr
spacetime [6; 7]. Further, in the region \(E_{-}=\{r|0<r<r_{-}\}\) the metric will have the same signature as in the region \(E_{+}=\{r|r_{+}<r<\infty\}\). Consequently, the physical point-like singularity at \(r=0\) is timelike in nature, in sharp contrast to the spacelike Schwarzschild singularity. The metric is dynamical in the trapped and anti-trapped regions \(E=\{r|r_{-}<r<r_{+}\}\), since the \(r\) coordinate is timelike due to the flip of the metric signature there [4].
One way to illustrate the incompleteness of this chart around the black hole horizons is to use Bob's clock to time his girlfriend Alice who is, for unclear reasons, freely falling towards the RN black hole's outer horizon. While Alice measures a finite amount of time, \(\Delta\tau\), using her own clock in her rest frame, Bob measures a significantly dilated duration of time, \(\Delta t\), by timing Alice's worldline. In other words, Bob will never see Alice crossing the outer event horizon in his lifetime. Therefore, better charts are needed there [8].
Finding new charting systems to penetrate null hypersurfaces in different spacetimes is a long-standing endeavor. Novikov coordinates [9], Lemaitre coordinates [10], Gullstrand-Painleve coordinates [11; 12], Eddington-Finkelstein coordinates [13], and Kruskal-Szekeres coordinates [14; 15; 16] are all examples of charts developed to overcome the shortcomings of the Schwarzschild coordinates near the horizon. Some of them have been generalized to Reissner-Nordstrom [3] and Kerr spacetimes [17; 18]. Most of them were constructed by studying timelike and null geodesic behavior around those black holes. However, here we will be adopting a more algebraic approach, based on a geometrical interpretation of the problem analogous to the one found in [8].
Large astrophysical black holes are expected to be electrically neutral, given that our universe is electrically neutral at large scales [19]. One exception may be small primordial black holes that did not live long enough to get neutralized. Another exception might be black holes charged under some other hidden \(U(1)\) gauge group different from electromagnetism [20]. In addition, even a small amount of charge on a large black hole could be important when we encounter certain phenomena such as cosmic rays. This provides enough motivation to study RN black holes not only out of academic interest but also from a phenomenological point of view [21]. On the other hand, studying the causal structure of the RN black hole, which is entirely different from the one associated with the Schwarzschild spacetime, is important since it shares some generic features with other types of black holes with two horizons, e.g. the Kerr black hole, which is much more relevant in astrophysical situations [7; 22].
Rotating and charged black holes share a similar interior structure, which makes constructing Penrose diagrams for the RN metric a cumbersome task on its own. For example, Klosch and Strobl managed to provide _non-conformal_ global coordinates for the extreme
and non-extreme RN spacetimes [23]. However, the attempts to construct conformal global coordinates have so far been based on patching two sets of the Kruskal-Szekeres coordinates \(\mathcal{K}_{\pm}\), where each set is well-behaved on one horizon \(r_{\pm}\) while it fails on the other one \(r_{\mp}\). This makes the regions of validity for the two charts \(\mathcal{E}_{+}=E_{+}\cup\{r|r=r_{+}\}\) and \(\mathcal{E}_{-}=E_{-}\cup\{r|r=r_{-}\}\), respectively. Switching between the two charts was the key to covering the whole RN manifold and constructing a global Penrose diagram [22; 24; 25]. Such patched Penrose diagrams, found in [4] for example, will still prove inconvenient if we want to study geodesics across the horizons [26]. To overcome this obstacle, we need to construct a global conformal coordinate system.
Recently in [27], Farshid proposed a _smoothing_ technique that can be used to provide a \(C^{2}\)-conformal global chart for the RN spacetime, and pointed out the possibility of generalizing the method to spherically symmetric spacetimes. The method was reported to be a generalization of the one used in [22], aiming to promote the differentiability of the map. One can also find Penrose diagrams constructed using this method in [28]. The central idea of this work was to find coordinates that extrapolate to each of the Kruskal-Szekeres coordinates \(\mathcal{K}_{\pm}\) when approaching the horizon located at \(r=r_{\pm}\). In addition, the smoothing was achieved through the use of bump functions [29; 30]. A similar technique was used by Schindler in [31; 32], designed to provide a regular global chart for a special class of spherically symmetric spacetimes with multiple horizons. The reader can also find a comprehensive summary of the theory of Penrose diagrams in chapter one of Schindler's doctoral thesis [33].
In this work, we will define a new procedure that can produce compact, conformal, and global (CCG) charts that are valid at both the inner and outer horizons of RN spacetime, and for which the metric tensor is \(C^{\infty}\). Using this procedure we will provide two CCG coordinate systems for the RN spacetime, which we label as type-I and type-II coordinates, based on their class. Moreover, coordinates provided in [27] could be thought of as coordinates of type-II. Our method makes no underlying assumptions about the nature of the spacetime, other than it possesses two horizons. Therefore, to facilitate future applications of this procedure, we will present here a detailed pedagogical approach.
The structure of this paper is as follows. In section (II.1), we begin by reformulating the core idea of the Kruskal chart, and then revisit the Kruskal charting of the Schwarzschild (II.2) and the RN (II.4) spacetimes. In section (III), the main procedure for constructing generalized Kruskal charts is presented. The type-I and type-II coordinates, as well as their relaxed versions for the RN spacetime, are given in (IV.1) and (IV.2). Finally, we discuss the outcome of the analysis and possible future work in section (V).
## II Preliminaries
### Kruskal-Szekeres coordinates
Kruskal-Szekeres coordinates represent a maximal CCG chart for the Schwarzschild metric and have been studied extensively in the literature [14; 15; 16; 34; 35]. Their global nature is attributed to two features: (i) they can cover the null sphere located at radius \(r=2M\), which Bob will fail to chart, and (ii) they are a maximal extension of the Schwarzschild chart, representing two copies of the Schwarzschild universe. The metric written in the spherical-like coordinates \(\left(t,r,\theta,\phi\right)\)1, known as the Schwarzschild coordinates, where \(t\in\mathbb{R}\), \(r\in\mathbb{R}_{+}\backslash\{0\}\), \(\theta\in\left(0,\pi\right)\), and \(\phi\in\left[0,2\pi\right)\), takes the well-known form
Footnote 1: Since examining the behavior and possible problems of the spherical coordinates as \(r\to\infty\) falls beyond the scope of this work, the angular dependence \(\left(\theta,\phi\right)\) will be neglected from now on for simplicity.
\[dS^{2}_{Sch}=\left(\frac{r-2M}{r}\right)\left\{-dt^{2}+dr_{*}^{2}\right\}= \frac{1}{r(r_{*})}\left(r(r_{*})-2M\right)dS^{2}_{Con}, \tag{3}\]
where \(dS^{2}_{Sch}\) and \(dS^{2}_{Con}\) stand for the Schwarzschild and conformal metric respectively. Here, \(r_{*}\) is defined2 as follows
Footnote 2: Usually, the constant of integration in defining the tortoise coordinate, \(r_{*}\), is chosen to be \(-2Mln(2M)\) in order to maintain dimensionless quantity inside the natural logarithm. Here, for simplicity we omit this step.
\[exp\left(r_{*}\right)=exp\left(r\right)\left|r-2M\right|^{2M}. \tag{4}\]
It is worth emphasizing that the map from \(r\)-coordinate to its tortoise version \(r_{*}\) is bijective and its inverse is differentiable on each of \(\mathcal{S}_{+}\) and \(\mathcal{S}_{-}\) separately. This is obviously due to the modulus included in the definition of these coordinates in equation (4).
A rigorous procedure would involve solving the Einstein Field Equations in Kruskal coordinates (which is the _top-down_ approach as in [8; 36]3) by means of null-casting and redefining the null coordinates. Since the Schwarzschild coordinates cover only the regions \(\mathcal{S}_{-}=\{r|0<r<2M\}\) and \(\mathcal{S}_{+}=\{r|2M<r<\infty\}\) of one universe of the Kruskal metric, trying to map the local chart to the global one (i.e. the _bottom-up_ approach) is not quite rigorous, because the map between the two charts, as well as the Jacobian, Hessian, and their higher-order versions, will be singular at the event horizon \(r=2M\)[37].
Footnote 3: The conformal factor in these references is written in terms of \(r\), however, it is more instructive to think of \(r(U,V)\) as a function of \(U\) and \(V\), and not as the areal coordinate \(r\).
Nevertheless, we seek a global chart in which the metric is at least \(C^{2}\) everywhere on the manifold in order to satisfy the coupled Field Equations which contain first and second derivatives of the metric. Thus, we can apply this bottom-up approach (as in most of the General Relativity textbooks [2; 38]) by studying the limit at \(r=2M\) and analytically continuing the metric there. At the end, the metric \(g_{\mu\nu}\) must be written explicitly in the Kruskal coordinates \(\left(T,R,\theta,\phi\right)\) only. In this paper, we will follow the bottom-up approach
to find the generalized Kruskal coordinates which chart the whole RN spacetime. Taking the Kruskal charting of the Schwarzschild black hole as our guide, we review the traditional derivation of the Kruskal coordinates.
### Construction of the Kruskal coordinates: Schwarzschild spacetime
We begin by mapping the Schwarzschild coordinates to intermediate null coordinates first, in particular the retarded (\(u\)) and advanced (\(v\)) time coordinates, defined as \(u=t-r_{*}\) and \(v=t+r_{*}\). To handle the coordinate singularity of the former at the horizon, \(r=2M\), the null freedom is used to map the latter set to another set of the null coordinates using \(u\to U\equiv h(u)\) and \(v\to V\equiv k(v)\). This gives
\[dS^{2}_{con}=-dudv=-\frac{dUdV}{\frac{dh}{du}\frac{dk}{dv}}\equiv-\frac{Q(U,V) dUdV}{r(U,V)-2M}, \tag{5}\]
where \(Q(U,V)\) is at least a \(C^{2}\) function on \(\mathcal{S}=\mathcal{S}_{+}\cup\mathcal{S}_{-}\cup\{r|r=2M\}\). This is achieved by employing the definition of \(r_{*}\). A sufficient coordinate transformation is given by
\[U\equiv\nu exp\left(\frac{-u}{4M}\right),\ \ \ V\equiv\nu exp\left(\frac{v}{4M} \right), \tag{6}\]
where
\[\nu=\begin{cases}+1&r>2M\\ -1&r<2M\end{cases}, \tag{7}\]
The signs \(\pm\) are included to achieve the maximal analytical extension of the metric. The product \(UV\) is positive in the regions II and III, and negative in the regions I and IV, following the convention given in [2]. The \(r\) coordinate is defined implicitly as
\[UV=exp\left(\frac{r}{2M}\right)(r-2M). \tag{8}\]
This equation can be explicitly solved for \(r\) by employing the multi-valued Lambert function \(W\)[39; 40],
\[r=2M\left[W\left(\frac{UV}{2Me}\right)+1\right]. \tag{9}\]
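The inversion can be checked numerically; a minimal sketch (assuming the sign convention of equation (8), scipy's principal branch \(W_{0}\), and units with \(M=1\)) is:

```python
import numpy as np
from scipy.special import lambertw

M = 1.0

def uv_from_r(r):
    # Equation (8): UV = exp(r/2M) (r - 2M).
    return np.exp(r / (2 * M)) * (r - 2 * M)

def r_from_uv(uv):
    # Equation (9); the principal branch W_0 covers 0 < r < infinity.
    return 2 * M * (np.real(lambertw(uv / (2 * M * np.e), k=0)) + 1)

for r in [0.5, 1.5, 3.0, 10.0]:   # points inside and outside r = 2M
    assert abs(r_from_uv(uv_from_r(r)) - r) < 1e-8
```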
Then, the Schwarzschild metric takes the following form in the new double null coordinates
\[dS^{2}_{Sch}=-\frac{16M^{2}e^{-\frac{r(U,V)}{2M}}}{r(U,V)}dUdV+r^{2}(U,V)d\Omega ^{2}. \tag{10}\]
Finally, the Kruskal coordinates \(T_{KS}\) and \(R_{KS}\) are related to the new null coordinates through the following transformations
\[\begin{split} U\equiv\frac{1}{2}\left(T_{KS}-R_{KS}\right),\\ V\equiv\frac{1}{2}\left(T_{KS}+R_{KS}\right).\end{split} \tag{11}\]
It is worth writing the final version of the metric in the Kruskal coordinates as
\[\begin{array}{c}dS_{Sch}^{2}=\frac{2Mexp\left(-W\left(T_{KS},R_{KS} \right)-1\right)}{W\left(T_{KS},R_{KS}\right)+1}(-dT_{KS}^{2}+dR_{KS}^{2})\\ \hskip 14.226378pt+4M^{2}\left(W\left(T_{KS},R_{KS}\right)+1\right)^{2}d\Omega ^{2}.\end{array} \tag{12}\]
As a cross-check, one could verify that the Einstein tensor \(G_{\mu\nu}\) corresponding to the Kruskal metric is zero everywhere on the Schwarzschild manifold, thus confirming that the stress-energy tensor \(T_{\mu\nu}\) is identically zero (as it must be for the Schwarzschild solution). This is true despite the fact that taking the derivatives of the metric with respect to the coordinates \((T,R)\) (using implicit differentiation with respect to \((t,r)\)) will be ill-defined at the event horizon. One could also verify that the maps between the Kruskal and the Schwarzschild chart are diffeomorphic in the regions \(S_{+}\) and \(S_{-}\)[37].
### A geometric picture of the Kruskal charting
The procedure for constructing Kruskal coordinates for the Schwarzschild spacetime outlined in the previous section becomes limited when applied to spacetimes with more than one horizon. To be able to resolve this obstacle, we re-interpret the main premise of the construction. If Bob lived in a four-dimensional Minkowski spacetime, his clock would be able to properly time the events taking place there globally. However, once the spacetime is only asymptotically Minkowskian, the chart will fail near the null hypersurfaces. _But what if we start with a chart in the flat spacetime which is ill-defined at the locations defining these null hypersurfaces?_ For example, we can define a "bad" chart \(\mathcal{Z}\) in the conformal spacetime with the metric \(g_{\mu\nu}^{Con}\), in which any given time duration \(\Delta\tau\) of Alice's trip to the horizon \(r=2M\) is mapped to \(\Delta\tilde{t}\to 0\).
Apparently, there is a family of these "_bad_" charts \(\mathcal{Z}\) that would be well defined on the physical spacetime, with the metric \(g_{\mu\nu}=\omega^{2}(x)g_{\mu\nu}^{Con}\), where \(\omega(x)\) is the conformal factor. They are only conditioned to contract the time interval \(\Delta\tau\) at the same rate as the dilation of time in Bob's frame. One can find an equivalent argument in [8] that we quote here "_A better coordinate system, one begins to believe, will take these two points at infinity and spread them out into a line in a new (\(r_{new},t_{new}\))-plane; and will squeeze the line (\(r=2M,t\) from \(-\infty\) to \(\infty\)) into a single point in the (\(r_{new},t_{new}\))-plane_".
As we will show later, applying this simple argument to spacetimes with more than one horizon is a tedious algebraic task. Mathematically, the fundamental premise of the construction is to find conformal coordinates \(\mathcal{Z}\) that generate poles of the same rank as the zeros of the conformal factor. Then, as the zeros and poles cancel out, the physical metric in \(\mathcal{Z}\) will be CCG, in line with this bottom-up approach. In the next subsections, we review the Kruskal charting of the RN spacetime, following the notation in [3; 22].
### Outer and inner Kruskal coordinates: Reissner-Nordstrom spacetime
One example where the standard Schwarzschild-like Kruskal charting fails to construct a CCG is the RN spacetime, whose metric in double null coordinates reads
\[\begin{split} dS_{RN}^{2}=\frac{\left(r-r_{+}\right)\left(r-r_{-} \right)}{r^{2}}\left\{-dt^{2}+dr_{*}^{2}\right\}=\frac{\left(r-r_{+}\right) \left(r-r_{-}\right)}{r^{2}}dS_{Con}^{2},\\ dS_{Con}^{2}=-dudv,\end{split} \tag{13}\]
where \(dS_{RN}^{2}\) stands for the RN metric, while \(\left(u,v\right)\) denote the double null coordinates constructed in the same manner as in the Schwarzschild case. The RN radial tortoise coordinate \(r_{*}\) is defined as
\[\begin{split} \exp\left(r_{*}\right)=\exp\left(r\right)\left|r-r_{ +}\right|^{\frac{\alpha_{+}}{2}}\left|r-r_{-}\right|^{-\frac{\alpha_{-}}{2}}, \\ \alpha_{+}\equiv\frac{2r_{+}^{2}}{r_{+}-r_{-}},\\ \alpha_{-}\equiv\frac{2r_{-}^{2}}{r_{+}-r_{-}},\end{split} \tag{14}\]
where \(\alpha_{+}\) and \(\alpha_{-}\) are the inverse surface gravities at \(r_{+}\) and \(r_{-}\), respectively; the relative minus sign in the exponent of \(\left|r-r_{-}\right|\) follows from the partial-fraction decomposition of \(dr_{*}/dr=r^{2}/\left[(r-r_{+})(r-r_{-})\right]\).
Similar to the Schwarzschild tortoise coordinate, \(r_{*}(r)\) is bijective and its inverse is differentiable on \(E_{+}\), \(E\), and \(E_{-}\) separately. It is, in principle, possible to solve explicitly for \(r\) by employing generalized Lambert functions \(\mathcal{W}\)[41; 42; 43; 44]. Since this is a tedious task in its own right, we confine our analysis to the main objective and leave this step for future work.
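As a consistency check on (14), the partial-fraction decomposition of \(dr_{*}/dr\) can be verified symbolically. The short sketch below (ours) confirms the identity and, in particular, the relative sign between the \(\alpha_{+}\) and \(\alpha_{-}\) terms:

```python
# Symbolic sketch: verify dr_*/dr = r^2/[(r-r_+)(r-r_-)]
#                = 1 + (alpha_+/2)/(r-r_+) - (alpha_-/2)/(r-r_-),
# which integrates to r_* = r + (alpha_+/2) ln|r-r_+| - (alpha_-/2) ln|r-r_-|.
import sympy as sp

r, rp, rm = sp.symbols("r r_p r_m", positive=True)
a_plus = 2 * rp**2 / (rp - rm)      # alpha_+ of Eq. (14)
a_minus = 2 * rm**2 / (rp - rm)     # alpha_-

lhs = r**2 / ((r - rp) * (r - rm))
rhs = 1 + (a_plus / 2) / (r - rp) - (a_minus / 2) / (r - rm)
assert sp.simplify(lhs - rhs) == 0
print("tortoise identity verified; alpha_± are the inverse surface gravities")
```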
By examining the tortoise coordinate definition, it is obvious that a zero at \(r_{\pm}\) is always coupled with a pole at \(r_{\mp}\), hence it is not straightforward to factor out a product of simple poles at \(r_{+}\) and \(r_{-}\) in the conformal metric. Nevertheless, it remains possible to construct charts that are regular at one horizon but ill-defined at the other. These coordinates are regular in the domains \(\mathcal{E}_{+}\) and \(\mathcal{E}_{-}\), respectively. The outer \(\mathcal{K}_{+}\) and inner \(\mathcal{K}_{-}\) Kruskal coordinates are related to the "\(+\)" null-chart \((\mathcal{U}_{+},\mathcal{V}_{+})\) and the "\(-\)" null-chart \((\mathcal{U}_{-},\mathcal{V}_{-})\), respectively, following the same definition as in (11). We will work with the following sign convention
\[\begin{split}\mathcal{U}_{+}&=\nu_{+}U_{+},\qquad\qquad\mathcal{V}_{+}=\nu_{+}V_{+},\\ U_{+}&=\exp\left(\frac{-u}{\alpha_{+}}\right),\qquad V_{+}=\exp\left(\frac{v}{\alpha_{+}}\right),\end{split} \tag{15}\]
where
\[\nu_{+}=\begin{cases}+1&r>r_{+}\\ -1&r<r_{+}\end{cases}, \tag{16}\]
to represent the maximal analytic extension of these coordinates. Then the \(t\) and \(r\) coordinates are characterized by the following curves in the \(\left(\mathcal{U}_{+},\mathcal{V}_{+}\right)\)-plane:
\[\begin{split}\mathcal{U}_{+}\mathcal{V}_{+}&=\exp \left(\frac{2r}{\alpha_{+}}\right)\left(r-r_{+}\right)\left|r-r_{-}\right|^{- \alpha},\\ &\frac{\mathcal{V}_{+}}{\mathcal{U}_{+}}=\pm\exp\left(\frac{2t}{ \alpha_{+}}\right).\end{split} \tag{17}\]
Similarly,
\[\begin{split}\mathcal{U}_{-}&=\nu_{-}U_{-},\qquad \qquad\mathcal{V}_{-}=\nu_{-}V_{-},\\ U_{-}&=\exp\left(\frac{u}{\alpha_{-}}\right),\qquad V_{-}=\exp\left(\frac{-v}{\alpha_{-}}\right),\end{split} \tag{18}\]
where
\[\nu_{-}=\begin{cases}+1&r>r_{-}\\ -1&r<r_{-}.\end{cases} \tag{19}\]
The \(t\) and \(r\) curves in the \(\left(\mathcal{U}_{-},\mathcal{V}_{-}\right)\)-plane are defined as
\[\begin{split}\mathcal{U}_{-}\mathcal{V}_{-}&=\exp \left(-\frac{2r}{\alpha_{-}}\right)\left(r-r_{-}\right)\left|r-r_{+}\right|^{ -\bar{\alpha}},\\ &\frac{\mathcal{V}_{-}}{\mathcal{U}_{-}}=\pm\exp\left(-\frac{2t} {\alpha_{-}}\right).\end{split} \tag{20}\]
Consequently, the metric in these "\(+\)" or "-" null-charts becomes
\[\begin{split} dS_{RN}^{2}&=-\alpha_{\pm}\frac{\left( r-r_{+}\right)\left(r-r_{-}\right)}{r^{2}}\frac{d\mathcal{U}_{\pm}d\mathcal{V}_{\pm}}{ \mathcal{U}_{\pm}\mathcal{V}_{\pm}}\\ &=-\alpha_{+}\frac{exp\left(-\frac{2r}{\alpha_{+}}\right)}{r^{2}} \left(r-r_{-}\right)^{1+\alpha}d\mathcal{U}_{+}d\mathcal{V}_{+}\\ &=-\alpha_{-}\frac{exp\left(\frac{2r}{\alpha_{-}}\right)}{r^{2}} \left(r_{+}-r\right)^{1+\bar{\alpha}}d\mathcal{U}_{-}d\mathcal{V}_{-},\end{split} \tag{21}\]
where5
Footnote 5: The extreme cases (\(Q=M\) and \(Q>M\)) of the RN metric are not considered here.
\[\begin{split}\alpha&\equiv\frac{\alpha_{-}}{\alpha _{+}}=\left(\frac{r_{-}}{r_{+}}\right)^{2}\rightarrow\ \ \ 0<\alpha<1,\\ \bar{\alpha}&\equiv\frac{\alpha_{+}}{\alpha_{-}}= \left(\frac{r_{+}}{r_{-}}\right)^{2}\rightarrow\ \ \ \ \ \ \ \ 1<\bar{\alpha}.\end{split} \tag{22}\]
It is easy to check that the metric in "\(+\)" ("\(-\)") null-coordinates is regular at the outer (inner) horizon \(r_{+}\) (\(r_{-}\)). However, the coordinates fail6 at the inner (outer) horizon \(r_{-}\) (\(r_{+}\)). Moreover, the metric in the "\(+\)" null coordinates is not asymptotically flat, in agreement with the induced Schwarzschild metric on the hypersurfaces of fixed \(\theta\) and \(\phi\) in equation (12), whose conformal factor approaches zero as \(r\rightarrow\infty\). Nevertheless, global Kruskal coordinates can be built by combining the two definitions in (15) and (18) (see, e.g., the work of Carter [24], Hamilton [22], Schindler [31], and Farshid [27]). Although they all managed to find a metric that is regular across the horizons, the metric is only \(C^{2}\) in the former case.
Footnote 6: \(dS_{RN}^{2}=0\) at \(r=r_{-}\) (\(r_{+}\)) in the “\(+\)” (“\(-\)”) coordinates according to equation (21).
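Although we do not pursue the generalized Lambert function here, the first relation in (17) can be inverted numerically by root finding, since \(\mathcal{U}_{+}\mathcal{V}_{+}(r)\) is monotonic on the exterior region. The sketch below (ours, with the illustrative choice \(r_{+}=2\), \(r_{-}=1\)) does this and also evaluates the "\(+\)" conformal factor of (21) near both horizons, illustrating footnote 6:

```python
# Numerical sketch; illustrative values r_+ = 2, r_- = 1 (our assumption).
import numpy as np
from scipy.optimize import brentq

rp, rm = 2.0, 1.0
ap = 2 * rp**2 / (rp - rm)          # alpha_+
am = 2 * rm**2 / (rp - rm)          # alpha_-
alpha = am / ap                      # Eq. (22)

def UVp(r):
    """Exterior branch of Eq. (17): U_+ V_+ as a function of r > r_+."""
    return np.exp(2 * r / ap) * (r - rp) * abs(r - rm) ** (-alpha)

def r_of_UVp(c):
    """Numerical stand-in for the generalized Lambert function."""
    return brentq(lambda r: UVp(r) - c, rp + 1e-12, 1e3)

r0 = 3.7
assert abs(r_of_UVp(UVp(r0)) - r0) < 1e-9

def F_plus(r):
    """Conformal factor of the "+" chart, second line of Eq. (21)."""
    return ap * np.exp(-2 * r / ap) * abs(r - rm) ** (1 + alpha) / r**2

print(F_plus(rp))          # finite and nonzero at the outer horizon
print(F_plus(rm + 1e-8))   # tends to zero at the inner horizon (footnote 6)
```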
## III Global conformal chart criteria
We start our analysis by studying the conditions needed for a valid conformal global chart. We want to map the double null coordinates \((u,v)\) to the global double null coordinates \((\tilde{u},\tilde{v})\), while still maintaining the light-cone invariance in the new coordinates \((\tilde{u},\tilde{v})\). The most direct way to achieve this is to use only the null freedom:
\[\begin{array}{ccc}\tilde{u}\equiv h(u)&\rightarrow&du=\frac{1}{ \frac{dh}{du}}d\tilde{u},\\ \tilde{v}\equiv k(v)&\rightarrow&dv=\frac{1}{\frac{dk}{dv}}d\tilde{v}.\end{array} \tag{23}\]
To construct a well-defined chart on the entire Reissner-Nordstrom manifold, we identify three distinct possibilities with reference to the singularity structure of the term \(\frac{dh}{du}\frac{dk}{dv}\), focusing on its behavior at \(r=r_{-}\) and \(r=r_{+}\). The three options are:
1. _Type-O_: \(\frac{dh}{du}\frac{dk}{dv}\) has a zero either at \(r=r_{-}\) or at \(r=r_{+}\). Regularity of the metric in the new \((\tilde{u},\tilde{v})\) coordinates is then achieved at \(r=r_{-}\) or at \(r=r_{+}\), but not simultaneously. The "\(\pm\)" null coordinates are examples of this case, although generating nontrivial coordinates out of \((U_{\pm},V_{\pm})\) is possible7. This condition can be formulated as \[\frac{dh}{du}\frac{dk}{dv}=\left(r-r_{\pm}\right)\zeta\left(r_{*},t\right)\] (24) Footnote 7: The transformations that lead to such coordinates are expected to be more complicated, as they are restricted by the requirement to leave the singularity structure invariant or to generate a decoupled zero at the other horizon.
2. _Type-I_: \(\frac{dh}{du}\frac{dk}{dv}\) has a product of zeros at \(r=r_{-}\) and \(r=r_{+}\). If we manage to factor out this product of zeros while keeping the associated poles decoupled, then we will have a conformal global coordinate for the RN spacetime. We will illustrate this case with an example in IV.1. This condition can be formulated as \[\frac{dh}{du}\frac{dk}{dv}=\left(r-r_{+}\right)\left(r-r_{-}\right)\gamma\left( r_{*},t\right)\] (25)
3. _Type-II_: a sum of decoupled simple zeros at \(r=r_{+}\) and \(r=r_{-}\), each coupled to a pole, and possibly zeros of constrained rank at \(r=r_{-}\) for the former and at \(r=r_{+}\) for the latter. In principle, this mixture of poles and zeros might be easier to find than _Type-I_; however, the metric is expected to take a more complicated form. We will illustrate this case with an example in IV.2. This condition can be formulated as \[\frac{dh}{du}\frac{dk}{dv}=(r-r_{+})M_{+}(r_{*},t)+(r-r_{-})M_{-}(r_{*},t)+ \beta(r_{*},t).\] (26)
The three differential equations listed above are sufficient to construct the desired singularity structure in each case, while the constraints are encoded within \(\zeta\), \(\gamma\), \(M_{\pm}\) and \(\beta\).
## IV Constructing CCGs for Reissner-Nordstrom Spacetime
### Type-I CCG Global Chart
As we mentioned before, just by looking at the definition of \(r_{*}\), there is no simple way of factorizing the zeros \((r-r_{+})\) or \((r-r_{-})\) without invoking poles at \(r_{-}\) or \(r_{+}\). Still, we can consider combining equations (20) and (17)
\[\mathcal{U}_{+}\mathcal{U}_{-}\mathcal{V}_{+}\mathcal{V}_{-}=\frac{(r-r_{+})}{ \left|r-r_{+}\right|^{\bar{\alpha}}}\frac{(r-r_{-})}{\left|r-r_{-}\right|^{\alpha}} \exp\left(\frac{2r}{\alpha_{+}}\right)\exp\left(-\frac{2r}{\alpha_{-}}\right). \tag{27}\]
This may give us a hint of how to find \((\tilde{u},\tilde{v})\) with the desired map fulfilling the singularity structure of type-I. For example, we can start with the following definition of \(\mathcal{GK}_{\mathcal{I}}\)
\[\frac{dh}{du}=\frac{\mu}{U_{+}^{-1}+U_{-}^{-1}},\hskip 28.452756pt\frac{dk}{ dv}=\frac{\mu}{V_{+}^{-1}+V_{-}^{-1}}, \tag{28}\]
where
\[\mu=\begin{cases}+1&r>r_{+}\ \text{or}\ r<r_{-}\\ -1&r_{-}<r<r_{+},\end{cases} \tag{29}\]
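Since \(U_{\pm}\) depend on \(u\) alone, \(dh/du\) in (28) is an ordinary function of \(u\) whose total integral over the real line is finite, which is the origin of the compactness discussed below. The following is a brief numerical sketch (ours; same illustrative values \(r_{+}=2\), \(r_{-}=1\), exterior sign \(\mu=+1\)). We note in passing that under the substitution \(x=e^{u/\alpha_{-}}\) this integral reduces to \(I_{1}\) below with \(q=1+\alpha\) -- our reading, not spelled out in the text:

```python
# Numerical sketch of the compactness of the Type-I chart (our test values).
import numpy as np
from scipy.integrate import quad

rp, rm = 2.0, 1.0
ap = 2 * rp**2 / (rp - rm)
am = 2 * rm**2 / (rp - rm)

# Eq. (28) along u: dh/du = 1/(U_+^{-1} + U_-^{-1}) = 1/(e^{u/a_+} + e^{-u/a_-})
dh_du = lambda u: 1.0 / (np.exp(u / ap) + np.exp(-u / am))

total, _ = quad(dh_du, -np.inf, np.inf)
print("total range of h:", total)          # finite  =>  compact chart

for U in (5, 20, 80):                      # partial integrals saturate
    print(U, quad(dh_du, -U, U)[0])
```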
Integrating the definitions in (28) reduces to evaluating the integral \(I_{1}\) given here,
\[I_{1}=\int\frac{1}{x^{q}+1}dx, \tag{30}\]
where \(q>1\). This integration has an upper and lower bound, hence the sign convention we use here will locate the inner horizon \(r_{-}\), outer horizon \(r_{+}\) and the asymptotically flat region \(r\rightarrow\infty\) according to the choice of the reference point. We choose that point to be the outer horizon \(u\rightarrow-\infty\) (\(v\rightarrow\infty\)). Accordingly, we have a monotonic map from \(u(v)\), defined in any of the regions \(E_{\pm}\) and \(E\); to \(\tilde{u}\) (\(\tilde{v}\)). Moreover, the map from \((t,r)\) to the later coordinates is also _globally monotonic_. Moreover, those coordinates have a built-in _compact_ domain and, hence, could be used directly to construct Penrose diagrams for the RN spacetime. The choice of signs \(\mu\) does not harm the continuity or differentiability of the map, still, it will result in a uniform signature of the Generalized Kruskal Coordinate of type-I. Nevertheless, it is sufficient if we have only a semi-positive conformal factor. Accordingly, the metric in the CCG type-I coordinates will take the following form.
\[\begin{split} dS_{RN}^{2}&=-\frac{1}{r^{2}}\left\{ \left|r-r_{-}\right|^{\alpha+1}\exp\left(-\frac{2r}{\alpha_{+}}\right)+\left|r -r_{+}\right|^{\bar{\alpha}+1}\exp\left(\frac{2r}{\alpha_{-}}\right)\right. \\ &\left.+2\cosh\left[t\left(\frac{1}{\alpha_{+}}+\frac{1}{\alpha_{ -}}\right)\right]\exp\left(r\left[\frac{-1}{\alpha_{+}}+\frac{1}{\alpha_{-}} \right]\right)\left|r-r_{+}\right|^{\frac{\bar{\alpha}+1}{2}}\left|r-r_{-}\right|^{ \frac{\alpha+1}{2}}\right\}d\tilde{u}d\tilde{v}. \end{split} \tag{31}\]
The metric possesses a conformal factor resembling the sum of the conformal factors of the \(\mathcal{K}_{\pm}\), in addition to a new time-dependent term that vanishes on both horizons. The conformal factor shown in equation (31) is a semi-positive function over the domain of the \(r\)-coordinate, hence the metric is well-behaved on both horizons and exhibits the following asymptotic behavior as \(r\to r_{+}\),
\[dS_{RN}^{2}(r\to r_{+})\rightarrow-\frac{exp\left(-\frac{2r}{\alpha_{+}} \right)}{r^{2}}\left(r-r_{-}\right)^{1+\alpha}d\tilde{u}d\tilde{v}, \tag{32}\]
Similarly, as \(r\to r_{-}\),
\[dS_{RN}^{2}(r\to r_{-})\rightarrow-\frac{\exp\left(\frac{2r}{\alpha_{-}} \right)}{r^{2}}\left(r_{+}-r\right)^{1+\bar{\alpha}}d\tilde{u}d\tilde{v}. \tag{33}\]
The integral \(I_{1}\) can be evaluated through the use of hypergeometric functions \({}_{2}F_{1}(a,b;c;x)\)[45; 46; 47] or, generically, by expanding the integrand in a series and integrating term by term.
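The hypergeometric route can be made concrete through the standard antiderivative \(\int_{0}^{X}dx/(1+x^{q})=X\,{}_{2}F_{1}\!\left(1,\tfrac{1}{q};1+\tfrac{1}{q};-X^{q}\right)\), which follows from the term-by-term integration just mentioned. The sketch below (ours) checks it against direct quadrature:

```python
# Numerical sketch: closed-form antiderivative of I_1 versus quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

def I1_quad(X, q):
    return quad(lambda x: 1.0 / (1.0 + x**q), 0.0, X)[0]

def I1_hyp(X, q):
    return X * hyp2f1(1.0, 1.0 / q, 1.0 + 1.0 / q, -X**q)

for q in (1.25, 2.0, 3.5):
    for X in (0.5, 1.0, 4.0):
        assert abs(I1_quad(X, q) - I1_hyp(X, q)) < 1e-7
print("2F1 antiderivative matches quadrature")
```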
However, this is not the end of the story. Similar to the Schwarzschild case, the Jacobian and its higher-order counterparts are undefined at the horizons, thus it is not reliable to take the derivatives of the conformal factor implicitly. One can also argue that two kinks should be present at \(r=r_{+}\) and \(r=r_{-}\), due to the absolute values (moduli) in the conformal factor as well as the monotonic map between \(r\) and \((\tilde{u},\tilde{v})\). However, we can get rid of these kinks, for instance, through the use of relaxation functions.
In short, another set of coordinates \((\tilde{u},\tilde{v})\) can be introduced which will inherit the properties mentioned above and possess a relaxed (well-behaved and smooth) conformal factor at \(r_{+}\) and \(r_{-}\). As a consequence, the metric will be guaranteed to be \(C^{\infty}\) in these new coordinates. We choose the function \(\tanh x\) to do this job. The relaxed coordinate transformation is
\[\begin{split}\frac{dh}{du}&=\frac{\mu}{\tanh\left[U_ {-}^{2}\right]U_{+}^{-1}+\tanh\left[U_{+}^{2}\right]U_{-}^{-1}},\\ \frac{dk}{dv}&=\frac{\mu}{\tanh\left[V_{-}^{2} \right]V_{+}^{-1}+\tanh\left[V_{+}^{2}\right]V_{-}^{-1}}.\end{split} \tag{34}\]
The metric now becomes
\[\begin{split} dS_{RN}^{2}&=-\frac{1}{r^{2}}\left\{ Q_{-}(r,t)\left|r-r_{-}\right|^{\alpha+1}\exp\left(-\frac{2r}{\alpha_{+}} \right)+Q_{+}(r,t)\left|r-r_{+}\right|^{\bar{\alpha}+1}\exp\left(\frac{2r}{ \alpha_{-}}\right)\right.\\ &\left.+\tilde{Q}(r,t)\exp\left(r\left[\frac{-1}{\alpha_{+}}+ \frac{1}{\alpha_{-}}\right]\right)\left|r-r_{+}\right|^{\frac{\bar{\alpha}+1}{2}} \left|r-r_{-}\right|^{\frac{\alpha+1}{2}}\right\}d\tilde{u}d\tilde{v},\end{split} \tag{35}\]
where \(Q_{-}\), \(Q_{+}\), and \(\tilde{Q}\) are defined as
\[\begin{split} Q_{-}(r,t)&=\tanh\left[\exp\left( \frac{2(t-r)}{\alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+} \right|^{\bar{\alpha}}}\right]\tanh\left[\exp\left(\frac{-2(t+r)}{\alpha_{-} }\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\bar{\alpha}}} \right]\\ Q_{+}(r,t)&=\tanh\left[\exp\left(\frac{2(r-t)}{\alpha_ {+}}\right)\frac{\left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\alpha}} \right]\tanh\left[\exp\left(\frac{2(t+r)}{\alpha_{+}}\right)\frac{\left|r-r_ {+}\right|}{\left|r-r_{-}\right|^{\alpha}}\right]\\ \tilde{Q}(r,t)&=Q_{1}(r,t)\exp\left(t\left(\frac{1}{ \alpha_{+}}+\frac{1}{\alpha_{-}}\right)\right)+Q_{2}(r,t)\exp\left(-t\left( \frac{1}{\alpha_{+}}+\frac{1}{\alpha_{-}}\right)\right)\\ Q_{1}(r,t)&=\tanh\left[\exp\left(\frac{2(t-r)}{ \alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\bar{ \alpha}}}\right]\tanh\left[\exp\left(\frac{2(t+r)}{\alpha_{+}}\right)\frac{ \left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\alpha}}\right]\\ Q_{2}(r,t)&=\tanh\left[\exp\left(\frac{-2(t+r)}{ \alpha_{-}}\right)\frac{\left|r-r_{-}\right|}{\left|r-r_{+}\right|^{\bar{ \alpha}}}\right]\tanh\left[\exp\left(\frac{2(r-t)}{\alpha_{+}}\right)\frac{ \left|r-r_{+}\right|}{\left|r-r_{-}\right|^{\alpha}}\right]\end{split} \tag{36}\]
This relaxed version of the conformal factor is guaranteed to be smooth and semi-positive everywhere in the coordinates \((\tilde{u},\tilde{v})\). Before we move on to construct the type-II coordinates \(\mathcal{GK_{I\!I}}\), there are three features of the metric worth commenting on. First, the metric is not asymptotically flat, in contrast to the \(\mathcal{K_{\pm}}\) coordinates, where the induced metric on the submanifold \(M_{2}=\mathcal{M}\backslash SO(3)\) is asymptotically vanishing; in \(\mathcal{GK}_{\mathcal{I}}\) coordinates the induced metric on \(M_{2}\) blows up. This is completely natural, as the coordinates are compact and hence the proper distance is invariant. Second, the \(\mathcal{GK}_{\mathcal{I}}\) coordinates cast the metric dynamically, since the conformal factor includes explicit time dependence both before and after the relaxation. This prevents \(r\) and \(t\) from being related to \((\tilde{u},\tilde{v})\) by simple transformations similar to (17), (20). Third, the integral \(I_{2}\) defining \((\tilde{u},\tilde{v})\) is given by
\[I_{2}=\int\frac{dx}{\tanh\left(x^{2}\right)x^{q+1}+\tanh\left(x^{-2q}\right)}, \tag{37}\]
The \(q>1\) cases can be evaluated numerically; however, analytical methods could still be helpful in studying the relation between \(\mathcal{K_{\pm}}\) and \(\mathcal{GK}_{\mathcal{I}}\) at any point. This could be achieved, for example, by employing a series expansion, as mentioned earlier. Moreover, if we manage to invert equations (34) to solve explicitly for the null coordinates in terms of \(\mathcal{GK}_{\mathcal{I}}\), then we could employ the generalized Lambert function to solve for \((t,r)\) explicitly as well. Such an expansion is expected to recover equations (20) and (17) near the horizons \(r=r_{-}\) and \(r=r_{+}\), respectively.
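A direct numerical evaluation of \(I_{2}\) is indeed straightforward; in the sketch below (ours) \(q\) is treated simply as a positive parameter. The integrand tends to \(1\) as \(x\to 0^{+}\) (since \(\tanh(x^{-2q})\to 1\)) and decays like \(x^{-(q+1)}\) at large \(x\), so the integral over \((0,\infty)\) is finite:

```python
# Numerical sketch: evaluate I_2 of Eq. (37) by quadrature.
import numpy as np
from scipy.integrate import quad

def I2(q):
    def integrand(x):
        x = np.float64(x)   # numpy semantics: overflow -> inf, tanh(inf) = 1
        return 1.0 / (np.tanh(x**2) * x**(q + 1) + np.tanh(x**(-2 * q)))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

for q in (1.25, 2.0, 3.5):
    print(q, I2(q))
```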
### Type-II Global Chart
While constructing \(\mathcal{GK}_{\mathcal{I}}\), a simple zero at each horizon \(r_{\pm}\) was coupled to one at the other horizon \(r_{\mp}\). This product of zeros had a semi-positive, regular amplitude everywhere, as shown in equation (31) or (35). For \(\mathcal{GK_{I\!I}}\), however, we will have a different singularity structure that serves the same purpose: a sum of two zeros, one at each horizon \(r_{\pm}\), each coupled to a semi-positive amplitude that is singular at the other horizon \(r_{\mp}\). In principle, this class of charts should contain families of coordinates that interpolate between the outer and inner Kruskal coordinates. In light of this statement, the chart given in [27] plausibly belongs to that class.
The conformal metric will have a simple pole at \(r=r_{\pm}\) coupled to \(M_{\pm}(r_{*},t)\), while \(\beta\left(r_{*},t\right)\) is effectively a residual term included for completeness. \(M_{\pm}(r_{*},t)\) and \(\beta(r_{*},t)\) satisfy the following constraints as \(r\to r_{\pm}\):
\[\begin{split} M_{\pm}\left(r_{*},t\right)&\to constant,\\ M_{\mp}\left(r_{*},t\right)&\to 0,\\ \beta\left(r_{*},t\right)&\to 0.\end{split} \tag{38}\]
Alternatively, we can restate the first constraint as: \(M_{\pm}\) must have no overall pole at \(r_{+}\) (\(r_{-}\)). Later in this analysis, we will learn that \(\beta\) is the key to finding the global conformal charts in this procedure for the type-II coordinates. Given equation (14), we can rewrite this in terms of the \(\mathcal{K}_{\pm}\) or the double null coordinates as follows:
\[\begin{split}\frac{dh}{du}\frac{dk}{dv}=\frac{exp\left(\frac{2(r_ {*}-r_{+})}{\alpha_{+}}\right)}{\left|r-r_{-}\right|^{\alpha}}M_{+}(r_{*},t)+ \frac{exp\left(-\frac{2(r_{*}-r_{-})}{\alpha_{-}}\right)}{\left|r-r_{+} \right|^{\tilde{\alpha}}}M_{-}(r_{*},t)+\beta(r_{*},t),\end{split} \tag{39}\]
Revisiting the condition in equation (38), \(M_{+}\) (\(M_{-}\)) must have zeros at \(r=r_{-}\) (\(r=r_{+}\)) of rank higher than \(\alpha\) (\(\bar{\alpha}\)), respectively. Searching for solutions of equation (39) is more fruitful if we can find functions \(M_{\pm}\) and \(\beta\) with \((r_{*}\pm t)\) dependence. Accordingly, the residual term \(\beta\) can be used to easily factorize the right-hand side of equation (39) into a product of \(u\)- and \(v\)-dependent functions. The task of generating a solution to equation (39) is not trivial, but finding \(M_{\pm}(u,v)\) and \(\beta(u,v)\) of this form greatly simplifies it. The easiest hint we can get from the form of that equation is to try to construct \(M_{+}\) (\(M_{-}\)) from \(\mathcal{K}_{+}\) (\(\mathcal{K}_{-}\)). Following this logic, by trial and error, we learn that if we define \(M_{\pm}\) as
\[M_{+}\equiv\frac{\mu\mu}{\left(1+U_{+}^{1+2\tilde{\alpha}}\right)\left(1+V_{+ }^{1+2\tilde{\alpha}}\right)},\hskip 28.452756ptM_{-}\equiv\frac{\mu\mu}{ \left(1+U_{-}^{2}\right)\left(1+V_{-}^{2}\right)}, \tag{40}\]
we can find \(\beta\) that can do the factorization for us
\[\beta\equiv\frac{\mu\mu U_{+}V_{-}}{\left(1+U_{+}^{1+2\tilde{\alpha}}\right) \left(1+V_{-}^{2}\right)}+\frac{\mu\mu U_{-}V_{+}}{\left(1+V_{+}^{1+2\tilde{ \alpha}}\right)\left(1+U_{-}^{2}\right)}. \tag{41}\]
This will leave us eventually with the following choices for \(\frac{dh}{du}\) and \(\frac{dk}{dv}\)
\[\begin{split}\frac{dh}{du}&=\mu\left[\frac{U_{+}}{1 +U_{+}^{1+2\tilde{\alpha}}}+\frac{U_{-}}{1+U_{-}^{2}}\right]\\ \frac{dk}{dv}&=\mu\left[\frac{V_{+}}{1+V_{+}^{1+2 \tilde{\alpha}}}+\frac{V_{-}}{1+V_{-}^{2}}\right]\end{split} \tag{42}\]
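That this choice indeed realizes the desired factorization can be checked symbolically: the product of the two factors in (42) expands exactly into \(U_{+}V_{+}M_{+}+U_{-}V_{-}M_{-}+\beta\), with \(M_{\pm}\) of (40) and \(\beta\) of (41). A short sketch (ours; the overall sign \(\mu\mu=1\) is dropped):

```python
# Symbolic sketch: verify the Type-II factorization of Eqs. (40)-(42).
import sympy as sp

Up, Um, Vp, Vm, ab = sp.symbols("U_p U_m V_p V_m abar", positive=True)

dh = Up / (1 + Up**(1 + 2 * ab)) + Um / (1 + Um**2)     # Eq. (42)
dk = Vp / (1 + Vp**(1 + 2 * ab)) + Vm / (1 + Vm**2)

Mp = 1 / ((1 + Up**(1 + 2 * ab)) * (1 + Vp**(1 + 2 * ab)))   # Eq. (40)
Mm = 1 / ((1 + Um**2) * (1 + Vm**2))
beta = (Up * Vm / ((1 + Up**(1 + 2 * ab)) * (1 + Vm**2))     # Eq. (41)
        + Um * Vp / ((1 + Vp**(1 + 2 * ab)) * (1 + Um**2)))

assert sp.simplify(dh * dk - (Up * Vp * Mp + Um * Vm * Mm + beta)) == 0
print("Type-II factorization verified")
```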
Once more, the choice we made for the \(\mathcal{GK_{I\!I}}\) coordinates is naturally compact, which means we can use those coordinates directly to build Penrose diagrams. Moreover, the integration in terms of \((u,v)\) is significantly simpler than the one used in the example we gave for \(\mathcal{GK}_{\mathcal{I}}\). Again, our choice of integration reference point is the outer horizon \(r_{+}\). We can now write the metric
\[dS_{RN}^{2}=-\frac{1}{r^{2}}\left\{A_{+}^{-1}(r,t)+A_{-}^{-1}(r,t)+A^{-1}(r,t)\right\}^{-1}d\tilde{u}d\tilde{v}, \tag{43}\]
where \(A_{\pm}(r,t)\) and \(A(r,t)\) are defined as follows
\[\begin{split} A_{+}(r,t)&\equiv\exp\left(-\frac{2r}{ \alpha_{+}}\right)\left|r-r_{-}\right|^{\alpha+1}+\exp\left(\frac{2r}{\alpha_{-} }\right)\left|r-r_{+}\right|^{1+2\bar{\alpha}}\left|r-r_{-}\right|^{-1}\\ &+2\cosh\left[t\left(\frac{1}{\alpha_{+}}+\frac{2}{\alpha_{-}} \right)\right]\exp\left(r\left[-\frac{1}{\alpha_{+}}+\frac{2}{\alpha_{-}}\right] \right)\left|r-r_{+}\right|^{\bar{\alpha}+\frac{1}{2}}\left|r-r_{-}\right|^{ \frac{\alpha}{2}}\\ A_{-}(r,t)&\equiv\exp\left(\frac{2r}{\alpha_{-}} \right)\left|r-r_{+}\right|^{\bar{\alpha}+1}+\exp\left(-\frac{2r}{\alpha_{-}} \right)\left|r-r_{-}\right|^{2}\left|r-r_{+}\right|^{-\bar{\alpha}+1}\\ &+2\cosh\left[\frac{2t}{\alpha_{-}}\right]\left|r-r_{-}\right| \left|r-r_{+}\right|\\ A(r,t)&\equiv 2\cosh\left[\kappa t\right]\exp(-\bar{ \kappa}r)\left|r-r_{+}\right|^{\frac{1+\bar{\alpha}}{2}}\left|r-r_{-}\right|^{ \frac{1+\bar{\alpha}}{2}}\\ &+2\cosh\left[\frac{-t}{\alpha_{-}}\right]\exp\left(\frac{3r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\frac{2+3\bar{\alpha}}{2}}\left|r-r_{ -}\right|^{\frac{1}{2}}\\ &+2\cosh\left[\frac{-3t}{\alpha_{-}}\right]\exp\left(\frac{r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\frac{2+4\bar{\alpha}}{2}}\left|r-r_{ -}\right|^{\frac{1}{2}},\end{split} \tag{44}\]
with the following limits
\[\begin{split} A_{+}\left(r\to r_{+},t\right)& \to\exp\left(-\frac{2r}{\alpha_{+}}\right)\left|r-r_{-}\right|^{ \alpha+1},\\ A_{-}\left(r\to r_{-},t\right)&\to\exp\left(\frac{2r}{ \alpha_{-}}\right)\left|r-r_{+}\right|^{\bar{\alpha}+1},\\ A_{\pm}\left(r\to r_{\mp},t\right)&\to\infty,\\ A\left(r\to r_{\pm},t\right)&\to\infty.\end{split} \tag{45}\]
Consequently, the metric will take the following asymptotic limit
\[\begin{split} dS_{RN}^{2}(r\to r_{+})&=-\frac{e^{ \frac{-2r}{\alpha_{+}}}|r-r_{-}|^{1+\alpha}}{r^{2}}d\tilde{u}d\tilde{v},\\ dS_{RN}^{2}(r\to r_{-})&=-\frac{e^{\frac{2r}{ \alpha_{-}}}|r-r_{+}|^{1+\bar{\alpha}}}{r^{2}}d\tilde{u}d\tilde{v}.\end{split} \tag{46}\]
## V Discussion and Conclusion
After reinterpreting the premises of the Kruskal charting of the Schwarzschild spacetime, we were able to provide a new approach to chart the Reissner-Nordstrom spacetime featuring two horizons. The technique proved to be employable in two distinct ways, resulting in two families of charting systems: conformal global _type-I_ and _type-II_ charts. In both cases, the asymptotic form of the metric approaches the form of the metric written in terms of _Type-O_ charts. We illustrated the success of the technique by constructing compact conformal global coordinates of type-I, \(\mathcal{GK}_{\mathcal{I}}\), and of type-II, \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\), for the RN spacetime. The price we pay for covering the whole two-horizon spacetime with only one chart is time dependence.
After the construction, one could conclude that the metric is only \(C^{1}\), since the map between \(r\) and each of the type-I \(\mathcal{GK}_{\mathcal{I}}\) and type-II \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) coordinates is monotonic and smooth, and
also due to the behavior of the conformal map written in terms of the \(t\) and \(r\) functions. Consequently, for both type-I and type-II charts, the metric becomes \(C^{\infty}\) only if we add an extra step to the procedure. This additional step aims to promote differentiability if and only if the metric has kinks, and it was applied by employing relaxation functions to modify the charts at the kinks' locations. As expected, it is complicated to write the generalized Kruskal coordinates \((\tilde{u},\tilde{v})\) explicitly in terms of the RN coordinates \((t,r)\), to which they are related through the defining relations above. However, we hinted that this could be achieved by utilizing the generalized Lambert function \(\mathcal{W}\), in a similar manner to the use of the Lambert function \(W\) in the Schwarzschild case.
We proved that the domain of \(r\) can be globally and monotonically mapped to \(\mathcal{GK}_{\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\) for any curve of constant \(t\). We analyzed some aspects of the integral equations relating the null coordinates to \(\mathcal{GK}_{\mathcal{I}}\) and \(\mathcal{GK}_{\mathcal{I}\mathcal{I}}\). Finally, we demonstrated that the smoothing technique developed in [27] can be thought of as a special case of the type-II family of coordinates. Since the Kerr and RN spacetimes share some similarities (i.e., two horizons), our technique might be applicable to the former case, given that there is no underlying assumption about the type of spacetime at hand.
###### Acknowledgements.
We wish to thank Mahmoud Mansour9 and Wei-Chen Lin10 for the useful discussions on various aspects of the analysis provided in this article. We are also grateful to Sam Powers for many valuable comments on the draft. D.S. is partially supported by the US National Science Foundation, under Grants No. PHY-2014021 and PHY-2310363.
Footnote 9: mansour@iis.u-tokyo.jp.ac
Footnote 10: ArchenLin@gmail.com
| The Kruskal-Szekeres coordinate construction for Schwarzschild spacetime can be interpreted geometrically as compressing the asymptotic observer's \(t\)-axis onto the single point of the event horizon \(r=2M\). Starting from this point, we extend Kruskal coordinates to spacetimes with two event horizons, in particular the Reissner-Nordstr\"om manifold \(\mathcal{M}_{RN}\). We develop a new method of constructing Kruskal coordinates and two algebraically distinct classes of Kruskal-like coordinates. To demonstrate the method pedagogically, we construct, for each class, compact, conformal, global coordinate systems \(\mathcal{GK_I}\) and \(\mathcal{GK_{II}}\). In these coordinate systems, the differentiability of the metric is extended to \(C^\infty\) |
2309.07339 | Efficient quantum recurrent reinforcement learning via quantum reservoir
computing | Quantum reinforcement learning (QRL) has emerged as a framework to solve
sequential decision-making tasks, showcasing empirical quantum advantages. A
notable development is through quantum recurrent neural networks (QRNNs) for
memory-intensive tasks such as partially observable environments. However, QRL
models incorporating QRNN encounter challenges such as inefficient training of
QRL with QRNN, given that the computation of gradients in QRNN is both
computationally expensive and time-consuming. This work presents a novel
approach to address this challenge by constructing QRL agents utilizing
QRNN-based reservoirs, specifically employing quantum long short-term memory
(QLSTM). QLSTM parameters are randomly initialized and fixed without training.
The model is trained using the asynchronous advantage actor-critic (A3C)
algorithm. Through numerical simulations, we validate the efficacy of our
QLSTM-Reservoir RL framework. Its performance is assessed on standard
benchmarks, demonstrating comparable results to a fully trained QLSTM RL model
with identical architecture and training settings. | Samuel Yen-Chi Chen | 2023-09-13T22:18:38 | http://arxiv.org/abs/2309.07339v1 | # Efficient Quantum Recurrent Reinforcement Learning via Quantum Reservoir Computing
###### Abstract
Quantum reinforcement learning (QRL) has emerged as a framework to solve sequential decision-making tasks, showcasing empirical quantum advantages. A notable development is through quantum recurrent neural networks (QRNNs) for memory-intensive tasks such as partially observable environments. However, QRL models incorporating QRNN encounter challenges such as inefficient training of QRL with QRNN, given that the computation of gradients in QRNN is both computationally expensive and time-consuming. This work presents a novel approach to address this challenge by constructing QRL agents utilizing QRNN-based reservoirs, specifically employing quantum long short-term memory (QLSTM). QLSTM parameters are randomly initialized and fixed without training. The model is trained using the asynchronous advantage actor-critic (A3C) algorithm. Through numerical simulations, we validate the efficacy of our QLSTM-Reservoir RL framework. Its performance is assessed on standard benchmarks, demonstrating comparable results to a fully trained QLSTM RL model with identical architecture and training settings.
Samuel Yen-Chi Chen†, Wells Fargo
Quantum machine learning, Reinforcement learning, Recurrent neural networks, Long short-term memory, Reservoir computing
Footnote †: The views expressed in this article are those of the authors and do not represent the views of Wells Fargo. This article is for informational purposes only. Nothing contained in this article should be construed as investment advice. Wells Fargo makes no express or implied warranties and expressly disclaims all legal, tax, and accounting implications related to this article.
## 1 Introduction
Quantum computing (QC) holds promise for enhanced performance in challenging computational tasks compared to classical counterparts. Yet, current quantum computers lack error correction, complicating deep quantum circuit implementation. These noisy intermediate-scale quantum (NISQ) devices [1] require specialized quantum circuit designs to fully exploit their potential advantages. A recent hybrid quantum-classical computing approach [2] leverages both realms, with quantum computers handling advantageous tasks while classical counterparts manage tasks like gradient calculations. Known as _variational quantum algorithms_, these methods have excelled in specific machine learning (ML) tasks. Reinforcement learning (RL), a subset of ML concerned with sequential decision making, has achieved remarkable success through deep neural networks in complex tasks [3]. In contrast, the nascent field of quantum reinforcement learning (QRL) poses unexplored challenges. While most QRL approaches focus on variational quantum circuits (VQCs) without recurrence, a recent development introduces quantum recurrent neural networks (QRNNs) as seen in the work [4]. This innovation demonstrates promise in partially observable environments, outperforming classical models. The challenge of training QRNNs is their computationally expensive gradient calculation. To address this, we propose the QLSTM-RC-RL framework. By harnessing quantum long short-term memory (QLSTM) [5] and reservoir computing (RC) [6], we optimize QRL training. QLSTM operates as an untrained _reservoir_, with only the classical neural components before and after it undergoing training. The scheme is illustrated in Figure 1. We further accelerate the training through the use of asynchronous training developed in the work [7, 8]. Our numerical simulation shows that the proposed framework can reach performance comparable to fully trained counterparts when the model sizes are the same and under the same training setting.
Figure 1: **The hybrid quantum-classical framework for QLSTM-RC-RL.**
## 2 Related Work
Quantum Reinforcement Learning (QRL) traces its origins to Dong et al.'s work [9]. While traditionally requiring a quantum environment, recent VQC-based QRL advancements tackle classical settings. Chen et al. [10] initiated this, addressing discrete environments like Frozen-Lake and Cognitive-Radio. Later, Lockwood et al. [11] and Skolik et al. [12] expanded to continuous observation spaces, such as Cart-Pole. Chen et al. also introduced Quantum RNNs [4] for partially observable scenarios, broadening QRL's practicality in classical contexts. In addition to learning value functions like \(Q(s,a)\), recent QRL advances include policy function learning. Jerbi et al. [13] introduce quantum policy gradient RL using REINFORCE [14]. Hsiao et al. [15] enhance this with PPO and VQCs, showing quantum models with fewer parameters can surpass classical ones. This trend extends to various modified quantum policy gradient algorithms, including actor-critic [16] and SAC [17]. QRL applies to quantum control, architecture search [18, 19] and multi-agent settings [20, 21]. QRL optimization with evolutionary optimization is first studied in [22]. Asynchronous training of QRL is proposed in the work [23, 8]. Reservoir computing (RC) employing classical RNNs has undergone extensive research [6], whereas the utilization of quantum RNNs such as QLSTM as reservoirs to perform time-series modeling represents a recent development [24]. The idea is to use QLSTM's internal dynamics and hidden states as a rich, dynamic memory or context for processing sequential data. This study integrates QRNN-based RL from [4] with Quantum A3C from [8]. Additionally, we demonstrate the effectiveness of using randomly initialized QRNNs, like QLSTM, as reservoirs, achieving comparable performance to fully-trained models and reducing training time.
## 3 Reinforcement Learning
_Reinforcement learning_ (RL) involves an agent interacting with an environment \(\mathcal{E}\). At each time step \(t\), the agent observes a _state_\(s_{t}\), selects an _action_\(a_{t}\) from the action space \(\mathcal{A}\) according to its current _policy_\(\pi\), and receives a _reward_\(r_{t}\). The agent aims to maximize the expected return \(V^{\pi}(s)=\mathbb{E}\left[R_{t}|s_{t}=s\right]\), where \(R_{t}=\sum_{t^{\prime}=t}^{T}{\gamma^{t^{\prime}-t}r_{t^{\prime}}}\). It can also be defined using the action-value function \(Q^{\pi}(s,a)=\mathbb{E}[R_{t}|s_{t}=s,a_{t}=a]\), which represents the expected return for taking action \(a\) in state \(s\) under policy \(\pi\)[14]. Unlike value-based methods (e.g., \(Q\)-learning), _policy gradient_ methods optimize \(\pi(a|s;\theta)\) directly, updating parameters based on the expected return, e.g., via the REINFORCE algorithm [14]. In standard REINFORCE, \(\theta\) is updated via \(\nabla_{\theta}\log\pi(a_{t}|s_{t};\theta)R_{t}\), which can suffer from high variance. To reduce the variance, one subtracts a learned state-dependent baseline, typically \(V^{\pi}(s_{t})\). This yields \(\nabla_{\theta}\log\pi(a_{t}|s_{t};\theta)A(s_{t},a_{t})\), where \(A(s_{t},a_{t})=Q(s_{t},a_{t})-V^{\pi}(s_{t})\) is the _advantage_. This method is called advantage actor-critic (A2C), with the policy as the actor and the value function as the critic [14]. A3C (Asynchronous Advantage Actor-Critic) uses multiple concurrent actors for parallelized policy learning, improving stability and reducing memory needs, and the diverse states encountered enhance numerical stability. A3C's efficient use of actors makes it a popular choice in reinforcement learning [7], and it has been studied in quantum RL recently [8].
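For concreteness, the A2C update just described fits in a few lines; the following PyTorch sketch (our own illustrative code, not taken from the referenced works) shows the combined actor-critic loss, to which A3C adds asynchronous workers computing it in parallel:

```python
# Illustrative A2C loss sketch (our names); A3C runs it from many workers.
import torch

def a2c_loss(log_probs, values, returns, entropies=None, entropy_coef=0.01):
    advantages = returns - values                 # A(s_t, a_t) = R_t - V(s_t)
    policy_loss = -(log_probs * advantages.detach()).mean()   # actor term
    value_loss = advantages.pow(2).mean()                      # critic term
    loss = policy_loss + 0.5 * value_loss
    if entropies is not None:                     # optional exploration bonus
        loss = loss - entropy_coef * entropies.mean()
    return loss

# toy usage with dummy tensors
lp = torch.randn(5, requires_grad=True)
v = torch.randn(5, requires_grad=True)
R = torch.randn(5)
a2c_loss(lp, v, R).backward()
```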
## 4 Variational Quantum Circuits
Variational quantum circuits (VQC), also known as parameterized quantum circuits (PQC) in the literature, are a distinctive class of quantum circuits containing trainable parameters. These parameters are optimized using classical machine learning methods, which can be gradient-based or gradient-free. A VQC comprises three essential components. The _encoding_ block, denoted as \(U(\mathbf{x})\), transforms classical data \(\mathbf{x}\) into a quantum state. The _variational_ or _parameterized_ block, represented by \(V(\boldsymbol{\theta})\), contains learnable parameters \(\boldsymbol{\theta}\) optimized through gradient descent in this study. Finally, the _measurement_ phase outputs information by measuring a subset or all of the qubits, resulting in a classical bit string. Running the circuit once provides a bit string like "0,0,1,1." However, multiple circuit runs yield expectation values for each qubit. This paper specifically examines the Pauli-\(Z\) expectation values from VQC measurements. The mathematical expression of the VQC used in this work is \(\overrightarrow{\langle\hat{Z}\rangle}=\left(\langle\hat{Z}_{1}\rangle,\cdots,\langle\hat{Z}_{n}\rangle\right)\), where \(\langle\hat{Z}_{k}\rangle=\langle 0|U^{\dagger}(\mathbf{x})V^{\dagger}(\boldsymbol{\theta})\,\hat{Z}_{k}\,V(\boldsymbol{\theta})U(\mathbf{x})|0\rangle\).
VQCs offer several advantages, including enhanced resilience to quantum device noise [25, 26], which proves particularly valuable in the NISQ era [1]. Moreover, research has indicated that VQCs can exhibit greater expressiveness than classical neural networks [27, 28, 29] and can be trained effectively with smaller datasets [30]. Noteworthy applications of VQC in QML span classification [31, 32, 33], natural language processing [34, 35, 36], generative modeling [37] and sequence modeling [5, 38].
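As an illustration of the three components just described, the sketch below builds a small VQC in PennyLane with \(R_{y}/R_{z}\) angle encoding, CNOT entanglement, trainable general rotations, and Pauli-\(Z\) readout. The qubit count, layer count, and \(\arctan\) feature scaling are illustrative assumptions, not necessarily the exact circuit of this work:

```python
# Illustrative PennyLane sketch of a VQC (assumed architecture).
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(x, theta):
    # encoding block U(x)
    for w in range(n_qubits):
        qml.RY(np.arctan(x[w]), wires=w)
        qml.RZ(np.arctan(x[w] ** 2), wires=w)
    # variational block V(theta)
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
        for w in range(n_qubits):
            qml.Rot(*theta[layer, w], wires=w)
    # measurement: Pauli-Z expectation value on every qubit
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

theta = np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits, 3))
print(vqc(np.random.randn(n_qubits), theta))
```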
## 5 Methods
### QLSTM-Reservoir Computing
The QLSTM, depicted in Figure 2 and introduced by Chen et al. in [5], is a quantum adaptation of the LSTM [39]. It employs VQCs instead of classical neural networks and excels in both time-series and NLP tasks [5, 36, 40]. The VQC incorporated into the QLSTM, as depicted in Figure 3, has shown impressive performance in time-series modeling [5]. It encompasses data encoding via \(R_{y}\) and \(R_{z}\) rotations, a variational component involving CNOT gates for qubit entanglement, trainable unitary \(R\) gates, and quantum measurement. The original QLSTM proposal, applicable to time-series modeling [5] and QRL [4], involves time-consuming training of the VQC parameters. In this work, we instead adopt an approach using the QLSTM
as a reservoir to transform input data without the need for explicit VQC parameter training, as detailed in [24].
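In software terms, the reservoir idea amounts to freezing the quantum recurrent core while keeping the surrounding classical layers trainable. The hypothetical PyTorch sketch below illustrates this; the `qlstm` module and its interface are assumptions, and the layer dimensions are illustrative:

```python
# Hypothetical sketch: a "dressed" QLSTM with a frozen quantum reservoir.
import torch.nn as nn

class DressedQLSTMReservoir(nn.Module):
    def __init__(self, qlstm, in_dim=147, hid_dim=4, n_actions=6):
        super().__init__()
        self.pre = nn.Linear(in_dim, hid_dim)       # trainable preprocessing
        self.qlstm = qlstm                          # quantum recurrent core
        for p in self.qlstm.parameters():           # reservoir: freeze weights
            p.requires_grad = False
        self.actor = nn.Linear(hid_dim, n_actions)  # trainable output heads
        self.critic = nn.Linear(hid_dim, 1)

    def forward(self, x, hidden=None):
        h, hidden = self.qlstm(self.pre(x), hidden)
        return self.actor(h), self.critic(h), hidden
```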
### QLSTM-RC-RL
The proposed QLSTM-RC-RL framework includes a _dressed QLSTM model_ consisting of classical neural networks for preprocessing and postprocessing, with a QLSTM in between. We adopted the quantum asynchronous advantage actor-critic (QA3C) training method developed in the work [8]. Furthermore, we extend the original method to include the recurrent policy QLSTM. During asynchronous training, each local agent interacts with its own environment and stores the current trajectory in local memory. This trajectory is later used to calculate local gradients, which are then uploaded to the global shared model for updating.
## 6 Experiments
### Environment
In this study, we employ the MiniGrid-Empty environment, a widely utilized maze navigation scenario [41]. The primary objective for our QRL agent is to effectively generate appropriate action sequences based on the observations it receives at each time step, enabling it to traverse from the initial location to the designated destination, represented as the green box in Figure 4. Notably, the MiniGrid-Empty environment is characterized by a \(147\)-dimensional vector observation, denoted as \(s_{t}\). It offers an action space \(\mathcal{A}\) comprising six actions, namely _turn left_, _turn right_, _move forward_, _pick up an object_, _drop the object being carried_, and _toggle_. Of these actions, only the first three have practical consequences in this context, and the agent is expected to learn this distinction. Moreover, successful navigation to the goal rewards the agent with a score of \(1\), albeit subject to a penalty determined by the formula \(1-0.9\times(\textit{number of steps}/\textit{max steps allowed})\), with the maximum allowable steps set at \(4\times n\times n\), where \(n\) is the grid size [41]. Throughout our experimentation, we explore various configurations, encompassing different grid sizes and variations in the initial starting points.
### Hyperparameters
The hyperparameters for the proposed QLSTM-RC-RL are: Adam with learning rate \(1\times 10^{-4}\), \(\beta_{1}=0.92\), \(\beta_{2}=0.999\), model lookup steps \(L=5\) and discount factor \(\gamma=0.9\). The local agents/models calculate their own gradients every \(L\) steps (the length of the trajectory used during model updates) and update the model as described in Section 5.2. The number of parallel processes (local agents) is \(80\).
### Model Size
In our study, we employ hybrid QLSTM models composed of four key components: a classical NN for environmental observation preprocessing, a QLSTM that can be fully trained or initialized randomly and fixed in the RC scenario, and two classical NNs for processing QLSTM outputs to produce action logits and state values. In our study, we utilize an \(8\)-qubit VQC-based QLSTM model with input and hidden dimensions of \(4\). The internal state is \(8\)-dimensional. We explore QLSTM variations with \(1\), \(2\), and \(4\) VQC layers, as shown in the dashed box in Figure 3. All hybrid models share identical configurations for their classical neural networks. Specifically, the preprocessing NN consists of \(147\times 4+4=592\) parameters, the NN for action logits has \(4\times 6+6=30\) parameters, and the NN for state values comprises \(4\times 1+1=5\)
Figure 4: **The MiniGrid environments.** (a) - (c) are MiniGrid environments with fixed starting points and (d) - (f) are with random starting points (starting points shown in (d) - (f) are a set of examples).
Figure 3: **VQC architecture for QLSTM.** The VQC architecture here is inspired by the work [10]. The parameters \(\alpha,\beta,\gamma\) are not trained in QLSTM-RC settings.
Figure 2: **The QLSTM-Reservoir used in the proposed QA3C.** QLSTM is first proposed in the work [5]. In the proposed QLSTM-RC-RL framework, the VQC parameters are not trained. The input to the QLSTM is the concatenation \(v_{t}\) of the hidden state \(h_{t-1}\) from the previous time step and the current input vector \(x_{t}\) which is the processed observation from the environment.
parameters. The number of parameters of a QLSTM with \(n\) VQC layers is \(8\times 3\times 5\times n=120n\), in which the VQCs are \(8\)-qubit and each general rotation gate is parameterized by \(3\) parameters. There are \(5\) VQCs in a QLSTM, as shown in Figure 2.
### Results
**MiniGrid with fixed starting point** We first consider the setting in which the RL agent starts from a fixed point in the environment. In the MiniGrid environment, this is the upper-left corner of the maze, as described in Section 6.1 and Figure 4. The results are shown in Figure 5. We observe that, among the three environment settings tested, QLSTM and QLSTM-RC with 1, 2 or 4 VQC layers reach similar performance in MiniGrid-5x5. In MiniGrid-6x6, the fully-trained QLSTM with 4 VQC layers achieves the best performance. The other models still achieve good performance, close to the best one, and the QLSTM-RC performs very similarly to the fully-trained QLSTM. In the most difficult MiniGrid-8x8 case, only the fully-trained QLSTM with 4 VQC layers reaches the optimal performance. The QLSTM-RC with 2 VQC layers still learns, but slowly. The other model configurations struggle to learn good policies.
**MiniGrid with random starting point** We further consider the setting in which, in each episode, the RL agent starts from a random point in the environment. We provide a set of examples in Figure 4. The results are shown in Figure 6. We observe that, among the three environment settings tested, the fully-trained QLSTM with 4 VQC layers outperforms the other models in all three cases. An interesting result is that the QLSTM-RC with only one VQC layer still reaches performance very close to that of the best performing agent. The other agents, whether QLSTM or QLSTM-RC, perform very similarly. Overall, the performance of the QLSTM and QLSTM-RC agents in this environment is comparable or superior to that in the environments with a fixed starting point. A possible reason is that the agent can experience different situations more frequently: the agent may start from a location closer to the goal and achieve it with a positive reward. This is crucial since MiniGrid is a sparse-reward environment, and the agent may require a large number of trials to obtain a positive reward in the non-random environment.
## 7 Conclusions
In this paper, we have presented, for the first time, quantum recurrent neural network (QRNN)-based reservoir computing for RL. Specifically, we employ a hybrid QLSTM reservoir as the function approximator to realize quantum A3C. In the testing environments we consider, our proposed framework shows stability and average scores comparable to those of its fully-trained counterparts in most cases when the model sizes, model architectures and training hyperparameters are fixed. The proposed method paves a new way towards pursuing QRL with recurrence more efficiently.
Figure 5: **Results: QLSTM-RC-RL in MiniGrid-Empty environment with fixed starting point.**
Figure 6: **Results: QLSTM-RC-RL in MiniGrid-Empty environment with random starting point.** | Quantum reinforcement learning (QRL) has emerged as a framework for solving sequential decision-making tasks, demonstrating empirical quantum advantages. An important advance has come through quantum recurrent neural networks (QRNNs), addressing memory-intensive tasks such as partially observable environments. However, QRL models incorporating a QRNN face the challenge that gradient computation in the QRNN is computationally expensive and time-consuming. This work proposes constructing QRL agents using QRNN-based reservoirs, realized in particular with quantum long short-term memory (QLSTM). The QLSTM parameters are randomly initialized and do not need to be trained. The model is trained using the asynchronous advantage actor-critic (A3C) algorithm. Through numerical simulations, the effectiveness of the QLSTM-Reservoir RL framework
2309.10645 | Towards Energy-Aware Federated Traffic Prediction for Cellular Networks | Cellular traffic prediction is a crucial activity for optimizing networks in
fifth-generation (5G) networks and beyond, as accurate forecasting is essential
for intelligent network design, resource allocation and anomaly mitigation.
Although machine learning (ML) is a promising approach to effectively predict
network traffic, the centralization of massive data in a single data center
raises issues regarding confidentiality, privacy and data transfer demands. To
address these challenges, federated learning (FL) emerges as an appealing ML
training framework which offers high accurate predictions through parallel
distributed computations. However, the environmental impact of these methods is
often overlooked, which calls into question their sustainability. In this
paper, we address the trade-off between accuracy and energy consumption in FL
by proposing a novel sustainability indicator that allows assessing the
feasibility of ML models. Then, we comprehensively evaluate state-of-the-art
deep learning (DL) architectures in a federated scenario using real-world
measurements from base station (BS) sites in the area of Barcelona, Spain. Our
findings indicate that larger ML models achieve marginally improved performance
but have a significant environmental impact in terms of carbon footprint, which
make them impractical for real-world applications. | Vasileios Perifanis, Nikolaos Pavlidis, Selim F. Yilmaz, Francesc Wilhelmi, Elia Guerra, Marco Miozzo, Pavlos S. Efraimidis, Paolo Dini, Remous-Aris Koutsiamanis | 2023-09-19T14:28:09 | http://arxiv.org/abs/2309.10645v1 | # Towards Energy-Aware Federated Traffic Prediction for Cellular Networks
###### Abstract
Cellular traffic prediction is a crucial activity for optimizing networks in fifth-generation (5G) networks and beyond, as accurate forecasting is essential for intelligent network design, resource allocation and anomaly mitigation. Although machine learning (ML) is a promising approach to effectively predict network traffic, the centralization of massive data in a single data center raises issues regarding confidentiality, privacy and data transfer demands. To address these challenges, federated learning (FL) emerges as an appealing ML training framework which offers high accurate predictions through parallel distributed computations. However, the environmental impact of these methods is often overlooked, which calls into question their sustainability. In this paper, we address the trade-off between accuracy and energy consumption in FL by proposing a novel sustainability indicator that allows assessing the feasibility of ML models. Then, we comprehensively evaluate state-of-the-art deep learning (DL) architectures in a federated scenario using real-world measurements from base station (BS) sites in the area of Barcelona, Spain. Our findings indicate that larger ML models achieve marginally improved performance but have a significant environmental impact in terms of carbon footprint, which make them impractical for real-world applications.
5G/6G, Federated learning, Machine learning, Cellular traffic prediction, Sustainable AI
## I Introduction
The advent of fifth-generation (5G) networks has brought forth a plethora of increasingly communication-dependent applications [1], including autonomous driving [2], healthcare [3] and real-time recommender systems [4]. To address the increasing complexity faced by 5G communications and beyond, network traffic forecasting emerges as an essential tool for proactively managing and operating networks.
Machine learning (ML) holds a great potential to undertake the network traffic forecasting task, as it may offer real-time and highly accurate predictions [5, 6]. More specifically, Deep Learning (DL) techniques such as Long Short-Term Memory (LSTM) networks [7] and transformers [8, 9] have shown remarkable performance in cellular traffic prediction. However, deploying such DL algorithms often face limitations in aggregating data from diverse sources due to regulatory restrictions, high bandwidth requirements and business competitiveness issues [10, 11, 12]. These issues are particularly stressed in traditional centralized ML settings, which struggle to cope with massively distributed data due to privacy concerns and communication overheads.
As a result, edge computing and distributed ML have garnered considerable attention in the recent years [13], as they improve privacy and address energy-related issues by reducing data movement and using hardware with limited resources [14]. Among several distributed ML mechanisms, Federated Learning (FL) [15] emerges as a popular solution for collaboratively training ML models without requiring raw data exchange. FL effectively tackles challenges related to multi-operator collaboration and multi-domain (geographical) problems within a single operator [16], which makes it an appealing tool to realize traffic forecasting in future communications networks. Furthermore, FL holds the promise of enhanced accuracy and reduced environmental impact since it avoids heavy communication overheads and additional energy costs, such as cooling energy, incurred in big data centers [17].
The foreseen benefits of FL are however threatened by the rapid increase in data volumes and the adoption of large-scale deep learning models, which demand substantial storage capacity and network bandwidth, while the accuracy improvements from these complex models often come at a significant environmental cost [18, 19, 20]. Several studies also demonstrate that, despite the advances of large models with respect to their accuracy, they do not significantly surpass simpler models in the domain of time series forecasting [8, 9].
In this paper, we assess the environmental impact of training DL models for network traffic forecasting in a federated setting. By examining the trade-off between accuracy and energy consumption, we aim to provide valuable insights and raise awareness regarding the environmental implications posed by the development and deployment of distributed AI technologies in communications systems.
Our main contributions are summarized as follows:
1. We introduce a generic FL framework, which we use to compare the state-of-the-art DL architectures for cellular traffic forecasting.
2. We present a novel indicator to assess the sustainability of ML models, which we use to showcase the trade-off between energy consumption and predictive accuracy. While we employ the proposed indicator specifically for cellular traffic forecasting, its applicability can be extended to numerous other applications.
3. We evaluate the performance of the considered DL models using real-world traffic measurements collected from cellular Base Stations (BSs), in the area of Barcelona (Spain) between 2018 and 2019.
The rest of this paper is organized as follows. Section II presents the related literature on federated cellular traffic prediction and sustainability assessment of ML models. Section III outlines the problem statement, describes the models used and introduces the sustainability indicator. Section IV presents and discusses the experimental results. Section V summarizes the findings and provides final remarks.
## II Related Work
### _Federated Learning for Cellular Traffic Prediction_
The number of 5G connections is estimated to reach 5 billion (\(10^{9}\)), by 2030 [21], hence traffic prediction will be of utmost importance for designing and optimizing next generation communications systems. The diverse and complex patterns of human mobility contribute to traffic variations among BSs, emphasizing the need for reliable predictive models. Traffic characteristics can exhibit significant changes between weekdays, weekends and social events [22], making traffic forecasting and infrastructure planning challenging.
Recent advances leverage DL approaches to predict traffic demands [11, 23]. More recently, research efforts have shifted towards the decentralization of ML operations, offering improvements regarding scalability through decentralized ML model training frameworks such as FL. FL can potentially boost the collaboration among network operators withholding private data which they are reluctant to share, but which if used would lead to powerful and robust traffic predictors [24].
In [24], the authors designed a client-shifted FL algorithm with a dual aggregation scheme using call detail records (CDR) collected in Italy between 2013 and 2014 [25]. Using the same dataset, Nan et al. [7] trained a federated LSTM model using a regional aggregation algorithm. Similarly, in [26], the authors used the aforementioned dataset and employed a federated meta-learning approach. Subramanya et al. [10] compared several models, including LSTM, CNN-LSTM and LSTM-LSTM for time-series forecasting in 5G networks using a private dataset from a commercial network operator. In [12], the authors presented several models for federated traffic prediction and demonstrated that advanced aggregation algorithms do not significantly outperform the FedAvg baseline [15], owing to the influence of non-IID data in cellular traffic data.
In contrast to previous works, which focused on the now obsolete CDR data (SMS, voice calling, Internet) [7, 24, 26], in this paper, we use a more recent dataset that comprises real measurements from Long Term Evolution (LTE) BSs. This dataset contains contemporary information accounting for the current usage of cellular networks. Moreover, we build upon the models used in [10] by incorporating state-of-the-art transformer-based models. Finally, we extend the work from [12] by designing a sustainability indicator for federated settings that considers both the training and inference phases, both critical for the adoption of ML solutions in communications systems.
### _Sustainability of Machine Learning_
Several studies have explored the sustainability of ML from various perspectives, offering insights and recommendations for reducing the environmental impact of ML algorithms [27]. Wu et al. [18] investigated the carbon footprint throughout the entire life-cycle of ML, showing that carbon emissions primarily originate from the training and inference stages. In [28], the authors measured the energy consumption associated to the training and inference of multi-layer perceptron (MLP) models, showing that significant energy saving of up to 50% could be achieved by reducing the number of hidden layers and units, with a minimal drop in accuracy ranging from 1-2%. Additionally, Savazzi et al. [17] showed that carbon emission reduction can be achieved using FL instead of centralized ML. Lastly, Guerra et al. [29] compared the environmental impact of FL, gossip FL and blockchain-based FL, raising several open issues regarding the environmental aspects of training models using distributed approaches.
In contrast to existing research on ML sustainability, we focus on the emissions of FL when applied to cellular traffic forecasting. Furthermore, we quantify the trade-off between ML model accuracy and emissions, especially with regard to large-sized models like transformers. Our research goes beyond MLPs [28] and focuses on several DL models applied in a real-world scenario. Ultimately, our goal is to contribute to the promotion of sustainable AI practices [30].
## III Methodology
In this section, we present the problem formulation for cellular traffic prediction and discuss the FL scenario. Additionally, we provide an overview of the ML models used.
### _Problem Statement and FL Formulation_
We consider a cellular network with \(N\) BSs connected to a common edge server. At every timestep \(t\), each BS \(k\) obtains a vector of \(d\) measurements, denoted by \(x_{t}^{(k)}\in\mathbb{R}^{d}\). At timestep \(T\), each BS \(k\) predicts its multivariate target measurements \(y_{T}^{(k)}\in\mathbb{R}^{d^{\prime}}\), where \(d^{\prime}\) denotes the number of measurements to be predicted, using a sliding window of last \(W\) local measurements \(X_{T-W:T-1}^{(k)}\in\mathbb{R}^{W\times d}\), where \(X_{T-W:T-1}^{(k)}=\begin{bmatrix}x_{T-W}^{(k)}&x_{T-W+1}^{(k)}&\cdots&x_{T-1}^ {(k)}\end{bmatrix}\). A common neural network model \(f(\cdot)\) is utilized for generating predictions, i.e., \(\hat{y}_{T}^{(k)}=f(X_{T-W:T-1}^{(k)})\), aiming at minimizing the
prediction error while considering the energy consumption. The specific input and output values in the considered scenario are presented in Section IV-A.
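As an illustration of this sliding-window formulation, the sketch below builds the window inputs and next-step targets from a per-BS measurement matrix; taking the targets to be the first \(d^{\prime}\) columns is our assumption, made purely for illustration.

```python
import numpy as np

def make_windows(series, W, d_out):
    """series: T x d array of per-BS measurements.

    Returns the sliding windows of the last W measurements paired with the
    next timestep's first d_out columns (the multivariate targets)."""
    X = np.stack([series[t - W:t] for t in range(W, len(series))])  # N x W x d
    y = series[W:, :d_out]                                          # N x d_out
    return X, y
```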
We utilize two widely adopted metrics to quantify prediction error for time series forecasting: _i)_ the normalized root mean squared error (\(\mathrm{NRMSE}\)) and _ii)_ the mean absolute error (\(\mathrm{MAE}\)). In our setting, given \(m\) different target observation samples, \(\mathrm{MAE}\) and \(\mathrm{NRMSE}\) for BS \(k\) are defined as follows:
\[\mathrm{MAE}^{(k)}=\frac{1}{md^{\prime}}\sum_{i=1}^{m}\lVert\hat{y}_{T+i}^{(k )}-y_{T+i}^{(k)}\rVert_{1}, \tag{1}\]
\[\mathrm{NRMSE}^{(k)}=\frac{1}{\overline{y}^{(k)}}\sqrt{\frac{\sum_{i=1}^{m} \lVert\hat{y}_{T+i}^{(k)}-y_{T+i}^{(k)}\rVert_{2}^{2}}{md^{\prime}}}, \tag{2}\]
where \(\overline{y}^{(k)}=\frac{1}{md^{\prime}}\sum_{i=1}^{m}\lVert y_{T+i}^{(k)}\rVert_{1}\) is the mean magnitude of the target components, which makes the normalizer a scalar. The goal is to minimize the average of \(\mathrm{NRMSE}^{(k)}\) and \(\mathrm{MAE}^{(k)}\), respectively, across all the BSs.
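For reference, Eqs. (1)-(2) translate directly into NumPy as follows, with \(\overline{y}^{(k)}\) computed as the mean magnitude of the target components:

```python
import numpy as np

def mae(y_hat, y):
    # Eq. (1): average absolute error over all m samples and d' targets.
    return np.abs(y_hat - y).mean()

def nrmse(y_hat, y):
    # Eq. (2): root mean squared error normalized by the mean target magnitude.
    return np.sqrt(((y_hat - y) ** 2).mean()) / np.abs(y).mean()
```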
To develop a common model that can predict measurements at each BS while benefiting from the training data of all the BSs, we employ an FL-based training strategy whereby the server orchestrates the model training process. In each federated round, the server distributes the current global model to the edge devices. Each device feeds its local dataset to the ML pipeline and locally trains the received model for a number of local epochs. After local training, the updated model parameters are sent back to the server. The server aggregates the received model weights to create the new global model. The entire process repeats for multiple rounds until the global model converges.
In our learning scenario, we consider that each BS of a given area is associated with a Local Neighborhood Server (LNS) that collects data within its coverage. The LNS has sufficient resources to perform model training and orchestrate resource allocation at the monitored BSs. In this sense, each LNS serves as an FL node, communicating with the central server and performing local training and inference operations. Figure 1 provides an overview of the overall envisioned federated traffic prediction framework. Algorithm 1 summarizes the federated learning operations using the FedAvg algorithm [15]. In our experimental study, the terms LNS and BS are equivalent and will be used interchangeably throughout the paper, as each LNS serves as the processing unit for each BS.
```
Require: Base stations \(BSs=\{BS_{1},BS_{2},...,BS_{n}\}\). \(\mathcal{R}\) is the number of federated rounds, \(E\) is the number of local epochs, \(B\) is the batch size, \(\eta\) is the learning rate and \(\nabla\mathcal{L}\) is the gradient of the optimization objective.
Ensure: Model weights \(w\).
1: Initialize \(w_{0}\).
2:for each round \(r=1,2,...,\mathcal{R}\)do
3:\(\{BS_{r}\}\leftarrow\) select round participants from \(BSs\) at random without replacement.
4: Transmit global model \(w_{r-1}\) to LNSs that monitor each base station \(k\in\{BS_{r}\}\)
5:for each base station \(k\in\{BS_{r}\}\) in parallel do
6:\(w_{r}^{k}\leftarrow\) LocalTraining(\(k\), \(w_{r-1}\))
7:endfor
8:\(n_{r}\leftarrow\sum_{k\in\{BS_{r}\}}n_{k}\)
9:\(w_{r}\leftarrow\sum_{k\in\{BS_{r}\}}\frac{n_{k}}{n_{r}}w_{r}^{k}\)
10:endfor
11:functionLocalTraining(\(k,w\))\(\triangleright\) run on LNS monitoring BS \(k\).
12:\(\mathcal{B}\leftarrow\) split local dataset into batches of size \(B\).
13:for each local epoch \(e=1,2,...,E\)do
14:for batch \(b\in\mathcal{B}\)do
15:\(w\gets w-\eta\nabla\mathcal{L}(w;b)\)
16:endfor
17:endfor
18:return\(w\) to server.
19:endfunction
```
**Algorithm 1** Federated Learning for Cellular Traffic Forecasting with the FedAvg Algorithm.
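For concreteness, the server-side aggregation step of Algorithm 1 (lines 8-9) can be sketched in a few lines of Python; representing model parameters as lists of NumPy arrays is an illustrative assumption.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client models (Algorithm 1, lines 8-9).

    client_weights: list of per-client parameter lists (NumPy arrays).
    client_sizes:   number of local training samples n_k per client.
    """
    n_total = float(sum(client_sizes))
    return [
        sum((n_k / n_total) * w[i] for w, n_k in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```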
We develop a generic training methodology for accurate and energy-aware prediction of cellular traffic at each LNS. To achieve this, we assess different time series prediction models \(f(\cdot)\) and monitor their associated energy consumption.
### _Machine Learning Models_
To explore the trade-off between predictive accuracy and energy consumption, we adapt state-of-the-art DL models to the federated setting. We start with a vanilla LSTM as a baseline and then train encoder-decoder architectures as in [10]. In addition, we explore additional models by integrating three transformer-based architectures, which have been widely adopted in various domains including time series forecasting [8, 9]. In particular, the following models are utilized:
1. **LSTM:** The input series are fed into a single LSTM layer with 128 hidden units. The last output sequence produced by the LSTM layer is forwarded to a fully-connected layer of 128 units. The resulting hidden representation is then passed to the output layer to obtain the prediction (a minimal Keras sketch of this baseline is given after this list).

Fig. 1: Federated learning-based traffic prediction in cellular networks.
2. **CNN-LSTM:** The input series are passed through two one-dimensional convolutional layers with 32 channels and a kernel size of 1. The output of the CNN model is then processed by a single LSTM layer of 128 units and further processed through a fully-connected layer as for the previous model.
3. **LSTM-LSTM:** This architecture comprises two models, the LSTM encoder and the LSTM decoder. The input series are fed into the LSTM encoder with 128 units, which generates interpreted sequences that capture dependencies from the input domain and hidden representations. These encoded sequences are then forwarded to the LSTM decoder with 128 units, followed by a fully-connected layer of 128 units.
4. **BasicTransformer:** This architecture contains the encoder part of a transformer [31]. The input series are passed to a fully-connected layer of 128 units. The resulting hidden representation is then fed into 2 transformer blocks. Each transformer block applies multi-head attention using 8 heads, capturing complex dependencies throughout the input sequences. The output of the attention mechanism is normalized and further forwarded to a feed-forward neural network (FFNN) with 2 hidden layers, each comprising 128 units. Finally, the output is normalized and a fully-connected layer outputs a prediction.
5. **Transformer:** This architecture extends the previous model by including a decoder sub-model. In this case, the output of the encoder and specifically, the last hidden representation, is fed into a fully-connected layer with 128 units. After that, 2 transformer blocks apply multi-head attention to the sum of the hidden representation obtained from the fully-connected layer and the entire output of the encoder. Transformer blocks are identical to the ones from the BasicTransformer. Finally, a fully-connected layer takes the last hidden representation obtained after applying multi-head attention and generates a prediction.
6. **Transformer-LSTM:** This architecture enhances the previous Transformer model by including an additional LSTM layer. More specifically, the input series are processed by the encoder, whose output is forwarded to an LSTM layer of 128 units. The last output sequence from the LSTM layer, along with the entire encoder's output, is fed to the decoder, which generates predictions.
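As referenced in model _1)_, a minimal Keras sketch of the vanilla LSTM baseline follows; the window length, feature count and target count match Section IV-A, while the activation and optimizer choices are our illustrative assumptions.

```python
import tensorflow as tf

def build_lstm(window=10, n_features=11, n_targets=5):
    # Vanilla LSTM baseline: LSTM(128) -> Dense(128) -> output layer.
    inputs = tf.keras.Input(shape=(window, n_features))
    h = tf.keras.layers.LSTM(128)(inputs)                # last output sequence
    h = tf.keras.layers.Dense(128, activation="relu")(h)
    outputs = tf.keras.layers.Dense(n_targets)(h)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")          # MAE is optimized (Sec. IV-A)
    return model
```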
### _Attention Mechanism_
Given the ability of attention mechanisms to focus on specific parts of the features and subsequently lead to higher predictive accuracy [8, 9, 31], we also integrate an attention mechanism into models _1)_ to _3)_. Note that transformer-based models _4)_ to _6)_ directly utilize a self-attention mechanism, so additional attention is not required. More precisely, in the LSTM model, attention is applied to the output sequences obtained from the LSTM, i.e., before the final fully-connected layer. In the CNN-LSTM model, the attention is integrated after the LSTM's operation. Lastly, in the LSTM-LSTM model, attention is incorporated into the encoder's output.
The attention mechanism considered in this work takes the final hidden states generated by an LSTM layer and repeats them for the specified window (\(W=10\) in our case). The repeated hidden states and the LSTM's output sequence are concatenated along the last dimension and then forwarded to a fully-connected layer, followed by a _tanh_ activation. The resulting transformation of the hidden states undergoes a dot product with a learnable vector \(v\). Finally, _softmax_ is applied to normalize the attention weights, which represent the importance of hidden features in the LSTM's output sequence. The entire process can be summarized as
\[\text{Attention(}h\text{, }out\text{)}=\text{softmax}\left(v\cdot e\right), \tag{3}\]
where
\[e=\text{tanh}\left(\text{FC}\left(\text{Concat}\left(\text{Repeat}\left(h \text{, timesteps}\right)\text{, }out\right)\right)\right), \tag{4}\]
where \(h\) and \(out\) denote the LSTM's final hidden state and output sequence, respectively, \(v\) is a learnable weight vector, FC denotes a fully-connected layer and \(\cdot\) represents matrix multiplication. In the remainder of this paper, models with the integrated attention mechanism are denoted with the suffix '-A', e.g., LSTM with attention is denoted as LSTM-A.
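A possible Keras realization of Eqs. (3)-(4) is sketched below. Implementing \(v\) as a bias-free dense layer, and consuming the normalized weights as a weighted sum over the LSTM output sequence, are our assumptions, since the text above specifies the weights but not how they are applied downstream.

```python
import tensorflow as tf

class LSTMAttention(tf.keras.layers.Layer):
    """Attention of Eqs. (3)-(4): repeat the final hidden state h over the
    window, concatenate with the LSTM output sequence, score with vector v."""

    def __init__(self, units=128):
        super().__init__()
        self.fc = tf.keras.layers.Dense(units, activation="tanh")  # FC + tanh, Eq. (4)
        self.v = tf.keras.layers.Dense(1, use_bias=False)          # learnable vector v

    def call(self, h, out):
        timesteps = tf.shape(out)[1]                                # W
        h_rep = tf.repeat(h[:, None, :], timesteps, axis=1)         # Repeat(h, timesteps)
        e = self.fc(tf.concat([h_rep, out], axis=-1))               # Concat + FC + tanh
        alpha = tf.nn.softmax(tf.squeeze(self.v(e), axis=-1), axis=-1)  # Eq. (3)
        return tf.reduce_sum(alpha[:, :, None] * out, axis=1)       # weighted context
```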
### _Sustainability Indicator_
The absence of standardized indicators for evaluating ML sustainability poses challenges to performing fair comparisons among different algorithms. To address this issue, we propose a novel metric that considers both predictive accuracy and environmental impact. More specifically, we consider the following aspects:
1. Accuracy on unseen data, measured by the prediction error.
2. Computational efficiency with respect to the energy consumed in Watt hours (Wh).
3. Communication efficiency, quantified by the data size to be transmitted in kilobytes (kB).
We select these aspects since accuracy indicates the training and inference reliability; computational efficiency is associated with energy consumption and environmental implications; communication efficiency relates to throughput and bandwidth requirements. These factors are crucial for assessing the sustainability of FL applications.
The sustainability indicator, denoted as \(S\), provides a comprehensive evaluation of a model's sustainability throughout the training and inference phases. The formula for calculating \(S\) is as follows:
\[S=S_{\text{Tr}}\times S_{\text{Inf}}, \tag{5}\]
where \(S_{\text{Tr}}\) and \(S_{\text{Inf}}\) represent the attained trade-off between accuracy and energy consumption during the _training_ and
_inference_ phases, respectively. The indicator for training can be calculated using the following equation:
\[S_{\text{Tr}}=(1+E_{\text{Val}})^{\alpha}\times(1+C_{\text{Tr}})^{\beta}\times(1+DS)^{\gamma}, \tag{6}\]

where \(E_{\text{Val}}\) is the validation error (in this work, we consider MAE), \(C_{\text{Tr}}\) represents the total energy consumed for model training in Wh and \(DS\) is the data size to be transmitted to the central server in kB. Under the FL setting, \(DS\) represents the model size that is transmitted per federated round, while in a centralized scenario, it represents the raw dataset size. Note that, in this work, we solely focus on FL. The exponents denote the importance of each value, with \(\alpha+\beta+\gamma=1\). It should be clarified that a simple weighted average was not used due to the lack of normalization across the different scales, which could lead to misleading outcomes. As for the rationale behind \(S_{\text{Tr}}\), a lower value indicates better computational and communication efficiency relative to accuracy. An ideal model has \(E_{\text{Val}}=C_{\text{Tr}}=DS=0\) and \(S_{\text{Tr}}=1\).
During inference, the following formula is utilized:
\[S_{\text{Inf}}=(1+E_{\text{Test}})^{\alpha^{\prime}}\times(1+C_{\text{Inf}})^{\beta^{\prime}}, \tag{7}\]

where \(E_{\text{Test}}\) is the error observed in unseen test data (e.g., based on MAE) and \(C_{\text{Inf}}\) is the energy consumed for predictions. The communication cost during the inference phase is not considered as each client holds its own model locally. Since the number of times that an FL client employs the model for generating predictions is specific to the implementation, we measure these values in terms of predictions per 1,000 samples. Note that, in the considered scenario, each BS uses the model every two minutes, resulting in 720 predictions per day. In this service, the operator is interested in knowing the predictions of the model's output variables to implement its network management optimization solutions. Similar to \(S_{\text{Tr}}\), \(\alpha^{\prime}+\beta^{\prime}=1\), and the lower the value of \(S_{\text{Inf}}\), the better the trade-off between computational efficiency and predictive accuracy.
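Putting Eqs. (5)-(7) together, the indicator can be computed directly as below, with the equal weighting used later in Section IV-D as the default.

```python
def s_train(e_val, c_tr, ds, alpha=1/3, beta=1/3, gamma=1/3):
    # Eq. (6): validation error, training energy (Wh), transmitted data (kB).
    return (1 + e_val) ** alpha * (1 + c_tr) ** beta * (1 + ds) ** gamma

def s_inference(e_test, c_inf, alpha=0.5, beta=0.5):
    # Eq. (7): test error and inference energy per 1,000 predictions.
    return (1 + e_test) ** alpha * (1 + c_inf) ** beta

def sustainability(e_val, c_tr, ds, e_test, c_inf):
    # Eq. (5): S = S_Tr * S_Inf; lower is better, the ideal model attains 1.
    return s_train(e_val, c_tr, ds) * s_inference(e_test, c_inf)
```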
## IV Results
In this section, we outline the experimental setup and present the results focusing on the predictive accuracy and energy consumption. Finally, we delve into the sustainability aspects of the considered models, examining their viability through the sustainability indicator proposed in this paper.1
Footnote 1: The code is available at https://github.com/vperfian/federated-Time-Series-Forecasting/.
### _Dataset and Experimental Details_
The datasets were collected from three locations in Barcelona, Spain, representing different zones: touristic, entertainment and residential areas. The datasets ensure user anonymity and offer accurate information about the network utilization, allowing the extraction of detailed traces from individual communications. The datasets have been pre-processed to include aggregated statistics for every two-minute interval. Table I reports the minimum and maximum values for each location and type of measurement. In particular, the three locations are treated as distinct nodes under FL and each site has the following characteristics:
* **ElBorn:** 5,421 samples, collected from 2018-03-28 15:56:00 to 2018-04-04 22:36:00.
* **LesCorts:** 8,615 samples, collected from 2019-01-12 17:12:00 to 2019-01-24 16:20:00.
* **PobleSec:** 19,909 samples, collected from 2018-02-05 23:40:00 to 2018-03-05 15:16:00.
Following the analysis in [12], the distributions and the number of observations vary significantly among these localities, resulting in non-IID data distributions. For the experimental evaluation, we use the standard 60/20/20 train, validation and test split per base station.
For each site, the eleven features given in Table I are used as input to the ML models with a window of \(W=10\). The goal is to predict the next timestep's five features: _uplink_ and _downlink_ traffic (Down and Up), the _radio network temporary identifiers_ (RNTI Count) and the _resource blocks_ for _downlink_ and _uplink_ (RB Down and RB Up). We employ 50 federated rounds with 3 local epochs per site, optimizing the MAE for the five target values. The energy consumption per considered model is measured using CodeCarbon,2 a tool that monitors the energy consumed either in GPU or CPU during the training. The experiments were conducted on a workstation running Ubuntu 22.04 with 64 GB memory and an Intel Xeon 4210R CPU and RTX A6000 GPU.
Footnote 2: [https://github.com/mlczo/codecarbon](https://github.com/mlczo/codecarbon)
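In practice, measuring the training energy with CodeCarbon amounts to wrapping the training routine in a tracker, roughly as in the sketch below; `train_fn` stands in for the federated training loop, and the consumed energy (in kWh, convertible to Wh) is written to CodeCarbon's CSV output alongside the emissions estimate.

```python
from codecarbon import EmissionsTracker

def train_with_tracking(train_fn):
    """Run a training routine while monitoring CPU/GPU energy consumption."""
    tracker = EmissionsTracker()
    tracker.start()
    try:
        train_fn()                     # e.g., the 50 federated rounds
    finally:
        emissions_kg = tracker.stop()  # kg CO2-eq; energy logged to emissions.csv
    return emissions_kg
```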
### _Forecasting Error_
The results per area considering the validation and test NRMSE and MAE of the forecasting task are presented in Table II. The following observations are made per BS:
* **ElBorn:** In terms of validation and test MAE, LSTM-LSTM-A and Transformer are the top performing models. Although vanilla LSTM demonstrates the lowest NRMSE on the validation set, Transformer-LSTM performs the best on test NRMSE.
* **LesCorts:** Interestingly, simpler models yield better results for this particular BS. The LSTM model achieves the best validation MAE, while the addition of the attention mechanism enables the LSTM-A model to achieve the best scores on the test MAE, the validation NRMSE and the test NRMSE.
* **PobleSec:** A significant distinction is observed between the validation and test scores for this BS. In terms of validation MAE and NRMSE, the LSTM model demonstrates the best performance. However, the Transformer model shows superior predictive accuracy when considering the test NRMSE and MAE.
Regarding the federated model that achieves the lowest average errors, its performance depends on whether we consider the validation or test set. On the validation set, the vanilla LSTM model shows slightly lower errors than state-of-the-art Transformers, displaying the best accuracy; for instance, the Transformer obtains 1.2% higher averaged validation MAE than the LSTM. However, on the unseen test set, both the Transformer and Transformer-LSTM models demonstrate the lowest errors, outperforming the vanilla LSTM by 2% and 1.5%, respectively. The higher error on the test set for models such as the LSTM is attributed to overfitting, which affects their generalization capability. It is worth noting that no single model performs best across all areas and metrics, indicating that the optimal model depends on the target area and the specific metric that is prioritized.
In Fig. 2, we show the averaged MAE (across all areas) on the validation and test sets per model. As mentioned earlier, LSTM performs the best on the validation set, while Transformer and Transformer-LSTM show superior accuracy on the test set. These observations align with related works on time series forecasting, demonstrating that advanced models may not systematically outperform simpler ones [8, 9].
### _Energy and Communication Costs_
In ML pipelines, energy is consumed during both the training and inference phases [18]. In centralized settings, the training phase is typically performed once, while inference is repeated many times, whenever a client queries the trained model for predictions. In edge computing scenarios, frequent retraining is also needed due to real-time data collection, posing environmental constraints.

Fig. 2: MAE per model on the validation and test sets.
Communication costs are also relevant in both centralized and federated scenarios due to data collection and the exchange of model weights. However, centralized scenarios involve additional costs such as cooling energy, while FL mitigates environmental costs by only transferring a small amount of data per federated round. This offers FL an advantage over centralized learning regarding network throughput and latency, especially in large-scale scenarios. It also implies that transmitting larger models in the FL scenario incurs a greater environmental impact.
The model sizes, which define the amount of data that needs to be transmitted per model broadcast and thereby quantify the communication costs in an FL scenario, are presented in Table II (column _Size_). As expected, integrating attention mechanisms leads to larger model sizes. Simpler models such as LSTM and CNN-LSTM have the smallest sizes, while Transformer-based models exhibit much larger ones. For instance, the Transformer model [31] is 5.6 times larger than the vanilla LSTM model. However, even the largest model (the Transformer-LSTM at 2217.3 kB) is still small enough in absolute terms for the FL methods to be usable in our scenario, since transferring this amount of data per BS every 120 seconds should be insignificant in comparison to the link capacities involved.
In addition to the communication costs, we quantify the energy consumed during training and inference, which is depicted in Fig. 3 (also \(E_{\text{Tr}}\) and \(E_{\text{Inf}}\), respectively, in Table II), using the CodeCarbon library. These values illustrate the total energy consumption (in Wh) per ML model during the federated training and inference phases. Specifically, for the inference phase, we consider the energy consumed for making 1,000 predictions. This quantification shows that more complex models like Transformers entail a higher energy cost. In particular, the BasicTransformer model consumes more than twice the energy of the vanilla LSTM during the training phase, while the Transformer and Transformer-LSTM models have approximately five and seven times higher energy consumption, respectively. The CNN-LSTM and LSTM-LSTM models show significantly lower environmental impact compared to Transformers, but their energy consumption is higher than the vanilla LSTM's. It is worth noting that the inclusion of attention mechanisms increases the model complexity, size and energy consumption. Regarding the energy consumed during inference, we observe a similar behavior, where larger models consume significantly more energy compared to simpler ones. Overall, we conclude that more complex model architectures result in higher energy consumption during both the training and the inference phases, which can have a significant CO\({}_{2}\) footprint and overall environmental impact.
### _Sustainability Evaluation_
This section presents an evaluation of the considered models using the sustainability indicator proposed in Section III-D. The formula in Eq. (5) takes into account the energy consumption during both the training and inference phases as well as the additional communication costs incurred during the exchange of model parameters between the server and FL nodes. The resulting values for each model under consideration are presented in Table II (\(S_{\text{Tr}},S_{\text{Inf}}\) and \(S\)). It is important to note that all the factors in Eqs. (6) and (7) are equally weighted.
The resulting sustainability values reveal that more complex models such as those based on encoder-decoder or Transformer architectures exhibit poorer performance in terms of sustainability, both during training and inference. This can be attributed to their larger model sizes and higher energy consumption. Although these models demonstrate slightly better results in terms of prediction error, such a marginal advantage does not outweigh the significantly better results achieved by much simpler models in terms of sustainability. For instance, the resulting \(S\) of the Transformer model is about 3, 2.5 and 2 times higher than the vanilla LSTM, CNN-LSTM and LSTM-LSTM models, respectively.
Figure 4 illustrates the trade-off between test MAE and energy consumption (in Wh) during the training phase of each respective model. This figure demonstrates that, although larger models such as Transformers lead to lower test MAE than simpler models such as the vanilla LSTM, they also require the most energy for training, making them less suitable when energy consumption is part of the overall performance evaluation.

Fig. 3: Total energy consumed (Wh) per model over 50 federated rounds. Training (left) and inference of 1,000 samples (right) costs are included.

Fig. 4: Trade-off between the test MAE and energy consumption (Wh) of each considered model.
## V Conclusion
In this paper, we have investigated the sustainability and predictive performance of state-of-the-art DL models for federated cellular traffic forecasting. We have introduced a novel sustainability indicator for evaluating energy consumption with respect to accuracy, which enables convenient comparisons across various ML models in different experimental scenarios. We have shown that increasingly large and complex models provide very limited accuracy gains but have an enormous associated increase in energy consumption compared to simpler models. In the future, we aim to study the convergence speed of different models and extend the introduced sustainability indicator to capture aspects such as robustness. To demonstrate the generalization and scalability of federated learning, we will also evaluate an extensive and diverse set of clients and datasets. Finally, we will explore the trade-off between model selection and accuracy for each federated client and apply regularization techniques to improve model robustness.
| Cellular traffic prediction is an important activity for optimizing fifth-generation (5G) networks and beyond. Accurate forecasts are essential for intelligent network design, resource allocation, and anomaly mitigation. Machine learning (ML) is a promising approach for effectively predicting network traffic, but aggregating massive amounts of data into a single data center raises challenges of confidentiality, privacy, and data transfer demands. To address these challenges, federated learning (FL) emerges as an attractive ML training framework that offers highly accurate predictions through parallelized computation. However, the environmental impact of these methods has been neglected, which raises questions about their sustainability. In this paper, we propose a novel sustainability indicator to evaluate the viability of ML models, addressing the trade-off between accuracy and energy consumption in FL. Furthermore, … |
2309.16234 | Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata
for Sentiment Analysis | Sentiment analysis using big data from YouTube videos metadata can be
conducted to analyze public opinions on various political figures who represent
political parties. This is possible because YouTube has become one of the
platforms for people to express themselves, including their opinions on various
political figures. The resulting sentiment analysis can be useful for political
executives to gain an understanding of public sentiment and develop appropriate
and effective political strategies. This study aimed to build a sentiment
analysis system leveraging YouTube videos metadata. The sentiment analysis
system was built using Apache Kafka, Apache PySpark, and Hadoop for big data
handling; TensorFlow for deep learning handling; and FastAPI for deployment on
the server. The YouTube videos metadata used in this study is the video
description. The sentiment analysis model was built using LSTM algorithm and
produces two types of sentiments: positive and negative sentiments. The
sentiment analysis results are then visualized in the form a simple web-based
dashboard. | Danendra Athallariq Harya Putra, Arief Purnama Muharram | 2023-09-28T08:15:55 | http://arxiv.org/abs/2309.16234v1 | # Analyzing Political Figures in Real-Time: Leveraging YouTube Metadata for Sentiment Analysis
###### Abstract
Sentiment analysis using big data from YouTube videos metadata can be conducted to analyze public opinions on various political figures who represent political parties. This is possible because YouTube has become one of the platforms for people to express themselves, including their opinions on various political figures. The resulting sentiment analysis can be useful for political executives to gain an understanding of public sentiment and develop appropriate and effective political strategies. This study aimed to build a sentiment analysis system leveraging YouTube videos metadata. The sentiment analysis system was built using Apache Kafka, Apache PySpark, and Hadoop for big data handling; TensorFlow for deep learning handling; and FastAPI for deployment on the server. The YouTube videos metadata used in this study is the video description. The sentiment analysis model was built using LSTM algorithm and produces two types of sentiments: positive and negative sentiments. The sentiment analysis results are then visualized in the form a simple web-based dashboard.
sentiment analysis, big data, politics, YouTube
## I Introduction
General Elections (Pemilu) is one of the concrete manifestations of a democratic system. Through Pemilu, the public has the opportunity to participate in governance by electing their representatives who will represent them in the government structure [1]. Among the various types of elections, the Presidential Election (Pilpres) is always a highly anticipated moment and dubbed as the largest "democratic party". In 2024, Indonesia will hold a Pilpres to determine the candidate for the presidency who will lead Indonesia for the next 5 years.
Welcoming the Pilpres 2024, every political party is competing to determine the best presidential and vice-presidential candidate to be endorsed. For political parties, Pilpres is not only about the positions of President and Vice President, but also determines their seats in the future government structure. Therefore, it is crucial for political parties to devise the best political campaign strategies to win the hearts of the public. One of the efforts that political parties can undertake to evaluate the quality of their political figures is through public sentiment analysis.
Sentiment analysis, also known as opinion mining, is a field that studies the analysis of opinions, sentiments, evaluations, judgments, attitudes, and emotions of people towards entities such as products, services, organizations, individuals, issues, events, and others related [2]. Public sentiment analysis can be used as a tool for political parties to gain a better understanding of the opinions and views of the public towards their endorsed political candidates. With public sentiment analysis, political parties can design effective and responsive campaign strategies to meet the needs of the public.
In the current digital era, social media has become a platform for the public to express various things, including their views towards various political figures. Such expressions can be in the form of support or rejection, and can be expressed in various media such as text, audio, or video. Such expressions can be used as indicators of public sentiment for political parties to assess the quality of their endorsed political figures.
This research aims to build a 'real-time' sentiment analysis system for political figures in Pilpres 2024 from various videos on YouTube, through their video descriptions as the metadata. The selection of YouTube as a source of big data is due to the fact that YouTube has become one of the means of political expression in the form of videos with various purposes [3, 4, 5]. The system is designed to be 'real-time' in order to perform sentiment analysis on various YouTube videos' metadata as they arrive. The resulting system is intended for political executives to help gain an understanding of public sentiment towards their endorsed political figures so that they can devise appropriate and effective political strategies.
## II Methodology
### _System Design_
The sentiment analysis system was built using Apache Kafka, Apache PySpark, and Hadoop for handling big data; TensorFlow for deep learning; and FastAPI for deployment on the server. In terms of architecture, the system was built using a module-based approach and consists of modules including producer, streamer, HDFS, inference, and visualizer (Table
I). The system's workflow involves data retrieval (crawling) through the YouTube API by the producer module; internal data streaming by the streamer module and storing it into the Hadoop Distributed File System (HDFS) by the HDFS module; sentiment inference by the inference module; and displaying the sentiment inference results in a simple web dashboard by the visualizer module (Figure 1). The producer module can be set to perform data crawling on a scheduled and regular basis and then store the results into HDFS to ensure that the data in the system is always up-to-date in real-time.
#### II-A1 **Data Gathering via YouTube API**
To retrieve data streams from the YouTube API, a Kafka producer with a dedicated topic is needed to send the metadata of YouTube videos. This task is performed by the producer module. Metadata are obtained using the search method provided by the YouTube search API, and the final result is in JSON format.
The search method requires a keyword that will be used to perform the search, analogous to typing keywords when searching for videos on YouTube. The keywords are adjusted per political figure, so one political figure can be associated with many keywords according to the user's needs. In this study, only the name of the political figure was used as a keyword ('anies' for Anies Rasyid Baswedan, 'ganjar' for Ganjar Pranowo, 'prabowo' for Prabowo Subianto, and 'puan' for Puan Maharani).
When using the search method of the YouTube API, the results obtained are not all of the videos related to the given keywords. Instead, the related videos are divided into several sections (pages). Each page contains a maximum of 50 related videos, so an iterative method is needed to continue from the previously obtained results. This can be done by passing the pageToken parameter when searching. The pageToken parameter is obtained from the metadata returned by the search method, specifically the nextPage field. Therefore, the search for a keyword is iterated as long as the nextPage of the previous response is not None.
The metadata properties taken for this project from the response data are videoId, channelId, title, description, and uri. The videoId was used as a differentiator between videos, so no duplicate data is saved. When saving to HDFS, the metadata are combined with the political figure and the keyword that were searched. Although several items of metadata information are saved, only the description is further used for sentiment analysis. Figure 2 illustrates this entire process, and a Python sketch of the paginated search is given below.
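As mentioned above, the paginated search can be sketched with the official Google API Python client as follows; quota handling and the uri field (derivable from the videoId) are omitted for brevity.

```python
from googleapiclient.discovery import build

def search_videos(api_key, keyword):
    """Iterate YouTube search pages for a keyword until nextPageToken is None."""
    youtube = build("youtube", "v3", developerKey=api_key)
    token, videos = None, {}
    while True:
        resp = youtube.search().list(
            part="snippet", q=keyword, type="video",
            maxResults=50, pageToken=token,
        ).execute()
        for item in resp.get("items", []):
            vid = item["id"]["videoId"]          # keyed by videoId to avoid duplicates
            videos[vid] = {
                "channelId": item["snippet"]["channelId"],
                "title": item["snippet"]["title"],
                "description": item["snippet"]["description"],
            }
        token = resp.get("nextPageToken")
        if token is None:
            return videos
```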
#### II-A2 **Storing Stream Data to HDFS**
The output of the previous process (by the producer module) will be captured using Kafka in the form of dstream data. First, this dstream data must be converted into a Spark DataFrame format. Next, the Spark DataFrame is then saved in the form of a parquet file in the HDFS folder. This task is performed by the streamer module.
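A minimal sketch of this streamer logic using Spark Structured Streaming (a modern stand-in for the DStream API described above) is shown below; the topic name, HDFS paths, and broker address are illustrative, and the job additionally requires the spark-sql-kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streamer").getOrCreate()

# Consume the producer's Kafka topic as a streaming DataFrame.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "youtube-metadata")
          .load()
          .selectExpr("CAST(value AS STRING) AS json"))

# Persist the records as parquet files in an HDFS folder.
query = (stream.writeStream.format("parquet")
         .option("path", "hdfs:///data/youtube/metadata")
         .option("checkpointLocation", "hdfs:///data/youtube/_checkpoints")
         .start())
```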
#### II-A3 **Sentiment Analysis Model**
To perform sentiment inference on each YouTube video's metadata, a sentiment analysis model is required. The metadata used is only the video description. The model used in this research is based on the Long Short-Term Memory (LSTM) algorithm [6], built using the TensorFlow library. LSTM is used because it is one of the popular methods for solving text-related problems: LSTM is a type of Recurrent Neural Network (RNN) that is well suited to text, because it takes the context of previous inputs into account when processing the next input. This property is in line with the nature of text, where the context of previous words must be considered to understand the meaning of the current word in a sentence.

Fig. 1: System design
The model is then trained using the sentiment dataset from the 2017 Jakarta gubernatorial election (Pilkada) [7]. The dataset was taken from social media Twitter related to the 2017 Jakarta gubernatorial election and consists of two types of sentiment, positive and negative, with an equal number of each type. The dataset used has undergone two preprocessing steps, namely replacing emojis with special markers and removing stopwords. The dataset will be divided into two: training data and validation data. The training data is used to train the sentiment analysis model, while the validation data is used to test the performance of the model on data that has not been seen before by the model. We used a training to validation data ratio of 0.8:0.2 and proceeded to train our model on Google Colab.
Before training or inference on the model, the data used needs to undergo preprocessing first. The necessary preprocessing includes cleaning the text by removing URLs, hashtags, mentions, and emojis. The cleaned data will then be tokenized using text vectorization from TensorFlow. In addition to text preprocessing, label conversion to one-hot encoding is also required. The cleaned data will then be fed into the model for training.
The architecture of the sentiment analysis model used is as follows: First, the data will enter an embedding layer to convert the tokenized text into an embedding vector. Then, this embedding vector will be passed through an LSTM layer and two dense layers. The last dense layer will be used for classification.
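A minimal TensorFlow sketch of this architecture could look as follows; since only the layer types are specified above, the layer widths and optimizer are illustrative assumptions, and the categorical cross-entropy loss matches the one-hot encoded labels.

```python
import tensorflow as tf

def build_sentiment_model(vocab_size=10_000, max_len=100, embed_dim=128):
    # Embedding -> LSTM -> two dense layers, the last one for classification.
    inputs = tf.keras.Input(shape=(max_len,))
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)
    x = tf.keras.layers.LSTM(64)(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # pos/neg
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```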
Fig. 3: Dashboard design
Fig. 2: Data gathering
#### II-A4 **The Visualization of Sentiment Inference Results**
The aggregation of sentiment inference results is displayed through a simple web dashboard by the visualizer module. The visualizer module consists of two submodules, namely backend and frontend. The backend module is responsible for preparing and aggregating the required data by communicating directly with HDFS, while the frontend module is responsible for visualizing the data that has been prepared by the backend into the dashboard page. The backend module is developed using FastAPI, while the frontend module is developed using Bootstrap and Chart.js. On the dashboard page, the sentiment results will be displayed in the form of a doughnut chart comparing the number of positive and negative sentiments to facilitate readability (Figure 3). In the production stage implementation, the frontend can be placed on the app server, while the backend can be placed on the big data server (Figure 4).
### _Evaluation Strategy_
#### II-B1 **Sentiment Model Evaluation**
The evaluation of the sentiment model is performed using the validation dataset. The evaluation metrics used are precision (1), recall (2), and F1-score (3) for each type of sentiment, and accuracy (4) for overall performance.
\[precision=\frac{TP}{FP+TP} \tag{1}\]
\[recall=\frac{TP}{FN+TP} \tag{2}\]
\[F1score=\frac{2\times precision\times recall}{precision+recall} \tag{3}\]
\[accuracy=\frac{TP+TN}{TP+FP+FN+TN} \tag{4}\]
Fig. 4: Visualizer implementation design
Fig. 5: Dashboard page testing results using production data
#### II-B2 **System Evaluation**
The evaluation of the developed system is conducted through a usability testing approach. In this evaluation, the system is locally deployed and assessed for its functionality and potential weaknesses.
## III Result
Using validation data from the 2017 Jakarta Gubernatorial Election dataset, the F1-Scores were obtained as shown in Table 3. Training the LSTM model for 8 epochs yielded an overall accuracy of 0.7056. In terms of per-label evaluation, the resulting model has the same precision for both labels. However, the recall for the negative label is better than for the positive label. This difference in recall carries over to the F1-score, where the negative label attains a better F1-score than the positive label.
We then tested the system by deploying it locally. The results of sentiment inference are displayed in the form of aggregated numbers of negative and positive sentiments for each political figure up to the date of the page request. This information is then presented in the form of a doughnut chart. Figure 5 illustrates the system. Positive sentiment is given a green color, while negative sentiment is given a red color.
## IV Discussion
The sentiment analysis model used in this study is LSTM, trained with the Pilkada DKI 2017 sentiment dataset [7]. Testing on that dataset yielded an accuracy of 0.7056. In the per-label analysis, although both labels have the same precision (0.72), there is a significant difference between the recall for negative sentiment (0.82) and positive sentiment (0.59). Based on the recall formula (2), the higher the recall, the more instances of that label the model classifies correctly. The high recall for negative sentiment implies a tendency for the model to classify negative sentiment more readily than positive sentiment.
However, despite the acceptable performance of the model, further studies are needed to improve the model's performance, as there are several other factors that might affect the system's performance and become limitations in this study.
* The video search process is highly influenced by the selected keywords, so it is necessary to choose the appropriate keywords to increase the expected number of relevant video searches.
* The video search process is limited by the YouTube API call limitations on the free tier, which is 2,000 calls per day.
* The model inference only used the video description, assuming there is a correspondence between the content and the video description (not clickbait).
## V Conclusion
YouTube, as the largest video platform, has become a medium of political expression for the public. This study has successfully developed a political sentiment analysis system that leverages YouTube as a big data source. Using the LSTM algorithm, the resulting inference model achieves an accuracy of 0.7056. Better keyword selection and the use of other model architectures can be considered in future research to improve the performance of the inference model.
## Acknowledgment
We would like to express our gratitude to the lecturer of IF5270 Big Data Systems course at the Master of Informatics Study Program, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, who has taught us the fundamental concepts and technologies of big data systems. This research was initiated as a task for the course. We also would like to thank our colleagues who have supported our learning process during the class.
| Sentiment analysis using big data from YouTube video metadata can be conducted to analyze public opinion on various political figures who represent political parties. This is possible because YouTube has become one of the platforms where people express various opinions about political figures. The resulting sentiment analysis can be useful for political executives to understand public sentiment and to develop appropriate and effective political strategies. This study aimed to build a sentiment analysis system leveraging YouTube video metadata. The sentiment analysis system handles big data with Apache Kafka, Apache PySpark, and Hadoop; deep learning with TensorFlow; and server deployment with FastAPI. The YouTube video metadata used in this study is the video description. The sentiment analysis model was built using the LSTM algorithm and produces two types of sentiment: positive and negative. Then … |
2307.16798 | Forster-Warmuth Counterfactual Regression: A Unified Learning Approach | Series or orthogonal basis regression is one of the most popular
non-parametric regression techniques in practice, obtained by regressing the
response on features generated by evaluating the basis functions at observed
covariate values. The most routinely used series estimator is based on ordinary
least squares fitting, which is known to be minimax rate optimal in various
settings, albeit under stringent restrictions on the basis functions and the
distribution of covariates. In this work, inspired by the recently developed
Forster-Warmuth (FW) learner, we propose an alternative series regression
estimator that can attain the minimax estimation rate under strictly weaker
conditions imposed on the basis functions and the joint law of covariates, than
existing series estimators in the literature. Moreover, a key contribution of
this work generalizes the FW-learner to a so-called counterfactual regression
problem, in which the response variable of interest may not be directly
observed (hence, the name ``counterfactual'') on all sampled units, and
therefore needs to be inferred in order to identify and estimate the regression
in view from the observed data. Although counterfactual regression is not
entirely a new area of inquiry, we propose the first-ever systematic study of
this challenging problem from a unified pseudo-outcome perspective. In fact, we
provide what appears to be the first generic and constructive approach for
generating the pseudo-outcome (to substitute for the unobserved response) which
leads to the estimation of the counterfactual regression curve of interest with
small bias, namely bias of second order. Several applications are used to
illustrate the resulting FW-learner including many nonparametric regression
problems in missing data and causal inference literature, for which we
establish high-level conditions for minimax rate optimality of the proposed
FW-learner. | Yachong Yang, Arun Kumar Kuchibhotla, Eric Tchetgen Tchetgen | 2023-07-31T16:05:57 | http://arxiv.org/abs/2307.16798v4 | # Forster-Warmuth Counterfactual Regression: A Unified Learning Approach
###### Abstract
Series or orthogonal basis regression is one of the most popular non-parametric regression techniques in practice, obtained by regressing the response on features generated by evaluating the basis functions at observed covariate values. The most routinely used series estimator is based on ordinary least squares fitting, which is known to be minimax rate optimal in various settings, albeit under fairly stringent restrictions on the basis functions and the distribution of covariates. In this work, inspired by the recently developed Forster-Warmuth (FW) learner (Forster and Warmuth, 2002), we propose an alternative series regression estimator that can attain the minimax estimation rate under strictly weaker conditions imposed on the basis functions and the joint law of covariates, than existing series estimators in the literature. Moreover, a key contribution of this work generalizes the FW-learner to a so-called counterfactual regression problem, in which the response variable of interest may not be directly observed (hence, the name "counterfactual") on all sampled units, and therefore needs to be inferred in order to identify and estimate the regression in view from the observed data. Although counterfactual regression is not entirely a new area of inquiry, we propose the first-ever systematic study of this challenging problem from a unified pseudo-outcome perspective. In fact, we provide what appears to be the first generic and constructive approach for generating the pseudo-outcome (to substitute for the unobserved response) which leads to the estimation of the counterfactual regression curve of interest with small bias, namely bias of second order. Several applications are used to illustrate the resulting FW-learner including many nonparametric regression problems in missing data and causal inference literature, for which we establish high-level conditions for minimax rate optimality of the proposed FW-learner.
## 1 Introduction
### 1.1 Nonparametric regression
Nonparametric estimation plays a central role in many statistical contexts where one wishes to learn conditional distributions by means of say, a conditional mean function \(\mathbb{E}[Y|X=x]\) without a priori restriction on the model. Several other functionals of the conditional distribution can likewise be written based on conditional means, which makes the conditional mean an important problem to study. For example, the conditional cumulative distribution function of a univariate response \(Y\) given \(X=x\) can be written as \(\mathbb{E}[\mathbf{1}\{Y\leq t\}|X=x].\) This, in turn, leads to conditional quantiles. In general, any conditional function defined via \(\theta^{\star}(x)=\arg\min_{\theta\in\mathbb{R}}\mathbb{E}[\rho((X,Y);\theta) |X=x]\) for any loss function \(\rho(\cdot;\cdot)\) can be learned using conditional means.
Series, or more broadly, sieve estimation provides a solution by approximating an unknown function based on \(k\) basis functions, where \(k\) may grow with the sample size \(n\), ideally at a rate carefully tuned in order to balance bias and variance to the extent possible. The most straightforward approach to construct a series estimator is by the method of least squares, large sample properties of which have been studied extensively both in the statistical and econometrics literature in nonparametric settings. To briefly describe the standard least squares series estimator, let \(m^{\star}(x):=\mathbb{E}[Y|X=x]\) denote the true conditional expectation where \(m^{\star}(\cdot)\) is an unrestricted unknown function of \(x\). Also consider a vector of approximating basis functions \(\bar{\phi}_{k}(x)=(\phi_{1}(x),\ldots,\phi_{k}(x))^{\top}\), which has the property that any square integrable \(m^{\star}(\cdot)\) can be approximated arbitrarily well, with sufficiently large \(k\), by some linear combination of \(\bar{\phi}_{k}(\cdot)\). Let \((X_{i},Y_{i}),i=1,\ldots,n\) denote an observed sample of data. The least squares series estimator of \(m^{\star}(x)\) is defined as \(\widehat{m}(x)=\bar{\phi}_{k}^{\top}(x)\widehat{\beta}\), where \(\widehat{\beta}=(\Phi_{k}^{\top}\Phi_{k})^{-1}\Phi_{k}^{\top}\mathbf{Y}\), and \(\Phi_{k}\) is the \(n\times k\) matrix \([\bar{\phi}_{k}(X_{1}),\ldots,\bar{\phi}_{k}(X_{n})]^{\top}\) with \(\mathbf{Y}=(Y_{1},\ldots,Y_{n})^{\top}\). Several existing works in the literature provide sufficient conditions for consistency, corresponding convergence rates, and asymptotic normality of this estimator, along with illustrations of these conditions in the case of polynomial series and regression splines, see, for example, Chen (2007), Newey (1997), Gyorfi et al. (2002). Under these conditions, the optimal rates of convergence are well established for certain basis functions, such as the local polynomial kernel estimator (Chapter 1.6 of Tsybakov (2009)) and the local polynomial partition series (Cattaneo and Farrell (2013)). Belloni et al. (2015) relaxed some of these assumptions while applying this estimation procedure to statistical estimation problems and provided uniform convergence rates. For instance, they weakened the requirement in Newey (1997) that the number \(k\) of approximating functions has to satisfy \(k^{2}/n\to 0\) to \(k/n\to 0\) for bounded (for example Fourier series) or local bases (such as splines, wavelets or local polynomial partition series), which was previously established only for splines (Huang (2003)) and local polynomial partitioning estimators (Cattaneo and Farrell (2013)); therefore presumably allowing for improved approximation of the function in view by using a larger number of basis functions to estimate the latter. One important limitation of the least squares series estimator is that the rate of convergence heavily depends on stringent assumptions imposed on the basis functions. To be specific, a key quantity that plays a crucial role in all of these previous works is given by \(\xi_{k}:=\sup_{x\in\mathcal{X}}\|\bar{\phi}_{k}(x)\|\), where \(\mathcal{X}\) is the support of the covariates \(X\) and \(\|\cdot\|\) denotes the \(l_{2}\) norm of a vector. They require \(\xi_{k}^{2}\log k/n\to 0\), so that for basis functions such as Fourier, splines, wavelets, and local polynomial partition series, \(\xi_{k}\leq\sqrt{k}\), yielding \(k\log k/n\to 0\). For other basis functions such as polynomial series, \(\xi_{k}\lesssim k\), which corresponds to \(k^{2}\log k/n\to 0\) and is more restrictive.
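For concreteness, the least squares series estimator \(\widehat{m}(x)=\bar{\phi}_{k}^{\top}(x)\widehat{\beta}\) defined above can be sketched in a few lines of Python, with the basis left abstract:

```python
import numpy as np

def ls_series(X, y, basis, k):
    """Least squares series regression of y on k basis features.

    basis(x, k) should return the k-vector (phi_1(x), ..., phi_k(x))."""
    Phi = np.stack([basis(x, k) for x in X])         # n x k design matrix
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # OLS coefficients
    return lambda x: basis(x, k) @ beta              # fitted m_hat(x)
```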
In this paper, we develop a new type of series regression estimator that in principle can attain well-established minimax nonparametric rates of estimation in settings where covariates and outcomes are fully observed, under weaker conditions on the distribution of covariates and basis functions than the existing literature (e.g. Belloni et al. (2015)). The approach builds on an estimator we refer to as _Forster-Warmuth Learner_ (FW-Learner) originating in the online learning literature, which is obtained via a careful modification of the renowned non-linear Vovk-Azoury-Warmuth forecaster (Vovk, 2001; Forster and Warmuth, 2002). In particular, our method is optimal in that its error matches the well-established minimax rate of estimation for a large class of smooth nonparametric regression functions, provided that \(\mathbb{E}[Y^{2}|X]\) is bounded almost surely, regardless of the basis functions used, as long as the approximation error/bias with \(k\) bases decays optimally; see Theorem 1 for more details. This result is more general than the current literature, whose rate of convergence depends on the type of basis. For example, Belloni et al. (2015) established that using the polynomial basis would imply a slower convergence rate compared to using a wavelet basis, although both have the same approximation error decay rate for the common Holder/Sobolev spaces. Theorem 1 provides the expected \(L_{2}\)-error of our FW-Learner under the full data setting, which is a non-trivial extension of the vanilla Forster-Warmuth estimator and is agnostic to the underlying choice of basis functions. The sharp upper bound on the error rate matches the minimax lower bound of this problem, demonstrating the optimality of the FW-Learner.
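To convey the flavor of the construction, the following sketch implements the Vovk-Azoury-Warmuth-type prediction rule that underlies the FW-Learner, in which the features of the prediction point enter the Gram matrix before inversion; this is only illustrative, as the actual FW-Learner involves the careful modification developed in Section 2.

```python
import numpy as np

def vaw_predict(Phi, y, phi_x, lam=1.0):
    """Vovk-Azoury-Warmuth-style prediction at a point with features phi_x.

    Unlike ordinary ridge regression, phi_x is added to the Gram matrix
    before inversion, which stabilizes predictions at ill-covered points."""
    k = Phi.shape[1]
    G = lam * np.eye(k) + Phi.T @ Phi + np.outer(phi_x, phi_x)
    return phi_x @ np.linalg.solve(G, Phi.T @ y)
```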
### 1.2 Counterfactual regression
Moving beyond the traditional conditional mean estimation problem, we also develop a unified approach to study a more challenging class of problems we name nonparametric _counterfactual regression_, where the goal is still to estimate \(m^{\star}(x)=\mathbb{E}[Y|X=x]\) but now the response \(Y\) may not be fully/directly observed.
Prominent examples include nonparametric regression of an outcome prone to missingness, a canonical problem in missing data literature, as well as nonparametric estimation of the so-called conditional average treatment effect (CATE) central to causal inference literature. Thus, the key contribution of this work, is to deliver a unified treatment of such counterfactual regression problems with a generic estimation approach which essentially consists of two steps: (i) generate for all units a carefully constructed pseudo-outcome of the counterfactual outcome of interest; (ii) apply the FW-Learner directly to the counterfactual pseudo-outcome, in order to obtain an estimator of the counterfactual regression in view. The counterfactual pseudo-outcome in step (i) is motivated by modern semiparametric efficiency theory and may be viewed as an element of the orthogonal complement of the nuisance tangent space for the statistical model of the given counterfactual regression problem, see, e.g., Bickel et al. (1993), Van Der Vaart (1991), Newey (1990), Tsiatis (2006) for some references; as such the pseudo-outcome endows the FW-Learner with a "small bias" property that its bias is at most of a second order. In some key settings, the bias of the pseudo-outcome might be sufficiently small, occasionally it might even be exactly zero, so that it might altogether be ignored without an additional condition. This is in fact the case if the outcome were a priori known to be missing completely at random, such as in some two-stage sampling problems where missingness is by design, e.g. (Breslow and Cain, 1988); or if estimating the CATE in a randomized experiment where the treatment mechanism is known by design. More generally, the pseudo-outcome often requires estimating certain nuisance functions nonparametrically, however, for a large class of such problems considered in this paper, the bias incurred from such estimation is of product form, also known as mixed bias (Rotnitzky et al. (2021)). In this context, a key advantage of the mixed bias is that one's ability to estimate one of the nuisance functions well, i.e. relatively "fast rates", can potentially make up for slower rates in estimating another, so that, estimation bias of the pseudo-outcome can be negligible relative to the estimation risk of an oracle with ex ante knowledge of nuisance functions. In such cases, the FW-Learner is said to be _oracle optimal_ in the sense that its risk matches that of the oracle (up to a
multiplicative constant).
Our main theoretical contribution is a unified analysis of the FW-Learner described above, hereby establishing that it attains the oracle optimality property, under appropriate regularity conditions, in several important counterfactual regression problems, including (1) nonparametric regression under outcome missing at random, (2) nonparametric CATE estimation under unconfoundedness, (3) nonparametric regression under outcome missing not at random leveraging a so-called shadow variable (Li et al., 2021; Miao et al., 2023), (4) nonparametric CATE estimation in the presence of residual confounding leveraging proxies using the proximal causal inference framework (Miao et al., 2018; Tchetgen Tchetgen et al., 2020).
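To make step (i) of the generic approach concrete in the simplest of these settings, problem (2), the sketch below forms the familiar doubly robust (AIPW) pseudo-outcome used by the DR-Learner of Kennedy (2020) from estimated nuisance functions; step (ii) then regresses this pseudo-outcome on \(X\) with the FW-Learner.

```python
import numpy as np

def aipw_pseudo_outcome(y, a, x, mu0, mu1, pi):
    """Doubly robust pseudo-outcome for the CATE under unconfoundedness.

    mu0, mu1: estimated outcome regressions E[Y | A=0, X] and E[Y | A=1, X];
    pi: estimated propensity score P(A=1 | X), assumed bounded away from 0, 1.
    Its conditional mean given X equals the CATE if either nuisance is correct."""
    mu_a = np.where(a == 1, mu1(x), mu0(x))
    return ((a - pi(x)) / (pi(x) * (1 - pi(x)))) * (y - mu_a) \
           + mu1(x) - mu0(x)
```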
### 1.3 Literature review, organization, and notation
Organization. The remainder of the paper is organized as follows. Section 1.4 introduces the notation that is going to be used throughout the paper. Section 2 formally defines our estimation problem and the Forster-Warmuth estimator, where Section 2.2 builds upon Section 2.1 going beyond the full data problem to counterfactual settings where the outcome of interest may not be fully observed. Section 3 applies the proposed methods to the canonical nonparametric regression problem subject to missing outcome data, where in Section 3.1 the outcome is assumed to be Missing At Random (MAR) given fully observed covariates (Robins et al., 1994); while in Section 3.2 the outcome may be Missing Not At Random (MNAR) and identification hinges upon having access to a fully observed shadow variable (Miao et al., 2023; Li et al., 2021). Both of these examples may be viewed as nonparametric counterfactual regression models, whereby one seeks to estimate the nonparametric regression function under a hypothetical intervention that would in principle prevent missing data. Section 4 presents another application of the proposed methods to a causal inference setting, where the nonparametric counterfactual regression parameter of interest is the Conditional Average Treatment Effect (CATE); Section 4.1 assumes the so-called ignorability or unconfoundedness given fully observed covariates, while Section 4.2 accommodates unmeasured confounding for which proxy variables are observed under the recently proposed proximal causal inference framework (Miao et al., 2018; Tchetgen Tchetgen et al., 2020). Section 5 reports results from a simulation study comparing our proposed FW-Learner to a selective set of existing methods under a range of conditions, while Section 6 illustrates the FW-Learner for the CATE in an analysis of the SUPPORT observational study (Connors et al. (1996)) to estimate the causal effect of right heart catheterization (RHC) on 30-day survival, as a function of a
continuous baseline covariate which measures a _patient's potential survival probability at hospital admission_, both under standard unconfoundedness conditions assumed in prior causal inference papers, including Tan (2006), Vermeulen and Vansteelandt (2015) and Cui and Tchetgen Tchetgen (2019), and proximal causal inference conditions recently considered in Cui et al. (2023) in the context of estimating marginal treatment effects.
Literature Review. There is growing interest in nonparametric/semiparametric regression problems involving high dimensional nuisance functions. Notable general frameworks recently proposed to address rich classes of such problems include Ai and Chen (2003) and Foster and Syrgkanis (2019), with the latter providing an oracle inequality for empirical risk minimization under the condition that an estimated loss function uniquely characterizing a nonparametric regression function of interest satisfies a form of orthogonality property, more precisely, that the estimated loss function admits second order bias. In another strand of work related to nonparametric regression with missing data on the outcome, Müller and Schick (2017) investigated the efficiency of a complete-case nonparametric regression under an outcome missing at random (MAR) assumption; relatedly, Efromovich (2011) proposed a nonparametric series estimator that is shown to be minimax when predictors are missing completely at random (MCAR), and Wang et al. (2010) proposed an augmented inverse probability weighted nonparametric regression kernel estimator using parametric specifications of nuisance functions in the setting of an outcome missing at random. In the context of CATE estimation for causal inference, in a setting closely related to ours, Kennedy (2020) proposed a doubly robust two-stage CATE estimator, called the DR-Learner, and provided a general oracle inequality for nonparametric regression with estimated outcomes. The same paper also proposed a local polynomial adaptation of the R-Learner (Nie and Wager, 2021; Robinson, 1988), referred to as the Local Polynomial R-Learner (lp-R-Learner), and characterized its (in-probability) point-wise error rate. Notably, the lp-R-Learner was shown to attain the corresponding oracle rate under weaker smoothness conditions on the nuisance functions and the CATE than analogous estimators in Nie and Wager (2021) and Chernozhukov et al. (2017). The recent work of Kennedy et al. (2022) studied the minimax lower bound for the rate of estimation of the CATE under unconfoundedness (in terms of mean squared error) and proposed higher order estimators, using the recent estimation theory of higher-order influence functions (Robins et al., 2008, 2017), that are minimax optimal provided the covariate distribution is sufficiently smooth that it can be estimated at fast enough rates so that
estimation bias is negligible. Another related strand of work has focused on so-called meta-learners based on generic machine learning estimators. For instance, Künzel et al. (2019) proposed two learners (the X-Learner and U-Learner) for CATE estimation through generic machine learning. In Section 5, we provide a simulation study comparing our proposed method to the X-Learner, the DR-Learner, and to an oracle DR-Learner which uses the _Oracle pseudo-outcome_ with known nuisance functions in the second-stage regression.
In Section 4 we apply our method to estimating the CATE, the average effect of the treatment for individuals who have specific values of a set of baseline covariates. By inferring the CATE, researchers can potentially identify subgroups of the population that may benefit most from the treatment; information that is crucial for designing effective interventions tailored to the individual. Similar to Kennedy (2020) and Kennedy et al. (2022), we study this problem under the unconfoundedness assumption in Section 4.1. While their proposed lp-R-Learner, which leverages the careful use of local polynomials to estimate the CATE, was shown to match an oracle estimator with complete knowledge of all nuisance parameters under certain smoothness conditions, our proposed FW-Learner is shown to match the oracle estimator for more general basis functions under minimal conditions on the latter. In this light, our estimator affords the analyst the freedom to use arbitrary basis functions of their choice to model the CATE.
In many non-experimental practical settings, unconfoundedness may not be credible on the basis of measured covariates, in which case one may be concerned that residual confounding due to hidden factors may bias inferences about the CATE using the above methods. To address such concerns, the recent so-called "proximal causal inference" approach acknowledges that measured covariates are unlikely to fully control for confounding and may at best be viewed as proxies of known but unmeasured sources of confounding; see, e.g., Miao et al. (2018) and Tchetgen Tchetgen et al. (2020), who formally leverage proxies for nonparametric identification of causal effects in the presence of hidden confounders. In Section 4.2, we develop an FW-proximal learner of the CATE using the proposed pseudo-outcome approach, in which we leverage a characterization of the orthogonal complement of the nuisance tangent space for the underlying proximal causal model derived in Cui et al. (2023); also see Ghassami et al. (2022). It is worth mentioning that the recent concurrent work of Sverdrup and Cui (2023) also estimates the CATE in the proximal causal inference context with what they call a P-Learner, using a two-stage loss function approach inspired by the R-Learner of Nie and Wager (2021), which, in order to be oracle optimal, requires that the nuisance functions be estimated
at rates faster than \(n^{-1/4}\), a requirement we do not impose.
### Notation
We define some notation used throughout the paper: \(a\lesssim b\) means \(a\leq Cb\) for a universal constant \(C\), and \(a\sim b\) means \(a\lesssim b\) and \(b\lesssim a\). We call a function \(\alpha\)-smooth if it belongs to the Hölder class of smoothness order \(\alpha\), which we introduce in language similar to Belloni et al. (2015): For \(\alpha\in(0,1]\), the Hölder class of smoothness order \(\alpha\), \(\Sigma_{\alpha}(\mathcal{X})\), is defined as the set of all functions \(f:\mathcal{X}\rightarrow\mathbb{R}\) such that for some \(C>0\),
\[|f(x)-f(\widetilde{x})|\leq C\Bigl{(}\sum_{j=1}^{d}\bigl{(}x_{j}-\widetilde{x }_{j}\bigr{)}^{2}\Bigr{)}^{\alpha/2}\]
for all \(x=\left(x_{1},\ldots,x_{d}\right)^{\top}\) and \(\widetilde{x}=\left(\widetilde{x}_{1},\ldots,\widetilde{x}_{d}\right)^{\top}\) in \(\mathcal{X}\). The smallest \(C\) satisfying this inequality defines a norm of \(f\) in \(\Sigma_{\alpha}(\mathcal{X})\), which we denote by \(\|f\|_{s(\alpha)}.\) For \(\alpha>1,\Sigma_{\alpha}(\mathcal{X})\) can be defined as follows. For a \(d\)-tuple \(\bar{\alpha}=\left(\alpha_{1},\ldots,\alpha_{d}\right)\) of non-negative integers, let \(D^{\bar{\alpha}}=\partial_{x_{1}}^{\alpha_{1}}\ldots\partial_{x_{d}}^{\alpha_ {d}}\) be the multivariate partial derivative operator. Let \(\lfloor\alpha\rfloor\) denote the largest integer strictly smaller than \(\alpha\). Then \(\Sigma_{\alpha}(\mathcal{X})\) is defined as the set of all functions \(f:\mathcal{X}\rightarrow\mathbb{R}\) such that \(f\) is \(\lfloor\alpha\rfloor\) times continuously differentiable and for some \(C>0\),
\[\bigl{|}D^{\bar{\alpha}}f(x)-D^{\bar{\alpha}}f(\widetilde{x})\bigr{|}\leq C \Bigl{(}\sum_{j=1}^{d}\bigl{(}x_{j}-\widetilde{x}_{j}\bigr{)}^{2}\Bigr{)}^{ \left(\alpha-\lfloor\alpha\rfloor\right)/2}\text{ and }\bigl{|}D^{\bar{\beta}}f(x) \bigr{|}\leq C\]
for all \(x=\left(x_{1},\ldots,x_{d}\right)^{\top}\) and \(\widetilde{x}=\left(\widetilde{x}_{1},\ldots,\widetilde{x}_{d}\right)^{\top}\) in \(\mathcal{X}\) and for all \(d\)-tuples \(\bar{\alpha}=\left(\alpha_{1},\ldots,\alpha_{d}\right)\) and \(\bar{\beta}=\left(\beta_{1},\ldots,\beta_{d}\right)\) of non-negative integers satisfying \(\alpha_{1}+\cdots+\alpha_{d}=\lfloor\alpha\rfloor\) and \(\beta_{1}+\cdots+\beta_{d}\leq\lfloor\alpha\rfloor.\) Again, the smallest \(C\) satisfying these inequalities defines a norm of \(f\) in \(\Sigma_{\alpha}(\mathcal{X})\), denoted as \(\|f\|_{s(\alpha)}\). For any integer \(k\geq 2\), let \(\|f(\cdot)\|_{k}\) denote the \(L_{k}\) function norm, \(\|f(O)\|_{k}:=(\mathbb{E}_{O}[f^{k}(O)])^{1/k}\), where \(O\) denotes the data taken as input by \(f\).
## 2 The Forster-Warmuth Nonparametric Counterfactual Regression Estimator
We introduce the Forster-Warmuth learner, which is a nonparametric extension of an estimator first proposed in the online learning literature (Forster and Warmuth, 2002). In Section 2.1, we study
the properties of the FW-Learner in the standard nonparametric regression setting where data are fully observed, before considering the counterfactual setting of primary interest in Section 2.2, where the responses may only be partially observed.
### Full data nonparametric regression
Suppose that one observes independent and identically distributed observations \(\left(X_{i},Y_{i}\right),1\leq i\leq n\) on \(\mathcal{X}\times\mathbb{R}\). Let \(\mu\) be a base measure on the covariate space \(\mathcal{X}\); this could, for example, be the Lebesgue measure or the counting measure. The most common nonparametric regression problem aims to infer the conditional mean function \(m^{\star}(x):=\mathbb{E}\left[Y_{i}\mid X_{i}=x\right]\) as a function of \(x\). Let \(\Psi:=\{\phi_{1}(\cdot)\equiv 1,\phi_{2}(\cdot),\phi_{3}(\cdot),\ldots\}\) be a fundamental sequence of functions in \(L_{2}(\mu)\), i.e., linear combinations of these functions are dense in \(L_{2}(\mu)\) (Lorentz, 1966; Yang and Barron, 1999). Note that a fundamental sequence of functions need not be orthonormal.
For any \(f\in L_{2}(\mu)\) and any \(J\geq 1\), let
\[E_{J}^{\Psi}(f)\ :=\ \min_{a_{1},a_{2},\ldots,a_{J}}\left\|f-\sum_{k=1}^{J}a_{k }\phi_{k}\right\|_{L_{2}(\mu)}\]
denote the \(J\)-th degree approximation error of the function \(f\) by the first \(J\) functions in \(\Psi\). By definition of the fundamental sequence, \(E_{J}^{\Psi}(f)\to 0\) as \(J\to\infty\) for any function \(f\in L_{2}(\mu)\). This fact motivates the traditional series estimators of \(m^{\star}\), which estimate the minimizing coefficients \(a_{1},\ldots,a_{J}\) using ordinary least squares linear regression. Motivated by an estimator in the linear regression setting studied in Forster and Warmuth (2002), we define the FW-Learner of \(m^{\star}(\cdot)\), which we denote \(\widehat{m}_{J}(\cdot)\), trained on data \(\left\{\left(X_{i},Y_{i}\right),1\leq i\leq n\right\}\), using the first \(J\) elements of the fundamental sequence \(\bar{\phi}_{J}(x)=\big{(}\phi_{1}(x),\ldots,\phi_{J}(x)\big{)}^{\top}\):
\[\widehat{m}_{J}(x):=\big{(}1-h_{n}(x)\big{)}\bar{\phi}_{J}^{\top}(x)\Big{(} \sum_{i=1}^{n}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J} (x)\bar{\phi}_{J}^{\top}(x)\Big{)}^{-1}\sum_{i=1}^{n}\bar{\phi}_{J}(X_{i})Y_{ i}, \tag{1}\]
where
\[h_{n}(x):=\bar{\phi}_{J}^{\top}(x)\Big{(}\sum_{i=1}^{n}\bar{\phi}_{J}(X_{i}) \bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\Big{)} ^{-1}\bar{\phi}_{J}(x)\ \in\ [0,1]. \tag{2}\]
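To make the estimator concrete, here is a minimal numpy sketch of (1)-(2); the univariate polynomial basis and all function names are illustrative choices for this sketch, not part of the formal development.

```python
import numpy as np

def poly_basis(x, J):
    """First J monomials: phi_1(x) = 1, phi_k(x) = x^(k-1)."""
    return np.vander(np.ravel(x), N=J, increasing=True)  # shape (len(x), J)

def fw_predict(x_train, y_train, x_test, J):
    """FW-Learner predictions at x_test, cf. equations (1)-(2)."""
    Phi = poly_basis(x_train, J)                 # (n, J) design matrix
    S = Phi.T @ Phi                              # sum_i phi(X_i) phi(X_i)^T
    b = Phi.T @ np.asarray(y_train)              # sum_i phi(X_i) Y_i
    preds = []
    for phi_x in poly_basis(x_test, J):          # one row per test point
        A = S + np.outer(phi_x, phi_x)           # rank-one update at the test point
        h = phi_x @ np.linalg.solve(A, phi_x)    # leverage h_n(x) in [0, 1]
        preds.append((1.0 - h) * (phi_x @ np.linalg.solve(A, b)))
    return np.array(preds)
```

The rank-one update by \(\bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\) and the shrinkage factor \(1-h_{n}(x)\) are what distinguish the FW-Learner from the ordinary least squares series estimator.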
The following result provides a finite-sample result on the estimation error of \(\widehat{m}_{J}\) as a function of \(J\).
**Theorem 1**.: _Suppose \(\mathbb{E}\big{[}Y^{2}|X\big{]}\leq\sigma^{2}\) almost surely in \(X\) and suppose \(X\) has a density with respect to \(\mu\)_
_that is upper bounded by \(\kappa\). Then the FW-Learner satisfies_
\[\big{\|}\widehat{m}_{J}-m^{\star}\big{\|}_{2}^{2}=\mathbb{E}\Big{[}\big{(} \widehat{m}_{J}(X)-m^{\star}(X)\big{)}^{2}\Big{]}\ \leqslant\ \frac{2\sigma^{2}J}{n}+\kappa(E_{J}^{\Psi}(m^{\star}))^{2}.\]
_Moreover, if \(\Gamma=\{\gamma_{1},\gamma_{2},\ldots\}\) is a non-increasing sequence and if \(m^{\star}\in\mathcal{F}(\Psi,\Gamma)=\{f\in L_{2}(\mu):E_{k}^{\Psi}(f)\leqslant\gamma_{k}\,\forall\,k\geqslant 1\}\), then for \(J_{n}:=\min\{k\geqslant 1:\,\gamma_{k}^{2}\leqslant\sigma^{2}k/n\}\), we obtain_
\[\|\widehat{m}_{J_{n}}-m^{\star}\|_{2}^{2}\ \leqslant\ (2+\kappa)\frac{\sigma^{2}J_{n}}{n}.\]
See Section S.2 of the supplement for the proof of this result. Note that Belloni et al. (2015, Theorem 4.1) established a similar result for the least squares series estimator, implying that it yields the same oracle risk under more stringent conditions imposed on the basis functions, as discussed in the introduction. The sets of functions \(\mathcal{F}(\Psi,\Gamma)\) are called _full approximation sets_ in Lorentz (1966) and Yang and Barron (1999, Section 4). If the sequence \(\Gamma\) also satisfies the condition \(0<c^{\prime}\leqslant\gamma_{2k}/\gamma_{k}\leqslant c\leqslant 1\) for all \(k\geqslant 1\), then Theorem 7 of Yang and Barron (1999) proves that the minimax rate of estimation of functions in \(\mathcal{F}(\Psi,\Gamma)\) is given by \(k_{n}/n\), where \(k_{n}\) is chosen so that \(\gamma_{k}^{2}\asymp k/n\). The upper bound in Theorem 1 matches this rate under the assumption \(c^{\prime}\leqslant\gamma_{2k}/\gamma_{k}\leqslant c\). This can be proved as follows: by definition of \(J_{n}\), \(\gamma_{J_{n}-1}^{2}\geqslant\sigma^{2}(J_{n}-1)/n\). Then using \(J_{n}-1\geqslant J_{n}/2\) and \(\gamma_{J_{n}-1}\leqslant\gamma_{J_{n}/2}\leqslant\gamma_{J_{n}}/c^{\prime}\), we get \(\gamma_{J_{n}}^{2}\geqslant(c^{\prime})^{2}\sigma^{2}J_{n}/(2n)\). Hence, \(\gamma_{J_{n}}^{2}\asymp\sigma^{2}J_{n}/n\). Therefore, Theorem 1 proves that the FW-Learner with a properly chosen \(J\) is minimax optimal over full approximation sets.
Note that Theorem 1 does not require the fundamental sequence of functions \(\Psi\) to form an orthonormal basis. This is a useful feature when considering sieve-based estimators (Shen and Wong, 1994, Example 3), partition-based estimators (Cattaneo and Farrell, 2013), random kitchen sinks (Rahimi and Recht, 2008), or neural networks (Klusowski and Barron, 2018), to name a few.
As a special case of Theorem 1 that is of particular interest for Hölder or Sobolev spaces, suppose \(\gamma_{J}\leqslant C_{m}J^{-\alpha_{m}/d}\) for some constants \(C_{m},\alpha_{m}>0\), where \(d\) is the intrinsic dimension1 of the covariates \(X\); then choosing \(J=\big{[}(n\alpha_{m}\kappa C_{m}/(d\sigma^{2}))^{d/(2\alpha_{m}+d)}\big{]}\) gives
Footnote 1: We say intrinsic dimension rather than the true dimension of the covariates because some bases can take into account the potential manifold structure of the covariates to yield better decay depending on the manifold (or intrinsic) dimension.
\[\|\widehat{m}_{J}-m^{\star}\|_{2}^{2}\leqslant C\left(\frac{\sigma^{2}}{n} \right)^{2\alpha_{m}/(2\alpha_{m}+d)}, \tag{3}\]
where \(C\) is a constant; see Section S.2 for a proof. The decay condition \(\gamma_{J}\leqslant C_{m}J^{-\alpha_{m}/d}\) is satisfied by functions in Hölder and Sobolev spaces for the classical polynomial and Fourier/trigonometric bases (DeVore and Lorentz, 1993; Belloni et al., 2015).
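For the reader's convenience, the stated choice of \(J\) arises, up to constant factors, from balancing the two terms of Theorem 1 under \(\gamma_{J}\leqslant C_{m}J^{-\alpha_{m}/d}\):

\[\frac{\sigma^{2}J}{n}\;\asymp\;\kappa C_{m}^{2}J^{-2\alpha_{m}/d}\quad\Longleftrightarrow\quad J\;\asymp\;\Big{(}\frac{n\kappa C_{m}^{2}}{\sigma^{2}}\Big{)}^{d/(2\alpha_{m}+d)},\]

and substituting this \(J\) back into either term yields \(\|\widehat{m}_{J}-m^{\star}\|_{2}^{2}\lesssim(\sigma^{2}/n)^{2\alpha_{m}/(2\alpha_{m}+d)}\), i.e. the rate in (3).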
From the discussion above, it is clear that the choice of the number of basis functions \(J\) is crucial for attaining the minimax rate. In practice, we propose the use of split-sample cross-validation to determine \(J\) (Györfi et al., 2002, Section 7.1). Our simulations presented in Section 5 show good performance of such an approach. We refer interested readers to Györfi et al. (2002, Chapter 7) and van der Vaart et al. (2006) for the theoretical properties of split-sample cross-validation. The application of these results to the FW-Learner is beyond the scope of the current paper and will be explored elsewhere.
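As an illustration, the following is a self-contained sketch of the split-sample selection of \(J\) suggested above; the candidate grid and the single train/validation split are illustrative choices, and fw_predict refers to the sketch given earlier in this section.

```python
import numpy as np

def select_J(x, y, fw_predict, candidates=(2, 4, 8, 16, 32), seed=0):
    """Choose J by one train/validation split (split-sample cross-validation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    tr, va = idx[: len(y) // 2], idx[len(y) // 2:]
    risks = [np.mean((y[va] - fw_predict(x[tr], y[tr], x[va], J)) ** 2)
             for J in candidates]
    return candidates[int(np.argmin(risks))]  # minimize validation risk
```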
### Forster-Warmuth Counterfactual Regression: The Pseudo-Outcome Approach
In many practical applications in health and social sciences, it is not unusual for an outcome to be missing on some subjects, either by design, say in two-stage sampling studies where the outcome can safely be assumed to be missing at random with known non-response mechanism, or by happenstance, in which case the outcome might be missing not at random. An example of the former type might be a study (Cornelis et al., 2009) in which one aims to develop a polygenic risk prediction model for type-2 diabetes based on stage 1 fully observed covariate data on participants including high dimensional genotype (i.e., SNPs), age, and gender, while costly manual chart review by a panel of physicians yields reliable type-2 diabetes labels on a subset of subjects with known selection probability based on stage-1 covariates. In contrast, an example of the latter type might be a household survey in Zambia (Marden et al., 2018) in which eligible household members are asked to test for HIV, yet nearly 30% decline the test and thus have missing HIV status. The concern here might be that participants who decline to test might not be a priori exchangeable with participants who agree to test for HIV with respect to key risk factors for HIV infection, even after adjusting for fully observed individual and household characteristics collected in the household survey. Any effort to build an HIV risk regression model that generalizes to the wider population of Zambia requires carefully accounting for HIV status possibly missing not at random for a non-negligible fraction of the sample.
Beyond missing data, counterfactual regression also arises in causal inference where one might be interested in the CATE, the average causal effect experienced by a subset of the population defined
in terms of observed covariates. Missing data, in this case, arises because the individual-level causal effect, defined as a difference between two potential outcomes (one for each treatment value), can never be observed. This is because under the consistency assumption (Hernán and Robins, 2010, Section 3.4) the observed outcome for subjects who actually received treatment matches their potential outcome under treatment, while their potential outcome under no treatment is missing, and vice-versa for the untreated.
A major contribution of this paper is to propose a generic construction of a so-called pseudo-outcome which, as its name suggests, replaces the unobserved outcome with a carefully constructed response variable that (i) only depends on the observed data, possibly involving high dimensional nuisance functions that can nonetheless be identified from the observed data (e.g. the propensity score), and therefore can be evaluated for all subjects in the sample; and (ii) has conditional expectation given covariates that matches the counterfactual regression of interest if, as for an oracle, the nuisance functions were known. The proposed pseudo-outcome approach applies to a large class of counterfactual regression problems, including the missing data and causal inference problems described above. The proposed approach recovers, in specific cases such as the CATE under unconfoundedness, previously proposed forms of pseudo-outcomes (Kennedy, 2020, Section 4.2), while offering new pseudo-outcome constructions in other examples (e.g., proximal CATE estimation in Section 4.2). See Section 2.3 for details on constructing pseudo-outcomes.
Before describing the explicit construction of the pseudo-outcome, we first provide a key high-level corollary (assuming that a pseudo-outcome is given) which is the theoretical backbone of our approach. Suppose \(\widetilde{O}_{i},1\leq i\leq n\) represent independent and identically distributed random vectors of unobserved data of primary interest that include fully observed covariates \(X_{i},1\leq i\leq n\) as subvectors. Let \(O_{i},1\leq i\leq n\) be the observed data, which are obtained from \(\widetilde{O}_{i},1\leq i\leq n\) through some coarsening operation. For concrete examples of \(\widetilde{O}_{i}\) and \(O_{i}\) in missing data and causal inference, see Table 1; more examples can be found in Sections 3 and 4. The quantity of interest is \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O}_{i})|X_{i}=x]\) for some known function \(\widetilde{f}(\cdot)\) operating on \(\widetilde{O}_{i}\). For example, in the context of missing data, we could be interested in \(\mathbb{E}[Y_{i}|X_{i}]\), so that \(\widetilde{f}(\widetilde{O}_{i})=\widetilde{f}(X_{i},Z_{i},Y_{i})=Y_{i}\). Because \(\widetilde{O}_{i},1\leq i\leq n\) are unobserved, \(\widetilde{f}(\widetilde{O}_{i})\) may not be fully observed. The pseudo-outcome approach that we propose involves two steps:
* **(Step A)**: Find identifying conditions such that the quantity of interest \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O}_{i})|X_{i}=x]\) can be rewritten as \(m^{\star}(x)=\mathbb{E}[f(O_{i})|X_{i}=x]\) for some (estimable) unknown function \(f(\cdot)\) applied to the observations \(O_{i}\). There may be several such \(f\) under the identifying
assumptions and the choice of \(f\) plays a crucial role in the rate of convergence of the estimator proposed; see Section 2.3 for more details on finding a "good" \(f\).
* **(Step B)**: Split \(\{1,2,\ldots,n\}\) into two (non-overlapping) parts \(\mathcal{I}_{1},\mathcal{I}_{2}\). From \(O_{i},i\in\mathcal{I}_{1}\), obtain an estimator \(\widehat{f}(\cdot)\) of \(f(\cdot)\). Now, with the fundamental sequence of functions \(\Psi\), create the data \((\bar{\phi}_{J}(X_{i}),\widehat{f}(O_{i})),i\in\mathcal{I}_{2}\) and obtain the FW-Learner (a schematic code sketch of this two-step procedure follows the list): \[\widehat{m}_{J}(x):=(1-h_{\mathcal{I}_{2}}(x))\bar{\phi}_{J}^{\top}(x)\left( \sum_{i\in\mathcal{I}_{2}}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+ \bar{\phi}_{J}(x)\bar{\phi}_{J}^{\top}(x)\right)^{-1}\sum_{i\in\mathcal{I}_{2} }\bar{\phi}_{J}(X_{i})\widehat{f}(O_{i}),\] with \[h_{\mathcal{I}_{2}}(x)=\bar{\phi}_{J}^{\top}(x)\left(\sum_{i\in\mathcal{I}_{2 }}\bar{\phi}_{J}(X_{i})\bar{\phi}_{J}^{\top}(X_{i})+\bar{\phi}_{J}(x)\bar{\phi }_{J}^{\top}(x)\right)^{-1}\bar{\phi}_{J}(x),\] defined similarly to (2).
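Schematically, and purely to illustrate the data flow, a code sketch of the two-step procedure follows; the helpers fit_pseudo_outcome (a problem-specific (Step A) construction, e.g. the AIPW form of Section 3.1) and fw_predict (as in the sketch of Section 2.1) are hypothetical stand-ins.

```python
import numpy as np

def fw_counterfactual(X, O, fit_pseudo_outcome, fw_predict, J, seed=0):
    """Two-step FW counterfactual regression: (Step A) then (Step B).

    O is a dict of aligned arrays of observed data, e.g. {"Z": ..., "R": ..., "Y": ...}.
    """
    idx = np.random.default_rng(seed).permutation(len(X))
    I1, I2 = idx[: len(X) // 2], idx[len(X) // 2:]
    # (Step A): estimate nuisances on I1; returns a map from observed data
    # to the empirical pseudo-outcome f_hat(O_i).
    f_hat = fit_pseudo_outcome({k: v[I1] for k, v in O.items()})
    pseudo = f_hat({k: v[I2] for k, v in O.items()})
    # (Step B): second-stage FW regression of the pseudo-outcomes on X over I2.
    return lambda x_new: fw_predict(X[I2], pseudo, x_new, J)
```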
| | \(\widetilde{O}_{i}\) (unobserved full data) | \(O_{i}\) (observed data) |
|---|---|---|
| Missing data | \((X_{i},Z_{i},Y_{i})\): \(Y_{i}\) is the response of interest; \(Z_{i}\) is an additional covariate vector of no scientific interest. | \((X_{i},Z_{i},R_{i},Y_{i}R_{i})\): \(R_{i}=1\) if \(Y_{i}\) is observed, and \(R_{i}=0\) if \(Y_{i}\) is unobserved. |
| Causal inference | \((X_{i},A_{i},Y_{i}^{1},Y_{i}^{0})\): \(A_{i}\) is the treatment assignment; \(Y_{i}^{1}\) and \(Y_{i}^{0}\) are the counterfactual responses had the subject been in the treatment and control group, respectively. | \((X_{i},A_{i},Y_{i})\): \(Y_{i}=A_{i}Y_{i}^{1}+(1-A_{i})Y_{i}^{0}\) is the observed response given the observed treatment \(A_{i}\). |

Table 1: Examples of unobserved full data and observed data.

The following corollary (proved in Section S.2) states the error bound of the FW-Learner \(\widehat{m}_{J}\) that holds for any pseudo-outcome \(\widehat{f}\).

**Corollary 1**.: _Let \(\sigma^{2}\) be an upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)|X,\widehat{f}]\) almost surely in \(X\), and suppose \(X\) has a density with respect to \(\mu\) that is bounded by \(\kappa\). Define \(H_{f}(x)=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}]\). Then the FW-Learner \(\widehat{m}_{J}\) satisfies_
\[\Big{(}\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}]\Big{)}^{1/2 }\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{\Psi}(m ^{\star})+\sqrt{6}\left(\mathbb{E}[(H_{f}(X)-m^{\star}(X))^{2}|\widehat{f}] \right)^{1/2}. \tag{4}\]
The first two terms of (4) represent the upper bound on the error of the FW-Learner that had access to the data \((X_{i},f(O_{i})),i\in\mathcal{I}_{2}\). The last term of (4), \(H_{f}-m^{\star}\), is the bias incurred from estimating the oracle pseudo-outcome \(f\) with the empirical pseudo-outcome \(\widehat{f}\). Here the choice of estimator of the oracle pseudo-outcome is key to rendering this bias term negligible relative to the leading two terms of equation (4). We return to this below.
If \(|\mathcal{I}_{1}|=|\mathcal{I}_{2}|=n/2\), \(m^{\star}\in\mathcal{F}(\Psi,\Gamma)\), the full approximation set discussed in Theorem 1, and we set \(J=J_{n}=\min\{k\geq 1:\,\gamma_{k}^{2}\leq\sigma^{2}k/n\}\), then Corollary 1 implies that \(\|\widehat{m}_{J}-m^{\star}\|_{2}\leq 2(1+\sqrt{\kappa})\sqrt{\sigma^{2}J_{n}/n }+\sqrt{6}\|H_{f}-m^{\star}\|_{2}.\) Because \(\sqrt{J_{n}/n}\) is the minimax rate in \(L_{2}\)-norm for functions in \(\mathcal{F}(\Psi,\Gamma)\), we conclude that the FW-Learner with pseudo-outcome \(\widehat{f}(O)\) is minimax rate optimal as long as \(\|H_{f}-m^{\star}\|_{2}=O(\sqrt{J_{n}/n})\). In such a case, we call \(\widehat{m}_{J}\) _oracle minimax_ in that it matches the minimax rate achieved by the FW-Learner that has access to \(f(\cdot)\).
**Remark 2.1** Section 3 of Kennedy (2020) provides a result similar to Corollary 1 but with a more general regression procedure \(\widehat{\mathbb{E}}_{n}(\cdot)\) in the form of a weighted linear estimator; however, the assumptions that the weights of the estimator must satisfy require case-by-case analysis, which may not be straightforward, whereas our result is tailored to the Forster-Warmuth estimator, which applies more broadly under minimal conditions. \(\diamond\)
**Remark 2.2** It is worth noting that cross-fitting, rather than simple sample splitting, can be used to improve efficiency. Specifically, by swapping the roles of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) in (Step B), we can obtain two pseudo-outcomes \(\widehat{f}_{1}(\cdot),\widehat{f}_{2}(\cdot)\), and also two FW-Learners \(\widehat{m}_{J}^{(1)}(\cdot),\widehat{m}_{J}^{(2)}(\cdot)\). Instead of using only one of \(\widehat{m}_{J}^{(j)},j=1,2\), one can consider \(\widehat{m}_{J}(x)=2^{-1}\sum_{j=1}^{2}\widehat{m}_{J}^{(j)}(x)\) and, by Jensen's inequality, we obtain
\[\|\widehat{m}_{J}-m^{\star}\|_{2}\leq\sqrt{\frac{2\sigma^{2}J}{n}}+\sqrt{2 \kappa}E_{J}^{\Psi}(m^{\star})+\sqrt{\frac{3}{2}}\Big{(}\|H_{f_{1}}-m^{\star} \|_{2}+\|H_{f_{2}}-m^{\star}\|_{2}\Big{)},\]
where \(H_{f_{j}}(x)=\mathbb{E}[\widehat{f}_{j}(O)|X=x,\widehat{f}_{j}].\) A similar guarantee also holds for the average estimator obtained by repeating the sample splitting procedure.
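In the same illustrative notation as the sketch in Section 2.2 above, the cross-fitting variant simply swaps the two folds and averages the resulting learners:

```python
import numpy as np

def crossfit_fw(X, O, fit_pseudo_outcome, fw_predict, J, seed=0):
    """Cross-fitted FW-Learner: average the two fold-swapped learners (Remark 2.2)."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = (idx[: len(X) // 2], idx[len(X) // 2:])
    learners = []
    for I1, I2 in (folds, folds[::-1]):          # (I1, I2), then roles swapped
        f_hat = fit_pseudo_outcome({k: v[I1] for k, v in O.items()})
        pseudo = f_hat({k: v[I2] for k, v in O.items()})
        learners.append(lambda x, I=I2, p=pseudo: fw_predict(X[I], p, x, J))
    return lambda x: 0.5 * (learners[0](x) + learners[1](x))
```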
### Construction of Pseudo-outcome (Step A)
For a given counterfactual regression problem, we construct the counterfactual pseudo-outcome using the efficient influence function (more precisely, the non-centered gradient) of the functional formally defined as the "marginal" instance of the nonparametric counterfactual regression model in view, under given identifying assumptions. For instance, in the missing data regression problem, our quantity of interest is \(m^{\star}(x)=\mathbb{E}[Y|X=x]\) and so the marginal functional is simply \(\psi=\mathbb{E}[Y]\), the mean outcome in the underlying target population; both conditional and marginal parameters are identified from the observed data under MAR or the shadow variable model assumptions. Likewise, in the case of the CATE, our quantity of interest is \(m^{\star}(x)=\mathbb{E}[Y^{1}-Y^{0}|X=x]\) and so the marginal functional is simply \(\psi=\mathbb{E}[Y^{1}-Y^{0}]\), the population average treatment effect, both of which are identified under unconfoundedness or the proximal causal inference assumptions. Importantly, although the nonparametric regression of interest \(m^{\star}(x)\) might not generally be pathwise-differentiable (see the definition in Section S.5 of the supplement), and therefore might not admit an influence function, under our identifying conditions and additional regularity conditions, the corresponding marginal functional \(\psi\) is a well-defined pathwise-differentiable functional that admits an influence function. Note that a nonparametric regression function that is absolutely continuous with respect to the Lebesgue measure will in general fail to be pathwise-differentiable without an additional modeling restriction (Bickel et al., 1993, Chapter 3).
Influence functions for marginal functionals \(\psi\) are in fact well-established in several semiparametric models. Furthermore, unless the model is fully nonparametric, there are infinitely many such influence functions, among which the efficient influence function has the minimum variance. For example, in the setting of missing data with \(O=(X,Z,R,YR)\), under the missing at random (MAR) assumption alone (i.e., \(R_{i}\perp Y_{i}|(X_{i},Z_{i})\)), the model is well-known to be fully nonparametric, in the sense that the assumption does not restrict the observed data tangent space, formally the closed linear span of the observed data scores of the model. The efficient influence function is given by
\[\mathrm{IF}(O;\psi):=\frac{R}{\pi^{\star}(X,Z)}Y-\left(\frac{R}{\pi^{\star}(X,Z )}-1\right)\mu^{\star}(X,Z)-\psi,\]
where \(\pi^{\star}(X,Z):=\mathbb{P}(R=1|X,Z)\) and \(\mu^{\star}(X,Z):=\mathbb{E}[Y|X,Z,R=1]\). An estimator of \(\psi\) can be obtained by solving the empirical version of the estimating equation \(\mathbb{E}[\mathrm{IF}(O;\psi)]=0\). Interestingly, this influence function also satisfies \(m^{\star}(x)=\mathbb{E}[(\mathrm{IF}(O;\psi)+\psi)|X=x]\). Because \(\mathrm{IF}(O;\psi)+\psi\) is
only a function of \(O\), it can be used as \(f(O)\) for counterfactual regression. In this setting, one can easily construct other pseudo-outcomes. Namely, \(f_{1}(O):=RY/\pi^{\star}(X,Z)\) and \(f_{2}(O):=\mu^{\star}(X,Z)\) both satisfy \(\mathbb{E}[f_{j}(O)|X=x]=m^{\star}(x)\). The oracle pseudo-outcome \((\mathrm{IF}(O;\psi)+\psi)\) is the only one of those discussed that yields a mixed bias and has the double robustness property. This is our general strategy for constructing a pseudo-outcome with a smaller "bias" \(H_{f}-m^{\star}\). Spelled out, the steps for finding a "good" pseudo-outcome for estimating \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O})|X=x]\) are:
1. Derive an influence function \(\mathrm{IF}(O;\eta^{\star},\psi)\) for the marginal functional \(\psi=\mathbb{E}[\widetilde{f}(\widetilde{O})]\). Here \(\eta^{\star}\) represents a nuisance component under a given semiparametric model for which identification of the regression curve is established. Note that, by definition of an influence function, \(\mathbb{E}[\mathrm{IF}(O;\eta^{\star},\psi)]=0\).
2. Because \(\mathrm{IF}(O;\eta^{\star},\psi)+\psi\) is only a function of \(O\) and \(\eta^{\star}\), we set \(f(O)=\mathrm{IF}(O;\eta^{\star},\psi)+\psi\). Clearly, \(\mathbb{E}[f(O)]=\psi\). Verify that \(\mathbb{E}[f(O)|X=x]=m^{\star}(x)\); this holds true in a large class of semiparametric models, see Theorem 2 below.
3. Construct \(\widehat{f}(O)=\mathrm{IF}(O;\widehat{\eta},\psi)+\psi\), an estimate of the uncentered influence function, based on the first split of the data (note that the uncentered influence function \(\mathrm{IF}(O;\eta^{\star},\psi)+\psi\) does not involve \(\psi\), as the centering cancels).
The influence functions for both the marginal outcome mean and the average treatment effect, under MAR and unconfoundedness conditions respectively, are well known: the former is given above and studied in Section 3, while the latter is given and studied in Section 4, along with their analogs under MNAR with a shadow variable and under unmeasured confounding using proxies, respectively. A more general result, which formalizes the approach for deriving a pseudo-outcome in a given counterfactual regression problem, is as follows.
**Theorem 2**.: _Suppose that the counterfactual regression function of interest \(m^{\star}(x)=\mathbb{E}[\widetilde{f}(\widetilde{O})\,|X=x]\) is identified in terms of the observed data \(O\) (distributed as \(F^{\ast}\in\mathcal{M}\)) by \(n^{\star}\left(x;\eta\right)=\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X=x \right]\) for a known function \(r\left(\cdot;\eta\right)\) in \(L^{2}\) indexed by an unknown, possibly infinite dimensional, nuisance parameter \(\eta\in\mathbb{B}\) (for a normed space \(\mathbb{B}\) with norm \(\|\cdot\|\)). Furthermore, suppose that there exists a function \(R(\cdot;\eta,n^{\star}\left(\eta\right)):O\mapsto R(O;\eta,n^{\star}\left( \eta\right))\) in \(L^{2}\) such that for any regular parametric submodel
\(F_{t}\) in \(\mathcal{M}\) with parameter \(t\in\left(-\varepsilon,\varepsilon\right)\) satisfying \(F_{0}=F^{*}\) and corresponding score \(S(\cdot)\), the following holds:_
\[\left.\frac{\partial\mathbb{E}\left[r\left(O;\eta_{t}\right)\,|\,X=x\right]}{\partial t}\right|_{t=0}=\mathbb{E}\left[R(O;\eta,n^{*}\left(\eta\right))\,S\left(O\right)\,|\,X=x\right],\]
_(we also assume that this derivative is continuous in \(t\))_
_with \(\mathbb{E}\left[R(O;\eta,n^{*}\left(\eta\right))|X\right]=0\), then_
\[\left\|\mathbb{E}\left[R(O;\eta^{\prime},n^{*}\left(\eta^{\prime}\right))+r\left(O;\eta^{\prime}\right)\,|\,X\right]-n^{*}\left(X;\eta\right)\right\|_{2}=O\left(\left\|\eta^{\prime}-\eta\right\|_{2}^{2}\right),\]
_for any \(\eta^{\prime}\in\mathbb{B}\), and_
\[R(O;\eta,n^{*}\left(\eta\right))+r\left(O;\eta\right)-\psi\left(\eta\right)\]
_is an influence function of the functional \(\psi\left(\eta\right)=\mathbb{E}\left[r\left(O;\eta\right)\right]\) under \(\mathcal{M}\)._
The proof is in Section S.3 of the supplement.
Theorem 2 formally establishes that a pseudo-outcome for a given counterfactual regression \(\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X=x\right]\) can be obtained by effectively deriving an influence function of the corresponding marginal functional \(\psi=E_{X}\{\mathbb{E}_{\eta}\left[r\left(O;\eta\right)|X\right]\}\) under a given semiparametric model \(\mathcal{M}\). The resulting influence function is given by \(R(O;\eta)+r(O;\eta)-\psi\), and the oracle pseudo-outcome may appropriately be defined as \(f(O)=R(O;\eta)+r(O;\eta).\) Theorem 2 is quite general, as it applies to the most comprehensive class of nonparametric counterfactual regressions studied to date. The result thus provides a unified solution to the problem of counterfactual regression, recovering several existing methods and, more importantly, providing a number of new results. Namely, the theorem provides a formal framework for deriving a pseudo-outcome which, by construction, is guaranteed to satisfy the so-called "Neyman orthogonality" property, i.e. that the bias incurred by estimating nuisance functions is at most of second order (Chernozhukov et al., 2017). In the following sections, we apply Theorem 2 to key problems in missing data and causal inference for which we give a precise characterization of the resulting second-order bias. The four use-cases we discuss in detail below share a common structure in that the influence function of the corresponding marginal functional is linear in the regression function of interest, and falls within the broad class of so-called mixed-bias functionals introduced by Ghassami et al. (2022).
To further demonstrate broader applicability of Theorem 2, we additionally apply our approach to problems for which the counterfactual regression curve of interest operates on a "non-linear" scale
in Appendix S.1, in the sense that the influence function for the corresponding marginal functional depends on the counterfactual regression of interest on a nonlinear scale and, as a result, might not strictly belong to the mixed-bias class. Nonetheless, as guaranteed by our theorem, the bias of the resulting pseudo-outcome is indeed of second order, albeit not of mixed-bias form. These additional applications include: the conditional quantile causal effect under unconfoundedness conditions; the CATE for generalized nonparametric regressions incorporating a possibly nonlinear link function, such as the log or logit links, to appropriately account for the restricted support of count and binary outcomes, respectively; the CATE for the treated, the compliers, and the overall population, each of which can be identified uniquely in the presence of unmeasured confounding under certain conditions by the so-called conditional Wald estimand, by carefully leveraging a binary instrumental variable (Wang and Tchetgen Tchetgen, 2018); and the nonparametric counterfactual outcome mean for a continuous treatment, under unconfoundedness and proximal causal identification conditions, respectively.
The pseudo-outcomes mentioned in Theorem 2 have several attractive statistical properties, as they naturally account for the first-stage estimation of nuisance parameters in a manner that minimizes its impact on the second-stage FW-Learner. Specifically, the proposed pseudo-outcomes have product/mixed, or second-order, bias. In some cases with two or more nuisance functions, they can also be doubly/multiply robust with respect to the estimated nuisance functions. An important class of such influence functions for \(\psi\), which includes the four examples considered in detail in the main text of the paper, is the mixed-bias class studied in Ghassami et al. (2022). Specifically, hereafter we will assume that the influence function of the marginal functional \(\psi\) corresponding to our counterfactual regressions is of the form
\[\text{IF}_{\psi}(O)=q^{\star}(O_{q})h^{\star}(O_{h})g_{1}(O)+q^{\star}(O_{q})g _{2}(O)+h^{\star}(O_{h})g_{3}(O)+g_{4}(O)-\psi, \tag{5}\]
where \(O_{q}\) and \(O_{h}\) are (not necessarily disjoint) subsets of the observed data vector \(O\), \(g_{1},g_{2},g_{3}\), and \(g_{4}\) are known functions, and \(\eta^{\star}=(h^{\star},q^{\star})\) represents nuisance functions that need to be estimated. Then, we can set the oracle pseudo-outcome function as \(f(O)=q^{\star}(O_{q})h^{\star}(O_{h})g_{1}(O)+q^{\star}(O_{q})g_{2}(O)+h^{ \star}(O_{h})g_{3}(O)+g_{4}(O)\), and the empirical pseudo-outcome as \(\widehat{f}(O)=\widehat{q}(O_{q})\widehat{h}(O_{h})g_{1}(O)+\widehat{q}(O_{ q})g_{2}(O)+\widehat{h}(O_{h})g_{3}(O)+g_{4}(O)\), where \(\widehat{h},\widehat{q}\) are estimators of the nuisance functions \(h^{\star}\) and \(q^{\star}\) using any nonparametric method; see Appendix S.4 for some nonparametric estimators that can adapt to the low-dimensional structure of \(\eta^{\star}\) when it is a conditional expectation. Using an argument similar to the proof of Theorem 2 of Ghassami et al. (2022), it can be shown that, conditional on the training sample used to estimate the nuisance functions \(h^{\star}\) and \(q^{\star}\) with \(\widehat{h}\) and \(\widehat{q}\), the bias term \(H_{f}-m^{\star}\) above is equal to
\[\mathbb{E}\big{\{}g_{1}(O)(q^{\star}-\widehat{q})(O_{q})(h^{\star}- \widehat{h})(O_{h})|X,\widehat{q},\widehat{h}\big{\}}, \tag{6}\]
and therefore the bias term is of second order with product form. The proof is in Section S.5 of the supplement. The following sections elaborate on these results in the four specific applications of interest.
## 3 FW-Learner for Missing Outcome
In this section, we suppose that a typical observation is given by \(O=(YR,R,X,Z)\), where \(R\) is a nonresponse indicator with \(R=1\) if \(Y\) is observed and \(R=0\) otherwise. Here \(Z\) is a vector of fully observed covariates that are not directly of scientific interest but may help account for selection bias induced by the missingness mechanism. Specifically, Section 3.1 considers the MAR setting where the missingness mechanism is assumed to be completely accounted for by conditioning on the observed covariates \((X,Z)\)4, while Section 3.2 relaxes this assumption, allowing for outcome data missing not at random (MNAR), leveraging a shadow variable for identification.
Footnote 4: In the special case where assumption (**MAR**) holds upon conditioning on \(X\) only, complete-case estimation of \(m^{\star}\) is known to be minimax rate optimal (Efromovich, 2011, 2014; Müller and Schick, 2017).
### FW-Learner under MAR
Here, we make the MAR assumption that \(Y\) and \(R\) are conditionally independent given \((X,Z)\), and we aim to estimate the conditional mean of \(Y\) given \(X\), which we denote \(m^{\star}(x):=\mathbb{E}[Y\mid X=x]\).
**(MAR)**: \(O_{i}=(X_{i},Z_{i},R_{i},Y_{i}R_{i}),1\leq i\leq n\) are independent and identically distributed random vectors satisfying \(R_{i}\perp Y_{i}\mid(X_{i},Z_{i})\).
Under the missing at random assumption (**MAR**), the well-known efficient influence function leading to the augmented inverse probability weighted (AIPW) estimator for the marginal functional \(\psi=\mathbb{E}[Y]\) was given in Section 2.3; see e.g. Robins et al. (1994). Following (Step B), we now define the empirical pseudo-outcome as follows. Split \(\{1,2,\ldots,n\}\) into two parts: \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Use the first split to estimate the nuisance
functions based on data \(\{(Y_{i}R_{i},R_{i},X_{i},Z_{i}),i\in\mathcal{I}_{1}\}\), denoted as \(\widehat{\pi}\) and \(\widehat{\mu}\). Use the second split and define the empirical pseudo-outcome
\[\widehat{f}(O)=\widehat{f}(YR,R,X,Z) :=\frac{R}{\widehat{\pi}(X,Z)}(YR)-\left(\frac{R}{\widehat{\pi}(X,Z)}-1\right)\widehat{\mu}(X,Z), \tag{7}\] \[=\frac{R}{\widehat{\pi}(X,Z)}Y-\left(\frac{R}{\widehat{\pi}(X,Z) }-1\right)\widehat{\mu}(X,Z),\]
Note that this corresponds to a member of the mixed-bias (doubly robust) class of influence functions (5) with \(h^{\star}(O_{h})=1/\pi^{\star}(X,Z)\), \(q^{\star}(O_{q})=\mu^{\star}(X,Z)\), \(g_{1}=-R\), \(g_{2}=1\), \(g_{3}=RY\), and \(g_{4}=0\). Recall \(\pi^{\star}(X,Z)=\mathbb{P}(R=1|X,Z)\) and \(\mu^{\star}(X,Z)=\mathbb{E}[Y|X,Z]\).
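As an illustration of (Step A) in this setting, the following sketch fits the two nuisance functions with generic machine learning (random forests here, purely for concreteness) on the first fold and returns the map (7); the truncation of \(\widehat{\pi}\) away from zero is a common practical device assumed here, rather than part of the theory.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def fit_aipw_pseudo_outcome(X1, Z1, R1, Y1, eps=0.01):
    """Fit pi_hat and mu_hat on fold I_1; return the AIPW pseudo-outcome map (7)."""
    XZ1 = np.column_stack([X1, Z1])
    pi_model = RandomForestClassifier(n_estimators=500).fit(XZ1, R1)
    mu_model = RandomForestRegressor(n_estimators=500).fit(
        XZ1[R1 == 1], Y1[R1 == 1])               # complete-case regression of Y on (X, Z)

    def f_hat(X, Z, R, Y):
        XZ = np.column_stack([X, Z])
        pi = np.clip(pi_model.predict_proba(XZ)[:, 1], eps, 1.0)  # keep 1/pi bounded
        mu = mu_model.predict(XZ)
        ry = np.where(R == 1, Y, 0.0)            # only YR is observed when R = 0
        return R / pi * ry - (R / pi - 1.0) * mu  # equation (7)
    return f_hat
```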
Let \(\widehat{m}_{J}(\cdot)\) represent the FW-Learner computed from the dataset \(\{(\bar{\phi}_{J}(X_{i}),\widehat{f}(O_{i})),i\in\mathcal{I}_{2}\}\) as in (Step B); Corollary 1 then guarantees the following result:
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}\leq \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{\Psi}(m^{ \star})+\sqrt{6}(\mathbb{E}[(H_{f}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}, \tag{8}\]
where \(\sigma^{2}\) is an upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)\mid X,\widehat{f}]\) and \(H_{f}(x):=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}].\) The following lemma states the mixed bias structure of \(H_{f}-m^{\star}\).
**Lemma 1**.: _With (7) as the empirical pseudo-outcome, under (\(\mathtt{MAR}\)), we have_
\[H_{f}(x)-m^{\star}(x)=\mathbb{E}\bigg{\{}R\left(\frac{1}{\widehat{\pi}(X,Z)}- \frac{1}{\pi^{\star}(X,Z)}\right)\left(\mu^{\star}(X,Z)-\widehat{\mu}(X,Z) \right)\bigg{|}\,X=x,\widehat{\pi},\widehat{\mu}\bigg{\}}.\]
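For intuition, the identity can be verified directly: conditioning first on \((X,Z)\) (and on \(\widehat{\pi},\widehat{\mu}\)), and using \(\mathbb{E}[RY|X,Z]=\pi^{\star}\mu^{\star}\) (which holds under (**MAR**)) and \(\mathbb{E}[R|X,Z]=\pi^{\star}\),

\[\mathbb{E}\big{[}\widehat{f}(O)-f(O)\,\big{|}\,X,Z,\widehat{\pi},\widehat{\mu}\big{]}=\pi^{\star}\mu^{\star}\Big{(}\frac{1}{\widehat{\pi}}-\frac{1}{\pi^{\star}}\Big{)}-\Big{(}\frac{\pi^{\star}}{\widehat{\pi}}-1\Big{)}\widehat{\mu}=\pi^{\star}\Big{(}\frac{1}{\widehat{\pi}}-\frac{1}{\pi^{\star}}\Big{)}\big{(}\mu^{\star}-\widehat{\mu}\big{)},\]

and a further conditional expectation given \(X=x\) recovers the display above; in particular, the bias vanishes if either nuisance function is estimated consistently, the familiar double robustness property.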
This result directly follows from the mixed bias form (6) in the general class studied by Ghassami et al. (2022); also see Rotnitzky et al. (2021) and Robins et al. (2008); for completeness, we provide a direct proof in Section S.6.2 of the supplement. Lemma 1 combined with (8) gives the following error bound for the FW-Learner computed with pseudo-outcome (7).
**Theorem 3**.: _Let \(\sigma^{2}\) denote an almost sure upper bound on \(\mathbb{E}[\widehat{f}^{2}(O)|X,\widehat{\pi},\widehat{\mu}]\). Then, under (\(\mathtt{MAR}\)),_
the FW-Learner \(\widehat{m}_{J}(x)\) satisfies_
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{1/2}\leqslant \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{ \Psi}(m^{\star}) \tag{9}\] \[+\sqrt{6}\mathbb{E}^{1/4}\left[\left(\frac{\pi^{\star}(X,Z)}{ \widehat{\pi}(X,Z)}-1\right)^{4}|\widehat{\pi}\right]\mathbb{E}^{1/4}[(\mu^{ \star}(X,Z)-\widehat{\mu}(X,Z))^{4}|\widehat{\mu}]\] \[\leqslant\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2 \kappa}E_{J}^{\Psi}(m^{\star})+\sqrt{6}\mathbb{E}^{1/4}\Big{[}\big{(}\frac{1}{ \widehat{\pi}}-\frac{1}{\pi^{\star}}\big{)}^{4}(X,Z)\big{|}\widehat{\pi}\Big{]} \ \mathbb{E}^{1/4}\big{[}(\mu^{\star}-\widehat{\mu})^{4}(X,Z)\big{|}\widehat{\mu} \Big{]}.\]
The proof of this result is in Section S.6.2 of the supplement. Note that, because \(\widehat{f}(O)\) involves \(\widehat{\pi}\) in the denominator, the condition that \(\sigma^{2}\) is finite requires \(\widehat{\mu}\) and \(1/\widehat{\pi}\) to be bounded.
**Corollary 2**.: _Let \(d\) denote the intrinsic dimension of \((X,Z)\). If_
1. _The propensity score_ \(\pi^{\star}(x,z)\) _is estimated at an_ \(n^{-\alpha_{\pi}/(2\alpha_{\pi}+d)}\) _rate in the_ \(L_{4}\)_-norm,_
2. _The regression function_ \(\mu^{\star}(x,z)\) _is estimated at an_ \(n^{-\alpha_{\mu}/(2\alpha_{\mu}+d)}\) _rate in the_ \(L_{4}\)_-norm, and_
3. _The conditional mean function_ \(m^{\star}(\cdot)\) _with respect to the fundamental sequence_ \(\Psi\) _satisfies_ \(E_{J}^{\Psi}(m^{\star})\leqslant CJ^{-\alpha_{m}/d}\) _for some constant_ \(C\)_,_
_then_
\[\Big{(}\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{ \pi},\widehat{\mu}]\Big{)}^{1/2}\lesssim\sqrt{\frac{\sigma^{2}J}{n}}+J^{- \alpha_{m}/d}+n^{-\frac{\alpha_{\pi}}{2\alpha_{\pi}+d}-\frac{\alpha_{\mu}}{2 \alpha_{\mu}+d}}. \tag{10}\]
When the last term of (10) is smaller than the oracle rate \(n^{-\frac{\alpha_{m}}{2\alpha_{m}+d}}\), the oracle minimax rate can be attained by balancing the first two terms. Therefore, the FW-Learner is oracle efficient if \(\alpha_{\mu}\alpha_{\pi}\geqslant d^{2}/4-(\alpha_{\pi}+\frac{d}{2})(\alpha_{ \mu}+\frac{d}{2})/(1+\frac{2\alpha_{m}}{d})\). In the special case when \(\alpha_{\mu}\) and \(\alpha_{\pi}\) are equal, let \(s=\alpha_{\mu}/d=\alpha_{\pi}/d\) and \(\gamma=\alpha_{m}/d\) denote the effective smoothness; then, when \(s\geqslant\frac{\alpha_{m}/2}{\alpha_{m}+d}=\frac{\gamma/2}{\gamma+1}\), the last term in (9), i.e. the bias term coming from the pseudo-outcome, is smaller than the oracle minimax rate of estimation \(n^{-\alpha_{m}/(2\alpha_{m}+d)}\), and the FW-Learner is oracle efficient.
### FW-Learner under MNAR: shadow variables
In the previous section, we constructed an FW-Learner for a nonparametric mean regression function under MAR. The MAR assumption may be violated in practice, for instance if there are unmeasured
factors that are predictive of both the outcome and nonresponse, in which case outcome data are said to be missing not at random and the regression may generally not be identified from the observed data alone. In this section, we continue to consider the goal of estimating a nonparametric regression function, however allowing for outcome data to be missing not at random, by leveraging a so-called shadow variable for identification (Miao et al., 2023). In contrast to the MAR setting, the observed data we consider here are \(O_{i}=(X_{i},W_{i},R_{i},Y_{i}R_{i}),1\leqslant i\leqslant n\), where \(W_{i}\) is the shadow variable allowing identification of the conditional mean. Specifically, a shadow variable is a fully observed variable that is (i) associated with the outcome given fully observed covariates and (ii) independent of the missingness process conditional on fully observed covariates and the possibly unobserved outcome variable. Formally, a shadow variable \(W\) has to satisfy the following assumption.
**(SV)**: \(W\perp R\;\big{|}\;(X,Y)\) and \(W\not\perp Y\;\big{|}\;X\).
This assumption formalizes the idea that the missingness process may depend on \((X,Y)\), but not on the shadow variable \(W\) after conditioning on \((X,Y)\), and therefore allows for missingness not at random.5 Under this condition, it holds (from Bayes' rule) that
Footnote 5: The assumption can be generalized somewhat, by further conditioning on fully observed covariates \(Z\) in addition to \(X\) and \(Y\) in the shadow variable conditional independence statement, as well as in the following identifying assumptions.
\[\mathbb{E}\Big{\{}\frac{1}{\mathbb{P}(R=1|X,Y)}\;\Big{|}\;R=1,X,W\Big{\}}= \frac{1}{\mathbb{P}(R=1|X,W)}. \tag{11}\]
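For completeness, a short verification of (11): by Bayes' rule, \(p(y\,|\,R=1,X,W)=\mathbb{P}(R=1|X,y,W)\,p(y\,|\,X,W)/\mathbb{P}(R=1|X,W)\), and \(\mathbb{P}(R=1|X,y,W)=\mathbb{P}(R=1|X,y)\) by **(SV)**, so that

\[\mathbb{E}\Big{\{}\frac{1}{\mathbb{P}(R=1|X,Y)}\,\Big{|}\,R=1,X,W\Big{\}}=\int\frac{p(y\,|\,R=1,X,W)}{\mathbb{P}(R=1|X,y)}\,dy=\int\frac{p(y\,|\,X,W)}{\mathbb{P}(R=1|X,W)}\,dy=\frac{1}{\mathbb{P}(R=1|X,W)}.\]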
Let \(e^{\star}(X,Y):=\mathbb{P}[R=1|X,Y]\) denote the _extended_ propensity score, which, consistent with MNAR, will generally depend on \(Y\). Likewise, let \(\pi^{\star}(X,W):=\mathbb{P}[R=1|X,W]\). Clearly \(e^{\star}(X,Y)\) cannot be estimated via standard regression of \(R\) on \((X,Y)\), given that \(Y\) is not directly observed for units with \(R=0\). Identification of the extended propensity score follows from the following completeness condition (Miao et al. (2023), Tchetgen Tchetgen et al. (2023)): define the map \(D:L_{2}\to L_{2}\) by \([Dg](x,w)=\mathbb{E}\big{\{}g(X,Y)|R=1,X=x,W=w\big{\}}\).
**(CC)**: \([Dg](X,W)=0\) almost surely if and only if \(g(X,Y)=0\) almost surely.
Given a valid shadow variable, suppose also that there exists a so-called outcome bridge function satisfying the following condition (Li et al. (2021), Tchetgen Tchetgen et al. (2023)).
**(BF)**: There exists a function \(\eta^{\star}(x,w)\) that satisfies the integral equation
\[y=\mathbb{E}\{\eta^{\star}(X,W)|Y=y,X=x,R=1\}. \tag{12}\]
The assumption may be viewed as a nonparametric measurement error model, whereby the shadow variable \(W\) can be viewed as an error-prone proxy or surrogate measurement of \(Y\), in the sense that there exists a transformation (possibly nonlinear) of \(W\) which is conditionally unbiased for \(Y\). In fact, the classical measurement error model, which posits \(W=Y+\epsilon\) where \(\epsilon\) is a mean zero independent error, clearly satisfies the assumption with \(\eta^{\star}\) given by the identity map. Li et al. (2021) formally established that the existence of a bridge function satisfying the above condition is a necessary condition for pathwise differentiability of the marginal mean \(\mathbb{E}(Y)\) under the shadow variable model, and therefore a necessary condition for the existence of a root-n estimator of the marginal mean functional in the shadow variable model. From our viewpoint, the assumption is sufficient for the existence of a pseudo-outcome with second order bias.
Let \(\widehat{e}(\cdot)\) denote a consistent estimator of \(e^{\star}(\cdot)\) that solves an empirical version of its identifying equation (11). Similarly, let \(\widehat{\eta}(\cdot)\) be an estimator for \(\eta^{\star}(\cdot)\) that solves an empirical version of the integral equation (12); see e.g. Ghassami et al. (2022), Li et al. (2021) and Tchetgen Tchetgen et al. (2023). Following the pseudo-outcome construction of Section 2.2, the proposed shadow variable oracle pseudo-outcome follows from the (uncentered) locally efficient influence function of the marginal outcome mean \(\mathbb{E}(Y)\) under the shadow variable model, given by \(f(O)=RY/e^{\star}(X,Y)-\big{(}R/e^{\star}(X,Y)-1\big{)}\eta^{\star}(X,W)\); see Li et al. (2021), Ghassami et al. (2022), and Tchetgen Tchetgen et al. (2023). It is easily verified that \(\mathbb{E}[f(O)|X=x]=m^{\star}(x)\) under **(SV)**, **(CC)**, and **(BF)**. Note that this pseudo-outcome is a member of the mixed-bias class of influence functions (5) with \(h^{\star}=1/e^{\star}\), \(q^{\star}=\eta^{\star},g_{1}=-R,g_{2}=1,g_{3}=RY\) and \(g_{4}=0\). The corresponding empirical pseudo-outcome is given by
\[\widehat{f}(O)=\frac{R}{\widehat{e}(X,Y)}Y-\left(\frac{R}{\widehat{e}(X,Y)}- 1\right)\widehat{\eta}(X,W), \tag{13}\]
with \(\widehat{e}(\cdot,\cdot)\) and \(\widehat{\eta}(\cdot,\cdot)\) obtained from the first split of the data.
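To give a sense of how these nuisance functions may be obtained in practice, the following is a sketch of a ridge-regularized minimum-distance sieve estimate of the bridge function solving (12) on the first fold, together with the pseudo-outcome (13). The bases psi and chi, the ridge penalty, and the availability of an estimate e_hat of the extended propensity score (obtained analogously from (11)) are all illustrative assumptions of this sketch, not a prescription.

```python
import numpy as np

def fit_bridge(X1, W1, Y1, R1, psi, chi, ridge=1e-4):
    """Sieve solve of (12): E[ chi(X, Y) (Y - psi(X, W)' b) | R = 1 ] = 0."""
    cc = R1 == 1                                # complete cases only
    Psi = psi(X1[cc], W1[cc])                   # (n1, J) basis for eta(x, w)
    Chi = chi(X1[cc], Y1[cc])                   # (n1, K) test functions of (x, y)
    G = Chi.T @ Psi / cc.sum()                  # empirical cross-moment matrix
    M = Chi.T @ Y1[cc] / cc.sum()
    b = np.linalg.solve(G.T @ G + ridge * np.eye(G.shape[1]), G.T @ M)
    return lambda x, w: psi(x, w) @ b           # eta_hat(x, w)

def shadow_pseudo_outcome(X, W, Y, R, e_hat, eta_hat, eps=0.01):
    """Empirical pseudo-outcome (13)."""
    ry = np.where(R == 1, Y, 0.0)               # only YR is observed
    # For R = 0 the terms involving e reduce to constants, so the dummy 0 in ry
    # is harmless; e is only effectively evaluated on complete cases.
    e = np.clip(e_hat(X, ry), eps, 1.0)
    return R / e * ry - (R / e - 1.0) * eta_hat(X, W)
```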
Following (Step B), we obtain the FW-Learner \(\widehat{m}_{J}(X)\). In practice, similar to Algorithm 1, cross-validation may be used to tune the truncation parameter \(J\). Set \(H_{f}(x)=\mathbb{E}[\widehat{f}(O)|X=x,\widehat{f}]\). The following lemma gives the form of the mixed-bias for \(\widehat{f}(\cdot)\).
**Lemma 2**.: _Under_ **(SV)**_,_ **(CC)**_,_ **(BF)**_, the pseudo-outcome_ (13) _satisfies_
\[H_{f}(x)-m^{\star}(x)\ =\ \mathbb{E}\bigg{\{}R\bigg{(}\frac{1}{\widehat{e}(X,Y)} -\frac{1}{e^{\star}(X,Y)}\bigg{)}(\eta^{\star}-\widehat{\eta})(X,W)\;\Big{|} \;X=x,\widehat{e},\widehat{\eta}\bigg{\}}.\]
This result directly follows from the mixed bias form (6) in the general class studied by Ghassami et al. (2022) in the shadow variable nonparametric regression setting. The proof is in Section S.6.3 of the supplement. Plugging this into Corollary 1 leads to the error rate of the FW-Learner \(\widehat{m}_{J}(x)\).
**Theorem 4**.: _Under the same notation as Theorem 3, and under (**SV**), (**CC**), (**BF**), the FW-Learner \(\widehat{m}_{J}(x)\) satisfies_
\[(\mathbb{E}[(\widehat{m}_{J}(X)-m^{\star}(X))^{2}|\widehat{f}])^{ 1/2}\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2\kappa}E_{J}^{ \Psi}(m^{\star}) \tag{14}\] \[+\sqrt{6}\min\Bigl{\{}\left\|\frac{1}{\widehat{e}(X,Y)}-\frac{1}{ e^{\star}(X,Y)}\right\|_{4}\ \left\|\mathbb{E}\bigl{[}\bigl{(}\eta^{\star}-\widehat{\eta}\bigr{)}(X,W)\ \bigr{|}\ X,Y\bigr{]}\right\|_{4},\] \[\left\|\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e^{ \star}(X,Y)}\ \big{|}\ X,W\bigr{]}\right\|_{4}\ \left\|\bigl{(}\eta^{\star}- \widehat{\eta}\bigr{)}(X,W)\right\|_{4}\Bigr{\}}.\]
The proof of this result is in Section S.6.3 of the supplement. Note that \(\sigma^{2}\) is finite when \(\widehat{\eta}\) and \(1/\widehat{e}\) are bounded. Theorem 4 demonstrates that the FW-Learner performs nearly as well as the oracle learner, with a slack of the order of the mixed bias of the estimated nuisance functions used to construct the pseudo-outcome. Unlike the MAR case, the nuisance functions under the shadow variable assumption are not just regression functions, and hence the rate of estimation of these nuisance components is not obvious. In what follows, we provide a brief discussion of estimating these nuisance components. Focusing on the outcome bridge function which solves equation (12), this equation is a so-called Fredholm integral equation of the first kind, which is well known to be ill-posed (Kress et al., 1989). Informally, ill-posedness essentially measures the extent to which the conditional expectation defining the kernel of the integral equation \(\eta\mapsto\mathbb{E}\left[\eta\left(X_{i},W_{i}\right)\big{|}\ X_{i}=x,Y_{i}=y\right]\) smooths out \(\eta\). Let \(L_{2}(X)\) denote the class of functions \(\{f:\mathbb{E}_{X}[f^{2}(X)]<\infty\}\), and define the operator \(T:L_{2}(X,W)\to L_{2}(X,Y)\) as the conditional expectation operator given by
\[[T\eta](x,y):=\mathbb{E}\left[\eta\left(X_{i},W_{i}\right)\big{|}\ X_{i}=x,Y_{i }=y\right].\]
Let \(\Psi_{J}:=\operatorname{clsp}\left\{\psi_{J1},\ldots,\psi_{JJ}\right\}\subset L _{2}(X,W)\) denote a sieve spanning the space of functions of variables \(X,W\). One may then define a corresponding sieve \(L_{2}\) measure of ill-posedness coefficient as in Blundell et al. (2007) as \(\tau_{\eta}:=\sup_{\eta\in\Psi_{J}:\eta\neq 0}\|\eta\|_{L_{2}(X,W)}/\|T\eta\|_{L_{ 2}(X,Y)}\).
**Definition 1** (Measure of ill-posedness).: _Following Blundell et al. (2007), the integral equation (12) with \((X_{i},W_{i})\) of dimension \((d_{x}+d_{w})\) is said to be
1. _mildly ill-posed if_ \(\tau_{\eta}=O\left(J^{\varsigma_{\eta}/(d_{x}+d_{w})}\right)\) _for some_ \(\varsigma_{\eta}>0\)_;_
2. _severely ill-posed if_ \(\tau_{\eta}=O\left(\exp\left(\frac{1}{2}J^{\varsigma_{\eta}/(d_{x}+d_{w})} \right)\right)\) _for some_ \(\varsigma_{\eta}>0\)_._
Under the condition that the integral equation (12) is mildly ill-posed and that \(\eta^{\star}\) is \(\alpha_{\eta}\)-Hölder smooth, Chen and Christensen (2018) established that the optimal rate for estimating \(\eta^{\star}\) under the sup norm is \((n/\log n)^{-\alpha_{\eta}/(2(\alpha_{\eta}+\varsigma_{\eta})+d_{x}+d_{w})}\); see Lemma 5 in the supplement for details. Likewise, the integral equation (11) is also a Fredholm integral equation of the first kind, with its kernel given by the conditional expectation operator \([T^{\prime}e](x,w):=\mathbb{E}\left[e(X_{i},Y_{i})\mid X_{i}=x,W_{i}=w\right]\) for any function \(e\in L_{2}(X,Y)\), where \(T^{\prime}\) is the adjoint operator of \(T\). Let \(\Psi_{J}^{\prime}:=\operatorname{clsp}\left\{\psi_{J1}^{\prime},\ldots,\psi_ {JJ}^{\prime}\right\}\subset L_{2}(X,Y)\) denote a (different) sieve spanning the space of functions of the variables \(X,Y\). Its corresponding sieve \(L_{2}\) measure of ill-posedness may be defined as \(\tau_{e}=\sup_{e\in\Psi_{J}^{\prime}:e\neq 0}\|e\|_{L_{2}(X,Y)}/\|T^{\prime}e\|_{L_{2}(X,W)}.\) Thus, in the mildly ill-posed case \(\tau_{e}=O\left(J^{\varsigma_{e}/(d_{x}+1)}\right)\) for some \(\varsigma_{e}>0\), the optimal rate with respect to the sup norm for estimating \(e^{\star}\) is \((n/\log n)^{-\alpha_{e}/(2(\alpha_{e}+\varsigma_{e})+d_{x}+1)}\) when \(e^{\star}\) is \(\alpha_{e}\)-smooth and bounded.
Together with (14), this leads to the following characterization of the error of the FW-Learner \(\widehat{m}_{J}(X)\) if \(E_{J}^{\Psi}(m^{\star})\lesssim J^{-\alpha_{m}/d_{x}}\). Without loss of generality, suppose that
\[\min\Bigl{\{} \Bigl{\|}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e^{\star}(X,Y)} \Bigr{\|}_{4}\ \Bigl{\|}\mathbb{E}\bigl{[}(\eta^{\star}-\widehat{\eta})(X,W)\bigm{|}X,Y \bigr{]}\Bigr{\|}_{4}, \tag{15}\] \[\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\ \Bigl{\|}\bigl{(}\eta^{\star}-\widehat{\eta} \bigr{)}(X,W)\Bigr{\|}_{4}\Bigr{\}}\] \[=\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\ \Bigl{\|}\bigl{(}\eta^{\star}-\widehat{\eta} \bigr{)}(X,W)\Bigr{\|}_{4},\]
and suppose that \(\pi^{\star}\) is \(\alpha_{\pi}\)-Holder smooth, such that
\[\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}-\frac{1}{e ^{\star}(X,Y)}\bigm{|}X,W\bigr{]}\Bigr{\|}_{4}\] \[=\Bigl{\|}\mathbb{E}\bigl{[}\frac{1}{\widehat{e}(X,Y)}\bigm{|}X,W \bigr{]}-\frac{1}{\pi^{\star}(X,W)}\Bigr{\|}_{4}\]
is of the order of \(n^{-\alpha_{\pi}/(2\alpha_{\pi}+d_{x}+d_{w})}\), the minimax rate of estimation of the regression function \(\pi^{\star}\).
**Corollary 3**.: _Under the conditions in Lemma 5 in the supplement, assume that the linear operator \(T\) is mildly ill-posed with exponent \(\varsigma_{\eta}\). If \(m^{\star}\) satisfies \(E_{J}^{\Psi}(m^{\star})\lesssim J^{-\alpha_{m}/d_{x}}\), \(\pi^{\star}\) is \(\alpha_{\pi}\)-Holder smooth, \(\eta^{\star}\) is \(\alpha_{\eta}\)-Holder smooth, and equation_ (15) _holds, then the FW-Learner's estimation error satisfies_
\[\left\|\widehat{m}_{J}(X)-m^{\star}(X)\right\|_{2}\lesssim\sqrt{\frac{\sigma^{2}J }{n}}+J^{-\alpha_{m}/d_{x}}+(n/\log n)^{-\alpha_{\eta}/(2(\alpha_{\eta}+\varsigma _{\eta})+d_{x}+d_{w})}n^{-\alpha_{\pi}/(2\alpha_{\pi}+d_{x}+d_{w})}. \tag{16}\]
**Remark 3.1** A few remarks on Corollary 3: (1) If the mixed bias term incurred in estimating the nuisance functions is negligible relative to the first two terms in (16), then the order of the error of the FW-Learner matches that of the oracle with access to the missing data; (2) In settings where one of the operators \(T_{\eta},T_{e}\), say \(T_{\eta}\), is severely ill-posed, i.e. where \(\tau_{\eta}=O\left(\exp\left(\frac{1}{2}J^{\varsigma_{\eta}/(d_{x}+d_{w})} \right)\right)\) for some \(\varsigma_{\eta}>0\), Theorem 3.2 of Chen and Christensen (2018) established that the optimal rate of estimating \(\eta^{\star}\) with respect to the sup norm is of the order \((\log n)^{-\alpha_{\eta}/\varsigma_{\eta}}\), which would likely dominate the error \(\left\|\widehat{m}_{J}-m^{\star}\right\|_{2}\). In this case, the FW-Learner may not be able to attain the oracle rate; whether the oracle rate is at all attainable in such settings remains an open problem in the literature. \(\diamond\)
## 4 FW-Learner of the CATE
Estimating the conditional average treatment effect (CATE) plays an important role in health and social sciences where one might be interested in tailoring treatment decisions based on the person's characteristics, a task that requires learning whether and the extent to which the person may benefit from treatment; e.g. personalized treatment in precision medicine (Ashley, 2016).
Suppose that we have observed i.i.d. data \(O_{i}=(X_{i},A_{i},Y_{i}),1\leq i\leq n\), with \(A_{i}\) representing the binary treatment assignment, \(Y_{i}\) the observed response, and \(X_{i}\) covariates. The CATE is formally defined as \(m^{\star}(x)=\mathbb{E}\left(Y^{1}-Y^{0}|X=x\right)\), where \(Y^{a}\) is the potential or counterfactual outcome had, possibly contrary to fact, the person taken treatment \(a\). The well-known challenge of causal inference is that one can at most observe the potential outcome for the treatment the person actually took; therefore, the counterfactual regression defining the CATE is in general not identified outside of a randomized experiment with perfect compliance, without additional assumptions. The next section describes the identification and the FW-Learner of the CATE under standard unconfoundedness conditions, while the following Section 4.2 presents analogous results for the proximal causal inference setting, which does not make the unconfoundedness assumption. Throughout, we make the assumption of consistency, that \(Y=AY^{1}+(1-A)Y^{0}\), and positivity, that \(\mathbb{P}(A=a|X,U)>0\) almost surely for all \(a\), where \(U\) denotes unmeasured confounders and is therefore empty under unconfoundedness.
### FW-Learner for CATE under Ignorability
In this section, we make the additional assumption of unconfoundedness, so that the treatment mechanism is ignorable.
**No unmeasured confounding Assumption:** \((Y^{0},Y^{1})\perp A|X\). Under this condition, the CATE is nonparametrically identified by \(\tau^{\star}(x)=\mu^{\star}_{1}(x)-\mu^{\star}_{0}(x)\), where for \(a\in\{0,1\}\),
\[\mu^{\star}_{a}(x):=\mathbb{E}[Y|X=x,A=a];\]
Let \(\pi^{\star}(x):=\mathbb{P}(A=1|X=x)\). We will now define the Forster-Warmuth estimator for CATE. Split \(\{1,2,\ldots,n\}\) into two parts \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Based on \((X_{i},A_{i},Y_{i}),i\in\mathcal{I}_{1}\), estimate \(\pi^{\star},\mu^{\star}_{0},\mu^{\star}_{1}\) with \(\widehat{\pi},\widehat{\mu}_{0},\widehat{\mu}_{1}\), respectively. For \(i\in\mathcal{I}_{2}\), define the pseudo-outcome
\[\widehat{I}_{1}(X_{i},A_{i},Y_{i})=\frac{A_{i}-\widehat{\pi}(X_{i})}{\widehat {\pi}(X_{i})(1-\widehat{\pi}(X_{i}))}(Y_{i}-\widehat{\mu}_{A_{i}}(X_{i}))+ \widehat{\mu}_{1}(X_{i})-\widehat{\mu}_{0}(X_{i}),\]
which is an estimator of the well-known (uncentered) efficient influence function of the marginal average treatment effect \(\mathbb{E}(Y^{1}-Y^{0})\), evaluated at preliminary estimates of the nuisance functions, and is in our general mixed-bias class of influence functions given by (5) with \(h_{0}(O_{h})=\mu^{\star}_{A}(X),q_{0}(O_{q})=1/\pi^{\star}(X),g_{1}(O)=-\mathbb{1}\{A=a\},g_{2}(O)=\mathbb{1}\{A=a\}Y,g_{3}(O)=1\) and \(g_{4}(O)=0\). Write
\[H_{I_{1}}(x)=\mathbb{E}\Big{[}\widehat{I}_{1}(X,A,Y)|X=x\Big{]}.\]
We first provide a characterization of the conditional bias of the pseudo-outcome in the following lemma.
**Lemma 3**.: _The conditional bias of the pseudo-outcome \(\widehat{I}_{1}(X_{i},A_{i},Y_{i})\) is given by_
\[H_{I_{1}}(x)-\tau^{\star}(x) =\pi^{\star}(x)\Big{(}\frac{1}{\widehat{\pi}(x)}-\frac{1}{\pi^{ \star}(x)}\Big{)}\big{(}\widehat{\mu}_{1}(x)-\mu^{\star}_{1}(x)\big{)}\] \[\quad-(1-\pi^{\star}(x))\Big{(}\frac{1}{1-\widehat{\pi}(x)}- \frac{1}{1-\pi^{\star}(x)}\Big{)}\big{(}\widehat{\mu}_{0}(x)-\mu^{\star}_{0}( x)\big{)}.\]
This result follows directly from the mixed bias form (6) and recovers a well-known result in the literature, originally due to Robins and colleagues; see also Kennedy (2020). For convenience, the proof is reproduced in Section S.7.2 of the supplement. Let \(\widehat{\tau}_{J}(x)\) be the Forster-Warmuth estimator
computed from \(\{(\bar{\phi}_{J}(X_{i}),\widehat{I}_{1}(X_{i},A_{i},Y_{i})),i\in\mathcal{I}_{2}\}\).
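To fix ideas, here is a minimal sketch of this two-stage construction (our own simplified illustration: placeholder nuisance learners, a generic basis matrix, and the second-stage Forster-Warmuth prediction written in the leverage-adjusted form \(\bar{\phi}(x)^{\top}\big[\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}+\bar{\phi}(x)\bar{\phi}(x)^{\top}\big]^{-1}\sum_{i}\bar{\phi}(X_{i})\widehat{I}_{1,i}\), consistent with the leverage score discussed in Section 6; this is not the paper's exact implementation).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression

def fw_predict(Phi_fit, pseudo, Phi_query):
    """Forster-Warmuth series prediction: phi(x)' (G + phi(x)phi(x)')^{-1} b,
    with G = sum_i phi(X_i)phi(X_i)' and b = sum_i phi(X_i) pseudo_i. The extra
    phi(x)phi(x)' term shrinks predictions at high-leverage points."""
    G = Phi_fit.T @ Phi_fit
    b = Phi_fit.T @ pseudo
    out = np.empty(len(Phi_query))
    for k, phi_x in enumerate(Phi_query):
        out[k] = phi_x @ np.linalg.solve(G + np.outer(phi_x, phi_x), b)
    return out

def fw_cate(X, A, Y, Phi, nuis_fold):
    """Split-sample FW-Learner of the CATE under unconfoundedness.
    X: (n, d) covariates; A: (n,) binary treatment; Y: (n,) outcomes;
    Phi: (n, J) basis evaluations; nuis_fold: boolean mask of fold I_1."""
    I1, I2 = nuis_fold, ~nuis_fold
    pi_hat = LogisticRegression().fit(X[I1], A[I1]).predict_proba(X)[:, 1]
    pi_hat = np.clip(pi_hat, 0.05, 0.95)  # keep 1/pi_hat bounded
    mu0 = RandomForestRegressor().fit(X[I1 & (A == 0)], Y[I1 & (A == 0)]).predict(X)
    mu1 = RandomForestRegressor().fit(X[I1 & (A == 1)], Y[I1 & (A == 1)]).predict(X)
    mu_A = np.where(A == 1, mu1, mu0)
    # AIPW pseudo-outcome: the uncentered efficient influence function above.
    pseudo = (A - pi_hat) / (pi_hat * (1 - pi_hat)) * (Y - mu_A) + mu1 - mu0
    # Second stage: FW series regression of the pseudo-outcome on phi_J(X), fold I_2.
    return fw_predict(Phi[I2], pseudo[I2], Phi)
```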
We establish our first oracle result of the FW-Learner of the CATE.
**Theorem 5**.: _Under the assumptions given above, including unconfoundedness, suppose that \(\sigma^{2}\) is an upper bound for \(\mathbb{E}[\widehat{I}_{1}^{2}(X,A,Y)\mid X]\). Then the FW-Learner \(\widehat{\tau}_{J}(x)\) satisfies the error bound_
\[\big{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\big{\|}_{2}\leq \sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2}\big{\|}\sum_{j=J+1}^{ \infty}\theta_{j}^{\star}\phi_{j}(X)\big{\|}_{2}\\ +(1+\sqrt{2})\bigg{(}\Big{\|}\frac{\pi^{\star}(X)}{\widehat{\pi} (X)}-1\Big{\|}_{4}\Big{\|}\widehat{\mu}_{1}(X)-\mu_{1}^{\star}(X)\Big{\|}_{4}+ \Big{\|}\frac{1-\pi^{\star}(X)}{1-\widehat{\pi}(X)}-1\Big{\|}_{4}\Big{\|} \widehat{\mu}_{0}(X)-\mu_{0}^{\star}(X)\Big{\|}_{4}\bigg{)}.\]
See Section S.7.2 in the supplement for a formal proof of this result. Note that the condition that \(\sigma^{2}\) is bounded requires \(\widehat{\mu}_{0}\), \(\widehat{\mu}_{1},1/\widehat{\pi}\) and \(1/(1-\widehat{\pi})\) to be bounded.
**Corollary 4**.: _Let \(d\) denote the intrinsic dimension of \(X\). If_
1. _The propensity score_ \(\pi^{\star}(x)\) _is estimated at an_ \(n^{-\alpha_{\pi}/(2\alpha_{\pi}+d)}\) _rate in the_ \(L_{4}\)_-norm;_
2. _The regression functions_ \(\mu_{0}^{\star}\) _and_ \(\mu_{1}^{\star}\) _are estimated at the rate of_ \(n^{-\alpha_{\mu}/(2\alpha_{\mu}+d)}\) _in the_ \(L_{4}\)_-norm;_
3. _The CATE_ \(\tau^{\star}\) _with respect to the fundamental sequence_ \(\Psi\) _satisfies_ \(E_{J}^{\Psi}(\tau^{\star})\leq CJ^{-\alpha_{\tau}/d}\) _for some constant_ \(C\)_,_
_Then, \(\widehat{\tau}_{J}(x)\) satisfies_
\[\Big{(}\mathbb{E}[(\widehat{\tau}_{J}(X)-\tau^{\star}(X))^{2}| \widehat{\pi},\widehat{\mu}]\Big{)}^{1/2}\lesssim\sqrt{\frac{\sigma^{2}J}{n} }+J^{-\alpha_{\tau}/d}+n^{-\frac{\alpha_{\pi}}{2\alpha_{\pi}+d}-\frac{\alpha _{\mu}}{2\alpha_{\mu}+d}}. \tag{17}\]
When the last term of (17) is smaller than the oracle rate \(n^{-\frac{\alpha_{\tau}}{2\alpha_{\tau}+d}}\), the oracle minimax rate can be attained by balancing the first two terms. Therefore, the FW-Learner is oracle efficient if \(\alpha_{\mu}\alpha_{\pi}\geq d^{2}/4-(\alpha_{\pi}+\frac{d}{2})(\alpha_{\mu}+ \frac{d}{2})/(1+\frac{2\alpha_{\pi}}{d})\). In the special case when \(\alpha_{\mu}\) and \(\alpha_{\pi}\) are equal, let \(s=\alpha_{\mu}/d=\alpha_{\pi}/d\) and \(\gamma=\alpha_{\tau}/d\) denote the effective smoothness. When \(s\geq\frac{\alpha_{\tau}/2}{\alpha_{\tau}+d}=\frac{\gamma/2}{\gamma+1}\), the last term in (17), i.e. the bias term that comes from the pseudo-outcome, is smaller than the oracle minimax rate of estimation \(n^{-\alpha_{\tau}/(2\alpha_{\tau}+d)}\), in which case the FW-Learner is oracle efficient.
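Spelling out the equal-smoothness comparison (our own arithmetic, recorded for the reader's convenience), the nuisance term is dominated by the oracle rate exactly when

\[n^{-\frac{2\alpha_{\pi}}{2\alpha_{\pi}+d}}\leq n^{-\frac{\alpha_{\tau}}{2\alpha_{\tau}+d}}\iff\frac{2s}{2s+1}\geq\frac{\gamma}{2\gamma+1}\iff 2s\gamma+2s\geq\gamma\iff s\geq\frac{\gamma/2}{\gamma+1},\]

recovering the threshold stated above.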
This split-data method has valid theoretical properties under minimal conditions and is similar to Algorithm 1 for the missing outcome described in Appendix S.6; cross-fitting can be applied as discussed before in Section 2.2. We also provide an alternative methodology that builds upon the split-data method: it uses the full data for both training and estimation, which is potentially more efficient by avoiding sample splitting. The procedure is similar to Algorithm 1 and is deferred to Algorithm 2 in the supplementary material.
Kennedy (2020) and Kennedy et al. (2022) studied the problem of estimating the CATE under ignorability quite extensively; the latter paper derived the minimax rate for CATE estimation when the distributional components are Holder-smooth, along with a new local polynomial estimator that is minimax optimal under some conditions. In comparison, our procedure is not necessarily minimax optimal in some of the regimes considered there, but has the advantage of being more general, with minimal constraints on the basis functions.
**Remark 4.1** Note that Theorem 5 and Corollary 4 continue to hold for a modified CATE which marginalizes over some confounders and therefore conditions on a subset of the measured confounders, say \(\mathbb{E}\left(Y^{1}-Y^{0}\mid V=v\right)\) where \(V\) is a subset of the covariates in \(X\), with the error bound of Corollary 4 modified so that the second term of the bound (17) is replaced with \(J^{-\alpha_{\tau_{v}}/d_{v}}\), where \(\alpha_{\tau_{v}}/d_{v}\) is the effective smoothness of the modified CATE. The application given in Section 6 illustrates our methods for such a marginalized CATE function, which is particularly well-motivated from a scientific perspective. \(\diamond\)
### FW-Learner for CATE under proximal causal inference
Proximal causal inference provides an alternative approach for identifying the CATE in the presence of unobserved confounding, provided that valid proxies of the latter are available (Miao et al., 2018; Tchetgen Tchetgen et al., 2020). Throughout, recall that \(U\) encodes (possibly multivariate) unmeasured confounders. The framework requires that the observed proxy variables \(Z\) and \(W\) satisfy the following conditions.
**Assumption 1.**
* \(Y^{(a,z)}=Y^{(a)}\) almost surely, for all \(a\) and \(z\).
* \(W^{(a,z)}=W\) almost surely, for all \(a\) and \(z\).
* \(\left(Y^{(a)},W\right)\perp(A,Z)\mid(U,X)\), for \(a\in\{0,1\}\).
Note that Assumption 1 implies that \(Y\perp Z\mid A,U,X\) and \(W\perp(A,Z)\mid U,X\), as illustrated by the causal diagram in Figure 1, which describes a possible setting where these assumptions are satisfied (the gray variable \(U\) is unobserved); see Cui et al. (2023) for identifiability conditions.

Figure 1: A proximal DAG
A key identification condition of proximal causal inference is that there exists an outcome confounding bridge function \(h^{\star}(w,a,x)\) that solves the following integral equation (Miao et al., 2018; Tchetgen Tchetgen et al., 2020):
\[\mathbb{E}[Y\mid Z,A,X]=\mathbb{E}\left[h^{\star}(W,A,X)\mid Z,A,X\right], \text{almost surely}. \tag{18}\]
Miao et al. (2023) then established sufficient conditions under which the CATE is nonparametrically identified by \(\mathbb{E}(h^{\star}(W,1,X)-h^{\star}(W,0,X)|X)\).
Cui et al. (2023) considered an alternative identification strategy based on the following condition: there exists a treatment confounding bridge function \(q^{\star}(z,a,x)\) that solves the integral equation
\[\mathbb{E}\left[q^{\star}(Z,a,X)\mid W,A=a,X\right]=\frac{1}{\mathbb{P}(A=a \mid W,X)},\text{almost surely}. \tag{19}\]
Also see Deaner (2018) for a related condition. Cui et al. (2023) then established sufficient conditions under which the CATE is nonparametrically identified by \(\mathbb{E}(Y(-1)^{1-A}q^{\star}(Z,A,X)|X)\). Let \(O=(X,Z,W,A,Y)\). Cui et al. (2023) derived the locally semiparametric efficient influence function for the marginal ATE (i.e. \(\mathbb{E}[Y^{(1)}-Y^{(0)}]\)) in a nonparametric model where one only assumes that an outcome bridge function exists, at the submodel where both outcome and treatment confounding bridge functions exist and are uniquely identified, but are otherwise unrestricted:
\[\mathrm{IF}_{\psi_{0}}(O;h^{\star},q^{\star})=-\mathbb{1}\{A=a\}q^{\star}(Z,A,X)h^{\star}(W,A,X)\] \[+\mathbb{1}\{A=a\}Yq^{\star}(Z,A,X)+h^{\star}(W,a,X)-\psi_{0},\]
which falls in the mixed-bias class of influence functions (5) with \(h_{0}(O_{h})=h^{\star}(W,A,X),q_{0}(O_{q})=q^{\star}(Z,A,X)\), \(g_{1}(O)=-\mathbb{1}\{A=a\},g_{2}(O)=\mathbb{1}\{A=a\}Y,g_{3}(O)=1,g_{4}(O)=0\), and motivates the following FW-Learner of the CATE.
Proximal CATE FW-Learner estimator: Split the training data into two parts, train the nuisance functions \(\widehat{q},\widehat{h}\) on the first split, and define \(\widehat{\tau}_{J}(x)\) to be the Forster-Warmuth estimator computed based on the data \(\big{\{}(\bar{\phi}_{J}(X_{i}),\widehat{I}(X_{i},A_{i},Y_{i},Z_{i},W_{i})),i\in\mathcal{I}_{2}\big{\}}\), where the pseudo-outcome \(\widehat{I}\) is
\[\widehat{I}(O;\widehat{h},\widehat{q}):=\big{\{}A\widehat{q}(Z,1, X)-(1-A)\widehat{q}(Z,0,X)\big{\}}\{Y-\widehat{h}(W,A,X)\}\] \[+\widehat{h}(W,1,X)-\widehat{h}(W,0,X), \tag{20}\]
for any estimators \(\widehat{h},\widehat{q}\) of the nuisance functions \(h^{\star}\) and \(q^{\star}\).
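In code, the pseudo-outcome (20) is a one-line computation once the bridge-function estimates are available; the sketch below (names ours, with \(\widehat{h}\) and \(\widehat{q}\) passed in as generic callables fit on the first split) makes this explicit, after which the second stage proceeds exactly as in the sketch of Section 4.1.

```python
import numpy as np

def proximal_pseudo_outcome(X, Z, W, A, Y, h_hat, q_hat):
    """Pseudo-outcome (20) for the proximal FW-Learner of the CATE.
    h_hat(w, a, x), q_hat(z, a, x): estimated outcome / treatment confounding
    bridge functions, trained on an independent split of the data."""
    q_term = A * q_hat(Z, 1, X) - (1 - A) * q_hat(Z, 0, X)
    return q_term * (Y - h_hat(W, A, X)) + h_hat(W, 1, X) - h_hat(W, 0, X)
```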
Write \(H_{I}(X)=\mathbb{E}[\widehat{I}(O;\widehat{h},\widehat{q})|X]\), where the expectation is taken conditional on the first split of the training data. We have the following result.
**Lemma 4**.: _The pseudo-outcome (20) has conditional bias:_
\[H_{I}(x)-\tau^{\star}(x)=\mathbb{E}\bigg{[}A(h^{\star}-\widehat{h})(W,1,x)\big{(}\widehat{q}(Z,1,x)-q^{\star}(Z,1,x)\big{)}\] \[-(1-A)(h^{\star}-\widehat{h})(W,0,x)\big{(}\widehat{q}(Z,0,x)-q^{\star}(Z,0,x)\big{)}\;\Big{|}\;X=x\bigg{]}.\]
This result follows directly from the mixed bias form (6) for the general class studied by Ghassami et al. (2022); its direct proof is deferred to Section S.7.3 of the supplement. Together with Corollary 1, this yields a bound for the error of the FW-Learner \(\widehat{\tau}_{J}\).
**Theorem 6**.: _Let \(\sigma^{2}\) be an upper bound on \(\mathbb{E}[\widehat{I}^{2}(X,Z,W,A,Y)\mid X]\). Then the FW-Learner \(\widehat{\tau}_{J}(x)\) satisfies:_
\[\big{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\big{\|}_{2}\leq\sqrt{\frac{2\sigma^{2}J}{|\mathcal{I}_{2}|}}+\sqrt{2}\Big\|\sum_{j=J+1}^{\infty}\theta_{j}^{\star}\phi_{j}(X)\Big\|_{2}\] \[+2(1+\sqrt{2})\min\Bigl\{\Big\|(\widehat{q}-q^{\star})(Z,1,X)\Big\|_{4}\Big\|\mathbb{E}\big[(\widehat{h}-h^{\star})(W,1,X)|Z,X\big]\Big\|_{4}+\Big\|(\widehat{q}-q^{\star})(Z,0,X)\Big\|_{4}\Big\|\mathbb{E}\big[(\widehat{h}-h^{\star})(W,0,X)|Z,X\big]\Big\|_{4},\] \[\Big\|\mathbb{E}\big[(\widehat{q}-q^{\star})(Z,1,X)\,\big|\,W,X\big]\Big\|_{4}\Big\|(\widehat{h}-h^{\star})(W,1,X)\Big\|_{4}+\Big\|\mathbb{E}\big[(\widehat{q}-q^{\star})(Z,0,X)\,\big|\,W,X\big]\Big\|_{4}\Big\|(\widehat{h}-h^{\star})(W,0,X)\Big\|_{4}\Bigr\}.\]
The proof is in Section S.7.3 of the supplement. Note that the condition that \(\sigma^{2}\) is bounded requires that \(\widehat{h}(W,a,X)\) and \(\widehat{q}(Z,a,X)\) are bounded for \(a\in\{0,1\}\). The rest of this section is concerned with estimation of the bridge functions \(h^{\star}\) and \(q^{\star}\).
Estimation of bridge functions \(h^{\star}\) and \(q^{\star}\): Focusing primarily on \(h^{\star}\), we note that integral equation (18) is a Fredholm integral equation of the first kind, similar to the integral equations of Section 3.2 on the shadow variable FW-Learner, with corresponding kernel given by the conditional expectation operator \([T_{h}h](z,a,x)=\mathbb{E}[h(W_{i},A_{i},X_{i})\mid Z_{i}=z,A_{i}=a,X_{i}=x]\).
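For intuition, restricted to the subsample with \(A=a\), equation (18) asks for \(h\) with \(\mathbb{E}[Y-h(W,a,X)\mid Z,A=a,X]=0\), which a sieve minimum-distance (two-stage-least-squares type) construction solves in closed form. The sketch below is a textbook version of that construction under our own simplifying choices (generic basis matrices and a small ridge term for numerical stability); it is not the adaptive estimator of Chen and Christensen (2018).

```python
import numpy as np

def sieve_bridge(B, C, Y, ridge=1e-8):
    """Sieve minimum-distance (NPIV-style) solution of E[Y - h(W)|Z] = 0.
    B: (n, J) basis in the bridge-function arguments (W, and X if present);
    C: (n, K) instrument basis in (Z, X), with K >= J.
    Minimizes || P_C (Y - B beta) ||^2, i.e. a 2SLS-type closed form."""
    CtC = C.T @ C + ridge * np.eye(C.shape[1])
    CtB = C.T @ B
    CtY = C.T @ Y
    G = CtB.T @ np.linalg.solve(CtC, CtB) + ridge * np.eye(B.shape[1])
    beta = np.linalg.solve(G, CtB.T @ np.linalg.solve(CtC, CtY))
    return beta  # estimated bridge: h_hat(w) = basis(w) @ beta

# For the outcome bridge of (18): fit separately on {A = a}, with B built
# from (W, X) and C built from (Z, X); (19) is handled analogously.
```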
Thus, minimax estimation of \(h^{\star}\) follows from Chen and Christensen (2018) and Chen et al. (2021), attaining the rate \((n/\log n)^{-\alpha_{h}/(2(\alpha_{h}+\varsigma_{h})+d_{x}+d_{w}+1)}\) assuming \(T_{h}\) is mildly ill-posed with exponent \(\varsigma_{h}\); a corresponding adaptive minimax estimator that attains this rate, and does not require prior knowledge of \(\alpha_{h}\) and \(\varsigma_{h}\), is also given by the authors. See the details in Lemma 6 in the supplement. Analogous results also hold for \(q^{\star}\), which can be estimated at the minimax rate of \((n/\log n)^{-\alpha_{q}/(2(\alpha_{q}+\varsigma_{q})+d_{x}+d_{z})}\) in the mildly ill-posed case, as established in Lemma 7 of the supplement, where \(\alpha_{q}\) and \(\varsigma_{q}\) are similarly defined. Without loss of generality, suppose that
\[\min\Bigl\{\Bigl\|(\widehat{q}-q^{\star})(Z,1,X)\Bigr\|_{4}\Bigl\|\mathbb{E}\bigl[(\widehat{h}-h^{\star})(W,1,X)|Z,X\bigr]\Bigr\|_{4}+\Bigl\|(\widehat{q}-q^{\star})(Z,0,X)\Bigr\|_{4}\Bigl\|\mathbb{E}\bigl[(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr]\Bigr\|_{4},\] \[\Bigl\|\mathbb{E}\bigl[(\widehat{q}-q^{\star})(Z,1,X)\,\big|\,W,X\bigr]\Bigr\|_{4}\Bigl\|(\widehat{h}-h^{\star})(W,1,X)\Bigr\|_{4}+\Bigl\|\mathbb{E}\bigl[(\widehat{q}-q^{\star})(Z,0,X)\,\big|\,W,X\bigr]\Bigr\|_{4}\Bigl\|(\widehat{h}-h^{\star})(W,0,X)\Bigr\|_{4}\Bigr\}\] \[=\Bigl\|(\widehat{q}-q^{\star})(Z,1,X)\Bigr\|_{4}\Bigl\|\mathbb{E}\bigl[(\widehat{h}-h^{\star})(W,1,X)|Z,X\bigr]\Bigr\|_{4}+\Bigl\|(\widehat{q}-q^{\star})(Z,0,X)\Bigr\|_{4}\Bigl\|\mathbb{E}\bigl[(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr]\Bigr\|_{4}.\]
Further suppose that \(\mu^{\star}(X,Z):=\mathbb{E}\bigl{[}h^{\star}(W,0,X)|Z,X\bigr{]}\) is \(\alpha_{\mu}\)-smooth, and that \(\Bigl{\|}\mathbb{E}\bigl{[}(\widehat{h}-h^{\star})(W,0,X)|Z,X\bigr{]}\Bigr{\|}_{4}\) matches the minimax rate of estimation of \(\mu^{\star}(X,Z)\) with respect to the \(L_{4}\)-norm, given by \(n^{-\alpha_{\mu}/(2\alpha_{\mu}+d_{x}+d_{z})}\). Accordingly, Theorem 6, together with Lemmas 6 and 7, leads to the following corollary.
**Corollary 5**.: _Under the above conditions, together with the conditions of Lemmas 6 and 7 in the supplement, and assuming that the integral equation with respect to the operator \(T_{q}\) is mildly ill-posed, we have that:_
\[\bigl{\|}\widehat{\tau}_{J}(X)-\tau^{\star}(X)\bigr{\|}_{2}\lesssim\sqrt{ \frac{\sigma^{2}J}{n}}+J^{-\alpha_{\tau}/d_{x}}+(n/\log n)^{-\alpha_{q}/(2( \alpha_{q}+\varsigma_{q})+d_{x}+d_{z})}n^{-\alpha_{\mu}/(2\alpha_{\mu}+d_{x}+d _{z})}.\]
A remark analogous to Remark 3.1 applies equally to Corollary 5. The result thus establishes conditions under which the proximal FW-Learner can estimate the CATE at the same rate as an oracle with access to the bridge functions. This result appears to be completely new to the fast-growing literature on proximal causal inference.
## 5 Simulations
In this section, we study the finite sample performance of the proposed estimator, focusing primarily on estimation of the CATE via simulations. We consider a relatively simple data-generating mechanism which includes a covariate \(X\) uniformly distributed on \([-1,1]\), a Bernoulli distributed treatment with conditional mean \(\pi^{\star}(x)=0.1+0.8\times\mathbb{1}\{x>0\}\), and \(\mu_{1}(x)=\mu_{0}(x)\) equal to the piece-wise polynomial function defined on page 10 of Gyorfi et al. (2002). Therefore we are simulating under the null CATE model. Multiple methods are compared in the simulation study. Specifically, the simulation includes all four methods described in Section 4 of Kennedy (2020): 1. a plug-in estimator that estimates the regression functions \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) and takes the difference (called the T-Learner by Kunzel et al. (2019), abbreviated as plugin below); 2. the X-Learner from Kunzel et al. (2019) (xl); 3. the DR-Learner using smoothing splines from Kennedy (2020) (drl); and 4. an oracle DR-Learner that uses the oracle (true) pseudo-outcome in the second-stage regression (oracle.drl). We compare these previous methods to 5. the FW-Learner with basic spline basis (FW_bs), and 6. the least squares series estimator with basic spline basis (ls_bs), where cross-validation is used to determine the number of basis functions for 5. and 6. Throughout, the nuisance functions \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) are estimated using smoothing splines, and the propensity score \(\pi^{\star}\) is estimated using logistic regression.
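The data-generating mechanism can be sketched as follows (our own rendering: the piecewise polynomial of Gyorfi et al. (2002) is replaced by an arbitrary bounded placeholder, and the noisy propensity estimate mirrors the construction described in the next paragraph).

```python
import numpy as np

rng = np.random.default_rng(1)

def mu(x):
    """Placeholder for the piecewise polynomial of Gyorfi et al. (2002, p. 10);
    any bounded piecewise polynomial serves for illustration. Since
    mu1 = mu0 = mu, the true CATE is identically zero."""
    return np.where(x < 0, 1 + x, np.where(x < 0.5, 1 - 2 * x ** 2, (x - 1) ** 2))

def simulate(n=2000):
    X = rng.uniform(-1, 1, n)
    pi = 0.1 + 0.8 * (X > 0)            # propensity score pi*(x)
    A = rng.binomial(1, pi)
    Y = mu(X) + rng.standard_normal(n)  # null CATE: same mean in both arms
    return X, A, Y

def pi_hat_noisy(pi, n, alpha):
    """Corrupted propensity with RMSE ~ n^{-alpha}, as in the experiment below."""
    eps = rng.normal(n ** -alpha, n ** -alpha, size=pi.shape)
    return 1.0 / (1.0 + np.exp(-(np.log(pi / (1 - pi)) + eps)))
```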
The top part of Figure 2 gives the mean squared error (MSE) for the six CATE estimators at training sample size \(n=2000\), based on 500 simulations, with the MSE averaged over 500 independent test samples. The bottom part of Figure 2 gives the ratio of the MSE of each competing estimator to that of the FW-Learner (the baseline method is FW_bs) across a range of convergence rates for the propensity score estimator \(\widehat{\pi}\). The propensity score estimator is constructed as \(\widehat{\pi}=\text{expit}\left\{\text{logit}(\pi)+\epsilon_{n}\right\}\), where \(\epsilon_{n}\sim N\left(n^{-\alpha},n^{-2\alpha}\right)\) with the convergence rate controlled by the parameter \(\alpha\), so that \(\text{RMSE}(\widehat{\pi})\sim n^{-\alpha}\). The results demonstrate that, at least in this simulated setting, our FW-Learner attains the smallest mean squared error among all methods, approaching that of the oracle as the propensity score estimation error decreases (i.e., as the convergence rate increases). The performance of the FW-Learner and the least squares series estimator is visually challenging to distinguish in the figure; however, closer numerical inspection confirms that the FW-Learner outperforms the least squares estimator.
To further illustrate the comparison between the proposed FW-Learner and the least squares estimator, we performed an additional simulation study focusing on these two estimators with two different sets of basis functions, in a setting similar to the previous simulation, except that the covariate is instead generated from a heavy-tailed distribution: an equal-probability mixture of a uniform distribution on \([-1,1]\) and a standard Gaussian distribution. The results are reported in Figure 3 for both the FW-Learner (FW) and Least Squares (LS) estimators with basic splines (bs), natural splines (ns), and a polynomial basis (poly). We report the ratio of the MSE of all estimators against the FW-Learner with basic splines (FW_bs). The sample size for the left-hand plot is \(n=2000\), and \(n=400\) for the right-hand plot. The FW-Learner consistently dominates the least squares estimator for any given choice of basis functions in this more challenging setting. This additional experiment demonstrates the robustness of the FW-Learner to heavy-tailed covariate distributions when compared to the least-squares learner.
Figure 2: A comparison between different estimators, sample size \(n=2000\)—Top figure shows \(n\times\mathrm{MSE}\) of each estimator; The bottom plot shows the ratio of MSE of different estimators compared to the proposed Forster–Warmuth estimator with basic splines (baseline). The MSE is averaged over 500 simulations.
## 6 Data Application: CATE of Right Heart Catheterization
We illustrate the proposed FW-Learner with an application to CATE estimation, both assuming unconfoundedness and, without making that assumption, using proximal causal inference. Specifically, we reanalyze the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT), with the aim of evaluating the causal effect of right heart catheterization (RHC) during the initial care of critically ill patients in the intensive care unit (ICU) on survival time up to 30 days (Connors et al. (1996)). Tchetgen Tchetgen et al. (2020) and Cui et al. (2023) analyzed this
dataset to estimate the marginal average treatment effect of RHC, using the proximal causal inference framework, with an implementation of a locally efficient doubly robust estimator using parametric estimators of the bridge functions. Data are available on 5735 individuals, 2184 treated and 3551 controls. In total, 3817 patients survived and 1918 died within 30 days. The outcome \(Y\) is the number of days between admission and death or censoring at day 30. We include all 71 baseline covariates to adjust for potential confounding. To implement the FW-Learner under unconfoundedness, the nuisance functions \(\pi^{\star}\), \(\mu_{0}^{\star}\) and \(\mu_{1}^{\star}\) are estimated using SuperLearner6, which includes both RandomForest and a generalized linear model (GLM).

Figure 3: A comparison between FW and LS estimators with different bases for \(X\) with a heavy-tailed distribution; the baseline method is the FW-Learner with basic splines (FW_bs). Left: sample size \(n=2000\); Right: \(n=400\). The MSE is averaged over 500 simulations.
Footnote 6: SuperLearner is a stacking ensemble machine learning approach which uses cross-validation to estimate the performance of multiple machine learners and then creates an optimal weighted average of those models using test data. This approach has been formally established to be asymptotically as accurate as the best possible prediction algorithm among those tested. For details, please refer to Polley and van der Laan (2010).
Variance of the FW-Learner: In addition to producing an estimate of the CATE, one may wish to quantify the uncertainty of this estimate. We describe a simple approach for computing standard errors for the CATE at a fixed value of \(x\) and corresponding pointwise confidence intervals. The asymptotic validity of confidence intervals for the least squares estimator is established in Newey (1997) and Belloni et al. (2015) under some conditions. Because the FW-Learner is asymptotically equivalent to the least squares estimator, the same variance estimator as that of the least squares series estimator may be used to quantify uncertainty about the FW-Learner. Recall that the least squares estimator is given by \(\bar{\phi}(x)^{\top}\big{[}\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\big{\{}\sum_{i}\bar{\phi}(X_{i})\widehat{I}_{i}\big{\}}\), which has variance \(\bar{\phi}(x)^{\top}\big{[}\sum_{i}\bar{\phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\bar{\phi}(x)\times\sigma^{2}(\widehat{I})\), where \(\sigma^{2}(\widehat{I})\) is the variance of the pseudo-outcome \(\widehat{I}\), and where we have implicitly assumed homoscedasticity, i.e. that the variance of \(\widehat{I}\) does not depend on \(X\). Hence,
\[\text{var}(\widehat{\tau}(x))\approx\bar{\phi}(x)^{\top}\big{[}\sum_{i}(\bar{ \phi}(X_{i})\bar{\phi}(X_{i})^{\top}\big{]}^{-1}\bar{\phi}(x)\times\sigma^{2}( \widehat{I}).\]
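A minimal sketch of this plug-in interval (names ours; the homoscedastic pseudo-outcome variance is estimated from second-stage residuals):

```python
import numpy as np

def fw_pointwise_ci(Phi, pseudo, phi_x, z=1.96):
    """Pointwise CI for the series CATE estimate at a fixed x, using the
    least-squares variance formula (valid for the FW-Learner by asymptotic
    equivalence). Assumes a homoscedastic pseudo-outcome variance."""
    Ginv = np.linalg.inv(Phi.T @ Phi)
    theta = Ginv @ (Phi.T @ pseudo)
    resid = pseudo - Phi @ theta
    sigma2 = resid.var(ddof=Phi.shape[1])      # estimate of var(I_hat)
    est = float(phi_x @ theta)
    se = float(np.sqrt(phi_x @ Ginv @ phi_x * sigma2))
    return est, (est - z * se, est + z * se)
```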
Similar to Tchetgen Tchetgen et al. (2020) and Cui et al. (2023), our implementation of the Proximal FW-Learner specified baseline covariates (age, sex, cat1 coma, cat2 coma, dnr1, surv2md1, aps1) for confounding adjustment, as well as treatment and outcome confounding proxies \(Z=(\text{paf1},\text{paco21})\) and \(W=(\text{ph1},\text{hema1})\). Confounding bridge functions were estimated nonparametrically using the adversarial reproducing kernel Hilbert spaces (RKHS) learning approach of Ghassami et al. (2022). The estimated CATE and corresponding pointwise 95 percent confidence intervals are reported in Figure 4 as a function of the single variable measuring the 2-month model survival prediction
at day 1 (surv2md1), for both approaches, each using both splines and polynomials. Cross-validation was used throughout to select the number of knots for the splines and the degree of the polynomial bases, respectively. The results are somewhat consistent for both basis functions and suggest, at least under unconfoundedness conditions, that high-risk patients likely benefited most from RHC, while low-risk patients may have been adversely impacted by it. In contrast, the Proximal FW-Learner produced a more attenuated CATE estimate, which nevertheless also found that RHC was likely harmful for low-risk patients. Interestingly, these analyses provide important nuances to the results reported in the original analysis of Connors et al. (1996) and the more recent analysis of Tchetgen Tchetgen et al. (2020), which concluded that RHC was harmful on average on the basis of the ATE.
Figure 4: CATE estimation with 95% confidence interval produced by the FW-Learner using polynomial and spline basis. Left: under unconfoundedness; Right: in proximal causal inference setting.
## 7 Discussion
This paper has proposed a novel nonparametric series estimator of regression functions that requires minimal assumptions on covariates and bases functions. Our method builds on the Forster-Warmuth estimator, which incorporates weights based on the leverage score \(h_{n}(x)=x^{\top}(\sum_{i=1}^{n}X_{i}X_{i}^{\top}+xx^{\top})^{-1}x\), to obtain predictions that can be significantly more robust relative to standard least-squares, particularly in small to moderate samples. Importantly, the FW-Learner is shown to satisfy an oracle inequality with its excess risk bound having the same order as \(J\sigma^{2}/n\), requiring only the relatively mild assumption of bounded outcome second moment (\(\mathbb{E}[Y^{2}\mid x]\leq\sigma^{2}\)). Recent works (Mourtada (2019), Vaskevicius and Zhivotovskiy (2023)) investigate the potential for the risk of standard least-squares to become unbounded when leverage scores are uneven and correlated with the residual noise of the model. By adjusting the predictions at high-leverage points, which are most likely to lead to an unstable estimator, the Forster-Warmuth estimator mitigates the shortcomings of the least squares estimator and achieves oracle bounds even for unfavorable distributions when least squares estimation fails. In fact, the Forster-Warmuth algorithm leads to the only known exact oracle inequality without imposing any assumptions on the covariates. This is a key strength of the FW-Learner we fully leverage in the context of nonparametric series estimation to obviate imposing unnecessary conditions on the basis functions.
Another major contribution we make is to propose a general method for counterfactual nonparametric regression via series estimation in settings where the outcome may be missing. Specifically, we generalize the FW-Learner using a generic pseudo-outcome that serves as a substitute for the missing response, and we characterize the extent to which the accuracy of the pseudo-outcome can impact the estimator's ability to match the oracle minimax rate of estimation on the MSE scale. We then provide a generic approach for constructing a pseudo-outcome with the "small bias" property for a large class of counterfactual regression problems, based on doubly robust influence functions of the functional obtained by marginalizing the counterfactual regression in view. This insight provides a constructive solution to the counterfactual regression problem and offers a unified solution to several open nonparametric regression problems in both the missing data and causal inference literatures. The versatility of the approach is demonstrated by considering estimation of a nonparametric regression when the outcome may be missing at random, or missing not at random by leveraging a shadow variable, as well as by considering estimation of the CATE under standard
unconfoundedness conditions, and when hidden confounding bias cannot be ruled out on the basis of measured covariates but proxies of unmeasured factors are available that can be leveraged using the proximal causal inference framework. While some of these settings, such as the CATE under unconfoundedness, have been studied extensively, others, such as the CATE under proximal causal inference, have only recently been developed.
Overall, this paper brings together aspects of traditional linear models, nonparametric models, and the modern literature on semiparametric theory, with applications in different contexts. This marriage of classical and modern techniques is in a similar spirit to recent frameworks such as Orthogonal Learning (Foster and Syrgkanis, 2019); however, our assumptions and approach appear to be fundamentally different in that, at least for the specific examples considered herein, our assumptions are somewhat weaker yet lead to a form of oracle optimality. We nevertheless believe that both frameworks open the door to many exciting future directions. One line of investigation might be to extend the estimator using more accurate pseudo-outcomes for the unobserved response, based on recent theory of higher order influence functions (Robins et al., 2008, 2017), along the lines of Kennedy et al. (2022), who construct minimax estimators of the CATE under unconfoundedness conditions and weaker smoothness conditions on the outcome and propensity score models, albeit requiring considerable restrictions on the covariate distribution. Another interesting direction is the potential application of our methods to more general missing data settings, such as monotone or nonmonotone coarsening at random (Robins et al., 1994; van der Laan and Robins, 2003; Tsiatis, 2006), and corresponding coarsening not at random settings, e.g. Robins et al. (2000), Tchetgen Tchetgen et al. (2018), Malinsky et al. (2022). We hope the current manuscript provides an initial step towards solving this more challenging class of problems and generates both interest and further developments in these fundamental directions.
Series or orthogonal basis regression is one of the most popular nonparametric regression methods in practice, obtained by regressing the response on features constructed from evaluations of basis functions. The most frequently used series estimators are based on ordinary least squares fitting, which is known to be minimax rate optimal in various settings, but is subject to strong restrictions such as limitations on the basis functions and on the covariate distribution. In this work, inspired by the recently developed Forster-Warmuth (FW) learner, we propose a new series regression estimator that achieves the minimax rate of estimation while relaxing these strong restrictions on the basis functions and the covariate distribution. A further key contribution of this work is to generalize the FW-learner to so-called counterfactual estimation problems, in which the response variable of interest is not directly observed.
2309.12052 | Optimizing V2V Unicast Communication Transmission with Reinforcement
Learning and Vehicle Clustering | Efficient routing algorithms based on vehicular ad hoc networks (VANETs) play an important role in emerging intelligent transportation systems. Their highly dynamic topology poses a number of challenges for wireless communication services. In this paper, we propose a protocol based on reinforcement learning and vehicle-node clustering, called Qucts, to solve vehicle-to-fixed-destination and V2V messaging problems. It improves message delivery rates with minimal hops and latency, while also taking link stability into account. The protocol is divided into three levels. First, vehicles are clustered: each cluster head broadcasts its own coordinates and speed to attract more cluster members, and when a cluster member receives a broadcast message from another cluster head, the cluster head builds a list of surrounding clusters, from which the best cluster toward the destination is chosen as the next cluster during message passing. Second, the protocol constructs a Q-value table based on the post-clustering state, which is used in the selection of message-forwarding clusters. Finally, we introduce a parameter expressing the stability of each vehicle within its cluster, used for communication-node selection. This protocol hierarchy makes Qucts both an offline and an online solution. To distinguish unstable nodes within a cluster, each road is coded; vehicles with planned routes, such as ride-hailing cars and public buses, compare their path overlap with other planned-route vehicles in the cluster, and those with low overlap are labeled as unstable nodes. The path overlap rate of vehicles without a planned route is set to the mean value. Comparing Qucts with existing routing protocols through simulation, our proposed scheme provides large improvements in both data delivery rate and end-to-end delay reduction. | Yu Wang | 2023-09-21T13:17:50 | http://arxiv.org/abs/2309.12052v1 | Optimizing V2V Unicast Communication Transmission with Reinforcement Learning and Vehicle Clustering
###### Abstract
Efficient routing algorithms based on vehicular ad hoc networks (VANETs) play an important role in emerging intelligent transportation systems. Their highly dynamic topology poses a number of challenges for wireless communication services. In this paper, we propose a protocol based on reinforcement learning and vehicle-node clustering, called Qucts, to solve vehicle-to-fixed-destination and V2V messaging problems. It improves message delivery rates with minimal hops and latency, while also taking link stability into account. The protocol is divided into three levels. First, vehicles are clustered: each cluster head broadcasts its own coordinates and speed to attract more cluster members, and when a cluster member receives a broadcast message from another cluster head, the cluster head builds a list of surrounding clusters, from which the best cluster toward the destination is chosen as the next cluster during message passing. Second, the protocol constructs a Q-value table based on the post-clustering state, which is used in the selection of message-forwarding clusters. Finally, we introduce a parameter expressing the stability of each vehicle within its cluster, used for communication-node selection. This protocol hierarchy makes Qucts both an offline and an online solution. To distinguish unstable nodes within a cluster, each road is coded; vehicles with planned routes, such as ride-hailing cars and public buses, compare their path overlap with other planned-route vehicles in the cluster, and those with low overlap are labeled as unstable nodes. The path overlap rate of vehicles without a planned route is set to the mean value. Comparing Qucts with existing routing protocols through simulation, our proposed scheme provides large improvements in both data delivery rate and end-to-end delay reduction.
Q-learning, Vehicular Ad Hoc Networks, Clustering, Position based routing, V2V communication
## I Introduction
VANETs are a key component of intelligent transportation systems, encompassing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. V2I performance depends on the coverage of roadside units (RSUs). V2I communications utilize infrastructure located at the roadside to provide internet access for vehicles, achieving wide-scale information dissemination. In comparison, V2V communication only allows the exchange of information between neighboring vehicles. However, V2V communication has two main advantages over V2I: its shorter communication distance reduces path loss, which helps improve transmission reliability; and it can support delay-sensitive vehicle applications that occur between neighboring vehicles, for example, traffic hazard warnings. The disadvantage of V2V is that vehicles joining or leaving can cause frequent communication breakdowns in the network; designing efficient routing algorithms is therefore a challenging task in VANETs.
Unicast communication has a wide range of application scenarios in VANETs. It is a one-to-one communication model in which data is transmitted from a single source to a single destination, and it can be used for information exchange between vehicles, vehicle-to-infrastructure communication, remote diagnostics and maintenance, remote control, as well as vehicle advertising and entertainment.
Unlike mobile ad hoc networks (MANETs), VANETs feature fast vehicle-node movement, frequent link breaks, short link maintenance times, and complex communication scenarios. Delay-tolerant networks (DTNs) are specialized for networks with frequent link breaks, but DTN techniques are primarily used in small mobile devices and cannot meet the performance standards of large-scale vehicular networks. Location-based routing is a common strategy in VANETs; it uses vehicle location information to optimize packet transmission and routing decisions, taking into account factors such as vehicle position and speed to ensure efficient data transmission and link stability. HAEQR is a routing algorithm based on reinforcement learning whose main goal is to find a good strategy in a collaborative decision-making problem with multiple agents, so as to maximize cumulative rewards. However, because the algorithm treats each vehicle node as part of the agents' learning environment, it suffers from slow convergence and high computational overhead.
To overcome the above problems, in this paper we propose a new unicast communication transmission scheme, Qucts, characterized by high data delivery rates and low end-to-end latency. The main contributions of this paper are as follows:
* Unlike existing methods, we consider clusters, rather than individual vehicles, as the environment states for Q-learning. This considerably reduces the state space and increases the convergence rate. When a relay node moves out of transmission range, a stable node within the cluster is reassigned as the relay node, so the optimal transmission path does not have to be rediscovered, which reduces computational overhead.
* Each road is coded. For vehicles with planned routes, such as online car-hailing vehicles and public buses, path overlap rates are compared, and vehicle nodes with high overlap are considered stable nodes. The path overlap rate of vehicles without an available planned route is set to the mean value. Marking unstable nodes in this way improves link stability.
* We introduce an indicator of vehicle stability within clusters. The cluster head forms an angular range based on the directions of the previous and next clusters, and the most stable node within that range is selected as the relay node for communication.
The rest of this paper is structured as follows. Section II reviews related work on VANET routing. Section III describes the vehicle clustering system model. Section IV describes the learning process of Q-learning and the selection of transmission relay nodes. Section V presents the simulation results. Section VI summarizes the paper and outlines our future work.
## II Related Work
Location-based VANET routing algorithms aim to utilize vehicle geolocation information to support routing decisions; such algorithms can help vehicles communicate and transmit data more efficiently in mobile networks, determining the transmission path of a packet based on the locations of the destination node and of neighboring nodes. One representative is GPSR, a greedy routing algorithm that selects the neighbor closest to the destination as the next hop; when greedy forwarding fails, packets are transmitted through a series of perimeter nodes. Reference () proposes a related improved algorithm.
OLSR is a link-state protocol in which each node periodically broadcasts its link-state information, including connection status with neighboring nodes, link quality, and hop-count information. The protocol is characterized by low overhead, fast convergence to optimal paths, and support for both multicast and unicast. In a dense network, however, these control messages may result in higher network overhead, bandwidth usage, and energy consumption; in the VANET environment in particular, the rapid movement of vehicles can lead to excessive network overhead. LAR proposes a method that fully utilizes node location information to reduce route requests. LAR adapts well to network topology changes, and its design is relatively simple and easy to implement. However, since nodes need to perform position checks, LAR may introduce some additional latency, especially when nodes move frequently.
QLAODV is an improved routing protocol based on the Q-learning approach: it uses a Q-learning algorithm to infer the state of the VANET environment, and each vehicle maintains a Q-value table that serves as the node's forwarding routing table. However, when there are too many vehicle nodes, the state space becomes overly large and convergence slows down. CHAQR proposes an improved Q-learning-based algorithm that guides the forwarding actions of nodes by introducing heuristic functions and inter-node delay information, accelerating the convergence of learning. It still does not solve the problem of an oversized state space, however.
Qgrid proposes a new unicast routing protocol that partitions known maps into grids; based on the vehicles currently in a grid and those about to enter it, Q-learning selects the best grid, forming an optimal grid pathway that significantly reduces the state space and accelerates convergence. However, the grid-partitioning approach still has shortcomings. First, if the grid is too large, reinforcement learning is less effective, whereas if it is too small, each grid contains fewer vehicles, increasing the number of link disconnections and reducing link stability. Second, vehicle density is not uniformly distributed in complex urban environments: when nearby vehicle density is low, a selected grid may contain no vehicles at all, requiring the optimal grid to be recomputed and increasing computational overhead.
## III Clustering Algorithms And Cluster Based Model
We assume that all vehicles have GPS devices to obtain real-time information about their traveling direction, speed, geographic location, etc. While the vehicles are traveling, they periodically exchange hello packets, which contain basic real-time information about the vehicles, such as vehicle identifiers, cluster head IDs, and current status.
To improve the efficiency of the system, the hello packet transmission beacon interval (BI) is introduced, which can be adaptively adjusted according to the link lifetime and vehicle status. The node selected as the cluster head broadcasts hello messages at a certain interval to attract more cluster members to join. When members in other nearby clusters receive the broadcast message from the cluster head, they will pass the hello packets to the cluster head of the cluster they are in, thus enabling the entire cluster network to accurately perceive the location and status of other nearby clusters.
### _Status of Vehicles_
In our proposed clustering approach, the vehicles are categorized into five states: the initial state IS, the cluster head CH, the cluster member CM, the temporary cluster head TCH, and the cluster member CMT that participates in unicast transmission.
* **Initial State IS**: At the beginning, all vehicles are in the initial state and do not belong to any cluster.
* **Cluster Head CH**: There is only one cluster head in each cluster, which establishes one-hop communication with the cluster members. The cluster head maintains two critical lists: a list of cluster members, recording the information of cluster members, and a list of surrounding clusters, recording the state of neighboring clusters. In unicast communication, the CH also assigns cluster members as relay nodes.
* **Cluster Member CM**: Vehicles within one-hop communication of the cluster head CH; they are candidate nodes for unicast data transmission, or simply cluster members with no particular responsibilities.
* **Temporary Cluster Head TCH**: When a vehicle leaves its original cluster, or its timer expires in the IS state while all neighboring nodes are in the CM state, it declares itself a temporary cluster head until it finds a new cluster or a vehicle in the IS state joins it.
* **Cluster Member Transmission CMT**: A cluster member involved in data transmission. When the previous cluster sends a data transmission request, the cluster head selects a CMT node, based on certain conditions, to act as the relay node for data transmission.
After a vehicle starts in the initial state IS, it sets a timer \(T_{is}\). If, when \(T_{is}\) expires, there are no neighboring nodes or all neighboring nodes are in the CM state, the vehicle switches its state to temporary cluster head TCH. When the condition \(CM_{join}\) for becoming a cluster member is satisfied, the vehicle state changes from TCH to CM; alternatively, if a TCH node has three or more TCH neighbors, the election of a cluster head is triggered and the vehicle state changes accordingly. When there is a neighboring vehicle in the IS state and the condition \(CM_{join}\) is satisfied, the vehicle state changes from IS to CM. When the condition \(CH_{condition}\) for becoming a cluster head is satisfied, the vehicle state changes from IS to CH. If the vehicle leaves its original cluster and does not belong to a new cluster, its state returns to the initial state IS. The specific process is shown in Fig. 1.

Fig. 1: Vehicle status change flowchart
### _Adaptive Tuning of Hello Packet_
In most clustering algorithms, hello packets are distributed with a fixed period. The choice of period is crucial: it must ensure that information about surrounding vehicles is updated on time, and an appropriate period avoids excessive communication overhead and management costs while maintaining the timeliness of communication information. If the selected period is too large, the status information of surrounding vehicles may lag because of the long interval between updates; if the period is too small, the communication overhead and the management and maintenance costs increase.
Therefore, we employ a state and link-lifetime-based strategy called Link Expiry Time (LET). This strategy adaptively adjusts the transmission period of hello packets to reflect the duration of continuous communication between two vehicles. The LET between vehicles \(v_{i}\) and \(v_{j}\) is defined as:
\[LET_{ij}=\frac{\left|\triangle v_{ij}\right|\times TR-\triangle v_{ij}\times \triangle D_{ij}}{\left(\triangle v_{ij}\right)^{2}} \tag{1}\]
Where TR is the stable transmission distance between vehicles, \((x_{i},y_{i})\) is the location of vehicle \(v_{i}\), and \(v_{i}\) is the speed of vehicle \(i\). \(\triangle v_{ij}=v_{i}-v_{j}\) and \(\triangle D_{ij}=\sqrt{\left(x_{i}-x_{j}\right)^{2}+\left(y_{i}-y_{j}\right)^{2}}\) are the relative velocity and relative distance of \(v_{i}\) and \(v_{j}\), respectively.
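A minimal sketch of equation (1) (function and variable names are ours; we return infinity when the relative speed is zero, since the link then never expires under this model):

```python
import math

def link_expiry_time(xi, yi, vi, xj, yj, vj, TR):
    """Link Expiry Time between vehicles i and j per equation (1):
    LET = (|dv| * TR - dv * dD) / dv^2, with dv the relative speed
    and dD the current inter-vehicle distance."""
    dv = vi - vj
    dD = math.hypot(xi - xj, yi - yj)
    if dv == 0:
        return math.inf  # equal speeds: the link never expires by this model
    return (abs(dv) * TR - dv * dD) / dv ** 2

# Example: vehicles 40 m apart, speeds 16 and 12 m/s, 250 m range -> 52.5 s.
print(link_expiry_time(0.0, 0.0, 16.0, 40.0, 0.0, 12.0, TR=250.0))
```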
Most vehicles are generally in the CM state in a natural road environment. Compared to cluster heads, cluster members do not need to manage other vehicles as much as cluster heads, so their Beacon Interval (BI) can be set longer. Such a setting can reduce the communication overhead while keeping the data fresh. In other words, a larger BI value can be chosen to minimize the transmission overhead when the LET is long.
CM nodes with a larger LET have a longer link lifetime and can thus be assigned a relatively larger BI value. CHs, by contrast, usually account for only a small share of vehicles in a road environment, and they must connect in a timely manner to IS and TCH vehicles within transmission range in order to gain more CM nodes. Therefore, the BI value of cluster heads needs to be relatively small to maintain real-time awareness of, and connectivity with, surrounding vehicles.
Vehicles in the IS state need to update information about surrounding cluster heads in a timely manner so that they can join a cluster as soon as possible. Similarly, TCH vehicles need to quickly sense the nearby environment, either to enter new clusters or to enable nearby IS-state vehicles to join and form new clusters. Therefore, the BI value also needs to be set relatively small in both cases. Based on the above description, we set the beacon interval BI as follows:
\[BI_{i}=\left\{\begin{array}{cl}0.3&ST_{i}\in(CH,IS,TCH)\\ 0.3&LET_{ij}\in(0,5)\,,\ \ ST_{i}\in(CM)\\ 0.5&LET_{ij}\in[5,20)\,,\ \ \ ST_{i}\in(CM)\\ 1.0&LET_{ij}\in[20,+\infty)\,,\ \ ST_{i}\in(CM)\end{array}\right. \tag{2}\]
Where \(ST_{i}\) is the current state of vehicle \(v_{i}\), \(LET_{ij}\) is the LET between CM vehicle \(v_{i}\) and CH vehicle \(v_{j}\). If the vehicle status is in CH, IS or TCH, then the vehicle's BI is set to 0.3. The BI of a vehicle in CM depends on the length of the LET.
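The rule in equation (2) translates directly into a small lookup; a minimal sketch (function name ours):

```python
def beacon_interval(state, let=None):
    """Adaptive beacon interval (seconds) per equation (2).
    state: one of 'CH', 'IS', 'TCH', 'CM'; let: LET to the cluster head,
    required only for CM nodes."""
    if state in ('CH', 'IS', 'TCH'):
        return 0.3
    if let < 5:
        return 0.3
    if let < 20:
        return 0.5
    return 1.0
```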
### _Cluster Head Selection Mechanism_
If a vehicle in the IS state receives a hello packet from another vehicle in the IS state or TCH state, this triggers the cluster head selection mechanism. The relative speed and distance can be calculated from the information in the received packet, such as speed, direction, coordinates, and the numeric code of the passing path.
\[\overline{\triangle v_{i}}=\frac{\sum_{v_{j}\in STS_{i}}|\triangle v_{ij}|}{ STS\_num_{i}} \tag{3}\]
\[\overline{\triangle D_{i}}=\frac{\sum_{v_{j}\in STS_{i}}|\triangle D_{ij}|}{ STS\_num_{i}} \tag{4}\]
LET mean is calculated as follows:
\[\overline{\triangle LET_{i}}=\frac{\sum_{v_{j}\in STS_{i}}LET_{ij}}{STS\_num_ {i}} \tag{5}\]
where \(STS_{i}\) is the set of IS-state neighbors of vehicle \(v_{i}\), and \(STS\_num_{i}\) is the number of neighbors of vehicle \(v_{i}\).
\[\overline{\triangle PMD_{i}}=\frac{\sum_{v_{j}\in STS_{i}}PMD_{ij}}{STS\_num_ {i}} \tag{6}\]
As shown in Fig. 2, each road is coded separately in order to compare vehicles with planned paths, such as online car-hailing vehicles and buses. The path code matching degree (PMD) can be calculated from the received hello packets: \(PMD_{ij}\) represents the number of overlapping path codes of \(v_{i}\) and \(v_{j}\), and \(PMD_{max}\) is the largest path-code overlap among the vehicles. For vehicles without planned paths, the PMD is set to the mean value. The weighted sum for \(v_{i}\) is calculated as follows:
\[\begin{split} MW_{i}=\beta\times\frac{\overline{\triangle v_{i}}} {\triangle v_{max}}+\delta\times\frac{\overline{\triangle D_{i}}}{\triangle D _{max}}+\varepsilon\times\left(1-\frac{\overline{LET_{i}}}{LET_{max}}\right) \\ +\mu\times\left(1-\frac{\overline{PMD_{i}}}{PMD_{max}}\right) \end{split} \tag{7}\]
\[\beta+\delta+\varepsilon+\mu=1 \tag{8}\]
Where \(MW_{i}\) represents the weighted sum of the migration rates of vehicle \(v_{i}\). This value is calculated by considering the average speed, relative distance, link survival time, and planned-path overlap of the vehicle's neighbors. Specifically, \(MW_{i}\) increases with the average relative speed and relative distance and decreases with the link survival time and planned-path overlap; in other words, a smaller \(MW_{i}\) indicates a more stable node. Therefore, the vehicle with the smallest \(MW_{i}\) among the IS-state neighbors is selected as the CH node. The node that becomes CH also broadcasts hello requests regularly, and vehicles in the IS and TCH states that meet the requirement join the cluster.
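A minimal sketch of this selection step, combining equations (3)-(8) (array names and the illustrative equal weights are ours; each input is the vector of per-candidate neighbor averages, and the normalizers are taken as the maxima across candidates):

```python
import numpy as np

def select_cluster_head(mean_dv, mean_dD, mean_let, mean_pmd,
                        w=(0.25, 0.25, 0.25, 0.25)):
    """Cluster-head selection via equations (3)-(8). Inputs are arrays of
    per-candidate neighbor means from (3)-(6); the weights (beta, delta,
    epsilon, mu) sum to 1 and are set to illustrative equal values here."""
    beta, delta, eps, mu = w
    MW = (beta * mean_dv / mean_dv.max()
          + delta * mean_dD / mean_dD.max()
          + eps * (1 - mean_let / mean_let.max())
          + mu * (1 - mean_pmd / mean_pmd.max()))
    return int(np.argmin(MW))  # the most stable candidate becomes CH
```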
### _Cluster Formation_
We assume that vehicles enter the roadway one by one according to a specific traffic flow and select cluster heads during the clustering process. Once a vehicle enters the road, it starts in the IS state and broadcasts a hello message with a BI of 0.3 seconds, containing its identifier, speed, direction of movement, and coordinate information; if the vehicle has a planned path, the message also includes the path code to the destination. In addition, a timer is set for each vehicle as it enters the roadway. When the timer expires, if the vehicle's state is still undetermined and there are no neighboring nodes, or all neighbors are in the CM state, the vehicle declares itself a TCH; if instead there are neighbors in the IS state, or three or more TCH nodes within transmission range, the cluster formation process is initiated.
The node in the IS state first starts the computation and selects the node with the minimum \(MW_{i}\) value as the cluster head. Then, it starts broadcasting the hello message. The node will determine by itself whether it is located within the cluster head's transmission range and traveling in the same direction. If the conditions of transmission range and traveling in the same direction are satisfied, then the node will send an acknowledgment request to the cluster head with its hello packet. At the same time, it will change its state from IS state to CM and set the cluster-ID.This handshake mechanism, which autonomously determines whether to join a cluster based on the receipt of the hello packet, significantly reduces the computational burden of the CHs and makes the formation of clusters more efficient.
In addition, if two CHs enter each other's communication range and the distance \(L\) between them is
Fig. 2: Coding different roads separately and comparing the overlap of vehicle paths
less than the threshold TR, the cluster fusion condition is satisfied; the cluster head with the higher \(MW_{i}\) value then gives up its cluster-head state and becomes a CM instead. This mechanism helps to avoid excessive cluster-head competition and conflict, improving the stability and efficiency of the clusters.
## IV Q-learning Cluster-based Routing And CMT Selection Mechanism
QCBR combines Q-learning and cluster-based routing and is divided into three levels. First, it clusters the vehicles and generates a list of neighboring clusters; then, among the formed clusters, it finds the optimal cluster ordering with the Q-learning algorithm. Finally, it assigns CMT communication nodes based on the position information of the previous and next clusters to realize unicast communication. In the following, we describe the Q-learning and CMT node selection processes in detail.
### _Q-learning_
Typically, a reinforcement learning model chooses an action \(a\) to perform in an environment state \(s\) and then enters the next state. The agent evaluates the value of the state-action pair based on the reward value fed back from the environment and the next state reached after acting. If an action \(a_{t}\) of the agent brings a positive reward value \(R_{t}\), then the policy that performs this action is reinforced. The agent's task is to find the optimal policy that maximizes the expected sum of discounted rewards over discrete states. In routing, the reward only becomes available when the target node receives a message, so model-based approaches may be less practical in real-world applications. In contrast, Q-learning is a model-free algorithm that finds the optimal policy through a Markov Decision Process (MDP), even if the agent has no prior knowledge of its impact on the environment. This allows decision-making strategies to be improved through continuous learning and experimentation without needing a priori Q-values.
Finding the optimal cluster ordering using reinforcement learning involves the following steps:
* **Learning Environment**: The clustered vehicular network serves as the learning environment.
* **State Space**: Each state in the state space is a cluster of one or more vehicles.
* **Action**: The transmission of a packet from a CMT node in one cluster to a CMT node in another cluster; each such transmission changes the packet's state.
* **Agent**: Each packet transmitted across a cluster can be viewed as an agent.
* **Reward Function**: The value obtained by the agent for performing an action is the reward; a specific reward value is obtained whenever a packet travels from the current cluster to another.
The Q-learning algorithm mainly utilizes Equation (9) to iteratively update the Q value until convergence. The optimal policy can be constructed by selecting the action with the highest value in each state.
\[\begin{split} Q\left(s_{t},a_{t}\right)&\leftarrow \left(1-\alpha\right)Q\left(s_{t},a_{t}\right)+\alpha\left(f_{R}\left(s_{t},a_{ t}\right)\right.\\ &\left.+\gamma\underset{a^{{}^{\prime}}}{max}Q\left(f_{S}\left(s_ {t},a_{t}\right),a^{{}^{\prime}}\right)\right)\end{split} \tag{9}\]
In Equation (9), \(Q\left(s_{t},a_{t}\right)\) is the value of the state-action pair, often referred to as the Q-value. The learning rate \(\alpha\) determines to what extent newly acquired information overwrites old information: with a learning rate of 0 the agent learns nothing, while with a learning rate of 1 it considers only the latest information. \(f_{R}\left(s_{t},a_{t}\right)\) is the reward value, and \(\gamma\) is a discount factor determining the importance of future rewards. \(a^{\prime}\) denotes an action available in the next state.
Since the states and actions are discrete, the reward function is bounded, and each \(\left(s_{t},a_{t}\right)\) pair can be visited an unbounded number of times, the Q-learning iteration converges; that is, Q-learning reaches the optimal Q-value function within a finite number of learning iterations. In the design of QCBR routing, the main goal is to transmit the message from the cluster of the message-sending vehicle to the destination with high probability, which matches the typical Q-learning setting: the agent learns the Q-value function to make decisions that maximize the cumulative reward. Each cluster head maintains a table of Q-values that represents the relationships between clusters. When the cluster head needs to select the next cluster to deliver a message, it selects the cluster with the highest Q-value as the next-hop cluster.
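As a worked example of the update in Equation (9), the following minimal tabular Q-learning sketch propagates the terminal reward of the Fig. 3 example backwards; the cluster identifiers, learning rate, and discount factor are illustrative choices only.

```python
# Tabular Q-learning update of Eq. (9) on the Fig. 3 example.
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] -> value
alpha, gamma = 0.1, 0.9           # illustrative learning rate and discount

def update(s, a, reward, s_next, next_actions):
    best_next = max((Q[(s_next, a2)] for a2 in next_actions), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (reward + gamma * best_next)

update("N7", "to_E", reward=100.0, s_next="E", next_actions=[])    # terminal reward
update("N5", "to_N7", reward=0.0, s_next="N7", next_actions=["to_E"])
print(Q[("N5", "to_N7")])         # value propagated back from the destination
```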
Fig. 3 shows how Q-learning works, where the dots represent different clusters, and each cluster \(N_{i}\) represents a different state in the state space \(S\). The arrows indicate the direction of packet transmission, and packet transmissions between different clusters have different Q-values. In this example, the transmission operation connecting to the destination node E is rewarded 100 points, while other operations receive no reward. The message is transmitted from the start point S to the end point E. At the same time,
Fig. 3: Reinforcement learning process
cluster D is about to leave the area, so the message cannot be transmitted efficiently through cluster D to the end point E. Therefore, there are three feasible cluster selections that can determine the transmission path of the message: \(S\to N_{1}\to N_{5}\to N_{7}\to E\), \(S\to N_{2}\to N_{5}\to N_{7}\to E\), and \(S\to N_{3}\to N_{6}\to N_{8}\to E\).
### _Cluster-based Routing_
Since the number of vehicles in each cluster may vary, clusters with more members should be preferred when selecting the next cluster for message transmission. The advantage is that the CH can more quickly find an alternative node as the next CMT node, even if the current CMT node leaves the transmission range. Our goal is to select stable, reliable links with few hops for propagating messages. Therefore, we set the discount factor \(\gamma\) as a dynamic parameter positively related to the number of cluster members, and represent it with a piecewise function, where CMN denotes the number of cluster members within a cluster.
\[\overline{CMN}=\frac{1}{N}{\sum_{k=1}^{N}{CMN_{k}}} \tag{10}\]
where \(\overline{CMN}\) is the mean cluster membership over all clusters in the map and \(N\) is the number of clusters in the map.
\[CHV_{ij}=\sqrt{\left|CHV_{i}^{2}-CHV_{j}^{2}\right|} \tag{11}\]
\[CHD_{ie}\approx\sqrt{\left(CHX_{i}-CHX_{e}\right)^{2}+\left(CHY_{i}-CHY_{e}\right)^{2}} \tag{12}\]
where \(CHV_{ij}\) is the difference in speed between cluster heads \(CH_{i}\) and \(CH_{j}\), and \(CHD_{ie}\) is the relative distance between cluster head \(CH_{i}\) and the end point E. Since the cluster head is always at the center of the cluster, the speed of the CH represents the speed of the cluster. We set \(CHV_{ij}=1\) whenever \(CHV_{ij}\leq 1\).
\[\omega=\frac{1}{CHV_{ij}} \tag{13}\]
\[\varphi=\frac{1}{CHD_{ie}} \tag{14}\]
\[\gamma=\begin{cases}0.9\omega\varphi&\text{if}\quad\overline{CMN}\leq CMN_{i}\\ 0.7\omega\varphi&\text{if}\quad\frac{\overline{CMN}}{2}\leq CMN_{i}<\overline{CMN}\\ 0.5\omega\varphi&\text{if}\quad 0\leq CMN_{i}<\frac{\overline{CMN}}{2}\end{cases} \tag{15}\]
Depending on the density of vehicles in the different clusters, \(\gamma\) ranges from 0 to 0.9, so we can judge whether the density of a neighboring cluster makes it suitable as the next cluster to deliver the message. At the same time, we introduce the relative speed between clusters and the distance from the cluster to the end point E as parameters \(\omega\) and \(\varphi\) that affect the discount factor; a smaller relative speed between clusters corresponds to a larger \(\gamma\) value. Ultimately, the \(\gamma\) value adjusts the routing decision to better optimize message delivery efficiency.
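The following sketch transcribes Equations (10)–(15) directly; the argument names are hypothetical, and the floor \(CHV_{ij}=1\) follows the convention stated above.

```python
# Density- and geometry-dependent discount factor, Eqs. (10)-(15).
def discount_factor(cmn_i, cmn_mean, chv_ij, chd_ie):
    """cmn_i: members in candidate cluster i; cmn_mean: mean cluster size (Eq. 10);
    chv_ij: CH speed difference (Eq. 11); chd_ie: CH-to-destination distance (Eq. 12)."""
    chv_ij = max(chv_ij, 1.0)                 # CHV_ij is floored at 1
    omega = 1.0 / chv_ij                      # Eq. (13)
    phi = 1.0 / chd_ie                        # Eq. (14)
    if cmn_i >= cmn_mean:                     # Eq. (15), densest case
        base = 0.9
    elif cmn_i >= cmn_mean / 2:
        base = 0.7
    else:
        base = 0.5
    return base * omega * phi
```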
As shown in Algorithm 1, we assume that each vehicle has an offline-learned Q-value table that follows Equation (9). In addition, we set a \(TTL\) field for each packet to bound the survival time of the message: each intra-cluster transmission decrements \(TTL\) by 0.5, and each inter-cluster transmission decrements it by 1. If the message cannot reach the destination directly, it continues to be forwarded according to the policy as long as \(TTL>0\); once \(TTL\leq 0\), the message is discarded.
```
Vehicle S has a message destined for a specific location E
if E is within the transmission range of cluster N_i then
    Transfer the message to E
else
    if the message is transmitted within a cluster then
        TTL = TTL - 0.5
    else  // the message is transferred between clusters
        TTL = TTL - 1
    end if
    if TTL > 0 and neighbor(N_i) ≠ ∅ then
        for all clusters in neighbor(N_i) do
            Look up the Q-value of the neighbor cluster
        end for
        Select the cluster with the largest Q-value as the next-hop cluster
    else
        Discard the message
    end if
end if
```
**Algorithm 1** QCBR: Q-learning and Cluster Based Routing
### _Transmission Relay Node CMT Selection Mechanism_
To achieve unicast communication after clustering, specific cluster members must be selected as CMT nodes. However, some CM nodes may be about to leave the cluster, and selecting these soon-to-be-leaving CM nodes as CMT nodes will result in frequent disconnections and reconnections of the communication link. This will adversely affect the end-to-end communication delay and data delivery rate. Therefore, we have designed a selection mechanism for CMT nodes, the details of which are given below.
For vehicles that have planned their paths, \(PMD_{ij}\in\{0,1\}\) means that \(v_{i}\) and \(v_{j}\) have no overlapping road segments or only one overlapping segment, i.e., they are about to leave the communication range of the current cluster head; such nodes are therefore labeled unstable nodes (UN).
\[PMD_{ij}=\begin{cases}0,1&\text{unstable}\\ \geq 2&\text{stable}\end{cases} \tag{16}\]
As shown in Fig. 4, when the CMT node of the previous cluster sends a transfer request, the cluster head selects a node in the cluster based on the coordinates of the previous cluster's CMT node and the node's LET value, considering candidates with \(LET\in[0,20)\) that lie within the pinch angle \(\theta\). The CMT node selection is categorized into the following scenarios:
* **Case 1:** First, the CM with \(LET\in[0,20)\) in the range of pinch angle \(\theta\) and the smallest value of \(MW_{i}\) is selected as the CMT node of the cluster. Because the nodes in this range are located in the middle of the cluster, a balance is achieved between node stability and hop count.
* **Case 2:** Otherwise, the node with \(LET\in[0,5)\) and the smallest \(MW_{i}\) value within the angle \(\theta\) is selected. Nodes in this range are usually located at the edge of the cluster and can directly receive information from the CMT node of the previous cluster, but due to the small LET value there is a risk of leaving the cluster, which can lead to link fluctuation.
* **Case 3:** Next, the node with the smallest \(MW_{i}\) value among those with \(LET\in[20,+\infty)\) is selected, since nodes in this position are relatively close to the CH and the pinch-angle constraint need not be considered. Although these nodes are more stable and less likely to fall out of the cluster's transmission range, they are usually farther away from the CMT node of the previous cluster, which may increase the number of hops for data transmission.
* **Case 4:** Finally, when the vehicle density is too low to fulfill the previous conditions, UN nodes or CH nodes are selected as CMT nodes.
When a CMT node within a cluster loses connectivity, the CMT selection mechanism is retriggered. The cluster ordering formed by Q-learning informs the cluster head of the next cluster that should receive the information. Based on the cluster head's list of neighboring clusters, it then determines the location of the next cluster head and the distance \(L\) to it. When the current CMT node has \(LET\in[0,5)\), a node with \(LET\in[20,+\infty)\) is selected as the next CMT node. If the cluster-head spacing satisfies \(TR\leq L<2TR\) and the in-cluster CMT node has \(LET\in[5,+\infty)\), the information transmission request goes directly to the next cluster. If \(2TR<L\), the cluster head is triggered to select another intra-cluster CMT node, following the same steps as above. When a TCH node receives a transmission request, it automatically changes to the CMT state. This mechanism ensures message delivery in case of low vehicle density; a schematic fallback routine for the case analysis above is sketched below.
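The case-based CMT selection can be summarised as a fallback chain. The following is a minimal sketch under assumed data structures: each member carries `let`, `mw`, `is_un`, and `is_ch` fields, and `in_theta` is a predicate testing membership in the pinch angle \(\theta\); these names are illustrative.

```python
# Fallback chain for CMT selection (Cases 1-4).
def select_cmt(members, in_theta):
    def best(cands):
        return min(cands, key=lambda m: m["mw"]) if cands else None
    # Case 1: LET in [0, 20) within the pinch angle (middle of the cluster).
    pick = best([m for m in members if in_theta(m) and 0 <= m["let"] < 20])
    # Case 2: LET in [0, 5) within the pinch angle (cluster edge, riskier link;
    # note that, as stated in the text, this range is contained in Case 1's).
    pick = pick or best([m for m in members if in_theta(m) and 0 <= m["let"] < 5])
    # Case 3: LET >= 20, pinch angle ignored (close to the CH, more hops).
    pick = pick or best([m for m in members if m["let"] >= 20])
    # Case 4: low density -- fall back to UN or CH nodes.
    pick = pick or best([m for m in members if m["is_un"] or m["is_ch"]])
    return pick
```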
### _QCBR Protocol Message Forwarding Process_
This protocol has been designed considering different vehicle densities in the city. Usually, QCBR selects clusters with higher vehicle density to disseminate messages. However, in some cases, such as suburban areas and other environments with lower vehicle densities, vehicles may not be able to form clusters, which results in messages not being transmitted. Therefore, we introduce a timer mechanism for vehicles in the IS state. When the timer expires, if there is no other node in the IS state around, then the state of the vehicle will be changed to TCH, thus enabling it to participate in message transmission.
As shown in Fig. 5, the message is sent from the blue vehicle S node, forwarded by the CMT node carrying the message, and finally reaches the destination node E.
## V Simulations
We validate the performance of QCBR in different scenarios and compare it with existing location-based, reinforcement-learning-based, and geo-grid-based protocols. The detailed setup and experimental results are given below.
### _Experimental Scenarios and Assessment Indicators_
To simulate a realistic traffic environment, we extracted a 3000 m \(\times\) 3000 m area map using OpenStreetMap, which contains a complex road network covering both dense and sparse road areas. We use SUMO, OMNeT++, and Veins for the simulation experiments. The road simulation tool SUMO (Simulation of Urban MObility) handles large road networks and simulates traffic models; OMNeT++ is a discrete event simulator based on C++; Veins is a simulation framework for inter-vehicle communication that couples an event-based network
Fig. 4: Selection of CMT nodes within a cluster
Fig. 5: Example of message forwarding with QCBR
simulator with a road traffic simulator. The clustering algorithm and Q-learning settings used during the simulation are shown in the following table.
We evaluated the experiment using the following metrics:
* **Hop Count**: The average number of hops per successful packet transmission.
* **Packet Delivery Ratio**: Ratio of successfully transmitted packets to the destination to the total number of packets sent by the source node.
* **End-to-End Delay**: The average time experienced by a packet to be sent from the source node to the final destination.
### _Comparing Routing Protocols_
We use the following protocols for comparison with our proposed QCBR:
* **GPSR**: GPSR is a geographic-location-based greedy routing algorithm that selects the neighboring node nearest to the target node as the next hop; when greedy forwarding fails, packets are transmitted through a series of perimeter nodes.
* **QLAODV**: QLAODV is a routing protocol based on a Q-learning approach, where each vehicle maintains a Q-value table and selects the neighbor with the largest Q-value as the next-hop node.
* **Qgrid**: Qgrid is a geographic grid-based routing protocol that assumes a digital map for each vehicle. It divides the
Fig. 8: Comparison of Routing Simulations for Different Protocols (vehicle node set to 600, PMD set to 120)
Fig. 6: Comparison of Routing Simulations for Different Protocols (vehicle node set to 200, PMD set to 40)
Fig. 7: Comparison of Routing Simulations for Different Protocols (vehicle node set to 400, PMD set to 80)
map into grids of uniform size, finds the best geographic-grid transmission path through reinforcement learning, and unicasts the signal along the grid while guiding the vehicle along the best path. In the Qgrid setup, the reinforcement-learning parameters are consistent with ours, and the grid length is set to 200 m.
### _Analysis of Experimental Results_
In urban environments, vehicle density is not uniformly distributed. We therefore set different numbers of vehicle nodes (200, 400, 600) to verify the transmission performance under different vehicle densities. Meanwhile, we designate some vehicles as PMD nodes with known paths, corresponding to buses and internet taxis traveling along fixed routes or planned paths in urban environments. The numbers of PMD nodes, corresponding to one-fifth of all vehicles, are (40, 80, 120), respectively.
In our experiments, we compare performance under different vehicle densities. As shown in Fig. 6(a), the reinforcement-learning-based algorithms significantly outperform the location-based GPSR algorithm in data delivery rate at low vehicle densities. This is because reinforcement learning considers the stability of neighboring nodes rather than just looking for the node closest to the destination. However, the increased hop count slightly increases the end-to-end delay: as shown in Fig. 6(c), the delay is slightly higher than GPSR's, while the data delivery rate is substantially higher. In the low-density setting, our algorithm does not open a significant gap over Qgrid; the two trade advantages at different TTLs, and QCBR's end-to-end delay is slightly higher than QLAODV's.
In the medium vehicle density condition, as shown in Fig. 7(a), our data delivery rate improves: with increased vehicle density, CMT selection has more alternative nodes, which improves link stability. The improvement is relatively significant compared to QLAODV and GPSR. Compared to the grid-based reinforcement learning routing algorithm Qgrid, our data delivery rate is slightly better; grid-based routing has inherent limitations, since the chosen grid size significantly impacts the results. An overly large grid speeds up convergence, but each grid then contains too many vehicles, and using a greedy algorithm to select transmission nodes within the grid dilutes the benefit of reinforcement learning. As shown in Fig. 7(b), QCBR and Qgrid have higher hop counts and delays, mainly because they select the paths most likely to succeed rather than the shortest paths.
In the case of high vehicle density, QCBR significantly improves the data delivery rate compared to the other routing protocols. QCBR's mechanism of using planned-path information makes it easier to distinguish unstable nodes. From Fig. 8(a), it can be seen that the data delivery rates of GPSR and QLAODV even decrease when the vehicle density is high. For GPSR, more vehicles make the road situation more complex, so finding the node nearest to the destination becomes harder, which hurts the data delivery rate. For QLAODV, the routing state space becomes larger when there are too many neighboring nodes, leading to difficulty in convergence; in contrast to Fig. 7(a), adding vehicle nodes even reduces its data delivery rate. Fig. 8(c) shows a significant increase in the delay of GPSR and QLAODV compared to the low-density case.
## VI Conclusion
We propose a unicast routing protocol called QCBR, which addresses the challenge of slow convergence of reinforcement learning in VANET environments. By combining reinforcement learning and clustering algorithms, the QCBR protocol significantly improves the data reception rate of vehicles reaching their destinations. Even in the case of unicast link disconnection, the QCBR protocol can quickly select alternative vehicles as CMT nodes to ensure communication continuity. In addition, our protocol exhibits stable communication performance in environments with different vehicle densities. Through a series of simulation experiments, we demonstrate that the QCBR protocol improves the data delivery rate and reduces the end-to-end delay relative to other unicast routing protocols based on reinforcement learning.
| Efficient routing algorithms based on vehicular ad hoc networks (VANETs) play an important role in emerging intelligent transportation systems. The highly dynamic topology poses several challenges for wireless communication services. This paper proposes a protocol based on reinforcement learning and vehicle-node clustering, called QCBR, which solves the problem of delivering messages from a vehicle to a fixed destination, improving the message delivery ratio with minimal hops and latency while ensuring link stability. The protocol is divided into three levels: first, vehicles are divided into clusters, and each cluster head broadcasts its own coordinates and speed, thereby obtaining more cluster members. In addition, cluster member |
2303.18242 | $\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified
States | This paper introduces $\infty$-Diff, a generative diffusion model defined in
an infinite-dimensional Hilbert space, which can model infinite resolution
data. By training on randomly sampled subsets of coordinates and denoising
content only at those locations, we learn a continuous function for arbitrary
resolution sampling. Unlike prior neural field-based infinite-dimensional
models, which use point-wise functions requiring latent compression, our method
employs non-local integral operators to map between Hilbert spaces, allowing
spatial context aggregation. This is achieved with an efficient multi-scale
function-space architecture that operates directly on raw sparse coordinates,
coupled with a mollified diffusion process that smooths out irregularities.
Through experiments on high-resolution datasets, we found that even at an
$8\times$ subsampling rate, our model retains high-quality diffusion. This
leads to significant run-time and memory savings, delivers samples with lower
FID scores, and scales beyond the training resolution while retaining detail. | Sam Bond-Taylor, Chris G. Willcocks | 2023-03-31T17:58:08 | http://arxiv.org/abs/2303.18242v2 | # \(\infty\)-Diff: Infinite Resolution Diffusion
###### Abstract
We introduce \(\infty\)-Diff, a generative diffusion model which directly operates on infinite resolution data. By randomly sampling subsets of coordinates during training and learning to denoise the content at those coordinates, a continuous function is learned that allows sampling at arbitrary resolutions. In contrast to other recent infinite resolution generative models, our approach operates directly on the raw data, not requiring latent vector compression for context, using hypernetworks, nor relying on discrete components. As such, our approach achieves significantly higher sample quality, as evidenced by lower FID scores, as well as being able to effectively scale to higher resolutions than the training data while retaining detail.
## 1 Introduction
Denoising diffusion probabilistic models (Ho et al., 2020; Song and Ermon, 2019) have become a dominant choice for data generation, offering stable training and the ability to generate diverse and high quality samples. These methods function by defining a forward diffusion process which gradually destroys information by adding Gaussian noise, with a neural network then trained to denoise the data, in turn approximating the data distribution. Scaling diffusion models to higher resolutions has been the topic of much recent research, with approaches including iteratively upsampling lower resolution images (Ho et al., 2022) and operating in a compressed latent space (Rombach et al., 2022). Deep neural networks typically assume that data can be represented with a fixed uniform grid; however, the underlying signal is often continuous, so these approaches scale poorly with resolution. Neural fields (Sitzmann et al., 2020; Xie et al., 2022) have been proposed to address this problem, where data is represented by mapping coordinates directly to intensities (such as pixel values), making the parameterisation independent of the data resolution.
A number of generative models have been developed which attempt to represent the underlying data as functions to better enable scaling with resolution. These approaches are all based on neural fields; however, because neural fields are inherently independent between coordinates, these approaches rely
Figure 1: We define a diffusion process in an infinite dimensional image space by randomly sampling coordinates and training a model parameterised by neural operators to denoise at those coordinates.
on conditioning the networks on compressed finite-size latent vectors to provide global information. Dupont et al. (2022a) first use meta-learning to compress the dataset into latent conditional neural fields, then approximate the distribution of latents with a DDPM (Ho et al., 2020) or Normalizing Flow (Rezende and Mohamed, 2015); Bond-Taylor and Willcocks (2021) form a VAE-like generative model with a single gradient step used to obtain latents; approaches which use hypernetworks to output the weights of neural fields include Dupont et al. (2022b), who define the hypernetwork as a generator in an adversarial framework, and Du et al. (2021), who use manifold learning to represent the latent space of the hypernetwork.
Compressed latent-based neural field approaches such as these cannot be effectively used to parameterise a diffusion model, where both global and local information must be maintained in order to effectively denoise the data. In this paper we propose \(\infty\)-Diff, addressing these issues:
* We introduce a new mollified-state diffusion generative model which smooths states to be continuous, thereby allowing infinite resolution data to be modelled (see Fig. 2).
* Directly operating on raw data, \(\infty\)-Diff learns a continuous function by denoising data at randomly sampled coordinates allowing generalisation to arbitrary resolutions.
* Our approach achieves state-of-the-art FID scores on a variety of high-resolution image datasets, substantially outperforming other infinite resolution generative models.
## 2 Background
Diffusion Models (Ho et al., 2020; Sohl-Dickstein et al., 2015) are probabilistic generative models that model the data distribution by learning to denoise data samples corrupted with noise. There are two main interpretations, discrete time and continuous time.
### Discrete Time Diffusion
The discrete time interpretation is formed by defining a forward process \(q(\mathbf{x}_{1:T}|\mathbf{x}_{0})\) that gradually adds noise to the data, \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), over \(T\) steps, resulting in a sequence of latent variables \(\mathbf{x}_{1},\dots,\mathbf{x}_{T}\) such that \(q(\mathbf{x}_{T})\approx\mathcal{N}\left(\mathbf{x}_{T};\mathbf{0},\mathbf{I}\right)\). The reverse of this process can also be expressed as a Markov chain \(p(\mathbf{x}_{0:T})\). With Gaussian transition densities chosen to ensure these properties hold, the densities may be expressed as
\[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}),\qquad q( \mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}), \tag{1}\]
\[p(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\qquad p (\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mathbf{\mu}_{\theta}(\mathbf{x}_{ t},t),\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t)), \tag{2}\]
where \(0<\beta_{1},\dots,\beta_{T}<1\) is a pre-defined variance schedule and the covariance is typically of the form \(\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t)=\sigma_{t}^{2}\mathbf{I}\). Aiding training efficiency, \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) can be expressed in closed form as \(q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I})\) where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\) for \(\alpha_{s}=1-\beta_{s}\). Training is possible by optimising the evidence lower bound on the negative log-likelihood, which can be expressed as the KL-divergence between the forward process posteriors and backward transitions at each step
\[\mathcal{L}=\sum_{t\geq 1}\mathbb{E}_{q}\left[D_{\mathrm{KL}}(q(\mathbf{x}_{t-1}|\bm {x}_{t},\mathbf{x}_{0})\,\|p(\mathbf{x}_{t-1}|\mathbf{x}_{t}))\right]=\sum_{t\geq 1} \mathbb{E}_{q}\left[\frac{1}{2\sigma_{t}^{2}}\|\tilde{\mathbf{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0})-\mathbf{\mu}_{\theta}(\mathbf{x}_{t},t)\|_{2}^{2}\right]. \tag{3}\]
Figure 2: Modelling data as functions allows sampling at arbitrary resolutions using the same model with different sized noise. Left to right: \(64\times 64\), \(128\times 128\), \(256\times 256\) (original), \(512\times 512\), \(1024\times 1024\).
for \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};\tilde{\mathbf{\mu}}_{t }(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I})\), where \(\tilde{\mathbf{\mu}}_{t}\) and \(\tilde{\beta}_{t}\) can be derived in closed form. The connection between diffusion probabilistic models such as these, and denoising score-matching models can be made more explicit by making the approximation (De Bortoli et al., 2021),
\[p(\mathbf{x}_{t-1}|\mathbf{x}_{t}) =p(\mathbf{x}_{t}|\mathbf{x}_{t-1})\exp(\log p(\mathbf{x}_{t-1})-\log p(\mathbf{x} _{t})) \tag{4}\] \[\approx\mathcal{N}(\mathbf{x}_{t-1};\sqrt{1-\beta_{t}}\mathbf{x}_{t}+ \beta_{t}\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}),\beta_{t}\mathbf{I}), \tag{5}\]
which holds for small values of \(\beta_{t}\). While \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\) is not available, it can be approximated using denoising score matching methods (Hyvarinen, 2005; Vincent, 2011). Given that \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})=\mathbb{E}_{p(\mathbf{x}_{0}|\mathbf{x}_{t})}[ \nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|\mathbf{x}_{0})]\) we can learn an approximation to the score with a neural network parameterised by \(\theta\), \(s_{\theta}(\mathbf{x}_{t},t)\approx\nabla\log p(\mathbf{x}_{t})\)(Song and Ermon, 2019), by minimising a reweighted variant of the ELBO (Eq. 3).
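To make Section 2.1 concrete, the following is a minimal PyTorch sketch of the closed-form forward marginal \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) and the reweighted noise-prediction objective; the linear \(\beta\) schedule and \(T=1000\) are common illustrative choices, not values prescribed here.

```python
# Closed-form forward sampling and epsilon-prediction loss for a DDPM.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)          # \bar{alpha}_t

def q_sample(x0, t):
    """x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    ab = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise, noise

def loss(eps_model, x0):
    t = torch.randint(0, T, (x0.shape[0],))
    x_t, noise = q_sample(x0, t)
    return torch.mean((eps_model(x_t, t) - noise) ** 2)  # reweighted ELBO
```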
### Continuous Time Diffusion
Song et al. (2021b) generalised the discrete time diffusion to an infinite number of noise scales, resulting in a stochastic differential equation (SDE) with trajectory \(\{\mathbf{x}(t)\}_{t=0}^{T}\). The Markov chain defined in Eq. (1) is equivalent to an Euler-Maruyama discretisation of the following SDE
\[\mathrm{d}\mathbf{x}=f(\mathbf{x},t)\,\mathrm{d}t+g(t)\,\mathrm{d}\mathbf{w},\quad\mathbf{x}( 0)\sim p_{0},\quad\mathbf{x}(T)\sim p_{T}. \tag{6}\]
where \(\mathbf{w}\) is the standard Wiener process (Brownian motion), \(f\) is the drift coefficient of the SDE, and \(g\) is the diffusion coefficient. To allow inference, this SDE can be reversed; this can be written in the form of another SDE (Anderson, 1982)
\[\mathrm{d}\mathbf{x}=[f(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})]\, \mathrm{d}t+g(t)\,\mathrm{d}\bar{\mathbf{w}}. \tag{7}\]
where \(\bar{\mathbf{w}}\) is the Wiener process in which time flows backward. The Markov chain defined in Eq. (5) is equivalent to an Euler-Maruyama discretisation of this SDE where the score function \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) is approximated by \(s_{\theta}(\mathbf{x},t)\). For all diffusion processes, there is an ordinary differential equation whose trajectories share the same distributions \(\{p_{t}(\mathbf{x})\}_{t=1}^{T}\) as the SDE. The corresponding deterministic process for the SDE defined in Eq. (7) can be written as,
\[\mathrm{d}\mathbf{x}=\left[f(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t }(\mathbf{x})\right]\mathrm{d}t, \tag{8}\]
referred to as the probability flow (Song et al., 2021b). This deterministic interpretation has a number of advantages, such as faster sampling and allowing exact likelihoods to be calculated through the instantaneous change of variables formula (Chen et al., 2018).
## 3 Infinite Dimensional Diffusion Models
In contrast to previous generative diffusion models which assume that the data lies on a uniform grid, we assume that the data is a continuous function. For an introduction to infinite dimensional analysis see Da Prato (2006). In this case, data points \(x\sim q\) are functions defined on a separable Hilbert space \(\mathcal{H}\) with domains \(\mathbb{R}^{d}\) which are sampled from a probability distribution defined in the dual space of \(\mathcal{H}\), \(q\in\mathcal{H}^{*}\), i.e. \(q:\mathcal{H}\to\mathbb{R}\); for simplicity we consider the case where \(\mathcal{H}\) is the space of \(L^{2}\) functions from \([0,1]^{n}\) to \(\mathbb{R}^{d}\) although the following sections can be applied to other spaces. In this section we introduce infinite dimensional diffusion models which gradually destroy continuous signals until virtually no information remains.
### White Noise Diffusion
One way to extend diffusion models to infinite dimensions is to use continuous white noise, where each coordinate is an independent and identically distributed Gaussian random variable. In other words, \(\mathcal{N}(0,C_{I})\) where the covariance operator is \(C_{I}(z(s),z(s^{\prime}))=\delta(s-s^{\prime})\), using the Dirac delta function \(\delta\). The transition densities defined in Section 2.1 can be extended to infinite dimensions,
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}C_{I}), \quad p(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{ \theta}(x_{t},t)), \tag{9}\]
where similar to the finite dimensional case we fix to \(\Sigma_{\theta}(x_{t},t)=\sigma_{t}^{2}C_{I}\). While most similar to the finite dimensional approach, this method is problematic since white noise defined as such does
not lie in \(\mathcal{H}\) (Da Prato and Zabczyk, 2014) and therefore neither does \(x_{t}\). However, in practice we are computationally unable to operate in infinite dimensions; instead we operate on a discretisation of the continuous space through an orthogonal projection, in which case the norm is finite. That is, if each \(x\) has \(n\) spatial dimensions, then the coordinate space is \(D=[0,1]^{n}\); by sampling \(m\) coordinates, \(\mathbf{c}\in\binom{D}{m}\), we thereby discretise \(x\) as \(x(\mathbf{c})\in\mathbb{R}^{m\times d}\). We can therefore approximate Eq. (3) by Monte-Carlo approximating each function, where we assume that \(\tilde{\mu}\) and \(\mu_{\theta}\) are able to operate on subsets through calculation in closed form or approximation,
\[\mathcal{L}=\sum_{t=1}^{T}\mathbb{E}_{q(x_{t})}\,\mathbb{E}_{\mathbf{c}\sim U\binom{D}{m}}\left[\|\tilde{\mu}(x_{t}(\mathbf{c}),x_{0}(\mathbf{c}))-\mu_{\theta}(x_{t}(\mathbf{c}),t)\|_{2}^{2}\right]. \tag{10}\]
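A minimal sketch of the Monte-Carlo objective in Eq. (10) follows, assuming the functions are observed on a flattened grid of \(n\) points and that \(\tilde{\mu}\) and \(\mu_{\theta}\) accept the subsampled evaluations directly; shapes and names are illustrative.

```python
# Coordinate-subsampled loss of Eq. (10): denoise only m random points per image.
import torch

def subsampled_loss(mu_theta, mu_tilde, x_t, x_0, t, m):
    """x_t, x_0: (B, n, d) pointwise evaluations on a flattened grid."""
    B, n, d = x_t.shape
    idx = torch.stack([torch.randperm(n)[:m] for _ in range(B)])   # c ~ U(D choose m)
    sel = lambda x: x.gather(1, idx.unsqueeze(-1).expand(B, m, d))
    xt_c, x0_c = sel(x_t), sel(x_0)
    return torch.mean((mu_tilde(xt_c, x0_c) - mu_theta(xt_c, t)) ** 2)
```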
### Smoothing States with Mollified Diffusion
While we can build a generative model with white noise, in practice we may be using a neural architecture which assumes the input does lie in \(\mathcal{H}\). Because of this, we propose an alternative diffusion process that approximates the non-smooth input space with a smooth function by convolving functions with a mollifier \(T\), e.g. a truncated Gaussian kernel. Convolving white noise in such a manner results in a Gaussian random field (Higdon, 2002). Using the property that a linear transformation \(y=Tx+b\) of a normally distributed variable \(x\sim\mathcal{N}(\mu,C)\) is given by \(y\sim\mathcal{N}(T\mu+b,TCT^{t})\), where \({}^{t}\) is the transpose operator, mollifying \(q(x_{t}|x_{0})\) results in,
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}Tx_{0},(1-\bar{\alpha}_{t})TT^{t}). \tag{11}\]
From this we are able to derive a closed form representation of the posterior (proof in Appendix B.1),
\[q(x_{t-1}|x_{t},x_{0})=\mathcal{N}(x_{t-1}|\tilde{\mu}_{t}(x_{t},x_{0}),\tilde{\beta}_{t}TT^{t}), \tag{12}\]
\[\text{where}\quad\tilde{\mu}_{t}(x_{t},x_{0})=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}Tx_{0}+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}x_{t}\quad\text{and}\quad\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}.\]
Defining the reverse transitions similarly as mollified Gaussian densities, \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1}|\mu_{\theta}(x_{t},t),\sigma_{t}^{2}TT^{t})\), we can parameterise \(\mu_{\theta}:\mathcal{H}\to\mathcal{H}\) to directly predict \(x_{0}\). The loss defined in Eq. (3) can be extended to infinite dimensions (Pinski et al., 2015), which in contrast to the approach in Section 3.1 is well defined in infinite dimensions (see Appendix B.2). However, a key finding of Ho et al. (2020) is that parameterising to predict the noise yields much higher image quality. In this case, directly predicting white noise is impractical due to the continuous nature of \(\mu_{\theta}\). Instead, we reparameterise \(\mu_{\theta}\) to predict \(T\xi\), for \(\xi\sim\mathcal{N}(0,C_{I})\), motivated by rewriting the loss as
\[\mathcal{L}_{t-1}=\mathbb{E}_{q}\left[\frac{1}{2\sigma_{t}^{2}}\left\|T^{-1} \left(\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}(x_{0},\xi)-\frac{\beta_{t}}{\sqrt {1-\bar{\alpha}_{t}}}T\xi\right)-\mu_{\theta}(x_{t},t)\right)\right\|_{\mathcal{ H}}^{2}\right], \tag{13}\]
\[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left[x_{t}-\frac{\beta_{t}}{ \sqrt{1-\bar{\alpha}_{t}}}f_{\theta}(x_{t},t)\right]. \tag{14}\]
Directly predicting \(x_{0}\) inherently gives an estimate of the unmollified data, but when predicting \(T\xi\) we are only able to sample \(Tx_{0}\); however, we can undo this process using a Wiener filter. While a similar technique could be used to calculate the loss, in this case \(T^{-1}\) does not affect the minima, so we can instead use \(\mathcal{L}_{t-1}^{\text{simple}}=\mathbb{E}_{q}[\|f_{\theta}(x_{t},t)-T\xi\|_{\mathcal{H}}^{2}]\). It is also possible to define a diffusion process where only the noise is mollified; however, we found this not to work well, likely because the noise becomes of lower frequency than the underlying data. We can also directly apply the DDIM variation of DDPMs (Song et al., 2021a) to this setting, resulting in a deterministic sampling process from \(T\xi\).
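As a rough illustration of the mollified process, the sketch below realises \(T\) as a truncated Gaussian blur and trains with the simplified objective \(\|f_{\theta}(x_{t},t)-T\xi\|^{2}\); the blur width and schedule handling are illustrative assumptions.

```python
# Mollified-state training step: x_t = sqrt(abar_t) T x0 + sqrt(1-abar_t) T xi.
import torch
import torchvision.transforms.functional as TF

def mollify(x, sigma=1.0):
    k = int(4 * sigma) | 1                   # odd truncated-Gaussian kernel size
    return TF.gaussian_blur(x, kernel_size=k, sigma=sigma)

def mollified_loss(f_theta, x0, alphas_bar):
    B = x0.shape[0]                          # x0: (B, C, H, W)
    t = torch.randint(0, len(alphas_bar), (B,))
    ab = alphas_bar[t].view(B, 1, 1, 1)
    xi = torch.randn_like(x0)                # white noise sample
    T_xi = mollify(xi)
    x_t = ab.sqrt() * mollify(x0) + (1.0 - ab).sqrt() * T_xi
    return torch.mean((f_theta(x_t, t) - T_xi) ** 2)    # L_simple
```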
Figure 3: Example Diffusion Processes. Mollified diffusion smooths diffusion states allowing the space to be more effectively modelled with continuous operators.
### Continuous Time Diffusion
To examine the properties of the proposed infinite dimensional diffusion process, we consider infinite dimensional stochastic differential equations, which have been well studied. Specifically, SDEs taking values in a Banach space \(\mathcal{X}\) continuously embedded into a separable Hilbert space \(\mathcal{H}\),
\[\mathrm{d}x=F(x,t)\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}w(t), \tag{15}\]
where the drift \(F\) takes values in \(\mathcal{H}\), the process \(x\) takes values in \(\mathcal{X}\), and \(w\) is a cylindrical Wiener process on \(\mathcal{H}\), a natural generalisation of finite dimensional Brownian motion. Infinite-dimensional SDEs have a unique strong solution as long as the coefficients satisfy some regularity conditions (Gawarecki and Mandrekar, 2010), in particular, the Lipschitz condition is met if for all \(x,y\in C([0,T],\mathcal{H})\), \(0\leq t\leq T\), there exists \(h>0\) such that
\[\|F(x,t)-F(y,t)\|_{\mathcal{H}}\leq h\sup_{0\leq s\leq T}\|x(s)-y(s)\|. \tag{16}\]
Similar to the finite-dimensional case, the reverse time diffusion can be described by another SDE that runs backwards in time (Follmer and Wakolbinger, 1986),
\[\mathrm{d}x=[F(x,t)-2D\log p_{t}(x)]\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}\bar{w} (t). \tag{17}\]
However, unlike the finite-dimensional case, this only holds under certain conditions. If the interaction is too strong then the terminal position of the SDE may contain so much information that it is possible to reconstruct the trajectory of any coordinate. This is avoided by ensuring the drift terms have finite energy, \(\mathbb{E}[\int_{0}^{T}F^{i}(x,t)^{2}\,\mathrm{d}t]<\infty\), \(\forall i\in D\). Mollifying the diffusion space is similar to preconditioning SDEs; MCMC approaches obtained by discretising such SDEs yield compelling properties such as sampling speeds robust under mesh refinement (Cotter et al., 2013).
## 4 Parameterising the Diffusion Process
In order to model the score function in Hilbert space, there are certain properties that the class of learnable functions must satisfy so as to allow training on infinite resolution data:
1. Can take as input points positioned at arbitrary coordinates.
2. Generalises to different numbers of input points than trained on, sampled on a regular grid.
3. Able to capture both global and local information.
4. Scales to very large numbers of input points, i.e. efficient in terms of runtime and memory.
Recent diffusion models often use a U-Net (Ronneberger et al., 2015) consisting of a convolutional encoder and decoder with skip-connections between resolutions, allowing both global and local information to be efficiently captured. Unfortunately, U-Nets function on a fixed grid, making them unsuitable here. However, we can take inspiration from them to build an architecture satisfying the desired properties.
### Neural Operators
Neural Operators (Kovachki et al., 2021; Li et al., 2020) are a framework designed for efficiently solving partial differential equations by learning to directly map the PDE parameters to the solution in a single step. However, more generally they are able to learn a map between two infinite dimensional function spaces making them suitable for parameterising an infinite dimensional diffusion model.
Let \(\mathcal{X}\) and \(\mathcal{S}\) be separable Banach spaces representing the spaces of noisy and denoised data respectively; a neural operator is a map \(\mathcal{F}_{\theta}\colon\mathcal{X}\to\mathcal{S}\). Since \(x\in\mathcal{X}\) and \(s\in\mathcal{S}\) are both functions, we only have access to pointwise evaluations. Let \(\mathbf{c}\in\binom{D}{m}\) be an \(m\)-point discretisation of the domain and assume we have observations \(x(\mathbf{c})\in\mathbb{R}^{m\times d}\). To be discretisation invariant, the neural operator may be evaluated at any \(c\in D\), potentially \(c\notin\mathbf{c}\), thereby allowing a transfer of solutions between different discretisations, i.e. satisfying properties 1 and 2. Each operator layer is built using a non-local integral kernel operator, \(\mathcal{K}(x;\phi)\), which aggregates information spatially,
\[(\mathcal{K}(x;\phi)v)(c)=\int_{D}\kappa(c,b,x(c),x(b);\phi)v_{l}(b)\,\mathrm{d }b,\qquad\forall c\in D. \tag{18}\]
Deep networks can be built in a similar manner to conventional methods, by stacking layers of linear operators with non-linear activation functions, \(v_{0}\mapsto v_{1}\mapsto\cdots\mapsto v_{L}\) where \(v_{l}\mapsto v_{l+1}\) is defined as
\[v_{l+1}(c)=\sigma(Wv_{l}(c)+(\mathcal{K}(x;\phi)v_{l})(c)),\qquad\forall c\in D, \tag{19}\]
for pointwise linear transformation \(W\colon\,\mathbb{R}^{d}\to\mathbb{R}^{d}\) and non-linear activation function \(\sigma\colon\,\mathbb{R}\to\mathbb{R}\).
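A dense Monte-Carlo rendering of one such layer is sketched below for intuition: the kernel here depends only on the coordinate pair \((c,b)\) (a linear kernel, in line with Section 4.2), the integral is approximated by an average over the \(m\) sampled points, and the \(O(m^{2})\) cost is exactly why local kernels are preferred at high resolution. Names and sizes are illustrative.

```python
# One neural-operator layer v -> sigma(W v + K v), Eqs. (18)-(19), dense version.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelIntegralLayer(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        self.W = nn.Linear(d, d)                       # pointwise term W v(c)
        self.kappa = nn.Sequential(                    # kappa(c, b) -> d x d matrix
            nn.Linear(4, hidden), nn.GELU(), nn.Linear(hidden, d * d))

    def forward(self, v, coords):
        """v: (B, m, d) function values; coords: (B, m, 2) positions in [0,1]^2."""
        B, m, d = v.shape
        pair = torch.cat([coords[:, :, None, :].expand(B, m, m, 2),
                          coords[:, None, :, :].expand(B, m, m, 2)], dim=-1)
        K = self.kappa(pair).view(B, m, m, d, d)
        Kv = torch.einsum('bcmde,bme->bcd', K, v) / m  # (1/m) sum_b kappa(c,b) v(b)
        return F.gelu(self.W(v) + Kv)
```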
### Architecture
While neural operators which satisfy all the required properties exist, such as Galerkin attention (Cao, 2021) (a softmax-free linear attention operator) and MLP-Mixers (Tolstikhin et al., 2021), scaling beyond small numbers of coordinates is still challenging due to the high memory costs associated with caching activations for backpropagation. Instead we design a U-Net inspired multiscale architecture (Fig. 4) that aggregates information locally and globally at different points through the network.
In a continuous setting, there are two main approaches to downsampling: (1) selecting a subset of coordinates (Wang and Golland, 2022) and (2) interpolating points to a regularly spaced grid (Rahman et al., 2022). We found that with repeated application of (1), approximating integral operators on non-uniformly spaced grids with very few points did not perform nor generalise well, likely due to the high variance. On the other hand, while working with a regular grid removes some sparsity properties, issues with variance are much lower. As such, we use a hybrid approach with sparse operators applied on the raw data, which is then interpolated to a fixed grid and a grid-based architecture is applied; if the fixed grid is of sufficiently high dimension, this combination should be sufficient. While an FNO (Li et al., 2021; Rahman et al., 2022) architecture could be used, we achieved better results with dense convolutions (Nichol and Dhariwal, 2021), with sparse operators used for resolution changes.
At the sparse level we use convolution operators (Kovachki et al., 2021), finding them to be more performant than Galerkin attention, with global context no longer essential due to the multiscale architecture. In this case, the operator is defined using a translation invariant kernel restricted to the local neighbourhood of each coordinate, \(N(c)\),
\[x(c)=\int_{N(c)}\kappa(c-y)v(y)\,\mathrm{d}y,\qquad\forall c\in D. \tag{20}\]
We restrict \(\kappa\) to be a depthwise kernel due to the greater parameter efficiency for large kernels (particularly for continuously parameterised kernels) and finding that they are more able to generalise when trained with fewer sampled coordinates; although the sparsity ratio is the same for regular and depthwise convolutions, because there are substantially more values in a regular kernel, there is more spatial correlation between values. When a very large number of sampled coordinates are used, fully continuous convolutions are extremely impractical in terms of memory usage and run-time. In practice, however, images are obtained and stored on a discrete grid. As such, by treating images as high dimensional, but discrete entities, we can take advantage of efficient sparse convolution libraries (Choy et al., 2019; Contributors, 2022), making memory usage and run-times much more reasonable. Specifically, we use TorchSparse (Tang et al., 2022), modified to allow depthwise convolutions. Wang and Golland (2022) proposed using low discrepancy coordinate sequences to approximate the integrals due to their better convergence rates. However, we found uniformly sampled points to be more effective, likely because the reduced structure results in points sampled close together which allows high frequency details to be captured more easily.
Figure 4: \(\infty\)-Diff uses a hierarchical architecture that operates on irregularly sampled functions at the top level to efficiently capture fine details, and on fixed grids at the other levels to capture global structure. This approach allows scaling to complex high resolution data.
## 5 Experiments
In this section we demonstrate that the proposed mollified diffusion process modelled with neural operator based networks and trained on coordinate subsets are able to generate high quality, high resolution samples. We explore the properties of this approach including discretisation invariance, the impact of the number of coordinates during training, and compare the sample quality of our approach with other infinite dimensional generative models. We train models on three \(256\times 256\) datasets, CelebA-HQ (Karras et al., 2018), FFHQ (Karras et al., 2019), and LSUN Church (Yu et al., 2015); unless otherwise specified models are trained on \(\nicefrac{{1}}{{4}}\) of pixels, randomly selected.
When training diffusion models, very large batch sizes are necessary due to the high variance (Hoogeboom et al., 2023), making training on high resolution data on a single GPU impractical. To address this, we use the diffusion autoencoder framework (Preechakul et al., 2022), which reduces stochasticity by dividing the generation process into two stages. To encode data we use the first half of our proposed architecture (Fig. 4), which still operates on sparsely sampled coordinates. When sampling, we use the deterministic DDIM interpretation with 100 steps. Additional details are in Appendix A. Source code is available at [https://github.com/samb-t/inffty-diff](https://github.com/samb-t/inffty-diff).
**Sample Quality** Samples from our approach can be found in Fig. 5; they are high quality, diverse, and capture fine details. In Table 1 we quantitatively compare with other approaches that treat inputs as infinite dimensional data, as well as more traditional approaches that assume data lies on a fixed grid. As proposed by Kynkaanniemi et al. (2023), we calculate FID (Heusel et al., 2017) using CLIP features (Radford et al., 2021), which is better correlated with human perception of image quality. Our approach scales to high resolutions much more effectively than the other function-based approaches, as evidenced by the substantially lower scores. Visual comparison between samples from our approach and other function-based approaches can be found in Fig. 6, where samples from our approach can be seen to be higher quality and display more details without blurring or adversarial
| Method | CelebAHQ-64 | CelebAHQ-128 | FFHQ-256 | Church-256 |
| --- | --- | --- | --- | --- |
| _Finite-Dimensional_ | | | | |
| CIPS (Anokhin et al., 2021) | - | - | 5.29 | 10.80 |
| StyleSwin (Zhang et al., 2022) | - | 3.39 | 3.25 | 8.28 |
| UT (Bond-Taylor et al., 2022) | - | - | 3.05 | **5.52** |
| StyleGAN2 (Karras et al., 2020) | - | **2.20** | **2.35** | 6.21 |
| _Infinite-Dimensional_ | | | | |
| D2F (Dupont et al., 2022a) | 40.4\({}^{*}\) | - | - | - |
| DPF (Zhuang et al., 2023) | 13.21\({}^{*}\) | - | - | - |
| GEM (Du et al., 2021) | 14.65 | 23.73 | 35.62 | 87.57 |
| GASP (Dupont et al., 2022b) | 9.29 | 27.31 | 24.37 | 37.46 |
| **\(\infty\)-Diff (Ours)** | **4.57** | **3.02** | **3.87** | **10.36** |

Table 1: FID\({}_{\text{CLIP}}\) (Kynkaanniemi et al., 2023) evaluation against finite-dimensional methods as well as other infinite-dimensional approaches which are trained on coordinate subsets. \({}^{*}\)=Inception FID.
Figure 5: Samples from \(\infty\)-Diff models trained on sets of randomly subsampled coordinates.
artefacts. All of these approaches are based on neural fields (Xie et al., 2022) where coordinates are treated independently; in contrast, our approach uses neural operators to transform functions using spatial context, thereby allowing more details to be captured. Both GASP (Dupont et al., 2022b) and GEM (Du et al., 2021) rely on compressed latent-conditional hypernetworks, which makes efficient scaling difficult. D2F (Dupont et al., 2022a) relies on a deterministic compression stage which loses detail due to the finite vector size. DPF (Zhuang et al., 2023) uses small fixed-size coordinate subsets as global context with other coordinates modelled implicitly, thereby causing blur.
**Discretisation Invariance** In Fig. 2 we demonstrate the discretisation invariance properties of our approach. After training on random coordinate subsets from \(256\!\times\!256\) images, we can sample from this model at arbitrary resolutions, which we show at resolutions from \(64\!\times\!64\) to \(1024\!\times\!1024\) by initialising the diffusion with different sized noise. We experimented with (alias-free) continuously parameterised kernels (Romero et al., 2022) but found bi-linearly interpolating kernels to be more effective. At each resolution, even exceeding the training data, samples are consistent and diverse. In Fig. 7 we analyse how the number of sampling steps affects quality at different sampling resolutions.
**Coordinate Sparsity** One factor influencing the quality of samples is the number of coordinates sampled during training; fewer coordinates means fewer points from which to approximate each integral. We analyse the impact of this in Table 3, finding that, as expected, performance decreases with fewer coordinates; however, this effect is fairly minimal. With fewer coordinates also comes substantial speedup and memory savings; at \(256\!\times\!256\) with \(\nicefrac{{1}}{{4}}\) coordinates the speedup is \(1.4\!\times\).
**Architecture Analysis** In Table 2 we ablate the impact of various architecture choices against the architecture described in Section 4.2, matching the architecture as closely as possible. In particular, sparse downsampling (performed by randomly subsampling coordinates; we observed similar with equidistant subsampling, Qi et al., 2017) fails to capture the distribution. Similarly, using a spatially nonlinear kernel (Eq. 18), implemented as conv, activation, conv, does not generalise well, unlike linear kernels (we observed similar for softmax transformers, Kovachki et al., 2021).
**Super-resolution** The discretisation invariance properties of the proposed approach make super-resolution a natural application. We evaluate this in a simple way, passing a low resolution image through the encoder, then sampling at a higher resolution; see Fig. 8, where it is clear that more details have been added. A downside of this specific approach is that information is lost in the encoding process; however, this could potentially be improved by incorporating DDIM encodings (Song et al., 2021a).
Figure 8: Super-resolution
Figure 6: Qualitative comparison with other infinite dimensional approaches.
**Inpainting** Inpainting is possible with mollified diffusion (Fig. 9), using reconstruction guidance (Ho et al., 2022), \(x_{t-1}\gets x_{t-1}-\lambda\nabla_{x_{t}}\|m\odot(\tilde{\mu}_{0}(x_{t},t)-T\tilde{x})\|_{2}^{2}\) for inpainting mask \(m\), learned estimate \(\tilde{\mu}_{0}\) of \(Tx_{0}\), and image to be inpainted \(\tilde{x}\). The diffusion autoencoder framework gives an additional level of control when inpainting, since the reverse diffusion process can be applied to encodings from a chosen time step \(t_{s}\), allowing control over how different the inpainted region is from the original image.
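A minimal sketch of one reconstruction-guided step is given below; the guidance weight \(\lambda\), the callable producing the \(Tx_{0}\) estimate, and the mask convention are illustrative assumptions.

```python
# One reconstruction-guided update for inpainting with mollified diffusion.
import torch

def guided_step(x_t, x_tm1, mu0_hat, mask, T_xtilde, lam=1.0):
    """mu0_hat: differentiable estimate of T x_0 from x_t; mask: known pixels."""
    x_t = x_t.detach().requires_grad_(True)
    err = (mask * (mu0_hat(x_t) - T_xtilde)).pow(2).sum()
    grad, = torch.autograd.grad(err, x_t)
    return x_tm1 - lam * grad                 # nudge x_{t-1} toward consistency
```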
## 6 Discussion
There are a number of interesting directions to improve our approach including more powerful and efficient neural operators, more efficient sparse methods (including better optimised sparse convolutions), and better integral approximation for instance by adaptively sampling coordinates. One interesting architecture direction is RIN (Jabri et al., 2022) which adaptively allocates compute to complex regions. Other approaches for high resolution diffusion models have focused on architectural improvements and noise schedules (Hoogeboom et al., 2023). In contrast, we reduce runtime and memory usage through sparsity; many of these improvements are complementary to our approach.
In recent years there have been a number of advances in the study of diffusion models which are also complementary to our approach, these include consistency models (Song et al., 2023) which form single/few-step generative models from diffusion models, mapping between arbitrary densities (Albergo et al., 2023), and reducing sampling times via Schrodinger bridges (De Bortoli et al., 2021), critically-damped diffusion (Dockhorn et al., 2022), and faster solvers (Lu et al., 2022). Similar to our proposal of mollifying the diffusion process, a number of recent works have used blurring operations to improve diffusion. Rissanen et al. (2023) corrupt data by repeatedly blurring images until they are approximately a single colour; due to the deterministic nature of this process, noise is added to split paths. This process is formalised further by Hoogeboom and Salimans (2023).
Concurrent with this work, a number of methods independently proposed approaches for modelling diffusion models in infinite dimensions (Franzese et al., 2023; Hagemann et al., 2023; Lim et al., 2023; Zhuang et al., 2023), these approaches are complementary to ours and distinct in a number of ways. In particular, none of these approaches offer the ability to scale to high resolution data demonstrated in this work, typically being only applied to very simple data (e.g. Gaussian mixtures and MNIST). These approaches all rely on either conditional neural fields or operating on uniform grids of coordinates, whereas our approach operates directly on raw sparse data, enabling better scaling. The closest to our work in terms of scalability is Diffusion Probabilistic Fields (Zhuang et al., 2023) which denoises each coordinate independently, using a small subset of coordinates to provide context, however, this is considerably more restrictive than our approach and computational inefficiency limits resolutions to be much smaller than ours (up to \(64\times 64\)).
There are a number of other neural field GAN approaches similar to GASP (Dupont et al., 2022), such as CIPS (Anokhin et al., 2021) and Poly-INR (Singh et al., 2023); however, these approaches rely on convolutional discriminators, thereby requiring all coordinates from fixed size images and preventing scaling to effectively infinite resolutions. Also of relevance are Neural Processes (Dutordoir et al., 2022; Garnelo et al., 2018), which also learn distributions over functions, similar to Gaussian Processes. However, these approaches address conditional inference, whereas we construct an unconditional generative model that is applied to substantially more complex data such as high resolution images.
## 7 Conclusion
In conclusion, we found that mollified-state diffusion models with transition densities represented by neural operators are able to generate high quality infinite dimensional samples. Despite the model only observing subsets of pixels during training, sample quality is competitive with state-of-the-art models trained on all pixels at once. Prior infinite dimensional approaches use latent conditional neural fields; our findings demonstrate that sparse neural operators which operate directly on raw data are a capable alternative, offering significant advantages by not treating all coordinates independently, as evidenced by substantially lower FID scores. Future work would benefit from improved neural operators that can operate effectively at greater levels of sparsity, to further improve the efficiency of our approach.
Figure 9: Inpainting. | This paper introduces $\infty$-Diff, a generative diffusion model defined in an infinite-dimensional Hilbert space that can model data of infinite resolution. By training on randomly sampled subsets of coordinates and denoising only at those locations, the model learns a continuous function that allows sampling at arbitrary resolutions. In contrast to prior infinite-dimensional models based on neural fields, which use pointwise functions, our method uses non-local integral operators to map between Hilbert spaces, which improves spatial coherence. This is achieved with a multi-scale architecture that operates directly on sparsely sampled coordinates, coupled with a mollified diffusion process that smooths out irregularities. Experiments on high-resolution datasets show that the model retains high-quality diffusion even at an $8\times$ subsampling rate.
2301.00274 | Convergence of inductive sequences of spectral triples for the spectral
propinquity | In the context of metric geometry, we introduce a new necessary and
sufficient condition for the convergence of an inductive sequence of quantum
compact metric spaces for the Gromov-Hausdorff propinquity, which is a
noncommutative analogue of the Gromov-Hausdorff distance for compact metric
spaces. This condition is easy to verify in many examples, such as quantum
compact metric spaces associated to AF algebras or certain twisted convolution
C*-algebras of discrete inductive limit groups. Our condition also implies the
convergence of an inductive sequence of spectral triples in the sense of the
spectral propinquity, a generalization of the Gromov-Hausdorff propinquity on
quantum compact metric spaces to the space of metric spectral triples. In
particular we show the convergence of the state spaces of the underlying
C*-algebras as quantum compact metric spaces, and also the convergence of the
quantum dynamics induced by the Dirac operators in the spectral triples. We
apply these results to new classes of inductive limit of even spectral triples
on noncommutative solenoids and Bunce-Deddens C*-algebras. Our construction,
which involves length functions with bounded doubling, adds geometric
information and highlights the structure of these twisted C*-algebras as
inductive limits. | Carla Farsi, Frederic Latremoliere, Judith Packer | 2022-12-31T19:17:32 | http://arxiv.org/abs/2301.00274v1 | # Convergence of inductive sequences of spectral triples for the spectral propinquity
###### Abstract.
In the context of metric geometry, we introduce a new necessary and sufficient condition for the convergence of an inductive sequence of quantum compact metric spaces for the Gromov-Hausdorff propinquity, which is a noncommutative analogue of the Gromov-Hausdorff distance for compact metric spaces. This condition is easy to verify in many examples, such as quantum compact metric spaces associated to AF algebras or certain twisted convolution \(\mathrm{C}^{*}\)-algebras of discrete inductive limit groups. Our condition also implies the convergence of an inductive sequence of spectral triples in the sense of the spectral propinquity, a generalization of the Gromov-Hausdorff propinquity on quantum compact metric spaces to the space of metric spectral triples. In particular we show the convergence of the state spaces of the underlying \(\mathrm{C}^{*}\)-algebras as quantum compact metric spaces, and also the convergence of the quantum dynamics induced by the Dirac operators in the spectral triples. We apply these results to new classes of inductive limit of even spectral triples on noncommutative solenoids and Bunce-Deddens \(\mathrm{C}^{*}\)-algebras. Our construction, which involves length functions with bounded doubling, adds geometric information and highlights the structure of these twisted \(\mathrm{C}^{*}\)-algebras as inductive limits.
Key words and phrases: Spectral triples, Noncommutative metric geometry, quantum Gromov-Hausdorff distance, Monge-Kantorovich distance, Quantum Metric Spaces, Quantum Tori, Noncommutative solenoids, Bunce-Deddens algebras
## 1. Introduction
Spectral triples, introduced by Connes in 1985 as a noncommutative generalization of Dirac operators acting on bundles over manifolds [11, 12], have emerged as a powerful means to encode geometric information over noncommutative operator algebras. Motivated in part by ideas from mathematical physics, and by the recurrent usefulness of various notions of limits of C\({}^{*}\)-algebras, the second author introduced in [47] a distance on metric spectral triples, up to an obvious notion of unitary equivalence, thus enabling the discussion of approximations of certain spectral triples by others in a geometric sense. This distance is named the spectral propinquity, and is built from a noncommutative analogue of the Gromov-Hausdorff distance for noncommutative geometry, called the Gromov-Hausdorff propinquity [35, 38, 39, 40]. Thus, convergence of spectral triples is defined as part of a larger framework for convergence of quantum compact metric spaces, which are noncommutative analogues of algebras of Lipschitz functions over compact metric spaces. Within this framework, the propinquity was extended to certain modules over quantum compact metric spaces [48], and even to C\({}^{*}\)-correspondences [46] with additional metric data inspired by metric connections. The propinquity was also extended to various dynamical systems [41, 44]. These extensions have been used by the second author to define the spectral propinquity over metric spectral triples.
The spectral propinquity \(\Lambda^{\mathsf{spec}}\) has been applied to approximations of spectral triples on fractals [29] and on quantum tori [45], with the latter example rooted in matrix models in physics and the problem of their convergence. Indeed, the spectral propinquity endows the space of all metric spectral triples with its own geometry, and it allows one to capture some geometric intuition within the well understood framework of a topology. For instance, while quantum tori are not inductive limits of finite dimensional C\({}^{*}\)-algebras, spectral triples over quantum tori can now be approximated by spectral triples over full matrix algebras to arbitrary precision using the spectral propinquity, a common heuristic in mathematical physics, now formalized. Convergence for the spectral propinquity implies convergence of the state spaces of the underlying algebras for a form of Gromov-Hausdorff distance, convergence of the quantum dynamics obtained by exponentiating the Dirac operators, and convergence of the spectra and the bounded continuous functional calculus for the Dirac operators, with implications for the convergence of physically important quantities such as the spectral actions [31].
In this paper, we consider the question of when an _inductive sequence of metric spectral triples_[20] converges, in the sense of the spectral propinquity, to its inductive limit. To illustrate the power of our result, besides the class of AF algebras, we construct even metric spectral triples on noncommutative solenoids [49] and on some Bunce-Deddens algebras [8, 14] and show that they are limits of metric spectral triples on, respectively, quantum tori and bundles of full matrix algebras over the circle, in the sense of the spectral propinquity \(\Lambda^{\mathsf{spec}}\). In this way, we provide a noncommutative geometric version of the fact that solenoid groups can be seen as metric limits of tori, and Bunce-Deddens algebras are metric limits of algebras of matrix valued functions over the circle.
A spectral triple \((\mathfrak{A},\mathscr{H},D)\) is given by a unital C\({}^{*}\)-algebra \(\mathfrak{A}\) acting on a Hilbert space \(\mathscr{H}\) and a (usually unbounded) self-adjoint operator \(D\) on \(\mathscr{H}\), which has bounded commutator with the elements of a dense \(*\)-subalgebra of \(\mathfrak{A}\), and has compact resolvent (see Definition (3.1)). Spectral triples contain much geometric information, including metric data. Indeed, Connes noted in [12] that spectral triples define a canonical extended pseudo-distance on the state space of their underlying C\({}^{*}\)-algebras, which, in particular,
recovers the geodesic distance when working with the usual spectral triple given by the Dirac operator acting on the square integrable sections of the spinor bundle of a compact connected Riemannian spin manifold without boundary.
Rieffel in [55, 56] then cast this metric aspect of noncommutative geometry under a new light, starting from the observation that Connes' distance induced by a spectral triple is a noncommutative analogue of the Monge-Kantorovich metric [27, 28]; it was thus natural to define a quantum compact metric space as an ordered pair \((\mathfrak{A},\mathsf{L})\) of a unital \(\mathrm{C}^{*}\)-algebra \(\mathfrak{A}\) and a noncommutative analogue of a Lipschitz seminorm \(\mathsf{L}\) such that, in particular, if we set, for any two states \(\varphi,\psi\) of \(\mathfrak{A}\),
\[\mathsf{mk}_{\mathsf{L}}(\varphi,\psi)\coloneqq\sup\left\{|\varphi(a)-\psi(a )|:\mathsf{L}(a)\leq 1\right\}\]
then \(\mathsf{mk}_{\mathsf{L}}\) is a distance inducing the weak-\({}^{*}\) topology on the state space of \(\mathfrak{A}\). The exact list of requirements on the seminorm \(\mathsf{L}\) has evolved as the study of noncommutative metric geometry matured, and we will use the definition of a _quantum compact metric space_ given in [38, 39] and recalled in Definition (2.3). Indeed, a spectral triple whose Connes' metric induces the weak-\({}^{*}\) topology on the state space of its underlying \(\mathrm{C}^{*}\)-algebra automatically gives a quantum compact metric space; such a spectral triple is called a _metric spectral triple_.
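As a guiding commutative example (a standard fact, recalled for orientation): if \((X,d)\) is a compact metric space, \(\mathfrak{A}=C(X)\), and \(\mathsf{L}\) is the Lipschitz seminorm of \(d\), then for the point evaluations \(\delta_{x},\delta_{y}\in\mathcal{S}(C(X))\),

\[\mathsf{mk}_{\mathsf{L}}(\delta_{x},\delta_{y})=\sup\left\{|f(x)-f(y)|:\mathsf{L}(f)\leq 1\right\}=d(x,y),\]

where the upper bound \(d(x,y)\) follows from the Lipschitz condition and is attained at \(f\coloneqq d(\cdot,y)\). Thus \(\mathsf{mk}_{\mathsf{L}}\) restricts to the original metric on \(X\), viewed inside \(\mathcal{S}(C(X))\) as the Dirac measures.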
Metric spectral triples may thus be studied within the context of noncommutative metric geometry. As a result, the second author introduced a distance on the space of metric spectral triples. The first step in defining this distance, called the spectral propinquity, is the construction of a noncommutative geometric analogue of the _Gromov-Hausdorff distance_[17, 22, 23] between quantum compact metric spaces, which we will recall in subsection (2.1). The first such analogue was introduced by Rieffel [57], motivated by the possibility of formalizing certain convergence results found in the mathematical physics literature. While several such analogues have been offered, we will work with the _Gromov-Hausdorff propinquity_\(\Lambda^{*}\), introduced by the second author in [35, 38, 39, 40] precisely to be well adapted to \(\mathrm{C}^{*}\)-algebra theory and the type of seminorms given by spectral triples. The propinquity in general is designed precisely to enable distance computations between quantum compact metric spaces defined on unrelated \(\mathrm{C}^{*}\)-algebras, such as between matrix algebras and quantum tori. However, in this work, we investigate what additional properties of the propinquity we can derive when we work with inductive limits of \(\mathrm{C}^{*}\)-algebras.
We begin this work by establishing a characterization of convergence of inductive limits of quantum compact metric spaces to their inductive limit, in terms of _bridge builders_, a type of \(*\)-automorphism with a natural relation to quantum metrics.
**Definition** (Definition (2.20)).: For each \(n\in\mathds{N}\cup\{\infty\}\), let \((\mathfrak{A}_{n},\mathsf{L}_{n})\) be a quantum compact metric space, such that \(\mathfrak{A}_{\infty}=\mathrm{cl}\left(\bigcup_{n\in\mathds{N}}\mathfrak{A}_{n}\right)\), where \((\mathfrak{A}_{n})_{n\in\mathds{N}}\) is an increasing (for \(\subseteq\)) sequence of \(\mathrm{C}^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\), with the unit of \(\mathfrak{A}_{\infty}\) in \(\mathfrak{A}_{0}\).
A \(*\)-automorphism \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) is a _bridge builder_ for \(((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}},(\mathfrak{A}_{\infty}, \mathsf{L}_{\infty}))\) when, for all \(\varepsilon>0\), there exists \(N\in\mathds{N}\) such that if \(n\geq N\), then
\[\forall a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\quad\exists b\in \mathrm{dom}\left(\mathsf{L}_{n}\right):\quad\mathsf{L}_{n}(b)\leq\mathsf{L}_ {\infty}(a)\text{ and }\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\varepsilon\mathsf{L}_{ \infty}(a)\]
and
\[\forall b\in\mathrm{dom}\left(\mathsf{L}_{n}\right)\quad\exists a\in\mathrm{ dom}\left(\mathsf{L}_{\infty}\right):\quad\mathsf{L}_{\infty}(a)\leq\mathsf{L}_{n}(b) \text{ and }\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\varepsilon\mathsf{L}_{n}(b),\]
where \(\|\cdot\|_{\mathfrak{A}_{\infty}}\) is the \(\mathrm{C}^{*}\)-norm on \(\mathfrak{A}_{\infty}\).
Bridge builders are a powerful means to prove metric convergence for the propinquity, and are notable because it is usually very difficult to find necessary conditions for metric convergence in the sense of the propinquity (besides the trivial fact that diameters must converge). Thus, this theorem is of interest independently of our study of spectral triples, and addresses the relationship between inductive limits and limits in a metric sense as in [47, 35]. Our first main result is therefore the following theorem about convergence for the propinquity \(\Lambda^{*}\) of certain inductive sequences.
**Theorem** (Theorem (2.22)).: _For each \(n\in\mathds{N}\cup\{\infty\}\), let \((\mathfrak{A}_{n},\mathsf{L}_{n})\) be a quantum compact metric space, where \((\mathfrak{A}_{n})_{n\in\mathds{N}}\) is an increasing (for \(\subseteq\)) sequence of \(C^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\) such that \(\mathfrak{A}_{\infty}=\mathsf{cl}\left(\bigcup_{n\in\mathds{N}}\mathfrak{A}_{n}\right)\), with the unit of \(\mathfrak{A}_{\infty}\) in \(\mathfrak{A}_{0}\). We assume that there exists \(M>0\) such that for all \(n\in\mathds{N}\):_
\[\frac{1}{M}\mathsf{L}_{n}\leq\mathsf{L}_{\infty}\leq M\cdot\mathsf{L}_{n}\ \text{on}\ \mathrm{dom}\left(\mathsf{L}_{n}\right).\]
_Then_
\[\lim_{n\to\infty}\Lambda^{*}\left((\mathfrak{A}_{n},\mathsf{L}_{n}),( \mathfrak{A}_{\infty},\mathsf{L}_{\infty})\right)=0,\]
_if, and only if, for any subsequence \((\mathfrak{A}_{g(n)},\mathsf{L}_{g(n)})_{n\in\mathds{N}}\) of \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\), there exists a strictly increasing function \(f:\mathds{N}\to\mathds{N}\) and a bridge builder \(\pi\) for \(((\mathfrak{A}_{g\circ f(n)},\mathsf{L}_{g\circ f(n)})_{n\in\mathds{N}},( \mathfrak{A}_{\infty},\mathsf{L}_{\infty}))\)._
The second step in the construction of the spectral propinquity \(\Lambda^{\mathsf{spec}}\) on the space of metric spectral triples is the extension of the Gromov-Hausdorff propinquity to a distance on the class of \(C^{*}\)-correspondences over quantum compact metric spaces endowed with a form of quantum metric, and with a compatible action of some monoid. The \(C^{*}\)-correspondence associated with a metric spectral triple \((\mathfrak{A},\mathcal{H},D)\) is the Hilbert space \(\mathcal{H}\), seen as an \(\mathfrak{A}\)-\(\mathds{C}\) \(C^{*}\)-correspondence, with the quantum metric given by the graph norm of \(D\), and with the action of \([0,\infty)\) on \(\mathcal{H}\) given by \(t\in[0,\infty)\mapsto\exp(it\,D)\). Convergence for the spectral propinquity, by design, implies the convergence of the underlying quantum compact metric spaces, but the converse does not hold in general. These matters will be recalled in detail in Subsection (3.1).
We then turn to the more specific context of inductive sequences of metric spectral triples. Inductive sequences of spectral triples were introduced in [20], and are a natural source of spectral triples; our interest is in the convergence of such sequences for the spectral propinquity, i.e. in the sense of an actual metric. We establish in the present work, as our second main result, that an inductive sequence of metric spectral triples converges for the spectral propinquity when there exists a fully quantum isometric bridge builder for the underlying sequence of quantum compact metric spaces. Again, it is a surprising result that a mild strengthening of convergence for the Gromov-Hausdorff propinquity implies the much stronger convergence for the spectral propinquity, a fact which does not hold for arbitrary sequences of metric spectral triples, but holds thanks to the structure of inductive limits. Our second main theorem is given as follows.
**Theorem** (Theorem (3.17)).: _Let \((\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty})\) be a metric spectral triple which is the inductive limit of a sequence \((\mathfrak{A}_{n},\mathcal{H}_{n},D_{n})_{n\in\mathds{N}}\) of metric spectral triples, in the sense of Definition (3.15). For each \(n\in\mathds{N}\cup\{\infty\}\), let_
\[\mathrm{dom}\left(\mathsf{L}_{n}\right):=\left\{a\in\mathfrak{A}_{n}:a=a^{*},a \,\mathrm{dom}\left(D_{n}\right)\subseteq\mathrm{dom}\left(D_{n}\right)\ \text{and}\ \left[D_{n},a\right]\ \text{is bounded}\right\},\]
_and, for all \(a\in\mathrm{dom}\left(\mathsf{L}_{n}\right)\), let \(\mathsf{L}_{n}(a)\) be the operator norm of \(\left[D_{n},a\right]\)._
_If there exists a bridge builder \(\pi:(\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\to(\mathfrak{A}_{\infty},\mathsf{ L}_{\infty})\) for \(((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}},(\mathfrak{A}_{\infty}, \mathsf{L}_{\infty}))\) which is a full quantum isometry of \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\), i.e. such that \(\pi(\operatorname{dom}\,(\mathsf{L}_{\infty}))\subseteq\operatorname{dom}\,( \mathsf{L}_{\infty})\) and \(\mathsf{L}_{\infty}\circ\pi=\mathsf{L}_{\infty}\) on \(\operatorname{dom}\,(\mathsf{L}_{\infty})\), then_
\[\lim_{n\to\infty}\Lambda^{\operatorname{spec}}((\mathfrak{A}_{n},\mathscr{H}_{ n},\mathit{D}_{n}),(\mathfrak{A}_{\infty},\mathscr{H}_{\infty},\mathit{D}_{ \infty}))=0.\]
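As a commutative touchstone for these seminorms (a standard computation, included for orientation): take \(\mathfrak{A}=C(S^{1})\) acting by multiplication on \(L^{2}(S^{1})\), with \(D=-i\frac{d}{d\theta}\). For smooth \(f\) and \(\xi\in\operatorname{dom}\left(D\right)\),

\[[D,f]\xi=-i(f\xi)^{\prime}+if\xi^{\prime}=-if^{\prime}\xi,\]

so \(\mathsf{L}(f)=\left\|[D,f]\right\|=\left\|f^{\prime}\right\|_{\infty}\), the Lipschitz constant of \(f\) for the arc-length metric; the seminorms \(\mathsf{L}_{n}\) above are noncommutative counterparts of this construction.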
We conclude our paper with the construction of new even spectral triples on certain twisted group C\({}^{*}\)-algebras \(C^{*}(G,\sigma)\) where the discrete group \(G=\bigcup_{n\in\mathbb{N}}G_{n}\) is the union of a strictly increasing sequence of subgroups \(G_{n}\) of \(G\). These examples include noncommutative solenoids [49] and certain Bunce-Deddens algebras [8]. Our construction is motivated by the desire to see our new spectral triples over \(C^{*}(G,\sigma)\) as limits, for the spectral propinquity, of an inductive sequence of metric spectral triples constructed over the inductive sequence \((C^{*}(G_{n},\sigma))_{n\in\mathbb{N}}\). This metric aspect distinguishes our spectral triples from other spectral triples on noncommutative solenoids [1, 2] or Bunce-Deddens algebras [24], and is applicable, in principle, to many other examples. Moreover, noncommutative solenoids were shown in [48] to be limits, for the propinquity, of quantum tori, for a different family of quantum metrics which did not come from a spectral triple.
In general, it is difficult to prove that a given spectral triple is metric. Examples of metric spectral triples can be found over certain manifolds, quantum tori [12, 15, 16, 34, 45], or more generally, over unital C\({}^{*}\)-algebras endowed with ergodic actions of compact Lie groups [21, 55], over certain C\({}^{*}\)-crossed-products [24], over quantum groups [13], over Podles spheres [3], over AF algebras [7], over certain fractals [10, 30], and more. We note that there are known examples of spectral triples which are not metric [26].
It is therefore quite interesting to obtain new examples of metric spectral triples, and moreover, to prove that they are interesting limits of spectral triples for the spectral propinquity. We thus establish the following third main result of this paper, which draws on the first two in its proof.
**Theorem** (Simplified form of Theorem (4.16)).: _Let \(G=\bigcup_{n\in\mathbb{N}}G_{n}\) be an Abelian discrete group, with \((G_{n})_{n\in\mathbb{N}}\) a strictly increasing sequence of subgroups of \(G\). Let \(\sigma\) be a \(2\)-cocycle of \(G\), with values in \(\mathbb{T}:=\{z\in\mathbb{C}:|z|=1\}\)._
_Let \(\mathbb{L}_{H}\) be a length function over \(G\) whose restriction to \(G_{n}\) is proper for all \(n\in\mathbb{N}\), such that the sequence \((G_{n})_{n\in\mathbb{N}}\) converges to \(G\) for the Hausdorff distance induced on the closed subsets of \(G\) by \(\mathbb{L}_{H}\). Let_
\[\mathbb{F}:g\in G\longmapsto\operatorname{scale}(\min\{n\in\mathbb{N}:g\in G_ {n}\}),\]
_where \(\operatorname{scale}:\mathbb{N}\to[0,\infty)\) is a strictly increasing function._
_If the proper length function \(\mathbb{L}:=\max\{\mathbb{L}_{H},\mathbb{F}\}\) satisfies that, for some \(\theta>1\), there exists \(c>0\) such that for all \(r\geq 1\):_
\[\left|\{g\in G:\mathbb{L}(g)\leq\theta\cdot r\}\right|\leq c\left|\{g\in G: \mathbb{L}(g)\leq r\}\right|,\]
_then_
\[\lim_{n\to\infty}\Lambda^{\operatorname{spec}}((C^{*}(G,\sigma),\ell^{2}(G) \otimes\mathbb{C}^{2},\mathit{D}),(C^{*}(G_{n},\sigma),\ell^{2}(G_{n})\otimes \mathbb{C}^{2},\mathit{D}_{n}))=0,\]
_where for all \(n\in\mathbb{N}\cup\{\infty\}\) and for all \((\xi_{1},\xi_{2})\) in_
\[\left\{\xi\in\ell^{2}(G_{n})\otimes\mathbb{C}^{2}:\sum_{g\in G_{n}}(\mathbb{L }_{H}(g)^{2}+\mathbb{F}(g)^{2})\left\|\xi(g)\right\|_{\mathbb{C}^{2}}^{2}< \infty\right\},\]
_we set_
\[\mathit{D}\xi:g\in G\longmapsto\begin{pmatrix}\mathbb{F}(g)\xi_{2}(g)+\mathbb{L}_{H}(g)\xi_{1}(g)\\ \mathbb{F}(g)\xi_{1}(g)-\mathbb{L}_{H}(g)\xi_{2}(g)\end{pmatrix}.\]
_In the above spectral triples, \(C^{*}(G,\sigma)\) and \(C^{*}(G_{n},\sigma)\) act via their left regular \(\sigma\)-projective representations._
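As a quick sanity check (our computation, not part of the quoted theorem): at each \(g\in G\), the operator \(D\) above acts on the fiber \(\mathbb{C}^{2}\) by the hermitian matrix

\[\begin{pmatrix}\mathbb{L}_{H}(g)&\mathbb{F}(g)\\ \mathbb{F}(g)&-\mathbb{L}_{H}(g)\end{pmatrix},\qquad\text{whose square is}\qquad\left(\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}\right)\mathrm{id}_{\mathbb{C}^{2}},\]

so its eigenvalues are \(\pm\sqrt{\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}}\). Since \(\max\{\mathbb{L}_{H},\mathbb{F}\}\leq\sqrt{\mathbb{L}_{H}^{2}+\mathbb{F}^{2}}\leq\sqrt{2}\,\max\{\mathbb{L}_{H},\mathbb{F}\}\) and the length function \(\mathbb{L}=\max\{\mathbb{L}_{H},\mathbb{F}\}\) is proper, only finitely many \(g\in G\) yield eigenvalues below any given bound, so \(D\) has compact resolvent, as the Dirac operator of a spectral triple must.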
We then apply this theorem to construct metric spectral triples on noncommutative solenoids, i.e. the twisted group \(C^{*}\)-algebras \(C^{*}\left(\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{2},\sigma\right)\) where
\[\mathbb{Z}\left[\frac{1}{p}\right]\coloneqq\left\{\frac{k}{p^{n}}:k\in \mathbb{Z},n\in\mathds{N}\right\},\]
with \(p\) a prime natural number, and where \(\sigma\) is a \(2\)-cocycle of \(\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{2}\). In this case, using the notation of the above theorem, we choose \(\mathds{L}_{H}\) to be the restriction to \(\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{2}\) of any norm on \(\mathds{R}^{2}\), while \(\mathds{F}\) can be chosen by setting \(\mathds{F}(g)\coloneqq p^{\min\left\{n\in\mathds{N}:g\in\left(\frac{1}{p^{n}} \mathbb{Z}\right)^{2}\right\}}\) for all \(g=(g_{1},g_{2})\in\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{2}\). Alternatively, following the ideas of [19], which motivated the present work, we can choose \(\mathds{F}(g_{1},g_{2})\coloneqq\max\{|g_{1}|_{p},|g_{2}|_{p}\}\) for all \(g_{1},g_{2}\in\mathbb{Z}\left[\frac{1}{p}\right]\), where \(|\cdot|_{p}\) is the \(p\)-adic absolute value.
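To make the two choices of \(\mathds{F}\) concrete (a worked example of ours, not taken from the text): let \(p=2\) and \(g=\left(\frac{3}{4},\frac{5}{8}\right)\). Then \(\min\left\{n\in\mathds{N}:g\in\left(\frac{1}{2^{n}}\mathbb{Z}\right)^{2}\right\}=3\), so the first choice gives \(\mathds{F}(g)=2^{3}=8\), while \(\left|\frac{3}{4}\right|_{2}=4\) and \(\left|\frac{5}{8}\right|_{2}=8\), so the second choice gives \(\max\{4,8\}=8\) as well. The two choices need not agree in general: for \(g=(2,4)\), the first gives \(2^{0}=1\), while the second gives \(\max\left\{\frac{1}{2},\frac{1}{4}\right\}=\frac{1}{2}\).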
Similarly, we can apply [52] to see that the Bunce-Deddens algebras are given as the twisted group \(C^{*}\)-algebras \(C^{*}\left(\mathbb{Z}(\alpha)\times\mathbb{Z},\sigma\right)\) for an appropriate choice of a \(2\)-cocycle \(\sigma\) and a sequence \(\alpha=(\alpha_{n})_{n\in\mathds{N}}\) of nonzero natural numbers such that \(\frac{\alpha_{n+1}}{\alpha_{n}}\) is a prime number for all \(n\in\mathds{N}\), where the group \(\mathbb{Z}(\alpha)\) is the subgroup of the circle group \(\mathbb{T}\) given by all roots of unity of order \(\alpha_{n}\) for \(n\) ranging over \(\mathds{N}\). We endow \(\mathbb{Z}(\alpha)\) with the discrete topology. The supernatural number describing the \(*\)-isomorphism class of the Bunce-Deddens algebra thus obtained is \(\left(p^{\left|\left\{n\in\mathds{N}\,:\,\frac{\alpha_{n+1}}{\alpha_{n}}=p\right\}\right|}\right)_{p\text{ prime}}\). For our purpose, we will work with sequences \(\alpha\) for which \(\left(\frac{\alpha_{n+1}}{\alpha_{n}}\right)_{n\in\mathds{N}}\) is bounded. In this case, we will choose \(\mathds{L}_{H}\) to be the sum or the max (or one of many other choices) of the restriction of a length function over \(\mathbb{T}\) to \(\mathbb{Z}(\alpha)\), and the absolute value on \(\mathbb{Z}\). Observing that
\[\mathbb{Z}(\alpha)=\bigcup_{n\in\mathds{N}}\widehat{\mathbb{Z}_{/}\alpha_{n}},\]
where \(\widehat{\mathbb{Z}_{/}m}\) is the group of all \(m\)-th roots of unity, we then set \(\mathds{F}(\zeta,z)\coloneqq\min\{\alpha_{n}:\zeta\in\widehat{\mathbb{Z}_{/}\alpha_{n}}\}\) for all \((\zeta,z)\in\mathbb{Z}(\alpha)\times\mathbb{Z}\). This provides a new way to look at Bunce-Deddens algebras as limits, in a geometric sense, of algebras of continuous sections of bundles of matrix algebras over circles, as an echo of the topological fact that they are AT algebras. This work thus provides an approach to endowing Bunce-Deddens algebras with a quantum metric different from [29], with the advantage that our quantum metrics are induced by spectral triples, solving the main difficulty in [29], at least for those Bunce-Deddens algebras to which our present work applies.
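For a concrete instance (our example): take \(\alpha_{n}=2^{n}\), so that \(\frac{\alpha_{n+1}}{\alpha_{n}}=2\) for all \(n\in\mathds{N}\) and the associated supernatural number is \(2^{\infty}\). Then \(\mathbb{Z}(\alpha)\) is the group of all \(2^{n}\)-th roots of unity, and if \(\zeta\) is a primitive \(8\)-th root of unity, then \(\mathds{F}(\zeta,z)=\min\left\{2^{n}:\zeta\in\widehat{\mathbb{Z}_{/}2^{n}}\right\}=8\) for every \(z\in\mathbb{Z}\). The weight \(\mathds{F}\) thus records the first layer \(\widehat{\mathbb{Z}_{/}\alpha_{n}}\times\mathbb{Z}\) of the inductive limit containing a given element.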
_Acknowledgements._ This work was partially supported by the Simons Foundation (Simons Foundation collaboration grant #523991 [C. Farsi] and # 31698 [J. Packer].)
## 2. A Characterization of Convergence in the Propinquity for Inductive Sequences
We introduce in this section the notion of _bridge builders_ associated with inductive sequences of quantum compact metric spaces, which can be used to characterize the
convergence of such sequences to their inductive limits in the sense of the Gromov-Hausdorff propinquity. We begin with a review of the notions of quantum compact metric spaces and propinquity, and then we prove our main theorem, which underlies all the rest of our work.
### Preliminaries: the Gromov-Hausdorff Propinquity
Our work is concerned with quantum compact metric spaces, which are noncommutative analogues of the algebras of Lipschitz functions over a compact metric space. Our definition is the result of a natural evolution from the notion of compact quantum metric spaces introduced in [55] by Rieffel, designed as the natural context for the construction of the propinquity. This subsection will also set some of the basic notation which we will use throughout this paper.
**Notation 2.1**.: By default, we denote the norm of a normed vector space \(E\) by \(\|\cdot\|_{E}\), and for us, the set \(\mathds{N}\) of natural numbers always contains zero.
**Notation 2.2**.: If \(\mathfrak{A}\) is a unital \(C^{*}\)-algebra, then the unit of \(\mathfrak{A}\) will simply be denoted by \(1\). The state space of the \(C^{*}\)-algebra \(\mathfrak{A}\) is denoted by \(\mathcal{S}(\mathfrak{A})\). For any \(a\in\mathfrak{A}\), we write \(\Re a=\frac{a+a^{*}}{2}\) and \(\Im a=\frac{a-a^{*}}{2i}\). The space \(\{a\in\mathfrak{A}:a=a^{*}\}\) is denoted by \(\mathfrak{sa}(\mathfrak{A})\) and is closed under the Jordan product \(a,b\in\mathfrak{sa}(\mathfrak{A})\mapsto\Re(ab)\) and the Lie product \(a,b\in\mathfrak{sa}(\mathfrak{A})\mapsto\Im(ab)\), making \(\mathfrak{sa}(\mathfrak{A})\) a Jordan-Lie algebra.
**Definition 2.3** ([11, 38, 39, 55, 57, 58]).: Fix \(\Omega\geq 1\) and \(\Omega^{\prime}\geq 0\). An \((\Omega,\Omega^{\prime})\)-quantum compact metric space \((\mathfrak{A},\mathsf{L})\) is given by a unital \(C^{*}\)-algebra \(\mathfrak{A}\) and a seminorm \(\mathsf{L}\) defined on a dense Jordan-Lie subalgebra \(\operatorname{dom}\left(\mathsf{L}\right)\) of \(\mathfrak{sa}(\mathfrak{A})\) such that:
1. \(\{a\in\operatorname{dom}\left(\mathsf{L}\right):\mathsf{L}(a)=0\}=\mathds{R}1\),
2. the Monge-Kantorovich metric \(\operatorname{mk}_{\mathsf{L}}\), defined on the state space \(\mathcal{S}(\mathfrak{A})\) of \(\mathfrak{A}\), by, for all \(\varphi,\psi\in\mathcal{S}(\mathfrak{A})\): \[\operatorname{mk}_{\mathsf{L}}(\varphi,\psi):=\sup\left\{|\varphi(a)-\psi(a )|:a\in\operatorname{dom}\left(\mathsf{L}\right),\mathsf{L}(a)\leq 1\right\}\] is a metric which induces the weak-\({}^{*}\) topology on \(\mathcal{S}(\mathfrak{A})\),
3. for all \(a,b\in\mathfrak{sa}(\mathfrak{A})\), \[\max\left\{\mathsf{L}(\Re(ab)),\mathsf{L}(\Im(ab))\right\}\leq\Omega\left( \|a\|_{\mathfrak{A}}\,\mathsf{L}(b)+\mathsf{L}(a)\,\|b\|_{\mathfrak{A}} \right)+\Omega^{\prime}\mathsf{L}(a)\mathsf{L}(b);\] this inequality being referred to as the \((\Omega,\Omega^{\prime})\)-Leibniz inequality,
4. the set \(\{a\in\operatorname{dom}\left(\mathsf{L}\right):\mathsf{L}(a)\leq 1\}\) is closed in \(\mathfrak{A}\).
Any such seminorm \(\mathsf{L}\) is called a _Lipschitz seminorm_ on \(\mathfrak{A}\).
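As a guiding example (classical and standard): for a compact metric space \((X,d)\), the \(\mathrm{C}^{*}\)-algebra \(C(X)\) together with the Lipschitz seminorm

\[\mathsf{L}_{d}(f)\coloneqq\sup\left\{\frac{|f(x)-f(y)|}{d(x,y)}:x,y\in X,x\neq y\right\}\]

is a \((1,0)\)-quantum compact metric space. In particular, for real-valued \(f,g\), writing \(f(x)g(x)-f(y)g(y)=f(x)(g(x)-g(y))+(f(x)-f(y))g(y)\) yields \(\mathsf{L}_{d}(fg)\leq\left\|f\right\|_{C(X)}\mathsf{L}_{d}(g)+\mathsf{L}_{d}(f)\left\|g\right\|_{C(X)}\), which is the \((1,0)\)-Leibniz inequality (here \(\Re(fg)=fg\) and \(\Im(fg)=0\)).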
**Convention 2.4**.: By convention, if \(\mathsf{L}\) is a Lipschitz seminorm on some unital \(C^{*}\)-algebra \(\mathfrak{A}\), we will write \(\mathsf{L}(a)=\infty\) whenever \(a\notin\operatorname{dom}\left(\mathsf{L}\right)\), with the convention that \(0\infty=0\) and \(\infty+x=x+\infty=\infty\) for all \(x\in[0,\infty]\). With this convention, \(\mathsf{L}\) is lower semicontinuous over \(\mathfrak{sa}\left(\mathfrak{A}\right)\) as a \([0,\infty]\)-valued function (not just on \(\operatorname{dom}\left(\mathsf{L}\right)\) but on the entire space \(\mathfrak{sa}\left(\mathfrak{A}\right)\)).
**Convention 2.5**.: Throughout this paper, we fix \(\Omega\geq 1\) and \(\Omega^{\prime}\geq 0\). These parameters will be implicit in our notation; when working with spectral triples, one may always assume \(\Omega=1\) and \(\Omega^{\prime}=0\).
_Remark 2.6_.: If \((\mathfrak{A},\mathsf{L})\) is a quantum compact metric space, then we record the following fact which we shall use repeatedly: if \(a\in\operatorname{dom}\left(\mathsf{L}\right)\), then \(\mathsf{L}(a+t1)=\mathsf{L}(a)\) for all \(t\in\mathds{R}\), since
\[\mathsf{L}(a)=\mathsf{L}(a+t1-t1)\leq\mathsf{L}(a+t1)+\mathsf{L}(t1)=\mathsf{L}(a+t1)+|t|\underbrace{\mathsf{L}(1)}_{=0}=\mathsf{L}(a+t1)\leq\mathsf{L}(a)+|t|\mathsf{L}(1)=\mathsf{L}(a).\]
Since the state space of a quantum compact metric space is a compact metric space for the Monge-Kantorovich metric, it has bounded diameter. Moreover, its diameter can be used to obtain a natural bound on the norm of some self-adjoint elements, which is a simple but very useful result that we now recall.
**Notation 2.7**.: The diameter of a metric space \((E,d)\) is denoted by \(\operatorname{diam}\left(E,d\right)\). If \((\mathfrak{A},\mathsf{L})\) is a quantum compact metric space, then we will write \(\operatorname{qdiam}\left(\mathfrak{A},\mathsf{L}\right)\) for \(\operatorname{diam}\left(\mathcal{S}(\mathfrak{A}),\mathsf{mk}_{\mathsf{L}}\right)\). If \(E\) is actually a normed vector space, then we simply write \(\operatorname{diam}\left(A,E\right)\) for the diameter of any subset \(A\) of \(E\) for the norm \(\left\|\cdot\right\|_{E}\) of \(E\).
We recall the following fact, which we will use repeatedly.
**Theorem 2.8** ([55, Proposition 1.6]).: _If \((\mathfrak{A},\mathsf{L})\) is a quantum compact metric space, and if \(\mu\in\mathcal{S}(\mathfrak{A})\), then \(\left\|a-\mu(a)\mathsf{1}\right\|_{\mathfrak{A}}\leqslant\mathsf{L}(a)\operatorname{qdiam}\left(\mathfrak{A},\mathsf{L}\right)\) for all \(a\in\operatorname{dom}\left(\mathsf{L}\right)\)._
Proof.: For all \(\varphi\in\mathcal{S}(\mathfrak{A})\), we note that \(|\varphi(a-\mu(a)1)|=|\varphi(a)-\mu(a)|\leqslant\mathsf{L}(a)\operatorname{ qdiam}\left(\mathfrak{A},\mathsf{L}\right)\). Since \(a-\mu(a)1\) is self-adjoint, we conclude that \(\left\|a-\mu(a)1\right\|_{\mathfrak{A}}\leqslant\mathsf{L}(a)\operatorname{ qdiam}\left(\mathfrak{A},\mathsf{L}\right)\).
The property that is difficult to establish when working with quantum compact metric spaces is, of course, that the Monge-Kantorovich metric induces the weak-\({}^{*}\) topology. Rieffel provided various characterizations; we will find the following one helpful in this paper:
**Theorem 2.9** ([51]).: _Let \(\mathsf{L}\) be a seminorm defined on some dense subspace \(\operatorname{dom}\left(\mathsf{L}\right)\) of \(\mathfrak{sa}\left(\mathfrak{A}\right)\), for a unital \(\mathrm{C}^{*}\)-algebra \(\mathfrak{A}\), such that \(\{a\in\operatorname{dom}\left(\mathsf{L}\right):\mathsf{L}(a)=0\}=\mathds{R}1\). The Monge-Kantorovich metric \(\mathsf{mk}_{\mathsf{L}}\) induces the weak-\({}^{*}\) topology on \(\mathcal{S}(\mathfrak{A})\) if, and only if, there exists a state \(\mu\in\mathcal{S}(\mathfrak{A})\) such that the set \(\left\{a\in\operatorname{dom}\left(\mathsf{L}\right):\mathsf{L}(a)\leq 1,\mu(a)=0\right\}\) is totally bounded in \(\mathfrak{A}\)._

As a consequence, we record the following compactness property, which we will also use repeatedly.

**Corollary 2.10**.: _If \((\mathfrak{A},\mathsf{L})\) is a quantum compact metric space, then for all \(R\geq 0\), the set \(\left\{a\in\operatorname{dom}\left(\mathsf{L}\right):\mathsf{L}(a)\leq 1,\left\|a\right\|_{\mathfrak{A}}\leq R\right\}\) is compact in \(\mathfrak{A}\)._

Quantum isometries provide the natural notion of morphism between quantum compact metric spaces.
**Definition 2.11**.: Let \((\mathfrak{A}_{1},\mathsf{L}_{1})\) and \((\mathfrak{A}_{2},\mathsf{L}_{2})\) be two quantum compact metric spaces. A _Lipschitz morphism_\(\pi:(\mathfrak{A}_{1},\mathsf{L}_{1})\to(\mathfrak{A}_{2},\mathsf{L}_{2})\) from \((\mathfrak{A}_{1},\mathsf{L}_{1})\) to \((\mathfrak{A}_{2},\mathsf{L}_{2})\) is a surjective \(*\)-morphism \(\pi\) from \(\mathfrak{A}_{1}\) to \(\mathfrak{A}_{2}\) such that \(\pi(\operatorname{dom}(\mathsf{L}_{1}))\subseteq\operatorname{dom}(\mathsf{ L}_{2})\). Moreover, if, for all \(b\in\operatorname{dom}(\mathsf{L}_{2})\):
\[\mathsf{L}_{2}(b)=\inf\{\mathsf{L}_{1}(a):\pi(a)=b\}\,,\]
then \(\pi\) is called a _quantum isometry_. If \(\pi\) is a quantum isometry and a bijection whose inverse is also a quantum isometry, then \(\pi\) is called a _full quantum isometry_; in this case \(\pi\) is a \(*\)-isomorphism such that for all \(a\in\mathfrak{sa}(\mathfrak{A}_{1})\):
\[\mathsf{L}_{2}\circ\pi(a)=\mathsf{L}_{1}(a).\]
The propinquity is a metric computed by isometrically "embedding" two quantum compact metric spaces into an arbitrary third one, which, in the contravariant picture of noncommutative geometry, leads us to the following definition of a _tunnel_. Crucially, a non-negative number can be associated to a tunnel using the Hausdorff distance.
**Notation 2.12**.: The Hausdorff distance induced by the distance function of a metric space \((X,d)\) on the hyperspace of closed subsets of \(X\) is denoted by \(\mathsf{Haus}[d]\). If \(N\) is a norm on a vector space, we denote by \(\mathsf{Haus}[N]\) the Hausdorff distance induced by the metric given by the norm \(N\). By default, if \(E\) is a normed vector space, we simplify our notation and simply write \(\mathsf{Haus}[E]\) for the Hausdorff distance induced by the distance defined by the norm \(\left\|\cdot\right\|_{E}\) of \(E\).
**Notation 2.13**.: If \(\pi:\mathfrak{A}\to\mathfrak{B}\) is a unital \(*\)-morphism, then we define
\[\pi^{*}:\varphi\in\mathcal{S}(\mathfrak{B})\longrightarrow\varphi\circ\pi \in\mathcal{S}(\mathfrak{A}).\]
**Definition 2.14** ([35, Definition 3.1],[40, Definition 2.11, Definition 3.6]).: Let \((\mathfrak{A}_{1},\mathsf{L}_{1})\) and \((\mathfrak{A}_{2},\mathsf{L}_{2})\) be two quantum compact metric spaces. A _tunnel_\(\tau=(\mathfrak{D},\mathsf{L}_{\mathfrak{D}},\pi_{1},\pi_{2})\) is given by a quantum compact metric space \((\mathfrak{D},\mathsf{L}_{\mathfrak{D}})\) and two quantum isometries \(\pi_{1}:(\mathfrak{D},\mathsf{L}_{\mathfrak{D}})\to(\mathfrak{A}_{1},\mathsf{L}_{1})\) and \(\pi_{2}:(\mathfrak{D},\mathsf{L}_{\mathfrak{D}})\to(\mathfrak{A}_{2},\mathsf{L}_{2})\). The _domain_\(\operatorname{dom}\left(\tau\right)\) of \(\tau\) is \((\mathfrak{A}_{1},\mathsf{L}_{1})\) and the _codomain_\(\operatorname{codom}\left(\tau\right)\) of \(\tau\) is \((\mathfrak{A}_{2},\mathsf{L}_{2})\).
The _extent_\(\chi(\tau)\) of \(\tau\) is the non-negative number:
\[\chi(\tau):=\max_{j\in\{1,2\}}\mathsf{Haus}[\mathsf{mk}_{\mathsf{L}_{ \mathfrak{D}}}]\,\Big{(}\pi_{j}^{*}(\mathcal{S}(\mathfrak{A}_{j})),\mathcal{S }(\mathfrak{D})\Big{)}\,.\]
_Remark 2.15_.: We emphasize that all quantum compact metric spaces involved in our tunnels in this paper must satisfy the same \((\Omega,\Omega^{\prime})\)-Leibniz inequality for our _fixed_\(\Omega,\Omega^{\prime}\).
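For instance (an elementary check): any quantum compact metric space \((\mathfrak{A},\mathsf{L})\) admits the tunnel \(\tau=(\mathfrak{A},\mathsf{L},\mathrm{id}_{\mathfrak{A}},\mathrm{id}_{\mathfrak{A}})\) from \((\mathfrak{A},\mathsf{L})\) to itself, and since \(\mathrm{id}_{\mathfrak{A}}^{*}\) is the identity of \(\mathcal{S}(\mathfrak{A})\),

\[\chi(\tau)=\mathsf{Haus}[\mathsf{mk}_{\mathsf{L}}]\left(\mathcal{S}(\mathfrak{A}),\mathcal{S}(\mathfrak{A})\right)=0,\]

so the distance defined next vanishes, as it must, between a quantum compact metric space and itself.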
There always exists a tunnel between any two quantum compact metric spaces, and the extent of a tunnel is always finite. We thus define:
**Definition 2.16**.: The _(dual) Gromov-Hausdorff propinquity_\(\Lambda^{*}((\mathfrak{A},\mathsf{L}_{\mathfrak{A}}),(\mathfrak{B},\mathsf{L}_{\mathfrak{B}}))\) between any two quantum compact metric spaces \((\mathfrak{A},\mathsf{L}_{\mathfrak{A}})\) and \((\mathfrak{B},\mathsf{L}_{\mathfrak{B}})\) is defined by:
\[\Lambda^{*}((\mathfrak{A},\mathsf{L}_{\mathfrak{A}}),(\mathfrak{B},\mathsf{L}_{ \mathfrak{B}})):=\inf\{\chi(\tau):\tau\text{ tunnel from }(\mathfrak{A},\mathsf{L}_{ \mathfrak{A}})\text{ to }(\mathfrak{B},\mathsf{L}_{\mathfrak{B}})\,\}\,.\]
The (dual) propinquity is well-behaved, as summarized in the following theorem:
**Theorem 2.17** ([38, 35]).: _The dual propinquity is a complete metric up to full quantum isometry. Moreover, if \((X_{n},d_{n})_{n\in\mathds{N}}\) is a sequence of compact metric spaces, then \((X_{n},d_{n})_{n\in\mathds{N}}\) converges to a compact metric space \((X,d)\) for the Gromov-Hausdorff distance if, and only if, \(\lim_{n\to\infty}\Lambda^{*}((C(X_{n}),\mathsf{L}_{d_{n}}),(C(X),\mathsf{L}_{d}))=0\), where \(\mathsf{L}_{d}\) denotes the Lipschitz seminorm induced by any metric \(d\)._
There are several interesting known examples of convergence for the propinquity, including approximations of quantum tori by fuzzy tori [33], approximations of spheres by matrix algebras [9], continuity of quantum tori in their cocycle parameter [33], continuity of UHF algebras with respect to the Baire space seen as their natural parameter space, continuity of the Effros-Shen algebras in their irrational parameters [5], and more.
### Main result
We begin with a simple sufficient condition to ensure that a seminorm is indeed a Lipschitz seminorm on an inductive limit of unital \(\mathrm{C}^{*}\)-algebras, when each of the \(\mathrm{C}^{*}\)-subalgebras in the inductive sequence is already equipped with a Lipschitz seminorm. This condition is quite natural and generalizes, for instance, the idea behind the construction of Lipschitz seminorms on AF algebras in [5].
**Proposition 2.18**.: _Let \(\mathfrak{A}_{\infty}\) be a unital \(C^{*}\)-algebra. For each \(n\in\mathds{N}\), let \((\mathfrak{A}_{n},\mathsf{L}_{n})\) be a quantum compact metric space, where \((\mathfrak{A}_{n})_{n\in\mathds{N}}\) is an increasing sequence of \(C^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\) with the unit of \(\mathfrak{A}_{\infty}\) in \(\mathfrak{A}_{0}\). Assume moreover that \(\mathfrak{A}_{\infty}=\mathrm{cl}\left(\bigcup_{n\in\mathds{N}}\mathfrak{A}_ {n}\right)\). Let \(\mathsf{L}_{\infty}\) be a seminorm defined on a dense Jordan-Lie subalgebra \(\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\) of \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\), such that:_
1. \(\{a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right):\mathsf{L}_{\infty}(a)=0\}= \mathds{R}1\)_,_
2. _the unit ball of_ \(\mathsf{L}_{\infty}\) _is closed in_ \(\mathfrak{A}_{\infty}\)_,_
3. \(\mathsf{L}_{\infty}\) _is_ \((\Omega,\Omega^{\prime})\)_-Leibniz._
_If there exists a unital isometric positive linear map \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) such that, for all \(\varepsilon>0\), there exists \(N\in\mathds{N}\) with the property that:_
\[\forall a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\quad\exists b\in \mathrm{dom}\left(\mathsf{L}_{N}\right):\quad\mathsf{L}_{N}(b)\leq\mathsf{L}_ {\infty}(a)\text{ and }\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\varepsilon\mathsf{L}_{ \infty}(a),\]
_then \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) is a quantum compact metric space._
Proof.: Let \(\mu\in\mathcal{S}(\mathfrak{A}_{\infty})\). Since the unit of \(\mathfrak{A}_{\infty}\) lies in every \(\mathfrak{A}_{n}\), the restriction of \(\mu\) to \(\mathfrak{A}_{n}\) is a state of \(\mathfrak{A}_{n}\) for all \(n\in\mathds{N}\); we denote this restriction by the same symbol \(\mu\). Let
\[B_{\infty}\coloneqq\left\{a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right): \mu\circ\pi(a)=0,\mathsf{L}_{\infty}(a)\leq 1\right\}.\]
Now, let \(\varepsilon>0\) and let \(n\in\mathds{N}\). We set
\[B_{n}\coloneqq\left\{a\in\mathrm{dom}\left(\mathsf{L}_{n}\right):|\mu(a)|\leq\frac{\varepsilon}{4},\mathsf{L}_{n}(a)\leq 1\right\}.\]
By Theorem (2.8), if \(a\in B_{n}\), then \(\left\|a\right\|_{\mathfrak{A}_{n}}\leq\left\|a-\mu(a)1\right\|_{\mathfrak{A}_{n}}+|\mu(a)|\leq\operatorname{qdiam}\left(\mathfrak{A}_{n},\mathsf{L}_{n}\right)+\frac{\varepsilon}{4}\), so we have the following inclusion:
\[B_{n}\subseteq\left\{a\in\mathrm{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{ n}(a)\leq 1,\|a\|_{\mathfrak{A}_{n}}\leq\mathrm{qdiam}\left(\mathfrak{A}_{n}, \mathsf{L}_{n}\right)+\frac{\varepsilon}{4}\right\}\]
and the latter set is compact since \(\mathsf{L}_{n}\) is a Lipschitz seminorm, by Corollary (2.10). So \(B_{n}\) is totally bounded. In fact, since \(\mathsf{L}_{n}\) is lower semicontinuous and \(\mu\) is continuous, the set \(B_{n}\) is also closed in the complete space \(\mathfrak{A}_{\infty}\), so \(B_{n}\) is compact.
By assumption on \(\pi\), there exists \(N\in\mathds{N}\) such that
\[\forall a\in B_{\infty}\quad\exists b\in\mathrm{dom}\left(\mathsf{L}_{N} \right):\quad\mathsf{L}_{N}(b)\leq 1\text{ and }\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\frac{ \varepsilon}{4}.\]
In particular, if \(a\in B_{\infty}\) and \(b\in\mathrm{dom}\left(\mathsf{L}_{N}\right)\) with \(\mathsf{L}_{N}(b)\leq 1\) and \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{4}\), then \(|\mu(b)|\leq\|b-\pi(a)\|_{\mathfrak{A}_{\infty}}+|\mu(\pi(a))|<\frac{ \varepsilon}{4}\), so \(b\in B_{N}\).
Since \(B_{N}\) is compact in \(\mathfrak{sa}\left(\mathfrak{A}_{N}\right)\) by Corollary (2.10), there exists a finite \(\frac{\varepsilon}{4}\)-dense subset \(F\subseteq B_{N}\) of \(B_{N}\). So
\[\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\pi(B_{\infty}),F\right)\leq\mathsf{Haus }[\mathfrak{A}_{\infty}]\left(\pi(B_{\infty}),B_{N}\right)+\mathsf{Haus}[ \mathfrak{A}_{\infty}]\left(B_{N},F\right)<\frac{\varepsilon}{2}.\]
The domain \(\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) is dense in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\), so it is not empty, and thus \(\left\{a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right):\mathsf{L}_{\infty}(a)\leq 1\right\}\) is not empty, since \(\mathsf{L}_{\infty}\) is a seminorm. Thus, by Remark (2.6), the set \(B_{\infty}\) is not empty as well. We thus obtain:

\[\emptyset\neq B_{\infty}=\bigcup_{b\in F}\left\{a\in B_{\infty}:\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\right\}.\]

Therefore, if we define

\[G\coloneqq\left\{b\in F:\left\{a\in B_{\infty}:\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\right\}\neq\emptyset\right\},\]

then \(G\neq\emptyset\) and \(B_{\infty}=\bigcup_{b\in G}\left\{a\in B_{\infty}:\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\right\}\). For each \(b\in G\), we pick \(t(b)\in B_{\infty}\) such that \(\left\|\pi(t(b))-b\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\). Let now \(a\in B_{\infty}\). There exists \(b\in G\) such that \(\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\). Then

\[\left\|a-t(b)\right\|_{\mathfrak{A}_{\infty}} =\left\|\pi(a-t(b))\right\|_{\mathfrak{A}_{\infty}}\] \[\leq\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}+\left\|b-\pi(t(b))\right\|_{\mathfrak{A}_{\infty}}\] \[<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.\]

Thus, \(t(G)\) is a finite \(\varepsilon\)-dense subset of \(B_{\infty}\), so \(B_{\infty}\) is totally bounded in \(\mathfrak{A}_{\infty}\). Therefore, noting that \(\mu\circ\pi\) is a state of \(\mathfrak{A}_{\infty}\), we conclude by Theorem (2.9) that \(\mathsf{mk}_{\mathsf{L}_{\infty}}\) induces the weak-\({}^{*}\) topology on \(\mathcal{S}(\mathfrak{A}_{\infty})\). Since all other required properties are assumed, \(\mathsf{L}_{\infty}\) is indeed a Lipschitz seminorm.
The next natural question is to find a sufficient condition to strengthen Proposition (2.18) and obtain convergence of the sequence \((\mathfrak{A}_{n},L_{n})_{n\in\mathbb{N}}\) to \((\mathfrak{A}_{\infty},L_{\infty})\) in the sense of the propinquity. To this end, we introduce the notion of a bridge builder, a map which, among other things, satisfies the condition in Proposition (2.18). In fact, we basically "symmetrize" the condition in Proposition (2.18), and require that we work with \(*\)-automorphisms (which will allow us to construct seminorms with the Leibniz property), rather than just positive linear maps.
**Notation 2.19**.: We will write \(\overline{\mathbb{N}}\coloneqq\mathbb{N}\cup\{\infty\}\) for the one point compactification of \(\mathbb{N}\).
**Definition 2.20**.: For each \(n\in\mathbb{N}\cup\{\infty\}\), let \((\mathfrak{A}_{n},L_{n})\) be a quantum compact metric space, where \((\mathfrak{A}_{n})_{n\in\mathbb{N}}\) is an increasing (for \(\subseteq\)) sequence of C\({}^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\) such that \(\mathfrak{A}_{\infty}=\operatorname{cl}\left(\bigcup_{n\in\mathbb{N}} \mathfrak{A}_{n}\right)\) and the unit of \(\mathfrak{A}_{\infty}\) is in \(\mathfrak{A}_{0}\).
A \(*\)-automorphism \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) is a _bridge builder_ for \(((\mathfrak{A}_{n},L_{n})_{n\in\mathbb{N}},(\mathfrak{A}_{\infty},L_{\infty}))\) when, for all \(\varepsilon>0\), there exists \(N\in\mathbb{N}\) such that if \(n\geq N\), then
\[\forall a\in\operatorname{dom}\left(L_{\infty}\right)\quad\exists b\in \operatorname{dom}\left(L_{n}\right):\quad L_{n}(b)\leq L_{\infty}(a)\text{ and }\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}< \varepsilon L_{\infty}(a)\]
and
\[\forall b\in\operatorname{dom}\left(L_{n}\right)\quad\exists a\in \operatorname{dom}\left(L_{\infty}\right):\quad L_{\infty}(a)\leq L_{n}(b) \text{ and }\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}< \varepsilon L_{n}(b).\]
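For orientation, here is a simple sufficient criterion for the identity to be a bridge builder (our illustration, not a numbered result of this paper): suppose that \(L_{n}\) is the restriction of \(L_{\infty}\) to \(\operatorname{dom}\left(L_{\infty}\right)\cap\mathfrak{sa}\left(\mathfrak{A}_{n}\right)\), and that there are unital positive linear maps \(E_{n}:\mathfrak{A}_{\infty}\to\mathfrak{A}_{n}\) with \(E_{n}(\operatorname{dom}\left(L_{\infty}\right))\subseteq\operatorname{dom}\left(L_{\infty}\right)\) and

\[L_{\infty}(E_{n}(a))\leq L_{\infty}(a)\quad\text{and}\quad\left\|a-E_{n}(a)\right\|_{\mathfrak{A}_{\infty}}\leq\varepsilon_{n}L_{\infty}(a)\]

for all \(a\in\operatorname{dom}\left(L_{\infty}\right)\), where \(\varepsilon_{n}\to 0\). Then \(\pi\coloneqq\mathrm{id}_{\mathfrak{A}_{\infty}}\) is a bridge builder: the first condition above holds with \(b\coloneqq E_{n}(a)\), and the second holds with \(a\coloneqq b\), since then \(L_{\infty}(a)=L_{n}(b)\) and \(\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}=0\) (the degenerate case of scalar multiples of the unit is trivial, as these are fixed by the unital maps \(E_{n}\)). Compare the construction of Lipschitz seminorms on AF algebras in [5] recalled above.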
**Proposition 2.21**.: _For each \(n\in\mathbb{N}\cup\{\infty\}\), let \((\mathfrak{A}_{n},L_{n})\) be a quantum compact metric space, where \((\mathfrak{A}_{n})_{n\in\mathbb{N}}\) is an increasing (for \(\subseteq\)) sequence of C\({}^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\) such that \(\mathfrak{A}_{\infty}=\operatorname{cl}\left(\bigcup_{n\in\mathbb{N}} \mathfrak{A}_{n}\right)\) and the unit of \(\mathfrak{A}_{\infty}\) is in \(\mathfrak{A}_{0}\)._
_If there exists a bridge builder for \(((\mathfrak{A}_{n},L_{n})_{n\in\mathbb{N}},(\mathfrak{A}_{\infty},L_{\infty}))\), then_
\[\lim_{n\to\infty}\Lambda^{*}((\mathfrak{A}_{n},L_{n}),(\mathfrak{A}_{\infty},L_{ \infty}))=0.\]
Proof.: Let \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) be the given bridge builder. Let \(\varepsilon>0\). There exists \(N\in\mathbb{N}\) such that if \(n\geq N\), then
\[\forall a\in\operatorname{dom}\left(L_{\infty}\right)\quad\exists b\in \operatorname{dom}\left(L_{n}\right):\quad L_{n}(b)\leq L_{\infty}(a)\wedge \left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\varepsilon L_{\infty}(a),\]
and

\[\forall b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\quad\exists a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right):\quad\mathsf{L}_{\infty}(a)\leqslant\mathsf{L}_{n}(b)\wedge\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\varepsilon\mathsf{L}_{n}(b).\]
Fix \(n\geqslant N\). We define, for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) and \(b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\):
\[\mathsf{T}_{n}(a,b)\coloneqq\max\left\{\mathsf{L}_{\infty}(a),\mathsf{L}_{n} (b),\frac{1}{\varepsilon}\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\right\}.\]
It is a standard argument that \((\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n},\mathsf{T}_{n})\) is a quantum compact metric space:
1. the domain \(\operatorname{dom}\left(\mathsf{T}_{n}\right)=\operatorname{dom}\left( \mathsf{L}_{\infty}\right)\oplus\operatorname{dom}\left(\mathsf{L}_{n}\right)\) of \(\mathsf{T}_{n}\) is dense in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\right)\) since \(\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) is dense in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) and \(\operatorname{dom}\left(\mathsf{L}_{n}\right)\) is dense in \(\mathfrak{sa}\left(\mathfrak{A}_{n}\right)\),
2. if \(\mathsf{T}_{n}(a,b)=0\) for some \((a,b)\in\operatorname{dom}\left(\mathsf{T}_{n}\right)\), then \(\mathsf{L}_{\infty}(a)=0\) so \(a=t1\) for some \(t\in\mathds{R}\), and \(\mathsf{L}_{n}(b)=0\) so \(b=s1\) for some \(s\in\mathds{R}\) (it matters here that the unit is the same in \(\mathfrak{A}_{\infty}\) and \(\mathfrak{A}_{n}\)), and \(0=\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}=|t-s|\) so \((a,b)=t(1,1)\);
3. \(\mathsf{T}_{n}\) is the maximum of two lower semicontinuous functions and one continuous function, so it is lower semicontinuous over \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\right)\);
4. a direct computation shows that \(\mathsf{T}_{n}\) is \((\Omega,\Omega^{\prime})\)-Leibniz since \(\mathsf{L}_{\infty}\) and \(\mathsf{L}_{n}\) both are, and \(\pi\) is a \(*\)-morphism;
5. fixing any state \(\mu\) of \(\mathfrak{A}_{\infty}\) and setting \(\varphi:(a,b)\in\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\mapsto\mu(a)\), then \(\varphi\in\mathcal{S}(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n})\), and \[\{(a,b)\in\operatorname{dom}\left(\mathsf{T}_{n}\right):\mathsf{T}_{n}(a,b)\leqslant 1,\varphi(a,b)=0\}\subseteq\\ \left\{a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right):\mathsf{L}_{\infty}(a)\leqslant 1,\mu(a)=0\right\}\times\left\{b\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(b)\leqslant 1,|\mu\circ\pi^{-1}(b)|\leqslant\varepsilon\right\}\] and, as seen in the proof of Proposition (2.18), the set on the right hand side is a product of two compact sets, and thus compact; thus the set on the left hand side is compact (closed in a compact set) and thus, \(\mathsf{T}_{n}\) is indeed a Lipschitz seminorm, invoking Theorem (2.9).
We now check that \(\tau_{n}\coloneqq(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n},\mathsf{T}_{n},\psi_{n},\theta_{n})\), with \(\psi_{n}:(a,b)\in\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\mapsto a\in \mathfrak{A}_{\infty}\) and \(\theta_{n}:(a,b)\in\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\mapsto b\in \mathfrak{A}_{n}\), is a tunnel, in the sense of Definition (2.14).
Let \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\). By assumption, there exists \(b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\) with \(\mathsf{L}_{n}(b)\leqslant\mathsf{L}_{\infty}(a)\) and \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon\mathsf{L}_{\infty}(a)\). Therefore, \(\mathsf{T}_{n}(a,b)=\mathsf{L}_{\infty}(a)\). Since by construction, \(\mathsf{T}_{n}(a,c)\geqslant\mathsf{L}_{\infty}(a)\) for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) and \(c\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\), we have shown that \(\psi_{n}\) is a quantum isometry by Definition (2.11).
Let now \(b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\). Again by assumption on \(\pi\), there exists \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) such that \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\varepsilon\mathsf{L}_{n}(b)\) and \(\mathsf{L}_{\infty}(a)\leqslant\mathsf{L}_{n}(b)\). Thus \(\mathsf{T}_{n}(a,b)=\mathsf{L}_{n}(b)\). Once again, \(\mathsf{T}_{n}(c,b)\geqslant\mathsf{L}_{n}(b)\) by construction for all \(c\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), so \(\theta_{n}\) is indeed a quantum isometry, so \(\tau_{n}\) is a tunnel.
We now compute the extent of \(\tau_{n}\), in the sense of Definition (2.14). Let \(\varphi\in\mathcal{S}(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n})\). Using the Hahn-Banach theorem, we extend \(\varphi\) to a state \(\varphi^{\prime}\) of \(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{\infty}\). Let \(\mu:a\in\mathfrak{A}_{\infty}\mapsto\varphi^{\prime}(a,\pi(a))\); since \(\pi\) is a unital \(*\)-morphism, \(\mu\) is a state of \(\mathfrak{A}_{\infty}\). By construction, if \(\mathsf{T}_{n}(a,b)\leqslant 1\) then \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon\) and thus
\[|\varphi(a,b)-\mu\circ\psi_{n}(a,b)| =|\varphi^{\prime}(a,b)-\varphi^{\prime}(a,\pi(a))|\] \[\leqslant|\varphi^{\prime}(0,b-\pi(a))|\] \[\leqslant\|b-\pi(a)\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon.\]
Thus \(\operatorname{Haus}\left[\operatorname{mk}_{\mathsf{T}_{n}}\right](\psi_{n}^{*}( \mathcal{S}(\mathfrak{A}_{\infty})),\mathcal{S}(\mathfrak{A}_{\infty}\oplus \mathfrak{A}_{n}))\leqslant\varepsilon\).
Let now \(\mu^{\prime}:b\in\mathfrak{A}_{n}\mapsto\varphi(\pi^{-1}(b),b)\). Since \(\pi\) is a \(*\)-automorphism of \(\mathfrak{A}_{\infty}\), the map \(\mu^{\prime}\) is a state of \(\mathfrak{A}_{n}\). Moreover:
\[|\varphi(a,b)-\mu^{\prime}\circ\theta_{n}(a,b)| =|\varphi(a,b)-\varphi(\pi^{-1}(b),b)|\] \[=|\varphi(a-\pi^{-1}(b),0)|\] \[\leqslant\left\|a-\pi^{-1}(b)\right\|_{\mathfrak{A}_{\infty}}\] \[=\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon.\]
Thus \(\mathsf{Haus}\big{[}\mathsf{mk}_{\mathsf{T}_{n}}\big{]}\left(\theta_{n}^{*}(\mathcal{S}(\mathfrak{A}_{n})),\mathcal{S}(\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n})\right)\leqslant\varepsilon\).
Hence, the extent \(\chi(\tau_{n})\) of \(\tau_{n}\) is at most \(\varepsilon\). By Definition (2.16), we thus have shown that for all \(n\geqslant N\),
\[\Lambda^{*}((\mathfrak{A}_{n},\mathsf{L}_{n}),(\mathfrak{A}_{\infty},\mathsf{ L}_{\infty}))\leqslant\varepsilon, \tag{2.1}\]
which concludes our proof.
Our main result in this section is the following theorem, which shows that the natural sufficient condition in Definition (2.20) and Proposition (2.21) is, in fact, very close to necessary, under a mild and natural condition. This is notable because in general, it is difficult to exhibit nontrivial necessary conditions for convergence in the sense of the propinquity (besides, say, the fact that diameters must converge). It also shows that the existence of bridge builders is the natural setup for establishing convergence of inductive limits in the sense of the propinquity, thus providing a complete answer for the relationship between convergence of inductive sequences of quantum compact metric spaces in the categorical sense and the propinquity sense, under a commonly met condition.
**Theorem 2.22**.: _For each \(n\in\mathds{N}\cup\{\infty\}\), let \((\mathfrak{A}_{n},\mathsf{L}_{n})\) be a quantum compact metric space, where \((\mathfrak{A}_{n})_{n\in\mathds{N}}\) is an increasing (for \(\subseteq\)) sequence of C\({}^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\) such that \(\mathfrak{A}_{\infty}=\mathrm{cl}\left(\bigcup_{n\in\mathds{N}}\mathfrak{A}_{n}\right)\) and the unit of \(\mathfrak{A}_{\infty}\) is in \(\mathfrak{A}_{0}\). We assume that there exists \(M>0\) such that for all \(n\in\mathds{N}\):_
\[\frac{1}{M}\mathsf{L}_{n}\leqslant\mathsf{L}_{\infty}\leqslant M\cdot\mathsf{ L}_{n}\text{ on }\operatorname{dom}\left(\mathsf{L}_{n}\right).\]
_Then_
\[\lim_{n\to\infty}\Lambda^{*}\left((\mathfrak{A}_{n},\mathsf{L}_{n}),( \mathfrak{A}_{\infty},\mathsf{L}_{\infty})\right)=0,\]
_if, and only if, for any subsequence \((\mathfrak{A}_{g(n)},\mathsf{L}_{g(n)})_{n\in\mathds{N}}\) of \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\), there exists a strictly increasing function \(f:\mathds{N}\to\mathds{N}\) and a bridge builder \(\pi\) for \(((\mathfrak{A}_{g\circ f(n)},\mathsf{L}_{g\circ f(n)})_{n\in\mathds{N}},( \mathfrak{A}_{\infty},\mathsf{L}_{\infty}))\)._
Proof.: First, assume that for any subsequence \((\mathfrak{A}_{g(n)},\mathsf{L}_{g(n)})_{n\in\mathds{N}}\), there exists a strictly increasing function \(f:\mathds{N}\to\mathds{N}\) and a bridge builder \(\pi\) for \(((\mathfrak{A}_{g\circ f(n)},\mathsf{L}_{g\circ f(n)})_{n\in\mathds{N}},( \mathfrak{A}_{\infty},\mathsf{L}_{\infty}))\). By Proposition (2.21), we conclude that every subsequence of \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\) has a subsequence converging to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\). Therefore, \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\) converges to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) since the propinquity is, indeed, a metric (up to full quantum isometry).
Let us now assume that \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\) converges to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) for the propinquity. Since any subsequence will converge as well, it is sufficient to prove our statement for \(g\) being the identity, and this will simplify our notation.
Since \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathds{N}}\) converges to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\), there exists a sequence
\[(\tau_{n})_{n\in\mathds{N}}\coloneqq(\mathfrak{D}_{n},\mathsf{T}_{n},\psi_{n },\theta_{n})_{n\in\mathds{N}}\]
of tunnels, as in Definition (2.14), with \(\lim_{n\to\infty}\chi(\tau_{n})=0\), while, for each \(n\in\mathds{N}\), we have \(\operatorname{dom}\left(\tau_{n}\right)=(\mathfrak{A}_{\infty},\mathsf{L}_{ \infty})\) and \(\operatorname{codom}\left(\tau_{n}\right)=(\mathfrak{A}_{n},\mathsf{L}_{n})\). To ease notation, the target set
of \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(l\geqslant\mathsf{L}_{\infty}(a)\) defined by \(\tau_{n}\) will be denoted by \(\mathsf{t}_{n}\left(a|l\right)\), rather than \(\mathsf{t}_{\tau_{n}}\left(a|l\right)\); we recall from [35, 38] that:
\[\mathsf{t}_{n}\left(a|l\right)=\left\{\theta_{n}(d):d\in\psi_{n}^{-1}(\{a\}), \mathsf{T}_{n}(d)\leqslant l\right\}.\]
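For instance, for the tunnels \(\tau_{n}\) constructed in the proof of Proposition (2.21), a quick computation from the definition of \(\mathsf{T}_{n}\) specializes this formula (for \(l\geqslant\mathsf{L}_{\infty}(a)\)) to the explicit description
\[\mathsf{t}_{\tau_{n}}\left(a|l\right)=\left\{b\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(b)\leqslant l,\ \|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon l\right\},\]
which may help fix intuition for the arguments below.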
This proof heavily relies on the properties of target sets, as discussed in [35, 38, 39, 40]. In [35], various estimates which we will refer to in this proof are expressed using the _length_\(\lambda(\tau)\) of a tunnel \(\tau\), rather than the extent \(\chi(\tau)\); however as seen in [40, Proposition 2.12], for any tunnel \(\tau\), we have \(\lambda(\tau)\leqslant\chi(\tau)\leqslant 2\lambda(\tau)\). We will use this inequality without further mention to express all our results here in terms of extents.
**Claim 2.23**.: For all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), there exists a strictly increasing function \(f:\mathbb{N}\to\mathbb{N}\) and an element \(\pi(a)\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) such that, for all \(l\geqslant\mathsf{L}_{\infty}(a)\),
\[\lim_{n\to\infty}\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(n)} \left(a|l\right),\{\pi(a)\}\right)=0,\]
and \(\|\pi(a)\|_{\mathfrak{A}_{\infty}}=\|a\|_{\mathfrak{A}_{\infty}}\).
Proof of Claim (2.23).: First, since the sequence \((\chi(\tau_{n}))_{n\in\mathbb{N}}\) converges (to 0), it is bounded; let \(K^{\prime}>0\) be such that \(\chi(\tau_{n})\leqslant K^{\prime}\) for all \(n\in\mathbb{N}\).
Let \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\). Let \(l=\mathsf{L}_{\infty}(a)\). For any \(K>0\), let
\[\mathfrak{A}_{\infty}[K]\coloneqq\left\{b\in\operatorname{dom}\left(\mathsf{ L}_{\infty}\right):\mathsf{L}_{\infty}(b)\leqslant K,\|b\|_{\mathfrak{A}_{ \infty}}\leqslant\|a\|_{\mathfrak{A}_{\infty}}+KK^{\prime}\right\}.\]
The set \(\mathfrak{A}_{\infty}[K]\) is compact in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) by Corollary (2.10). By [35, Corollary 4.5] and since \(\mathsf{L}_{\infty}\leqslant M\mathsf{L}_{n}\) on \(\operatorname{dom}\left(\mathsf{L}_{n}\right)\), the sequence \((\mathsf{t}_{n}\left(a|l\right))_{n\in\mathbb{N}}\) is a sequence of compact subsets of \(\mathfrak{A}_{\infty}[Ml]\), and
\[\lim_{n\to\infty}\operatorname{diam}\left(\mathsf{t}_{n}\left(a|l\right), \mathfrak{A}_{\infty}\right)=0.\]
Since \(\mathfrak{A}_{\infty}[Ml]\) is compact in \(\mathfrak{A}_{\infty}\), the Hausdorff distance \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\) induced on the set of closed subsets of \(\mathfrak{A}_{\infty}[Ml]\) by the norm \(\|\cdot\|_{\mathfrak{A}_{\infty}}\) of \(\mathfrak{A}_{\infty}\) gives a compact topology as well. Therefore, there exists a subsequence \((\mathsf{t}_{f(n)}\left(a|l\right))_{n\in\mathbb{N}}\) of \((\mathsf{t}_{n}\left(a|l\right))_{n\in\mathbb{N}}\) which converges, for \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\), to a compact subset of \(\mathfrak{A}_{\infty}[Ml]\); since the diameters of these target sets converge to \(0\), this limit is a singleton \(\{\pi(a)\}\) of \(\mathfrak{A}_{\infty}[Ml]\). In particular, \(\mathsf{L}_{\infty}(\pi(a))\leqslant Ml=M\mathsf{L}_{\infty}(a)\).
Let now \(L\geqslant l\). By definition, \(\mathsf{t}_{f(n)}\left(a|l\right)\subseteq\mathsf{t}_{f(n)}\left(a|L\right)\) for all \(n\in\mathbb{N}\) and
\[\lim_{n\to\infty}\operatorname{diam}\left(\mathsf{t}_{f(n)}\left(a|L\right), \mathfrak{A}_{\infty}\right)=0,\]
so we conclude easily as well that
\[\lim_{n\to\infty}\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(n)} \left(a|L\right),\{\pi(a)\}\right)=0.\]
By [35, Proposition 4.4], we also note that if \(b_{n}\in\mathsf{t}_{f(n)}\left(a|l\right)\) for each \(n\in\mathbb{N}\), then
\[\|\pi(a)\|_{\mathfrak{A}_{\infty}}=\lim_{n\to\infty}\|b_{n}\|_{\mathfrak{A}_{ \infty}}\leqslant\limsup_{n\to\infty}\left(\|a\|_{\mathfrak{A}_{\infty}}+\chi \left(\tau_{f(n)}\right)l\right)=\|a\|_{\mathfrak{A}_{\infty}}.\]
Similarly, since \(a\in\mathsf{t}_{\tau_{f(n)}^{-1}}\left(b_{n}|l\right)\), we also have
\[\|a\|_{\mathfrak{A}_{\infty}}\leqslant\limsup_{n\to\infty}\left(\|b_{n}\|_{ \mathfrak{A}_{\infty}}+l\chi\left(\tau_{f(n)}\right)\right)=\|\pi(a)\|_{ \mathfrak{A}_{\infty}}.\]
So indeed, \(\|\pi(a)\|_{\mathfrak{A}_{\infty}}=\|a\|_{\mathfrak{A}_{\infty}}\). This proves our claim.
**Claim 2.24**.: There exists a unital \(*\)-endomorphism \(\pi\) of \(\mathfrak{A}_{\infty}\) such that \(\pi(\operatorname{dom}\left(\mathsf{L}_{\infty}\right))\subseteq\operatorname{ dom}\left(\mathsf{L}_{\infty}\right)\), and a strictly increasing function \(f:\mathbb{N}\to\mathbb{N}\) such that, for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), and for all \(l\geqslant\mathsf{L}_{\infty}(a)\),
\[\lim_{n\to\infty}\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(n)} \left(a|l\right),\{\pi(a)\}\right)=0.\]
Proof of Claim (2.24).: Since \(\mathfrak{A}_{\infty}\) is separable, there exists a countable dense subset \(S_{\infty}\) of \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) with \(S_{\infty}\subseteq\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\). Using Claim (2.23), a diagonal argument shows that there exists a strictly increasing function \(f:\mathbb{N}\to\mathbb{N}\) such that, for all \(a\in S_{\infty}\) and for all \(l\geq\mathsf{L}_{\infty}(a)\), we have \(\lim_{n\to\infty}\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(n)}\left(a\middle|l\right),\{\pi(a)\}\right)=0\).
Let now \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\). Let \(\varepsilon>0\). Since \(S_{\infty}\) is dense in \(\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), there exists \(a_{\varepsilon}\in S_{\infty}\) such that \(\|a-a_{\varepsilon}\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{5}\). Note that \(\mathsf{L}_{\infty}(a_{\varepsilon})<\infty\) but in general, there is no relation between \(\mathsf{L}_{\infty}(a_{\varepsilon})\) and \(\mathsf{L}_{\infty}(a)\).
Let \(l=\max\{\mathsf{L}_{\infty}(a),\mathsf{L}_{\infty}(a_{\varepsilon})\}\). Since it is convergent for the Hausdorff distance \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\), the sequence \(\left(\mathsf{t}_{f(n)}\left(a_{\varepsilon}\middle|l\right)\right)_{n\in \mathbb{N}}\) is Cauchy for \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\).
Therefore, there exists \(N\in\mathbb{N}\) such that, for all \(p,q\geq N\), we have
\[\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(p)}\left(a_{ \varepsilon}\middle|l\right),\mathsf{t}_{f(q)}\left(a_{\varepsilon}\middle|l \right)\right)<\frac{\varepsilon}{5}.\]
Since \(\lim_{n\to\infty}\chi\left(\tau_{f(n)}\right)=0\), there exists \(N^{\prime}\in\mathbb{N}\) such that if \(n\geq N^{\prime}\) then \(\chi\left(\tau_{f(n)}\right)<\frac{\varepsilon}{5(l+1)}\). Therefore, if \(n\geq N^{\prime}\), then by [35, Corollary 4.5],
\[\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(n)}\left(a\middle|l\right),\mathsf{t}_{f(n)}\left(a_{\varepsilon}\middle|l\right)\right)\leqslant\|a-a_{\varepsilon}\|_{\mathfrak{A}_{\infty}}+l\,\chi\left(\tau_{f(n)}\right)<\frac{\varepsilon}{5}+\frac{\varepsilon}{5}=\frac{2\varepsilon}{5}.\]
Let now \(p,q\geq\max\{N,N^{\prime}\}\). We compute:
\[\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(p)}\left(a \middle|l\right),\mathsf{t}_{f(q)}\left(a\middle|l\right)\right) \leq\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(p)} \left(a\middle|l\right),\mathsf{t}_{f(p)}\left(a_{\varepsilon}\middle|l \right)\right)\] \[\quad+\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(p) }\left(a_{\varepsilon}\middle|l\right),\mathsf{t}_{f(q)}\left(a_{\varepsilon }\middle|l\right)\right)\] \[\quad+\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\mathsf{t}_{f(q) }\left(a_{\varepsilon}\middle|l\right),\mathsf{t}_{f(q)}\left(a\middle|l \right)\right)\] \[<\frac{2\varepsilon}{5}+\frac{\varepsilon}{5}+\frac{2\varepsilon }{5}=\varepsilon.\]
Thus, \(\left(\mathsf{t}_{f(n)}\left(a\middle|l\right)\right)_{n\in\mathbb{N}}\) is Cauchy for \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\). Since \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) is complete, so is the set of all closed subsets of \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) with the Hausdorff distance \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\). Therefore, \(\left(\mathsf{t}_{f(n)}\left(a\middle|l\right)\right)_{n\in\mathbb{N}}\) converges to some compact subset in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\). In fact, since
\[\lim_{n\to\infty}\operatorname{diam}\left(\mathsf{t}_{f(n)}\left(a\middle|l \right),\mathfrak{A}_{\infty}\right)=0\]
by [35, Corollary 4.5], the sequence \(\left(\mathsf{t}_{f(n)}\left(a\middle|l\right)\right)_{n\in\mathbb{N}}\) converges to some singleton. As observed in Claim (2.23), this limit does not depend on \(l\); we denote it by \(\{\pi(a)\}\). Again using the same argument, we also note that \(\|\pi(a)\|_{\mathfrak{A}_{\infty}}=\|a\|_{\mathfrak{A}_{\infty}}\).
Since \(\mathsf{L}_{\infty}\) is lower semicontinuous over \(\mathfrak{A}_{\infty}\), and since by construction, \(\pi(a)\) is the limit in \(\mathfrak{A}_{\infty}\) of any sequence \((b_{n})_{n\in\mathbb{N}}\) with \(b_{n}\in\mathsf{t}_{f(n)}\left(a\middle|\mathsf{L}_{\infty}(a)\right)\) for all \(n\in\mathbb{N}\), we also conclude that
\[\begin{split}\mathsf{L}_{\infty}(\pi(a))&\leq\liminf_{n\to\infty}\mathsf{L}_{\infty}(b_{n})\qquad\text{by lower semicontinuity of }\mathsf{L}_{\infty},\\ &\leq\liminf_{n\to\infty}M\cdot\mathsf{L}_{f(n)}(b_{n})\qquad\text{since }\mathsf{L}_{\infty}\leq M\cdot\mathsf{L}_{n}\text{ for all }n\in\mathbb{N},\\ &\leq M\cdot\mathsf{L}_{\infty}(a)\qquad\text{since }\mathsf{L}_{f(n)}(b_{n})\leq\mathsf{L}_{\infty}(a)\text{, as }b_{n}\in\mathsf{t}_{f(n)}\left(a\middle|\mathsf{L}_{\infty}(a)\right).\end{split}\]
Let \(a,a^{\prime}\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), let \(t\in\mathbb{R}\), and let \(l\geqslant\max\{\mathsf{L}_{\infty}(a),\mathsf{L}_{\infty}(a^{\prime})\}\). Since \(\mathsf{t}_{f(n)}\left(a|l\right)+t\cdot\mathsf{t}_{f(n)}\left(a^{\prime}|l\right)\subseteq\mathsf{t}_{f(n)}\left(a+ta^{\prime}|(1+\left|t\right|)l\right)\) for all \(n\in\mathbb{N}\) by [35, Corollary 4.5], we immediately conclude that \(\{\pi(a)\}+t\cdot\{\pi(a^{\prime})\}\subseteq\{\pi(a+ta^{\prime})\}\), i.e. \(\pi\) is linear. A similar argument shows that \(\pi\) is a Jordan-Lie morphism over \(\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), using [35, Proposition 4.8].
Since \(\pi\) is linear with \(\|\pi(a)\|_{\mathfrak{A}_{\infty}}=\|a\|_{\mathfrak{A}_{\infty}}\) for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), it extends uniquely to a uniformly continuous map over \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\); this extension is of course again a Jordan-Lie morphism from \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) to \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) and an isometry.
A straightforward argument shows that we can uniquely extend \(\pi\) to a continuous Jordan-Lie algebra endomorphism of \(\mathfrak{A}_{\infty}\); the map thus extended is a unital \(*\)-endomorphism with \(\mathsf{L}_{\infty}\circ\pi\leqslant M\cdot\mathsf{L}_{\infty}\) over \(\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\).
We already know that \(\pi\) is an isometry on \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) and a \(*\)-morphism, so it is injective on \(\mathfrak{A}_{\infty}\): if \(\pi(a)=0\) then \(\pi(\Re a)=0\) so \(\Re a=0\), and \(\pi(\Im a)=0\) so \(\Im a=0\), and thus \(a=0\). In particular, since \(\pi\) is now an injective \(*\)-morphism, it is an isometry on \(\mathfrak{A}_{\infty}\) (rather than just \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\)). This proves our claim. Q.E.D.
**Claim 2.25**.: For all \(\varepsilon>0\), there exists \(N\in\mathds{N}\) such that for all \(n\geqslant N\), and for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)\leqslant 1\), we have
\[\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\{\pi(a)\},\mathfrak{t}_{f(n)} \left(a|1\right)\right)<\varepsilon.\]
Proof of Claim (2.25).: Let \(\varepsilon>0\). Fix \(\mu\in\mathcal{S}(\mathfrak{A}_{\infty})\). The set
\[B\coloneqq\left\{a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right): \mathsf{L}_{\infty}(a)\leqslant 1,\mu(a)=0\right\}\]
is compact in \(\mathfrak{sa}\left(\mathfrak{A}_{\infty}\right)\) by Theorem (2.9). Therefore, there exists a finite subset \(F\subseteq B\) such that \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(F,B\right)<\frac{\varepsilon}{4}\). Since \(F\) is finite, by Claim (2.24), there exists \(N\in\mathds{N}\) such that, for all \(a\in F\) and for all \(n\geqslant N\), we have \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\{\pi(a)\},\mathfrak{t}_{f(n)} \left(a|\mathsf{L}_{\infty}(a)\right)\right)<\frac{\varepsilon}{4}\). Moreover, there exists \(N^{\prime}\in\mathds{N}\) such that, if \(n\geqslant N^{\prime}\), then \(\chi\left(\tau_{n}\right)<\frac{\varepsilon}{4}\).
Now, let \(n\geqslant\max\{N,N^{\prime}\}\), \(a\in B\) and \(b\in\mathfrak{t}_{f(n)}\left(a|1\right)\). There exists \(a^{\prime}\in F\) such that \(\left\|a-a^{\prime}\right\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{4}\). Let \(b^{\prime}\in\mathfrak{t}_{f(n)}\left(a^{\prime}|1\right)\). By [35, Corollary 4.5], we compute the following expression:
\[\begin{split}\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}&\leqslant\left\|\pi(a)-\pi(a^{\prime})\right\|_{\mathfrak{A}_{\infty}}+\left\|\pi(a^{\prime})-b^{\prime}\right\|_{\mathfrak{A}_{\infty}}+\left\|b^{\prime}-b\right\|_{\mathfrak{A}_{\infty}}\\ &\leqslant\underbrace{\left\|\pi(a-a^{\prime})\right\|_{\mathfrak{A}_{\infty}}}_{\text{$\pi$ is linear and isometric}}+\underbrace{\frac{\varepsilon}{4}}_{\text{by choice of $N$}}+\underbrace{\left\|a-a^{\prime}\right\|_{\mathfrak{A}_{\infty}}+\chi\left(\tau_{f(n)}\right)}_{\text{by [35, Corollary 4.5]}}\\ &\leqslant 2\left\|a-a^{\prime}\right\|_{\mathfrak{A}_{\infty}}+\frac{\varepsilon}{4}+\frac{\varepsilon}{4}\\ &\leqslant\frac{\varepsilon}{2}+\frac{\varepsilon}{4}+\frac{\varepsilon}{4}=\varepsilon.\end{split}\]
We thus have proven our uniform convergence claim over \(B\). Let now \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)\leqslant 1\). Then of course, \(a-\mu(a)1\in B\), since \(\mathsf{L}_{\infty}(a-\mu(a)1)\leqslant\mathsf{L}_{\infty}(a)+\mathsf{L}_{\infty}(\mu(a)1)=\mathsf{L}_{\infty}(a)\leqslant 1\) (in fact, \(\mathsf{L}_{\infty}(a)=\mathsf{L}_{\infty}(a-\mu(a)1)\)). If \(b\in\mathsf{t}_{f(n)}\left(a|1\right)\) then \(b-\mu(a)1\in\mathsf{t}_{f(n)}\left(a-\mu(a)1|1\right)\) by construction, and thus \(\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}=\left\|\pi(a-\mu(a)1)-(b-\mu(a)1)\right\|_{\mathfrak{A}_{\infty}}<\varepsilon\).
Thus, as claimed, \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\{\pi(a)\},\mathsf{t}_{f(n)}\left(a|1\right)\right)<\varepsilon\) for all \(n\geqslant\max\{N,N^{\prime}\}\) and for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)\leqslant 1\). This proves our claim. Q.E.D.
**Claim 2.26**.: For all \(\varepsilon>0\), there exists \(N\in\mathds{N}\) such that, if \(n\geqslant N\), then
* \(\forall a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\quad\exists b\in\operatorname{dom}\left(\mathsf{L}_{f(n)}\right):\quad\mathsf{L}_{f(n)}(b)\leqslant\mathsf{L}_{\infty}(a)\wedge\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon\mathsf{L}_{\infty}(a)\),
* \(\forall b\in\operatorname{dom}\left(\mathsf{L}_{f(n)}\right)\quad\exists a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right):\quad\mathsf{L}_{\infty}(a)\leqslant\mathsf{L}_{f(n)}(b)\wedge\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leqslant\varepsilon\mathsf{L}_{f(n)}(b)\).
Proof of Claim (2.26).: Let \(\varepsilon>0\). Let \(N\in\mathds{N}\) be chosen as in Claim (2.25), so that for all \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)\leqslant 1\), and for all \(n\geqslant N\), we have \(\mathsf{Haus}[\mathfrak{A}_{\infty}]\left(\{\pi(a)\},\mathfrak{t}_{f(n)} \left(a|1\right)\right)<\varepsilon\). Let now \(n\geqslant N\).
If \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\backslash\mathds{R}1_{\mathfrak{A}_{\infty}}\), and if \(b\in\mathsf{t}_{f(n)}\left(a|\mathsf{L}_{\infty}(a)\right)\), then \(\mathsf{L}_{\infty}(a)>0\), \(\mathsf{L}_{f(n)}(b)\leqslant\mathsf{L}_{\infty}(a)\), and \(\frac{b}{\mathsf{L}_{\infty}(a)}\in\mathsf{t}_{f(n)}\left(\frac{a}{\mathsf{L}_{\infty}(a)}\Big{|}1\right)\), and thus \(\left\|\pi\left(\frac{a}{\mathsf{L}_{\infty}(a)}\right)-\frac{b}{\mathsf{L}_{\infty}(a)}\right\|_{\mathfrak{A}_{\infty}}<\varepsilon\). So \(\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}<\varepsilon\,\mathsf{L}_{\infty}(a)\), as needed.
Now, let \(b\in\operatorname{dom}\left(\mathsf{L}_{f(n)}\right)\backslash\mathds{R}1_{\mathfrak{A}_{\infty}}\). Let \(b^{\prime}=\frac{b}{\mathsf{L}_{f(n)}(b)}\), so \(\mathsf{L}_{f(n)}(b^{\prime})=1\). Let \(a^{\prime}\in\mathsf{t}_{\tau_{f(n)}^{-1}}\left(b^{\prime}|1\right)\), so in particular \(\mathsf{L}_{\infty}(a^{\prime})\leqslant 1\). By symmetry, \(b^{\prime}\in\mathsf{t}_{f(n)}\left(a^{\prime}|1\right)\). Therefore, \(\left\|\pi(a^{\prime})-b^{\prime}\right\|_{\mathfrak{A}_{\infty}}<\varepsilon\).
Hence, letting \(a=\mathsf{L}_{f(n)}(b)a^{\prime}\), we conclude that \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}\leq\mathsf{L}_{f(n)}(b)\varepsilon\) and \(\mathsf{L}_{\infty}(a)\leq\mathsf{L}_{f(n)}(b)\), as desired.
Last, it is immediate that since \(\pi(1)=1\), our claim holds whenever \(\mathsf{L}_{\infty}(a)=0\) or \(\mathsf{L}_{f(n)}(b)=0\), i.e., for any \(a,b\in\mathds{R}1\). This proves our claim. Q.E.D.
**Claim 2.27**.: The map \(\pi\) constructed in Claim (2.24) is a \(*\)-automorphism.
Proof of Claim (2.27).: The map \(\pi\) is an isometry of \(\mathfrak{A}_{\infty}\) and a \(*\)-morphism, hence it is a \(*\)-monomorphism of \(\mathfrak{A}_{\infty}\), via Claim (2.24). It remains to prove that \(\pi\) is surjective.
Now, let \(b\in\bigcup_{n\in\mathbb{N}}\operatorname{dom}\left(\mathsf{L}_{n}\right)\), so \(b\in\operatorname{dom}\left(\mathsf{L}_{m}\right)\) for some \(m\in\mathbb{N}\). Thus \(b\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) by assumption. Let \(l=\mathsf{L}_{m}(b)\). By assumption, \(\mathsf{L}_{\infty}(b)\leq M\mathsf{L}_{m}(b)=Ml\). Let \(\varepsilon>0\) and let \(N\in\mathbb{N}\) be given by Claim (2.26). Since \(\mathsf{L}_{f(n)}(b)\leq M\mathsf{L}_{\infty}(b)\leq M^{2}l\) for all \(n\geq\max\{N,m\}\), there exists, for each such \(n\), an element \(a_{n}\in\mathfrak{A}_{\infty}\) with \(\|\pi(a_{n})-b\|_{\mathfrak{A}_{\infty}}\leq\varepsilon M^{2}l\) (and \(\mathsf{L}_{\infty}(a_{n})\leq\mathsf{L}_{f(n)}(b)\), which we do not need for this claim). As \(\varepsilon>0\) was arbitrary, the element \(b\) lies in the closure of the range of \(\pi\). Since \(\mathfrak{A}_{\infty}\) is complete and \(\pi\) is an isometry, the range of \(\pi\) is closed, and we now have shown that the range of \(\pi\) is a closed set containing the total subspace \(\bigcup_{n\in\mathbb{N}}\operatorname{dom}\left(\mathsf{L}_{n}\right)\) of \(\mathfrak{A}_{\infty}\); consequently, \(\pi\) is a surjection as well. Thus as claimed, \(\pi\) is a \(*\)-automorphism of \(\mathfrak{A}_{\infty}\).
Moreover, by construction, for all \(a\in\operatorname{dom}\left(L_{\infty}\right)\), as noted in Claim (2.23), we have \(L_{\infty}(\pi(a))\leq ML_{\infty}(a)\) -- in particular, \(\pi(a)\in\operatorname{dom}\left(L_{\infty}\right)\). So \(\pi(\operatorname{dom}\left(L_{\infty}\right))\subseteq\operatorname{dom} \left(L_{\infty}\right)\) and thus \(\pi\) is a Lipschitz morphism. This proves our claim. Q.E.D.
This concludes the proof of our theorem.
_Remark 2.28_.: Limits, for the propinquity, are unique up to full quantum isometry. Therefore, the appearance of some map \(\pi\) in Theorem (2.22) is to be expected. However, the map \(\pi\) in Theorem (2.22) is quite a bit more general than a full quantum isometry -- in fact, it need not be Lipschitz for us to use Proposition (2.21) -- even though Theorem (2.22) shows that it can always be chosen to be so. The map \(\pi\) is really used here as a tool to construct a special kind of bridge. In general, the function \(\pi\) is not expected to be unique: if \(L_{n}\) is just the restriction to \(\mathfrak{A}_{n}\) of \(L_{\infty}\) for all \(n\in\mathbb{N}\), and if \(\theta\) is a full quantum isometry of \((\mathfrak{A}_{\infty},L_{\infty})\), then \(\pi\circ\theta\) can be used in place of \(\pi\), of course. The situation is more delicate when \(L_{n}\) varies, but there will usually be many maps \(\pi\) if there is one.
Theorem (2.22) characterizes the convergence of inductive sequences in the sense of the propinquity, under the condition of uniform equivalence of the Lipschitz seminorms on the sequence. The condition of uniform equivalence of Lipschitz seminorms is in essence our compatibility condition between the Lipschitz seminorms and the inductive limit structure in Theorem (2.22): using the notation of Theorem (2.22), as seen in [37], under the hypothesis that \(\operatorname{dom}\left(\mathsf{L}_{n}\right)=\mathfrak{A}_{n}\cap\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), the Lipschitz seminorms \(\mathsf{L}_{n}\) and \(\mathsf{L}_{\infty}\) are equivalent for each \(n\in\mathbb{N}\), and we require, in the assumptions of Theorem (2.22), that this equivalence be uniform. This leads us to several natural questions: does convergence of \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}}\) imply some uniform equivalence of the Lipschitz seminorms \(\mathsf{L}_{n}\) (\(n\in\overline{\mathbb{N}}\)) (i.e. is our assumption redundant)? Does the existence of a bridge builder imply uniform equivalence of the Lipschitz seminorms? Does convergence of an inductive limit for the propinquity imply the existence of a bridge builder without the assumption of uniform equivalence of the Lipschitz seminorms? Moreover, does the convergence of \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}}\) to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) for the propinquity imply the convergence of \((\mathfrak{A}_{n},\mathsf{L}_{k})_{k\geq n}\) to \((\mathfrak{A}_{n},\mathsf{L}_{\infty})\) for a fixed \(n\in\mathbb{N}\), for the propinquity?
We will now show with two examples that all of the above questions have negative answers, so there is no obvious generalization of Theorem (2.22). First, we see that it is possible to have convergence for the propinquity of an inductive sequence of quantum
compact metric spaces, using the identity as a bridge builder, and yet, not have uniform equivalence of the Lipschitz seminorms.
_Example 2.29_.: Let \(X=[0,1]\) with its usual metric. If \(Y\subseteq X\) has at least two points, then we set \(\mathsf{L}_{Y}(f)=\sup\left\{\frac{|f(x)-f(y)|}{|x-y|}:x\neq y,x,y\in Y\right\}\) for all \(f\in C(X)\), allowing for \(\infty\). For each \(n\in\mathbb{N}\), and for all \(f\in C(X)\), we set:
\[\mathsf{L}_{n}(f)=\mathsf{L}_{\left[0,1-\frac{1}{n^{2}}\right]}(f)+\frac{1}{n }\mathsf{L}_{\left[1-\frac{1}{n^{2}},1\right]}(f),\]
allowing again for \(\infty\).
Let
\[f_{n}:x\in[0,1]\longmapsto\begin{cases}0&\text{if }x\leqslant 1-\frac{1}{n^{2}},\\ x-(1-\frac{1}{n^{2}})&\text{otherwise.}\end{cases}\]
By construction, \(\mathsf{L}_{[0,1]}(f_{n})=1\) for all \(n\in\mathbb{N}\). On the other hand, \(\mathsf{L}_{n}(f_{n})=0+\frac{1}{n}\cdot 1=\frac{1}{n}\). So there does not exist \(M>0\) such that \(\mathsf{L}_{[0,1]}\leqslant M\mathsf{L}_{n}\) on the common domain of these Lipschitz seminorms (the algebra of Lipschitz functions for the usual metric).
We now prove that \((C(X),\mathsf{L}_{n})_{n\in\mathbb{N}}\) converges for the propinquity to \((C(X),\mathsf{L}_{[0,1]})\) -- this could be done here just as easily by proving the convergence for the Gromov-Hausdorff distance of \(X\) with a sequence of distances which agree with the usual distance on \([0,1-\frac{1}{n^{2}}]\) and are a dilation by a factor \(n\) of the usual distance on \([1-\frac{1}{n^{2}},1]\), but we will keep with our functional analytic perspective here.
We thus define, for all \(n\in\mathbb{N}\), and for all \(f,g\) Lipschitz functions over \([0,1]\) with its usual metric:
\[\mathsf{T}_{n}(f,g):=\max\left\{\mathsf{L}_{[0,1]}(f),\mathsf{L}_{n}(g),(n+1) \left\|f-g\right\|_{C(X)}\right\}.\]
Let \(f\in C(X)\) with \(\mathsf{L}_{[0,1]}(f)=1\). Then \(\mathsf{L}_{n}(f)\leqslant 1+\frac{1}{n}\). From this, we see that
\[\mathsf{T}_{n}\left(f,\frac{1}{1+\frac{1}{n}}f+\frac{1}{n+1}f(0)1\right)\leqslant\max\left\{1,\frac{n+1}{n+1}\left\|f-f(0)1\right\|_{C(X)}\right\}\leqslant 1.\]
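Indeed, writing \(\tilde{f}\coloneqq\frac{n}{n+1}f+\frac{1}{n+1}f(0)1\) for the second argument above, we have \(f-\tilde{f}=\frac{1}{n+1}\left(f-f(0)1\right)\), while \(\mathsf{L}_{n}(\tilde{f})=\frac{n}{n+1}\mathsf{L}_{n}(f)\leqslant\frac{n}{n+1}\left(1+\frac{1}{n}\right)=1\), and \(\left\|f-f(0)1\right\|_{C(X)}\leqslant\mathsf{L}_{[0,1]}(f)=1\) since \(X\) has diameter \(1\).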
Let now \(g\in C(X)\) with \(\mathsf{L}_{n}(g)=1\). Thus \(\mathsf{L}_{\left[0,1-\frac{1}{n^{2}}\right]}(g)\leqslant 1\) and \(\mathsf{L}_{\left[1-\frac{1}{n^{2}},1\right]}(g)\leqslant n\). In particular, for all \(x\in[1-\frac{1}{n^{2}},1]\), we have \(\left|g(x)-g\left(1-\frac{1}{n^{2}}\right)\right|<n|x-1+\frac{1}{n^{2}}| \leqslant\frac{1}{n}\).
Let \(h\in C(X)\) be defined by \(h(x)=g(x)\) if \(x\in\left[0,1-\frac{1}{n^{2}}\right]\), and \(h(x)=g\left(1-\frac{1}{n^{2}}\right)\) otherwise. By construction, \(\mathsf{L}_{[0,1]}(h)\leqslant 1\) and \(\left\|g-h\right\|_{C(X)}<\frac{1}{n}\). Thus \(\mathsf{T}_{n}(h,g)=1=\mathsf{L}_{n}(g)\).
Figure 1. Approximating \([0,1]\) with itself by modifying the metric on a small interval at the end (red)
Therefore, \((C(X)\oplus C(X),\mathsf{T}_{n},p_{1},p_{2})\), with \(p_{1}:(f,g)\in C(X)\oplus C(X)\mapsto f\) and \(p_{2}:(f,g)\in C(X)\oplus C(X)\mapsto g\), is easily seen to be a tunnel whose extent is at most \(\frac{1}{n}\) (the method is analogous to Proposition (2.21)).
Hence \((C(X),\mathsf{L}_{n})_{n\in\mathbb{N}}\) converges to \((C(X),\mathsf{L}_{[0,1]})\) for the propinquity. Moreover, the identity map is a bridge builder for this sequence. Nonetheless, there is no \(M>0\) such that \(\forall n\in\mathbb{N}\quad\mathsf{L}_{[0,1]}\leq M\mathsf{L}_{n}\) on the common domain of these seminorms. So convergence in the propinquity does not imply uniform equivalence of the Lipschitz seminorms, even when working with a fixed, Abelian \(\mathrm{C}^{*}\)-algebra.
Now, we can also ask whether convergence for the propinquity of an inductive sequence implies the existence of a bridge builder, and as we shall see in the next example, this is not the case: once again, convergence occurs without uniform equivalence of the Lipschitz seminorms (and, combining both examples, we see that convergence implies neither of the two uniform comparison inequalities between the Lipschitz seminorms). Moreover, we see that \((\mathfrak{A}_{n},\mathsf{L}_{m})_{m\geq n}\) does not converge to \((\mathfrak{A}_{n},\mathsf{L}_{\infty})\) in this case, for any \(n\in\mathbb{N}\).
_Example 2.30_.: Let \(\mathfrak{A}_{\infty}\) be the \(\mathrm{C}^{*}\)-algebra of convergent sequences with values in \(\mathbb{C}\). For each \(n\in\mathbb{N}\), let \(\mathfrak{A}_{n}=\{(x_{k})_{k\in\mathbb{N}}:(x_{k})_{k\geq n}\text{ is constant }\}\), so \(\mathfrak{A}_{n}\) is a \(\mathrm{C}^{*}\)-subalgebra of \(\mathfrak{A}_{\infty}\) sharing the unit \((1)_{n\in\mathbb{N}}\) of \(\mathfrak{A}_{\infty}\).
For all \(n\in\mathbb{N}\), and for all \((x_{k})_{k\in\mathbb{N}}\in\mathfrak{A}_{n}\), we set
\[\mathbb{L}_{n}((x_{k})_{k\in\mathbb{N}})\coloneqq\sup\left\{\frac{|x_{p}-x_{q }|}{|\varphi_{n}(p)-\varphi_{n}(q)|}:p,q\in\mathbb{N},p\neq q\right\}\]
where:
\[\varphi_{n}:m\in\mathbb{N}\mapsto\begin{cases}\frac{1}{m}\text{ if }m>0,\\ 1+\frac{1}{n}\text{ if }m=0.\end{cases}\]
Of course, \(\mathbb{L}_{n}\) is indeed a seminorm on the finite dimensional \(\mathrm{C}^{*}\)-subalgebra \(\mathfrak{A}_{n}\) of \(\mathfrak{A}_{\infty}\).
We also set \(\mathsf{L}_{\infty}((x_{k})_{k\in\mathbb{N}})=\sup\left\{\frac{|x_{p}-x_{q}|}{\left|\frac{1}{p+1}-\frac{1}{q+1}\right|}:p,q\in\mathbb{N},p\neq q\right\}\) for all \((x_{k})_{k\in\mathbb{N}}\in\mathfrak{A}_{\infty}\), allowing for the value \(\infty\). Of course, \(\bigcup_{n\in\mathbb{N}}\mathfrak{A}_{n}\subseteq\operatorname{dom}\,(\mathsf{L}_{\infty})\).
Now, let
\[x:n\in\mathbb{N}\mapsto\begin{cases}1\text{ if }n=0,\\ 0\text{ otherwise.}\end{cases}\]
Figure 2. Approximating \(\overline{\mathbb{N}}\) by itself, by merging the first two points at \(\infty\)
By construction, \(\mathsf{L}_{\infty}(x)=2\), yet \(\mathsf{L}_{n}(x)=n\). So there is no \(M>0\) such that, for all \(n\in\mathbb{N}\), the inequality \(\frac{1}{M}\mathsf{L}_{n}\leqslant\mathsf{L}_{\infty}\) on \(\operatorname{dom}\left(\mathsf{L}_{n}\right)\) holds.
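The values of these seminorms at \(x\) can be checked directly: all pairs \(p,q\geqslant 1\) contribute \(0\), so only pairs involving \(p=0\) matter, and
\[\mathsf{L}_{n}(x)=\sup_{q\geqslant 1}\frac{1}{\left|1+\frac{1}{n}-\frac{1}{q}\right|}=\frac{1}{\left(1+\frac{1}{n}\right)-1}=n,\qquad\mathsf{L}_{\infty}(x)=\sup_{q\geqslant 1}\frac{1}{\left|1-\frac{1}{q+1}\right|}=\frac{1}{1-\frac{1}{2}}=2,\]
with both suprema attained at \(q=1\).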
On the other hand, \(\lim_{n\to\infty}\Lambda^{*}((\mathfrak{A}_{n},\mathbb{L}_{n}),(\mathfrak{A}_{ \infty},\mathbb{L}_{\infty}))=0\). Indeed, let \(\pi:(x_{k})_{k\in\mathbb{N}}\mapsto(x_{0},x_{0},x_{1},x_{2},x_{3},\ldots)\in \mathfrak{A}_{\infty}\), \(\mathfrak{B}=\pi(\mathfrak{A}_{\infty})\), and let \(\theta:(x_{k})_{k\in\mathbb{N}}\in\mathfrak{B}\mapsto(x_{k+1})_{k\in\mathbb{N} }\in\mathfrak{A}_{\infty}\) -- of course, \(\theta\) is a \(*\)-isomorphism from \(\mathfrak{B}\) onto \(\mathfrak{A}_{\infty}\) such that \(\pi=\theta^{-1}\). We define \(L_{\mathfrak{B}}(\pi(x))=L_{\infty}(x)\) for all \(x\in\operatorname{dom}\left(L_{\infty}\right)\). This way, \(\pi\) is easily checked to be a full quantum isometry from \((\mathfrak{A}_{\infty},L_{\infty})\) to \((\mathfrak{B},L_{\mathfrak{B}})\).
Let \(\varepsilon>0\) and let \(N\in\mathbb{N}\) be such that if \(n\geqslant N\), then \(\frac{1}{n+1}<\frac{\varepsilon}{2}\). If \(x=(x_{k})_{k\in\mathbb{N}}\) with \(L_{\infty}(x)\leqslant 1\), and if \(l=\lim_{s\to\infty}x_{s}\), then by construction,
\[\frac{|x_{k}-l|}{\frac{1}{k+1}}=\lim_{s\to\infty}\frac{|x_{k}-x_{s}|}{\frac{1 }{k+1}-\frac{1}{s+1}}\leqslant 1\]
so \(|x_{k}-l|\leqslant\frac{1}{k+1}<\frac{\varepsilon}{2}\) for all \(k\geqslant N\). Therefore, if \(k\geqslant N\) then \(|x_{k}-x_{N}|<\varepsilon\).
Now, let \(n\geqslant N\). Let \(\mathfrak{D}_{n}=\mathfrak{A}_{n}\oplus\mathfrak{B}\), and for all \((a,b)\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\oplus\operatorname{dom}\left(\mathsf{L}_{\mathfrak{B}}\right)\), we set:
\[\mathsf{T}_{n}(a,b):=\max\left\{L_{n}(a),L_{\mathfrak{B}}(b),\frac{1}{ \varepsilon}\left\|\pi(a)-b\right\|_{\mathfrak{B}}\right\}.\]
We also set \(p_{n}:(a,b)\in\mathfrak{D}_{n}\mapsto a\in\mathfrak{A}_{n}\) and \(q_{n}:(a,b)\in\mathfrak{D}_{n}\mapsto\theta(b)\in\mathfrak{A}_{\infty}\). We are now going to prove that \(\tau_{n}:=(\mathfrak{D}_{n},\mathsf{T}_{n},p_{n},q_{n})\) is indeed a tunnel from \((\mathfrak{A}_{n},\mathsf{L}_{n})\) to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\).
Let \(a\coloneqq(x_{k})_{k\in\mathbb{N}}\in\operatorname{dom}\left(L_{\infty}\right)\) with \(L_{\infty}(a)=1\), and let
\[a^{\prime}\coloneqq(x_{0},x_{0},x_{1},x_{2},\ldots,x_{N-1},x_{N},x_{N},x_{N},x_{N}\ldots)\in\mathfrak{A}_{n}.\]
By construction, \(\mathsf{L}_{n}(a^{\prime})\leqslant 1\) and \(\left\|\pi(a)-a^{\prime}\right\|_{\mathfrak{A}_{\infty}}<\varepsilon\) by our choice of \(N\). Also by construction, \(\mathsf{L}_{\mathfrak{B}}(\pi(a))=\mathsf{L}_{\infty}(a)=1\). Thus \(\mathsf{T}_{n}(a^{\prime},\pi(a))\leqslant\mathsf{L}_{\infty}(a)=1\). So, we have shown that, for any \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)=1\), there exists an element \(d\coloneqq(a^{\prime},\pi(a))\in\mathfrak{D}_{n}\) such that \(\mathsf{T}_{n}(d)=1=\mathsf{L}_{\infty}(a)\) and \(q_{n}(d)=\theta(\pi(a))=a\). Therefore, the map \(q_{n}\) is indeed a quantum isometry from \((\mathfrak{D}_{n},\mathsf{T}_{n})\) to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\).
Let now \(a=(x_{k})_{k\in\mathbb{N}}\in\operatorname{dom}\left(L_{n}\right)\) with \(L_{n}(a)=1\). By definition, \(|x_{1}-x_{0}|\leqslant\frac{1}{n}<\varepsilon\). Let
\[b=(x_{1},x_{1},x_{2},x_{3},x_{4},\ldots).\]
By construction, \(b\in\operatorname{dom}\left(\mathsf{L}_{\mathfrak{B}}\right)\) with \(\mathsf{L}_{\mathfrak{B}}(b)\leqslant\mathsf{L}_{n}(a)\), and \(\left\|a-b\right\|_{\mathfrak{A}_{\infty}}=|x_{1}-x_{0}|<\varepsilon\). Thus again \(\mathsf{T}_{n}(a,b)=\mathsf{L}_{n}(a)\). So \(p_{n}:(a,b)\in\mathfrak{D}_{n}\mapsto a\in\mathfrak{A}_{n}\) is a quantum isometry. Therefore, \((\mathfrak{D}_{n},\mathsf{T}_{n},p_{n},q_{n})\) is indeed a tunnel from \((\mathfrak{A}_{n},\mathsf{L}_{n})\) to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\). We now compute an upper bound on its extent.
Let \(\varphi\in\mathscr{S}(\mathfrak{D}_{n})\) be a state of \(\mathfrak{D}_{n}\). If we set \(\mu:a\in\mathfrak{A}_{n}\mapsto\varphi(a,\pi(a))\), then \(\mu\in\mathscr{S}(\mathfrak{A}_{n})\) is again a state of \(\mathfrak{A}_{n}\). If \((a,b)\in\operatorname{dom}\left(\mathsf{T}_{n}\right)\) with \(\mathsf{T}_{n}(a,b)\leqslant 1\), then
\[|\varphi(a,b)-\mu\circ p_{n}(a,b)| =|\varphi(a,b)-\varphi(a,\pi(a))|\] \[=|\varphi(0,b-\pi(a))|\] \[\leqslant\|b-\pi(a)\|_{\mathfrak{A}_{\infty}}<\varepsilon,\]
so indeed \(\mathsf{Haus}\big{[}\mathsf{mk}_{\mathsf{T}_{n}}\big{]}\left(\mathscr{S}( \mathfrak{D}_{n}),p_{n}^{*}\mathscr{S}(\mathfrak{A}_{n})\right)<\varepsilon\).
On the other hand, let \(\nu:a\in\mathfrak{A}_{\infty}\mapsto\varphi^{\prime}(a,\pi(a))\) where \(\varphi^{\prime}\) is an extension of \(\varphi\) to a state of \(\mathfrak{A}_{\infty}\oplus\mathfrak{B}\) by the Hahn-Banach theorem. Once again, it is immediate that \(\mathsf{mk}_{\mathsf{T}_{n}}(\varphi,\nu\circ q_{n})<\varepsilon\). So \(\mathsf{Haus}\big{[}\mathsf{mk}_{\mathsf{T}_{n}}\big{]}\left(\mathscr{S}(\mathfrak{D}_{n}),q_{n}^{*}\mathscr{S}(\mathfrak{A}_{\infty})\right)<\varepsilon\).
Thus, for all \(n\geqslant N\), the extent \(\chi\left(\tau_{n}\right)\) of \(\tau_{n}\) is at most \(\varepsilon\). We conclude:
\[\lim_{n\to\infty}\Lambda^{*}\left((\mathfrak{A}_{n},\mathbb{L}_{n}),(\mathfrak{A}_{ \infty},\mathbb{L}_{\infty})\right)=0.\]
However, for any fixed \(p\in\mathbb{N}\), it is easy to check, by a similar method, that
\[\lim_{n\to\infty}\Lambda^{*}\left((\mathfrak{A}_{p},\mathbb{L}_{n}),(\mathfrak{A}_{ p-1},\mathbb{L}_{\infty})\right)=0,\]
and since \(\dim\mathfrak{A}_{p-1}<\dim\mathfrak{A}_{p}\), the sequence \((\mathfrak{A}_{p},\mathsf{L}_{n})_{n\geq p}\) does not converge to \((\mathfrak{A}_{p},\mathsf{L}_{\infty})\).
The map \(\pi\) we have used here is not surjective. In fact, there is no bridge builder in this case. Indeed, assume that we have a unital \(*\)-morphism \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) such that for all \(\varepsilon>0\), there exists \(N_{\pi}(\varepsilon)\in\mathbb{N}\) with the property that if \(n\geq N_{\pi}(\varepsilon)\), and if \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\), then there exists \(b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\) with \(\mathsf{L}_{n}(b)\leq\mathsf{L}_{\infty}(a)\) and \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\mathsf{L}_{\infty} (a)\).
Fix \(a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\) with \(\mathsf{L}_{\infty}(a)=1\). Let \(\varepsilon>0\) and let \(n\geq N_{\pi}(\varepsilon)\) be such that \(\frac{1}{n}<\frac{\varepsilon}{2}\). Define \((y_{k})_{k\in\mathbb{N}}\coloneqq\pi(a)\). Then there exists \(b\coloneqq(b_{k})_{k\in\mathbb{N}}\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\) such that \(\mathsf{L}_{n}(b)\leq 1\) and \(\|\pi(a)-b\|_{\mathfrak{A}_{\infty}}<\frac{\varepsilon}{2}\). By definition of \(\mathsf{L}_{n}\), we thus conclude that \(|b_{1}-b_{0}|\leq\frac{1}{n}<\frac{\varepsilon}{2}\). Thus, \(|y_{1}-y_{0}|<\frac{3\varepsilon}{2}\). As \(\varepsilon>0\) is arbitrary, we conclude that \(y_{1}=y_{0}\). Thus \(\pi\) can never be surjective -- in fact, it is valued in \(\mathfrak{B}\). So no bridge builder exists for this example.
As seen in Example (2.30), convergence of \(\left(\mathfrak{A}_{n},\mathsf{L}_{n}\right)_{n\in\mathds{N}}\) to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) for the propinquity does not imply the convergence of \((\mathfrak{A}_{n},\mathsf{L}_{p})_{p\geq n}\) to \((\mathfrak{A}_{n},\mathsf{L}_{\infty})\). We have the following immediate consequence of our work:
**Corollary 2.31**.: _Let \(\mathfrak{A}_{\infty}\) be a unital separable C\({}^{*}\)-algebra, such that \(\mathfrak{A}_{\infty}=\operatorname{cl}\left(\bigcup_{n\in\mathbb{N}} \mathfrak{A}_{n}\right)\), where \((\mathfrak{A}_{n})_{n\in\mathbb{N}}\) is an increasing (for \(\subseteq\)) sequence of \(C^{*}\)-subalgebras of \(\mathfrak{A}_{\infty}\), with the unit of \(\mathfrak{A}_{\infty}\) in \(\mathfrak{A}_{0}\). For each \(n\in\overline{\mathbb{N}}\), let \(\mathsf{L}_{n}\) be a Lipschitz seminorm on \(\mathfrak{A}_{n}\). If there exists a bridge builder \(\pi:\mathfrak{A}_{\infty}\to\mathfrak{A}_{\infty}\) for \(((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}},(\mathfrak{A}_{\infty}, \mathsf{L}_{\infty}))\) such that \(\pi(\mathfrak{A}_{n})\subseteq\mathfrak{A}_{n}\) for each \(n\in\mathbb{N}\), then for all \(n\in\overline{\mathbb{N}}\),_
\[\lim_{\begin{subarray}{c}p\to\infty\\ p\geq n\end{subarray}}\Lambda^{*}((\mathfrak{A}_{n},\mathsf{L}_{p}),( \mathfrak{A}_{n},\mathsf{L}_{\infty}))=0,\]
_and \(\lim_{n\to\infty}\Lambda^{*}((\mathfrak{A}_{n},\mathsf{L}_{n}),(\mathfrak{A}_{\infty},\mathsf{L}_{\infty}))=0\)._
Proof.: This follows by observing that the restriction of \(\pi\) to \(\mathfrak{A}_{n}\) is a bridge builder for \(((\mathfrak{A}_{n},\mathsf{L}_{p})_{p\geq n},(\mathfrak{A}_{n},\mathsf{L}_{ \infty}))\). Our result then follows from Proposition (2.21).
## 3. Convergence of Inductive Sequences of Metric Spectral Triples for the Spectral Propinquity
We now study the convergence of certain families of metric spectral triples for the spectral propinquity [47], whose construction we will recall below. We thus begin this section with the definition of a spectral triple, due to Connes, which is the foundational concept for noncommutative Riemannian geometry.
**Definition 3.1** ([12, 11]).: A _spectral triple_\((\mathfrak{A},\mathcal{H},D)\) is given by a unital C\({}^{*}\)-algebra \(\mathfrak{A}\) of bounded linear operators on a Hilbert space \(\mathcal{H}\), and a self-adjoint operator \(D\) defined on some dense subspace \(\operatorname{dom}\left(D\right)\) of \(\mathcal{H}\), such that:
1. \(\{a\in\mathfrak{A}:a\cdot\operatorname{dom}\left(D\right)\subseteq\operatorname {dom}\left(D\right),\left[D,a\right]\text{ is bounded }\}\) is a dense \(*\)-algebra in \(\mathfrak{A}\),
2. \(D\) has compact resolvent.
The operator \(D\) is referred to as the Dirac operator of the spectral triple.
### Preliminaries: The Spectral Propinquity
The spectral propinquity is a distance, up to unitary equivalence, on the class of metric spectral triples.
**Notation 3.2**.: If \(T:D\subseteq E\to F\) is a linear operator defined from a dense subspace \(D\) of a normed vector space \(E\) to a normed vector space \(F\), then we write:
\[\left\|T\right\|_{E}^{F}\coloneqq\sup\left\{\left\|T\xi\right\|_{F}:\xi\in D, \left\|\xi\right\|_{E}\leq 1\right\}\]
allowing for the value \(\infty\). If \(F=E\), then \(\left\|T\right\|_{E}^{F}\) is simply denoted by \(\left\|T\right\|_{E}\).
**Definition 3.3**.: A spectral triple \((\mathfrak{A},\mathcal{H},D)\) is _metric_ if the Connes extended pseudo-distance, defined on the state space \(\mathcal{S}(\mathfrak{A})\) of \(\mathfrak{A}\) by:
\[\operatorname{mk}_{D}:\varphi,\psi\in\mathcal{S}(\mathfrak{A})\mapsto\sup\left\{|\varphi(a)-\psi(a)|:a\in\mathfrak{sa}\left(\mathfrak{A}\right),\ a\cdot\operatorname{dom}\left(D\right)\subseteq\operatorname{dom}\left(D\right)\text{ and }\left\|[D,a]\right\|_{\mathcal{H}}\leqslant 1\right\}\]
is in fact a metric on \(\mathcal{S}(\mathfrak{A})\), which induces the weak-\({}^{*}\) topology on \(\mathcal{S}(\mathfrak{A})\).
As soon as a spectral triple is metric, it induces a structure of quantum compact metric space on its underlying C\({}^{*}\)-algebra in a natural manner.
**Proposition 3.4** ([47, Proposition 1.10]).: _Let \((\mathfrak{A},\mathcal{H},D)\) be a spectral triple. We set:_
\[\operatorname{dom}\left(\mathsf{L}_{D}\right):=\{a\in\mathfrak{sa}(\mathfrak{A}):a\cdot\operatorname{dom}\left(D\right)\subseteq\operatorname{dom}\left(D\right)\text{ and }[D,a]\text{ is bounded}\}\]
_and for all \(a\in\operatorname{dom}\left(\mathsf{L}_{D}\right)\):_
\[\mathsf{L}_{D}(a):=\left\|[D,a]\right\|_{\mathcal{H}}.\]
_The spectral triple \((\mathfrak{A},\mathcal{H},D)\) is metric if, and only if, \((\mathfrak{A},\mathsf{L}_{D})\) is a quantum compact metric space._
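A classical illustration (standard, and easily checked): let \(\mathfrak{A}=C(\mathds{T})\) act by multiplication on \(\mathcal{H}=L^{2}(\mathds{T})\), and let \(D=-i\frac{d}{d\theta}\) on its usual domain. If \(f\in\mathfrak{sa}\left(C(\mathds{T})\right)\) is Lipschitz, then \([D,f]\xi=-if^{\prime}\xi\) on \(\operatorname{dom}\left(D\right)\), so
\[\mathsf{L}_{D}(f)=\left\|[D,f]\right\|_{\mathcal{H}}=\left\|f^{\prime}\right\|_{L^{\infty}(\mathds{T})},\]
the Lipschitz seminorm of \(f\) for the arc-length metric; moreover, \(D\) has spectrum \(\mathds{Z}\) and thus compact resolvent, and \(\operatorname{mk}_{D}\) restricted to point evaluations recovers the geodesic distance on \(\mathds{T}\), so \((C(\mathds{T}),L^{2}(\mathds{T}),D)\) is a metric spectral triple.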
The construction of the spectral propinquity begins with the following observation. Recall from [47] that if \((\mathfrak{A},\mathcal{H},D)\) is a metric spectral triple, and if we set
* for all \(\xi\in\operatorname{dom}\left(D\right)\): (3.1) \[\operatorname{DN}_{D}(\xi):=\left\|\xi\right\|_{\mathcal{H}}+\left\|D\xi\right\|_{\mathcal{H}},\]
* \(\operatorname{dom}\left(\mathsf{L}_{D}\right):=\{a\in\mathfrak{sa}(\mathfrak{A}):a\cdot\operatorname{dom}\left(D\right)\subseteq\operatorname{dom}\left(D\right)\text{, }[D,a]\text{ is bounded}\}\)
* for all \(a\in\operatorname{dom}\left(\mathsf{L}_{D}\right)\): \[\mathsf{L}_{D}(a):=\left\|[D,a]\right\|_{\mathcal{H}},\]
then
\[\operatorname{metCor}\left(\mathfrak{A},\mathcal{H},D\right):=\left(\mathcal{H},\operatorname{DN}_{D},\mathfrak{A},\mathsf{L}_{D},\mathds{C},0\right)\]
is an example of a _metrical C*-correspondence_, in the following sense:
**Definition 3.5**.: An \(\mathfrak{A}\)-\(\mathfrak{B}\)-\(C^{*}\)_-correspondence_\((\mathcal{M},\mathfrak{A},\mathfrak{B})\), for two C\({}^{*}\)-algebras \(\mathfrak{A}\) and \(\mathfrak{B}\), is a right Hilbert module \(\mathcal{M}\) over \(\mathfrak{B}\) (whose \(\mathfrak{B}\)-valued inner product is denoted by \(\langle\cdot,\cdot\rangle_{\mathcal{M}}\)), together with a unital \(*\)-morphism from \(\mathfrak{A}\) to the C\({}^{*}\)-algebra of adjointable \(\mathfrak{B}\)-linear operators over \(\mathcal{M}\).
**Definition 3.6** ([47, Definition 2.2]).: An _\((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\)-metrical C*-correspondence_\((\mathcal{M},\operatorname{DN},\mathfrak{A},\mathsf{L},\mathfrak{B},\mathsf{S})\), where \(\Omega,\Omega_{\operatorname{inner}}\geqslant 1\), \(\Omega_{\operatorname{mod}}\geqslant 2\), and \(\Omega^{\prime}\geqslant 0\), is given by two \((\Omega,\Omega^{\prime})\)-quantum compact metric spaces \((\mathfrak{A},\mathsf{L})\) and \((\mathfrak{B},\mathsf{S})\), an \(\mathfrak{A}\)-\(\mathfrak{B}\) C\({}^{*}\)-correspondence \((\mathcal{M},\mathfrak{A},\mathfrak{B})\), and a norm \(\operatorname{DN}\) defined on a dense \(\mathbb{C}\)-subspace \(\operatorname{dom}\left(\operatorname{DN}\right)\) of \(\mathcal{M}\), such that
1. \(\forall\omega\in\operatorname{dom}\left(\operatorname{DN}\right)\quad\operatorname{DN}(\omega)\geqslant\left\|\omega\right\|_{\mathcal{M}}\coloneqq\sqrt{\left\|\langle\omega,\omega\rangle_{\mathcal{M}}\right\|_{\mathfrak{B}}}\),
2. \(\{\omega\in\operatorname{dom}\left(\operatorname{DN}\right):\operatorname{DN}( \omega)\leqslant 1\}\) is compact in \((\mathcal{M},\left\|\cdot\right\|_{\mathcal{M}})\),
3. for all \(a\in\operatorname{dom}\left(\mathsf{L}\right)\) and \(\omega\in\operatorname{dom}\left(\operatorname{DN}\right)\), \[\operatorname{DN}(a\omega)\leqslant\Omega_{\operatorname{mod}}(\left\|a\right\|_{\mathfrak{A}}+\mathsf{L}(a))\operatorname{DN}(\omega),\]
4. for all \(\omega,\eta\in\operatorname{dom}\left(\operatorname{DN}\right)\), \[\max\{\mathsf{S}(\Re\langle\omega,\eta\rangle_{\mathcal{M}}),\mathsf{S}( \Im\langle\omega,\eta\rangle_{\mathcal{M}})\}\leqslant\Omega_{\operatorname{ inner}}\operatorname{DN}(\omega)\operatorname{DN}(\eta).\]
The norm \(\operatorname{DN}\) is then called a _\(D\)-norm_.
**Convention 3.7**.: In this work, we _fix \(\Omega_{\operatorname{mod}}\geqslant 2\) and \(\Omega_{\operatorname{inner}}\geqslant 1\)_ throughout the paper. _All_ quantum compact metric spaces will be assumed to be in the class of \((\Omega,\Omega^{\prime})\)-quantum compact metric spaces and all metrical C\({}^{*}\)-correspondences will be assumed to be in the class of \((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\)-metrical C\({}^{*}\)-correspondences, unless otherwise specified.
Note that the compactness condition in Definition (3.6) borrows from, and extends, Theorem (2.9).
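As a quick sanity check that \(\operatorname{metCor}\left(\mathfrak{A},\mathcal{H},D\right)\) is indeed a metrical C\({}^{*}\)-correspondence, where \(\mathcal{H}\) is viewed as a right Hilbert \(\mathds{C}\)-module and the second seminorm is \(0\): condition (1) holds since \(\operatorname{DN}_{D}(\xi)=\|\xi\|_{\mathcal{H}}+\|D\xi\|_{\mathcal{H}}\geqslant\|\xi\|_{\mathcal{H}}\); condition (2) is the compactness of the graph-norm unit ball of \(D\), which follows from the compact resolvent of \(D\); condition (4) is trivial since the seminorm on \(\mathds{C}\) is \(0\); and for condition (3), if \(a\in\operatorname{dom}\left(\mathsf{L}_{D}\right)\) and \(\xi\in\operatorname{dom}\left(D\right)\), then \(D(a\xi)=[D,a]\xi+aD\xi\), so
\[\operatorname{DN}_{D}(a\xi)\leqslant\|a\|_{\mathfrak{A}}\|\xi\|_{\mathcal{H}}+\mathsf{L}_{D}(a)\|\xi\|_{\mathcal{H}}+\|a\|_{\mathfrak{A}}\|D\xi\|_{\mathcal{H}}\leqslant\left(\|a\|_{\mathfrak{A}}+\mathsf{L}_{D}(a)\right)\operatorname{DN}_{D}(\xi),\]
which is bounded above by \(\Omega_{\operatorname{mod}}(\|a\|_{\mathfrak{A}}+\mathsf{L}_{D}(a))\operatorname{DN}_{D}(\xi)\) as \(\Omega_{\operatorname{mod}}\geqslant 2\).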
The importance of Definition (3.6) is that one can extend the propinquity to metrical C\({}^{*}\)-correspondences as follows. First, we employ a natural notion of morphism between metrical C\({}^{*}\)-correspondences.
**Definition 3.8** ([47, Definition 2.13]).: For each \(j\in\{1,2\}\), let
\[\mathbb{M}_{j}=\left(\mathcal{M}_{j},\mathsf{DN}_{j},\mathfrak{A}_{j},\mathsf{ L}_{j},\mathfrak{B}_{j},\mathsf{S}_{j}\right)\]
be a metrical C\({}^{*}\)-correspondence.
A _metrical quantum isometry_\((\Pi,\pi,\theta)\) from \(\mathbb{M}_{1}\) to \(\mathbb{M}_{2}\) is given by:
1. a continuous, surjective \(\mathds{C}\)-linear map \(\Pi:\mathcal{M}_{1}\to\mathcal{M}_{2}\),
2. a quantum isometry \(\pi:(\mathfrak{A}_{1},\mathsf{L}_{1})\to(\mathfrak{A}_{2},\mathsf{L}_{2})\),
3. a quantum isometry \(\theta:(\mathfrak{B}_{1},\mathsf{S}_{1})\to(\mathfrak{B}_{2},\mathsf{S}_{2})\),
such that
1. \(\forall a\in\mathfrak{A}_{1}\quad\forall\omega\in\mathcal{M}_{1}\quad\Pi(a\omega)=\pi(a)\Pi(\omega)\),
2. \(\forall b\in\mathfrak{B}_{1}\quad\forall\omega\in\mathcal{M}_{1}\quad\Pi(\omega\cdot b)=\Pi(\omega)\theta(b)\),
3. \(\forall\omega,\eta\in\mathcal{M}_{1}\quad\theta(\langle\omega,\eta\rangle_{ \mathcal{M}_{1}})=\langle\Pi(\omega),\Pi(\eta)\rangle_{\mathcal{M}_{2}}\),
4. \(\Pi(\operatorname{dom}(\mathsf{DN}_{1}))\subseteq\operatorname{dom}(\mathsf{DN}_{2})\) and, for all \(\omega\in\operatorname{dom}(\mathsf{DN}_{2})\), the equality \(\mathsf{DN}_{2}(\omega)=\inf\{\mathsf{DN}_{1}(\eta):\eta\in\operatorname{dom}(\mathsf{DN}_{1}),\Pi(\eta)=\omega\}\) holds.
The _metrical propinquity_, a distance between metrical C\({}^{*}\)-correspondences, relies on a notion of isometric embedding called a tunnel, defined as follows.
**Definition 3.9** ([47, Definition 2.19]).: Let \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) be two metrical C\({}^{*}\)-correspondences. A _(metrical) tunnel_\(\tau=(\mathbb{J},\Pi_{1},\Pi_{2})\) from \(\mathbb{M}_{1}\) to \(\mathbb{M}_{2}\) is a triple given by a metrical C\({}^{*}\)-correspondence \(\mathbb{J}\), and for each \(j\in\{1,2\}\), a metrical quantum isometry \(\Pi_{j}:\mathbb{J}\to\mathbb{M}_{j}\).
_Remark 3.10_.: It is important to note that our tunnels involve \((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\)-C\({}^{*}\)-metrical correspondences only (as per Convention (3.7)). We will dispense with calling our tunnels \((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\)-tunnels, to keep our notation simple, but it should be stressed that _fixing_\((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\) and staying within the class of \((\Omega,\Omega^{\prime},\Omega_{\operatorname{mod}},\Omega_{\operatorname{inner}})\)-C\({}^{*}\)-metrical correspondences is crucial to obtain a metric from tunnels.
We now proceed by defining the extent of a metrical tunnel; remarkably this only involves our previous notion of extent of a tunnel between quantum compact metric spaces.
**Definition 3.11** ([47, Definition 2.21]).: Let \(\mathbb{M}_{j}=(\mathcal{M}_{j},\mathsf{DN}_{j},\mathfrak{A}_{j},\mathsf{L}_{j},\mathfrak{B}_{j},\mathsf{S}_{j})\) be a metrical C\({}^{*}\)-correspondence, for each \(j\in\{1,2\}\). Let \(\tau=(\mathbb{P},(\Pi_{1},\pi_{1},\theta_{1}),(\Pi_{2},\pi_{2},\theta_{2}))\) be a metrical tunnel from \(\mathbb{M}_{1}\) to \(\mathbb{M}_{2}\), with \(\mathbb{P}=(\mathcal{P},\mathsf{TN},\mathfrak{D},\mathsf{L}_{\mathfrak{D}},\mathfrak{E},\mathsf{L}_{\mathfrak{E}})\).
The _extent_\(\chi(\tau)\) of a metrical tunnel \(\tau\) is
\[\chi(\tau)\coloneqq\max\left\{\chi(\mathfrak{D},\mathsf{L}_{\mathfrak{D}},\pi_{1},\pi_{2}),\chi(\mathfrak{E},\mathsf{L}_{\mathfrak{E}},\theta_{1},\theta_{2})\right\}.\]
Given two metric spectral triples, we can thus either take the Gromov-Hausdorff distance between their underlying quantum compact metric spaces, or take the metrical propinquity [42, 46] between the metrical C\({}^{*}\)-correspondences they define, which is defined as the infimum of the extents of all possible metrical tunnels between them. However, the spectral propinquity involves our work on the geometry of quantum dynamics [43, 44, 47] as well. We recall the construction of the spectral propinquity; the new
quantity called the \(\varepsilon\)-magnitude was introduced in [47, Definition 3.31], but is simpler to express for spectral triples, based on [31].
**Definition 3.12** ([31, Theorem 3.6]).: Let \((\mathfrak{A}_{1},\mathcal{H}_{1},D_{1})\) and \((\mathfrak{A}_{2},\mathcal{H}_{2},D_{2})\) be two metric spectral triples. Let
\[\tau:=\Big(\underbrace{(\mathcal{P},\mathsf{TN},\mathfrak{D},\mathsf{L}_{\mathfrak{D}},\mathfrak{E},\mathsf{S})}_{\text{metrical C}^{*}\text{-correspondence}}\ ,\ \underbrace{(\Pi_{1},\pi_{1},\theta_{1})}_{\text{metrical quantum isometry}}\ ,\ \underbrace{(\Pi_{2},\pi_{2},\theta_{2})}_{\text{metrical quantum isometry}}\Big)\]
be a metrical tunnel from \(\operatorname{metCor}\left(\mathfrak{A}_{1},\mathcal{H}_{1},D_{1}\right)\) to \(\operatorname{metCor}\left(\mathfrak{A}_{2},\mathcal{H}_{2},D_{2}\right)\). We define the \(\varepsilon\)_-magnitude_ \(\mu(\tau|\varepsilon)\) of \(\tau\) as the maximum of the extent \(\chi(\tau)\) of \(\tau\) and the \(\varepsilon\)_-reach_ of \(\tau\), which is the number:
\[\sup_{\begin{subarray}{c}\xi\in\operatorname{dom}(D_{j})\\ \mathsf{DN}_{j}(\xi)\leqslant 1\end{subarray}}\ \inf_{\begin{subarray}{c}\eta\in\operatorname{dom}(D_{k})\\ \mathsf{DN}_{k}(\eta)\leqslant 1\end{subarray}}\ \sup_{\begin{subarray}{c}\omega\in\operatorname{dom}(\mathsf{TN})\\ \mathsf{TN}(\omega)\leqslant 1\end{subarray}}\left|\langle\exp(itD_{j})\xi,\Pi_{j}(\omega)\rangle_{\mathcal{H}_{j}}-\langle\exp(itD_{k})\eta,\Pi_{k}(\omega)\rangle_{\mathcal{H}_{k}}\right|, \tag{3.2}\]
for \(\{j,k\}=\{1,2\}\).
**Definition 3.13** ([47, Definition 4.2]).: The _spectral propinquity_ between two metric spectral triples \((\mathfrak{A}_{1},\mathcal{H}_{1},D_{1})\) and \((\mathfrak{A}_{2},\mathcal{H}_{2},D_{2})\) is
\[\Lambda^{\mathsf{spec}}((\mathfrak{A}_{1},\mathcal{H}_{1},D_{1}),(\mathfrak{A}_{2},\mathcal{H}_{2},D_{2}))\coloneqq\inf\left(\left\{\tfrac{\sqrt{2}}{2}\right\}\cup\left\{\varepsilon>0:\mu(\tau|\varepsilon)<\varepsilon\text{ for some metrical tunnel }\tau\text{ from }\operatorname{metCor}\left(\mathfrak{A}_{1},\mathcal{H}_{1},D_{1}\right)\text{ to }\operatorname{metCor}\left(\mathfrak{A}_{2},\mathcal{H}_{2},D_{2}\right)\right\}\right).\]
The key property of the spectral propinquity is that, for any two metric spectral triples \((\mathfrak{A}_{1},\mathcal{H}_{1},D_{1})\) and \((\mathfrak{A}_{2},\mathcal{H}_{2},D_{2})\), we have the following equivalence:
\[\Lambda^{\mathsf{spec}}((\mathfrak{A}_{1},\mathcal{H}_{1},D_{1}),(\mathfrak{A }_{2},\mathcal{H}_{2},D_{2}))=0\]
if, and only if, there exists a unitary \(U:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that
* \(U\mathrm{dom}\left(D_{1}\right)=\mathrm{dom}\left(D_{2}\right)\),
* \(UD_{1}=D_{2}U\) on \(\mathrm{dom}\left(D_{1}\right)\),
* \(a\in\mathfrak{A}_{1}\mapsto UaU^{*}\) is a \(*\)-isomorphism from \(\mathfrak{A}_{1}\) onto \(\mathfrak{A}_{2}\).
A nontrivial example of convergence in the sense of the spectral propinquity is provided in [45] with the approximation of spectral triples on quantum tori by spectral triples on certain matrix algebras known as fuzzy tori. This covers many convergences previously stated only informally in mathematical physics, dealing with matrix models and their limits as the dimension of the algebra grows to infinity; such examples are a major motivation for the construction of the spectral propinquity. Another example, on fractals, is presented in [29]. Moreover, convergence for the spectral propinquity implies convergence of the spectra of the Dirac operators and, in an appropriate sense, convergence of the bounded functional calculi of these operators, among other properties. Of course, convergence for the spectral propinquity implies convergence of the underlying quantum compact metric spaces for the propinquity. In this paper, we will construct new examples of convergence for new spectral triples defined over noncommutative solenoids and over Bunce-Deddens algebras, seen as limits of spectral triples.
### Preliminaries: Inductive Limits of Spectral Triples
While the spectral propinquity allows the discussion of convergence of spectral triples defined on vastly different C*-algebras, there are certain more restricted situations where the C*-algebras of a sequence of spectral triples may be related in a manner compatible with the spectral triples. In [20], a simple notion of inductive limit for spectral triples is introduced, based on the following encoding of such a compatibility via a natural, and rigid, notion of morphism between spectral triples.
**Definition 3.14** ([20]).: An isometric morphism \((\pi,S)\) from \((\mathfrak{A}_{1},\mathcal{H}_{1},\mathit{D}_{1})\) to \((\mathfrak{A}_{2},\mathcal{H}_{2},\mathit{D}_{2})\) is given by a unital \(*\)-morphism \(\pi:\mathfrak{A}_{1}\to\mathfrak{A}_{2}\) and a linear isometry \(S:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that:
1. \(\pi(\operatorname{dom}\left(\mathsf{L}_{1}\right))\subseteq\operatorname{dom }\left(\mathsf{L}_{2}\right)\),
2. \(S\,\operatorname{dom}\left(D_{1}\right)\subseteq\operatorname{dom}\left(D_{2}\right)\) and \(SD_{1}=D_{2}S\) on \(\operatorname{dom}\left(D_{1}\right)\),
3. \(\forall a\in\mathfrak{A}_{1}\quad Sa=\pi(a)S\).
Since \(S\) is a linear isometry, \(\mathcal{H}_{1}\) can be identified with the closed subspace \(S\mathcal{H}_{1}\) of \(\mathcal{H}_{2}\) via \(S\) at no cost in our definition. In that case, \(\mathit{D}_{1}\) is only defined on \(\mathcal{H}_{1}\subseteq\mathcal{H}_{2}\), and we simply require that \(\mathit{D}_{1}\) is the restriction of \(\mathit{D}_{2}\) to \(\operatorname{dom}\left(\mathit{D}_{1}\right)\).
We also note that if \(\pi(a)=0\) for some \(a\in\mathfrak{A}_{1}\), then \(\pi(a)S=Sa=0\). Since \(S\) is an isometry, \(a=0\). So \(\pi\) is actually automatically a \(*\)-monomorphism, and we thus can also identify \(\mathfrak{A}_{1}\) with the C*-subalgebra \(\pi(\mathfrak{A}_{1})\) of \(\mathfrak{A}_{2}\), since Definition (3.14) ensures that \(a\mathcal{H}_{1}\subseteq\mathcal{H}_{1}\) and \([D_{1},a]\) is identified with \(P[D_{2},\pi(a)]P=P[D_{2},\pi(a)]=[D_{2},\pi(a)]P\), where \(P\) is the orthogonal projection of \(\mathcal{H}_{2}\) onto \(\mathcal{H}_{1}\). Furthermore, since \(\pi\) is unital, the unit of \(\mathfrak{A}_{2}\) is contained in \(\mathfrak{A}_{1}\) with this identification.
An inductive sequence of spectral triples, as defined in [20], with a somewhat more involved notation, is simply a sequence of the form \(((\mathfrak{A}_{n},\mathcal{H}_{n},\mathit{D}_{n}),(\pi_{n},S_{n}))_{n\in \mathds{N}}\) where \((\mathfrak{A}_{n},\mathcal{H}_{n},\mathit{D}_{n})\) is a spectral triple and \((\pi_{n},S_{n})\) is an isometric morphism from \((\mathfrak{A}_{n},\mathcal{H}_{n},\mathit{D}_{n})\) to \((\mathfrak{A}_{n+1},\mathcal{H}_{n+1},\mathit{D}_{n+1})\), for each \(n\in\mathds{N}\). As we have seen above, we can identify such a sequence with one of the following type, which we will take as our notion of inductive limit of spectral triples.
**Definition 3.15**.: Let \(\mathfrak{A}_{\infty}=\operatorname{cl}\left(\bigcup_{n\in\mathds{N}} \mathfrak{A}_{n}\right)\) be a C*-algebra which is the closure of an increasing sequence of C*-subalgebras \((\mathfrak{A}_{n})_{n\in\mathds{N}}\) in \(\mathfrak{A}_{\infty}\), with the unit of \(\mathfrak{A}_{\infty}\) in \(\mathfrak{A}_{0}\). A spectral triple \((\mathfrak{A}_{\infty},\mathcal{H}_{\infty},\mathit{D}_{\infty})\) is the inductive limit of a sequence \((\mathfrak{A}_{n},\mathcal{H}_{n},\mathit{D}_{n})_{n\in\mathds{N}}\) of spectral triples when:
1. \(\mathcal{H}_{\infty}=\operatorname{cl}\left(\bigcup_{n\in\mathds{N}}\mathcal{H}_{n}\right)\), where each \(\mathcal{H}_{n}\) is a Hilbert subspace of \(\mathcal{H}_{\infty}\),
2. for each \(n\in\mathds{N}\), the restriction of \(\mathit{D}_{\infty}\) to \(\operatorname{dom}\left(\mathit{D}_{n}\right)\) is \(\mathit{D}_{n}\),
3. for each \(n\in\mathds{N}\), the subspace \(\mathcal{H}_{n}\) is reducing for \(\mathfrak{A}_{n}\), which is equivalent to \(\mathfrak{A}_{n}\mathcal{H}_{n}\subseteq\mathcal{H}_{n}\).
We note, using the notation of Definition (3.15), that the operator which, to any \(\xi\in\bigcup_{n\in\mathds{N}}\operatorname{dom}\left(\mathit{D}_{n}\right)\), associates \(\mathit{D}_{n}\xi\) whenever \(\xi\in\operatorname{dom}\left(\mathit{D}_{n}\right)\) for any \(n\in\mathds{N}\), is indeed well-defined, and shown in [20] to be essentially self-adjoint, so \(\mathit{D}_{\infty}\) is the closure of this operator.
For our purpose, the following result from [20] will play an important role.
**Theorem 3.16** ([20, Theorem 3.1, partial]).: _If \((\mathfrak{A}_{n},\mathcal{H}_{n},D_{n})_{n\in\mathds{N}}\) is an inductive sequence of spectral triples converging to a spectral triple \((\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty})\), then for any continuous \(\mathds{C}\)-valued function \(f\in C_{0}(\mathds{R})\) vanishing at infinity, the sequence \((P_{n}f(D_{n})P_{n})_{n\in\mathds{N}}\) converges to \(f(D_{\infty})\) in norm, where \(P_{n}\) denotes the orthogonal projection of \(\mathcal{H}_{\infty}\) onto \(\mathcal{H}_{n}\)._
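To make Theorem (3.16) concrete, here is a minimal numerical sketch (our illustration, not taken from [20]): take \(D_{\infty}\) diagonal with eigenvalues \(0,1,2,\ldots\) on \(\ell^{2}(\mathds{N})\), \(\mathcal{H}_{n}\) the span of the first \(n+1\) basis vectors, and \(D_{n}\) the restriction of \(D_{\infty}\); the operator-norm error \(\|P_{n}f(D_{n})P_{n}-f(D_{\infty})\|\) equals \(\max_{k>n}|f(k)|\), which vanishes as \(n\to\infty\) precisely because \(f\) vanishes at infinity.

```python
# Minimal sketch of Theorem 3.16 (our illustration): D_infty = diag(0, 1, 2, ...) on
# l^2(N), H_n = span{e_0, ..., e_n}, D_n the restriction of D_infty to H_n.
# We truncate l^2(N) to a large finite dimension for the numerical check.
import numpy as np

N = 200                                  # ambient truncation of l^2(N)
eigs = np.arange(N, dtype=float)         # (truncated) spectrum of D_infty
f = lambda x: 1.0 / (1.0 + x**2)         # an f in C_0(R)

fD_inf = np.diag(f(eigs))                # f(D_infty), computed spectrally
for n in (5, 20, 80):
    fDn = np.diag(np.where(eigs <= n, f(eigs), 0.0))  # P_n f(D_n) P_n
    print(n, np.linalg.norm(fD_inf - fDn, 2))         # error = f(n+1) -> 0
```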
This section is concerned with the question: if a spectral triple is an inductive limit of spectral triples, then what additional assumptions should be made to get a more
geometric convergence, specifically in the sense of the spectral propinquity? In order to make sense of this question, we will work with metric spectral triples, which give rise to quantum compact metric spaces, and lie within the realm of noncommutative metric geometry.
### Main result
The notion of inductive limit of spectral triples is simpler to define than the spectral propinquity, but it only applies to rather narrow examples -- it is not applicable to fuzzy and quantum tori [45] or the fractals in [29]. It is certainly interesting to wonder how much metric information from the spectral triples is continuous with respect to the inductive limit process. In this section, we establish a sufficient condition for the convergence, in the sense of the spectral propinquity, of a sequence of metric spectral triples which already converges to a metric spectral triple in the categorical sense. This sufficient condition is simply the existence of an appropriate bridge builder which is also a full quantum isometry. Thus, the main difficulty in establishing convergence for the spectral propinquity, in this context, reduces to proving metric convergence for the propinquity using adequate tunnels.
**Theorem 3.17**.: _Let \((\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty})\) be a metric spectral triple which is the inductive limit of a sequence of metric spectral triples \((\mathfrak{A}_{n},\mathcal{H}_{n},D_{n})\), in the sense of Definition (3.15). For each \(n\in\overline{\mathbb{N}}\), let_
\[\operatorname{dom}\left(\mathsf{L}_{n}\right)\coloneqq\left\{a\in\mathfrak{sa}\left(\mathfrak{A}_{n}\right):a\operatorname{dom}\left(D_{n}\right)\subseteq\operatorname{dom}\left(D_{n}\right)\text{ and }[D_{n},a]\text{ is bounded}\right\},\]
_and for all \(a\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\), define_
\[\mathsf{L}_{n}(a)\coloneqq\left\lVert\left[D_{n},a\right]\right\rVert_{ \mathcal{H}_{n}}.\]
_If there exists a full quantum isometry \(\pi:(\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\to(\mathfrak{A}_{\infty}, \mathsf{L}_{\infty})\) which is also a bridge builder for \(((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}},(\mathfrak{A}_{\infty}, \mathsf{L}_{\infty}))\), then_
\[\lim_{n\to\infty}\Lambda^{\mathsf{spec}}((\mathfrak{A}_{n},\mathcal{H}_{n},D_ {n}),(\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty}))=0.\]
Proof.: Fix \(\varepsilon>0\). By Proposition (2.21), the sequence \((\mathfrak{A}_{n},\mathsf{L}_{n})_{n\in\mathbb{N}}\) converges to \((\mathfrak{A}_{\infty},\mathsf{L}_{\infty})\) for the propinquity. More specifically, set, for convenience, \(\tilde{\varepsilon}=\frac{\varepsilon}{2}>0\). Let \(N_{\pi}\in\mathbb{N}\) be given so that, for all \(n\geqslant N_{\pi}\), we have:
* \(\forall a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\quad\exists b \in\operatorname{dom}\left(\mathsf{L}_{n}\right):\quad\mathsf{L}_{n}(b) \leqslant\mathsf{L}_{\infty}(a)\) and \(\left\lVert\pi(a)-b\right\rVert_{\mathfrak{A}_{\infty}}<\tilde{\varepsilon} \mathsf{L}_{\infty}(a)\),
* \(\forall b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\quad\exists a\in \operatorname{dom}\left(\mathsf{L}_{\infty}\right):\quad\mathsf{L}_{\infty}(a) \leqslant\mathsf{L}_{n}(b)\) and \(\left\lVert\pi(a)-b\right\rVert_{\mathfrak{A}_{\infty}}<\tilde{\varepsilon} \mathsf{L}_{n}(b)\).
For each \(n\in\mathbb{N}\), we constructed in Proposition (2.21) a tunnel \(\tau_{n}=(\mathfrak{D}_{n},\mathsf{T}_{n},\psi_{n},\theta_{n})\) with \(\mathfrak{D}_{n}=\mathfrak{A}_{\infty}\oplus\mathfrak{A}_{n}\), and for all \((a,b)\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\oplus \operatorname{dom}\left(\mathsf{L}_{n}\right)\),
\[\mathsf{T}_{n}(a,b)\coloneqq\max\left\{\mathsf{L}_{\infty}(a),\mathsf{L}_{n}(b ),\frac{1}{\tilde{\varepsilon}}\left\lVert\pi(a)-b\right\rVert_{\mathfrak{A}_ {\infty}}\right\},\]
while \(\psi_{n}:(a,b)\in\mathfrak{D}_{n}\mapsto a\), \(\theta_{n}:(a,b)\in\mathfrak{D}_{n}\mapsto b\). We proved that \(\chi(\tau_{n})<\tilde{\varepsilon}\). It is immediate, since \(\pi\) is a full quantum isometry, that \(\tau_{n}^{\prime}\coloneqq(\mathfrak{D}_{n},\mathsf{T}_{n},\pi\circ\psi_{n}, \theta_{n})\) is also a tunnel with the same extent as \(\tau_{n}\).
For each \(n\in\overline{\mathbb{N}}\) and for all \(\xi\in\operatorname{dom}\left(D_{n}\right)\), we define
\[\mathsf{D}\mathsf{N}_{n}(\xi)\coloneqq\left\lVert\xi\right\rVert_{\mathcal{H }_{n}}+\left\lVert D_{n}\xi\right\rVert_{\mathcal{H}_{n}},\]
following Expression (3.1).
Now, since \(\mathsf{DN}_{\infty}\) is a D-norm, the set \(X_{\infty}=\left\{\xi\in\operatorname{dom}\left(D_{\infty}\right):\mathsf{DN}_{\infty}(\xi)\leqslant 1\right\}\) is compact in \(\mathcal{H}_{\infty}\). Thus, there exists a finite subset \(F\subseteq X_{\infty}\) such that \(\mathsf{Haus}[\mathcal{H}_{\infty}]\left(X_{\infty},F\right)<\frac{\tilde{\varepsilon}}{3}\). As \(D_{\infty}\) is the closure of an operator defined on \(\bigcup_{n\in\mathbb{N}}\operatorname{dom}\left(D_{n}\right)\) by [20], for any \(\xi\in F\), there exists a sequence \((\xi_{n})_{n\in\mathbb{N}}\), with \(\xi_{n}\in\bigcup_{j\in\mathbb{N}}\operatorname{dom}\left(D_{j}\right)\) for all \(n\in\mathbb{N}\), such that \(\lim_{n\to\infty}\xi_{n}=\xi\), and
\(\lim_{n\to\infty}D_{\infty}\xi_{n}=D_{\infty}\xi\). Since \(F\) is finite, there exists \(N_{F}\in\mathbb{N}\) such that if \(n\geqslant N_{F}\) and \(\xi\in F\), then \(\|\xi-\xi_{n}\|_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{3}\) and \(\|D_{\infty}\xi-D_{\infty}\xi_{n}\|_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{3}\). Again by Definition (3.15), we also have \(D_{\infty}\xi_{n}=D_{m}\xi_{n}\) whenever \(\xi_{n}\in\operatorname{dom}\left(D_{m}\right)\).
Fix \(n\in\mathbb{N}\), \(n\geqslant N\coloneqq\max\{N_{\pi},N_{F}\}\). Let \(\mathcal{M}_{n}\coloneqq\mathcal{H}_{\infty}\oplus\mathcal{H}_{n}\), seen as a \(\mathfrak{D}_{n}\)-\((\mathbb{C}\oplus\mathbb{C})\) \(\mathrm{C}^{*}\)-correspondence, with the \(\mathrm{C}^{*}\)-correspondence structure:
\[\forall(a,b)\in\mathfrak{D}_{n}\quad\forall(\xi,\eta)\in\mathcal{M}_{n}\quad( a,b)\triangleleft(\xi,\eta)\coloneqq(\pi(a)\xi,b\eta),\]
and
\[\forall(\xi,\eta),(\xi^{\prime},\eta^{\prime})\in\mathcal{M}_{n}\quad\langle(\xi,\eta),(\xi^{\prime},\eta^{\prime})\rangle_{n}\coloneqq\left(\langle\xi,\xi^{\prime}\rangle_{\mathcal{H}_{\infty}},\langle\eta,\eta^{\prime}\rangle_{\mathcal{H}_{n}}\right)\in\mathbb{C}\oplus\mathbb{C},\]
while
\[\forall(t,s)\in\mathbb{C}\oplus\mathbb{C}\quad\forall(\xi,\eta)\in\mathcal{M} _{n}\quad(\xi,\eta)\cdot(t,s)\coloneqq(t\xi,s\eta).\]
We note that here, \(\mathbb{C}^{2}\) is the \(\mathrm{C}^{*}\)-algebra of \(\mathbb{C}\)-valued functions over a two-point set, and in particular, the norm of \((z,w)\in\mathbb{C}^{2}\) is \(\max\{|z|,|w|\}\).
We then define, for all \((\xi,\eta)\in\operatorname{dom}\left(\mathcal{D}_{\infty}\right)\oplus \operatorname{dom}\left(\mathcal{D}_{n}\right)\):
\[\mathsf{TN}_{n}(\xi,\eta)\coloneqq\max\left\{\mathsf{DN}_{\infty}(\xi),\mathsf{DN}_{n}(\eta),\frac{1}{\tilde{\varepsilon}}\left\|\xi-\eta\right\|_{\mathcal{H}_{\infty}}\right\},\]
while we also set
\[\mathsf{Q}:(z,w)\in\mathbb{C}\oplus\mathbb{C}\mapsto\frac{1}{\tilde{\varepsilon}}|z-w|.\]
It is immediate to see that \(\mathsf{Q}\) is a Lipschitz seminorm on \(\mathbb{C}\oplus\mathbb{C}\) (it is, in fact, the Lipschitz seminorm for the metric on the two-point set which places these two points exactly \(\tilde{\varepsilon}\) apart).
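For concreteness: a \(\mathbb{C}\)-valued function on the two-point set \(\{x,y\}\) is just a pair \((z,w)\in\mathbb{C}\oplus\mathbb{C}\), and if \(d(x,y)=\tilde{\varepsilon}\), then its Lipschitz constant is

\[\sup_{u\neq v}\frac{|f(u)-f(v)|}{d(u,v)}=\frac{|z-w|}{\tilde{\varepsilon}}=\mathsf{Q}(z,w),\]

which is exactly the identification just made.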
Now, we check that \(\operatorname{\mathsf{TN}}_{n}\) is a \(D\)-norm. Of course, for all \((\xi,\eta)\in\mathcal{M}_{n}\):
\[\operatorname{\mathsf{TN}}_{n}(\xi,\eta)\geq\max\{\operatorname{\mathsf{DN}}_ {\infty}(\xi),\operatorname{\mathsf{DN}}_{n}(\eta)\}\geq\max\left\{\left\|\xi \right\|_{\mathcal{H}_{\infty}},\left\|\eta\right\|_{\mathcal{H}_{n}}\right\} =\left\|(\xi,\eta)\right\|_{\mathcal{M}_{n}}.\]
We observe that
\[\{(\xi,\eta)\in\mathcal{M}_{n}: \operatorname{\mathsf{TN}}_{n}(\xi,\eta)\leq 1\}\subseteq\] \[\{\xi\in\operatorname{dom}\left(\mathcal{D}_{\infty}\right): \operatorname{\mathsf{DN}}_{\infty}(\xi)\leq 1\}\times\{\eta\in\operatorname{dom} \left(\mathcal{D}_{n}\right):\operatorname{\mathsf{DN}}_{n}(\eta)\leq 1\},\]
the latter set being compact as a product of two compact sets -- since \(\mathsf{DN}_{n}\) and \(\mathsf{DN}_{\infty}\) are indeed \(\mathrm{D}\)-norms. Since, in addition, \(\mathsf{TN}_{n}\) is lower semicontinuous over \(\mathcal{M}_{n}\) as the maximum of three lower semicontinuous functions over this space, the unit ball of \(\mathsf{TN}_{n}\) is indeed closed, hence compact, in \(\mathcal{M}_{n}\) (which is complete). We now check the Leibniz inequalities. If \((a,b)\in\operatorname{dom}\left(\mathsf{T}_{n}\right)\) and \((\xi,\eta)\in\operatorname{dom}\left(\mathsf{TN}_{n}\right)\), then we compute:
\[\begin{split}\left\|\pi(a)\xi-b\eta\right\|_{\mathcal{H}_{\infty}}&\leqslant\left\|\pi(a)-b\right\|_{\mathfrak{A}_{\infty}}\left\|\xi\right\|_{\mathcal{H}_{\infty}}+\left\|b\right\|_{\mathfrak{A}_{\infty}}\left\|\xi-\eta\right\|_{\mathcal{H}_{\infty}}\\&\leqslant\tilde{\varepsilon}\,\mathsf{T}_{n}(a,b)\,\mathsf{DN}_{\infty}(\xi)+\left\|(a,b)\right\|_{\mathfrak{D}_{n}}\,\tilde{\varepsilon}\,\mathsf{TN}_{n}(\xi,\eta)\\&\leqslant\tilde{\varepsilon}\left(\mathsf{T}_{n}(a,b)+\left\|(a,b)\right\|_{\mathfrak{D}_{n}}\right)\mathsf{TN}_{n}(\xi,\eta).\end{split}\]
From this, it follows that for all \((a,b)\in\operatorname{dom}\left(\top_{n}\right)\) and for all \((\xi,\eta)\in\operatorname{dom}\left(\operatorname{\mathsf{TN}}_{n}\right)\),
\[\mathsf{TN}_{n}((a,b)\triangleleft(\xi,\eta))\leqslant\left(\mathsf{T}_{n}(a,b)+\left\|(a,b)\right\|_{\mathfrak{D}_{n}}\right)\mathsf{TN}_{n}(\xi,\eta).\]
On the other hand, if \((\xi,\eta),(\xi^{\prime},\eta^{\prime})\in\operatorname{dom}(\mathsf{TN}_{n})\), we have:
\[\begin{split}\mathsf{Q}(\langle(\xi,\eta),(\xi^{\prime},\eta^{\prime})\rangle_{n})&=\frac{1}{\tilde{\varepsilon}}\left|\langle\xi,\xi^{\prime}\rangle_{\mathcal{H}_{\infty}}-\langle\eta,\eta^{\prime}\rangle_{\mathcal{H}_{n}}\right|\\&\leqslant\frac{1}{\tilde{\varepsilon}}\left(\left|\langle\xi-\eta,\xi^{\prime}\rangle_{\mathcal{H}_{\infty}}\right|+\left|\langle\eta,\xi^{\prime}-\eta^{\prime}\rangle_{\mathcal{H}_{\infty}}\right|\right)\\&\leqslant\mathsf{TN}_{n}(\xi,\eta)\left\|\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}+\left\|\eta\right\|_{\mathcal{H}_{\infty}}\mathsf{TN}_{n}(\xi^{\prime},\eta^{\prime})\\&\leqslant 2\,\mathsf{TN}_{n}(\xi,\eta)\,\mathsf{TN}_{n}(\xi^{\prime},\eta^{\prime}).\end{split}\]
We now define the maps:
\[\Pi_{n}:(\xi,\eta)\in\mathcal{M}_{n}\mapsto\xi\in\mathcal{H}_{\infty},\text{ and }\Theta_{n}:(\xi,\eta)\in\mathcal{M}_{n}\mapsto\eta\in\mathcal{H}_{n}.\]
Our goal is to show that
\[\Upsilon_{n}\coloneqq\left(\mathsf{M}_{n},(\Pi_{n},\pi\circ\psi_{n}),(\Theta_ {n},\theta_{n})\right)\text{ where }\mathsf{M}_{n}\coloneqq(\mathcal{M}_{n}, \mathsf{TN}_{n},\mathfrak{D}_{n},\mathsf{T}_{n},\mathbb{C}\oplus\mathbb{C}, \mathsf{Q})\]
is a metrical tunnel, using Definition (3.9).
By construction, \(\Pi_{n}((a,b)\triangleleft(\xi,\eta))=\pi(a)\xi=\pi\circ\psi_{n}(a,b)\,\Pi_{n}(\xi,\eta)\) and \(\Theta_{n}((a,b)\triangleleft(\xi,\eta))=b\eta=\theta_{n}(a,b)\,\Theta_{n}(\xi,\eta)\), for all \((a,b)\in\mathfrak{D}_{n}\) and \((\xi,\eta)\in\mathcal{M}_{n}\).
Now, let \(\xi\in\mathcal{H}_{\infty}\) with \(\mathsf{DN}_{\infty}(\xi)=1\). By construction of \(F\), there exists \(\xi^{\prime}\in F\) such that \(\left\|\xi-\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}\leqslant\frac{\tilde{\varepsilon}}{3}\). By our choice of \(N\), there exists \(\eta(=\xi^{\prime}_{n})\in\mathcal{H}_{n}\) such that \(\mathsf{DN}_{n}(\eta)\leqslant 1+\frac{\tilde{\varepsilon}}{3}\) and \(\left\|\xi^{\prime}-\eta\right\|_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{3}\). Let \(\chi=\frac{1}{1+\frac{\tilde{\varepsilon}}{3}}\eta\in\mathcal{H}_{n}\), so that \(\mathsf{DN}_{n}(\chi)\leqslant 1\). Moreover,
\[\begin{split}\left\|\xi-\chi\right\|_{\mathcal{H}_{\infty}}&\leqslant\left\|\xi-\eta\right\|_{\mathcal{H}_{\infty}}+\frac{\frac{\tilde{\varepsilon}}{3}}{1+\frac{\tilde{\varepsilon}}{3}}\left\|\eta\right\|_{\mathcal{H}_{\infty}}\\&\leqslant\left\|\xi-\eta\right\|_{\mathcal{H}_{\infty}}+\frac{\frac{\tilde{\varepsilon}}{3}}{1+\frac{\tilde{\varepsilon}}{3}}\mathsf{DN}_{n}(\eta)\\&\leqslant\left\|\xi-\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}+\left\|\xi^{\prime}-\eta\right\|_{\mathcal{H}_{\infty}}+\frac{\tilde{\varepsilon}}{3}\\&<\tilde{\varepsilon}.\end{split}\]
Thus \(\mathsf{TN}_{n}(\xi,\chi)=1\), and therefore, \((\Pi_{n},\pi\circ\psi_{n})\) is indeed a metrical quantum isometry.
Now, let \(\eta\in\operatorname{dom}\left(D_{n}\right)\). By construction, \(D_{\infty}\eta=D_{n}\eta\), so \(\mathsf{DN}_{\infty}(\eta)=\mathsf{DN}_{n}(\eta)\). Therefore, \(\mathsf{TN}_{n}(\eta,\eta)=\mathsf{DN}_{n}(\eta)\). Again, we conclude that \((\Theta_{n},\theta_{n})\) is a metrical quantum isometry as well.
It is immediate, of course, that the canonical surjections from \(\mathbb{C}\oplus\mathbb{C}\) to \(\mathbb{C}\) are quantum isometries -- the only Lipschitz seminorm on \(\mathbb{C}\) being the \(0\) function. So \(\Upsilon_{n}\) is a metrical tunnel.
We now compute the extent of \(\Upsilon_{n}\). It is, by Definition (3.11), the maximum of the extent of the tunnel \(\tau^{\prime}_{n}\), which is at most \(\tilde{\epsilon}\), and the extent of the tunnel \((\mathbb{C},0)\to(\mathbb{C}\oplus\mathbb{C},\mathsf{Q})\to(\mathbb{C},0)\), which is immediately computed to be \(\tilde{\epsilon}\). So the extent of \(\Upsilon_{n}\) is \(\tilde{\epsilon}\) as well.
Therefore, for all \(n\geqslant N\), we have
\[\Lambda^{\mathsf{met}}((\mathcal{H}_{n},\mathsf{DN}_{n},\mathfrak{A}_{n},\mathsf{L}_{n},\mathbb{C},0),(\mathcal{H}_{\infty},\mathsf{DN}_{\infty},\mathfrak{A}_{\infty},\mathsf{L}_{\infty},\mathbb{C},0))\leqslant\chi(\Upsilon_{n})=\tilde{\varepsilon}<\varepsilon,\]
and therefore,
\[\lim_{n\to\infty}\Lambda^{\mathsf{met}}((\mathcal{H}_{n},\mathsf{DN}_{n},\mathfrak{A}_{n},\mathsf{L}_{n},\mathbb{C},0),(\mathcal{H}_{\infty},\mathsf{DN}_{\infty},\mathfrak{A}_{\infty},\mathsf{L}_{\infty},\mathbb{C},0))=0.\]
It remains to compute an upper bound for the \(\varepsilon\)-reach of our tunnels \(\Upsilon_{n}\) (see Definition 3.12). We will once again use our finite set \(F\) with \(\mathsf{Haus}[\mathcal{H}_{\infty}]\left(F,X_{\infty}\right)<\frac{\tilde{\varepsilon}}{3}\), where \(X_{\infty}\) is the closed unit ball of \(\mathsf{DN}_{\infty}\).
Now, let \((v_{k})_{k\in\mathds{N}}\) be a sequence of continuous functions on \(\mathds{R}\) vanishing at \(\infty\), valued in \([0,1]\), and converging pointwise to \(1\) over \(\mathds{R}\). Therefore, \((v_{k}(D_{\infty}))_{k\in\mathds{N}}\) converges to the identity of \(\mathcal{H}_{\infty}\) in the strong operator topology. Since \(F\) is finite, there exists \(k\in\mathds{N}\) such that, for all \(\xi\in F\),
\[\|v_{k}(D_{\infty})\xi-\xi\|_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{12}. \tag{3.3}\]
We identify, from now on, \(D_{n}\) with the linear operator on \(\mathcal{H}_{\infty}\) whose restriction to \(\mathcal{H}_{n}\) is \(D_{n}\), and whose restriction to \(\mathcal{H}_{n}^{\perp}\) is \(0\); thus \(\operatorname{dom}\left(D_{n}\right)\) is replaced with \(\operatorname{dom}\left(D_{n}\right)\oplus\mathcal{H}_{n}^{\perp}\). We denote by \(P_{n}\) the orthogonal projection of \(\mathcal{H}_{\infty}\) onto \(\mathcal{H}_{n}\), so that \(P_{n}D_{n}=D_{n}P_{n}=D_{n}\) on \(\operatorname{dom}\left(D_{n}\right)\).
For each \(t\in[0,\infty)\), let \(u^{t}:s\in\mathds{R}\mapsto\exp(its)\), and for each \(n\in\overline{\mathds{N}}\), we denote \(u^{t}(D_{n})\) by \(U_{n}^{t}\).
Fix \(t\in\mathds{R}\). The function \(u^{t}v_{k}\) is continuous over \(\mathds{R}\) and vanishes at infinity. By Theorem (3.16), since \((\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty})\) is a spectral triple, and the inductive limit of the sequence \((\mathfrak{A}_{n},\mathcal{H}_{n},D_{n})_{n\in\mathds{N}}\) of spectral triples, the sequence of operators \((P_{n}u^{t}v_{k}(D_{n})P_{n})_{n\in\mathds{N}}\) converges in norm to \(u^{t}v_{k}(D_{\infty})\). Moreover, \(u^{t}v_{k}(D_{n})P_{n}=P_{n}u^{t}v_{k}(D_{n})P_{n}\) for all \(n\in\mathds{N}\) by construction.
Let \(F^{\prime}\) be a finite subset of the compact set \(\left[0,\frac{1}{\varepsilon}\right]\) such that \(\mathsf{Haus}[\mathds{R}]\left(F^{\prime},\left[0,\frac{1}{\varepsilon} \right]\right)<\frac{\tilde{\varepsilon}}{12}\). Since \(F^{\prime}\) is finite, there exists \(N_{\nu}\in\mathds{N}\) such that if \(n\geqslant N_{\nu}\), then for all \(t\in F^{\prime}\):
\[\big{\|}U_{n}^{t}v_{k}(D_{n})P_{n}-U_{\infty}^{t}v_{k}(D_{\infty})\big{\|}_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{12}. \tag{3.4}\]
Let \(n\in\overline{\mathds{N}}\). Now, we note that if \(\xi\in\operatorname{dom}\left(\mathsf{DN}_{n}\right)\) with \(\mathsf{DN}_{n}(\xi)\leqslant 1\), then for all \(s<t\in\mathds{R}\):
\[\begin{split}\left\|U_{n}^{t}\xi-U_{n}^{s}\xi\right\|_{\mathcal{H}_{n}}&\leqslant\int_{s}^{t}\left\|\frac{d}{dr}U_{n}^{r}\xi\right\|_{\mathcal{H}_{n}}dr\\&\leqslant\int_{s}^{t}\left\|U_{n}^{r}D_{n}\xi\right\|_{\mathcal{H}_{n}}dr\\&\leqslant|s-t|.\end{split}\]
Thus, for all \(s,t\in\mathds{R}\) and \(\xi\in\operatorname{dom}\left(\mathsf{DN}_{n}\right)\) with \(\mathsf{DN}_{n}(\xi)\leqslant 1\), we have
\[\left\|U_{n}^{t}\xi-U_{n}^{s}\xi\right\|_{\mathcal{H}_{n}}\leqslant|s-t|.\]
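This estimate is easy to test numerically; the following is a small sketch (our illustration) with a random finite-dimensional hermitian matrix standing in for \(D_{n}\): on the \(\mathsf{DN}_{n}\)-unit ball, the orbit \(t\mapsto U_{n}^{t}\xi\) is \(1\)-Lipschitz.

```python
# Sketch (our illustration): for hermitian D and ||xi|| + ||D xi|| <= 1,
# the orbit t -> exp(itD) xi is 1-Lipschitz in t, as in the estimate above.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
D = (A + A.conj().T) / 2                   # a stand-in "Dirac" operator
w, V = np.linalg.eigh(D)
U = lambda t: V @ np.diag(np.exp(1j * t * w)) @ V.conj().T   # exp(itD)

xi = rng.normal(size=6) + 1j * rng.normal(size=6)
xi /= np.linalg.norm(xi) + np.linalg.norm(D @ xi)   # now DN(xi) = 1
for s, t in ((0.0, 0.3), (1.0, 2.5), (-0.7, 0.1)):
    assert np.linalg.norm(U(t) @ xi - U(s) @ xi) <= abs(t - s) + 1e-12
print("orbit is 1-Lipschitz on the DN-unit ball")
```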
Now, let \(n\geqslant N^{\prime}\coloneqq\max\{N_{\nu},N_{F}\}\). Since \(D_{n}\) and \(P_{n}\) commute, if \(\xi\in X_{\infty}\), then \(\mathsf{DN}_{n}(\xi)\leqslant\mathsf{DN}_{\infty}(\xi)\) and:
\[\begin{split}\mathsf{DN}_{n}(v_{k}(D_{n})P_{n}\xi)&=\|v_{k}(D_{n})P_{n}\xi\|_{\mathcal{H}_{\infty}}+\|D_{n}v_{k}(D_{n})P_{n}\xi\|_{\mathcal{H}_{\infty}}\\&\leqslant\|v_{k}\|_{C_{0}(\mathds{R})}\,\mathsf{DN}_{n}(\xi)\leqslant 1.\end{split}\]
For all \(\xi\in X_{\infty}\) and \(t\in\left[0,\frac{1}{\varepsilon}\right]\), let \(s\in F^{\prime}\) and \(\xi^{\prime}\in F\) be such that \(|s-t|<\frac{\tilde{\varepsilon}}{12}\) and \(\left\|\xi-\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{3}\); then:
\[\begin{split}\left\|U_{n}^{t}v_{k}(D_{n})P_{n}\xi-U_{\infty}^{t}\xi\right\|_{\mathcal{H}_{\infty}}\leqslant{}&\left\|U_{n}^{t}v_{k}(D_{n})P_{n}\xi-U_{n}^{s}v_{k}(D_{n})P_{n}\xi\right\|_{\mathcal{H}_{\infty}}\\&+\left\|U_{n}^{s}v_{k}(D_{n})P_{n}\xi-U_{\infty}^{s}v_{k}(D_{\infty})\xi\right\|_{\mathcal{H}_{\infty}}\\&+\left\|U_{\infty}^{s}v_{k}(D_{\infty})\xi-U_{\infty}^{s}\xi\right\|_{\mathcal{H}_{\infty}}+\left\|U_{\infty}^{s}\xi-U_{\infty}^{t}\xi\right\|_{\mathcal{H}_{\infty}}.\end{split}\]

The first term is at most \(|t-s|<\frac{\tilde{\varepsilon}}{12}\), since \(\mathsf{DN}_{n}(v_{k}(D_{n})P_{n}\xi)\leqslant 1\); the second term is at most \(\big{\|}U_{n}^{s}v_{k}(D_{n})P_{n}-U_{\infty}^{s}v_{k}(D_{\infty})\big{\|}_{\mathcal{H}_{\infty}}<\frac{\tilde{\varepsilon}}{12}\) by Equation (3.4); the fourth term is at most \(|s-t|<\frac{\tilde{\varepsilon}}{12}\). For the third term, since \(\|v_{k}\|_{C_{0}(\mathds{R})}\leqslant 1\), Equation (3.3) yields

\[\left\|v_{k}(D_{\infty})\xi-\xi\right\|_{\mathcal{H}_{\infty}}\leqslant 2\left\|\xi-\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}+\left\|v_{k}(D_{\infty})\xi^{\prime}-\xi^{\prime}\right\|_{\mathcal{H}_{\infty}}<\frac{2\tilde{\varepsilon}}{3}+\frac{\tilde{\varepsilon}}{12}.\]

Altogether, \(\left\|U_{n}^{t}v_{k}(D_{n})P_{n}\xi-U_{\infty}^{t}\xi\right\|_{\mathcal{H}_{\infty}}<4\cdot\frac{\tilde{\varepsilon}}{12}+\frac{2\tilde{\varepsilon}}{3}=\tilde{\varepsilon}\). Thus, for every \(\xi\in X_{\infty}\), the vector \(\eta\coloneqq v_{k}(D_{n})P_{n}\xi\) satisfies \(\mathsf{DN}_{n}(\eta)\leqslant 1\) and \(\left\|U_{n}^{t}\eta-U_{\infty}^{t}\xi\right\|_{\mathcal{H}_{\infty}}<\tilde{\varepsilon}\) for all \(t\in\left[0,\frac{1}{\varepsilon}\right]\). Consequently, for any \(\omega=(\omega_{1},\omega_{2})\in\operatorname{dom}\left(\mathsf{TN}_{n}\right)\) with \(\mathsf{TN}_{n}(\omega)\leqslant 1\) -- so that \(\left\|\omega_{1}\right\|_{\mathcal{H}_{\infty}}\leqslant 1\) and \(\left\|\omega_{1}-\omega_{2}\right\|_{\mathcal{H}_{\infty}}\leqslant\tilde{\varepsilon}\) -- we estimate:

\[\left|\langle U_{\infty}^{t}\xi,\omega_{1}\rangle_{\mathcal{H}_{\infty}}-\langle U_{n}^{t}\eta,\omega_{2}\rangle_{\mathcal{H}_{n}}\right|\leqslant\left\|U_{\infty}^{t}\xi-U_{n}^{t}\eta\right\|_{\mathcal{H}_{\infty}}\left\|\omega_{1}\right\|_{\mathcal{H}_{\infty}}+\left|\langle U_{n}^{t}\eta,\omega_{1}-\omega_{2}\rangle_{\mathcal{H}_{\infty}}\right|<2\tilde{\varepsilon}=\varepsilon.\]

Conversely, if \(\eta\in\operatorname{dom}\left(D_{n}\right)\) with \(\mathsf{DN}_{n}(\eta)\leqslant 1\), then \(\eta\in\operatorname{dom}\left(D_{\infty}\right)\) with \(\mathsf{DN}_{\infty}(\eta)=\mathsf{DN}_{n}(\eta)\) and \(U_{\infty}^{t}\eta=U_{n}^{t}\eta\), so the same estimate holds with \(\xi=\eta\). Therefore, the \(\varepsilon\)-reach of \(\Upsilon_{n}\) is at most \(2\tilde{\varepsilon}=\varepsilon\). Consequently, for any \(\varepsilon^{\prime}>\varepsilon\), we have \(\mu(\Upsilon_{n}|\varepsilon^{\prime})\leqslant\max\{\chi(\Upsilon_{n}),\varepsilon\}=\varepsilon<\varepsilon^{\prime}\), and therefore

\[\Lambda^{\mathsf{spec}}((\mathfrak{A}_{n},\mathcal{H}_{n},D_{n}),(\mathfrak{A}_{\infty},\mathcal{H}_{\infty},D_{\infty}))\leqslant\varepsilon\text{ for all }n\geqslant\max\{N,N^{\prime}\}.\]

As \(\varepsilon>0\) was arbitrary, our theorem is proven.
_Remark 3.18_.: A corollary of Theorem (3.17) is that we obtain convergence for the bounded continuous functional calculus for the Dirac operators from the work in [31], which extends Theorem (3.16).
## 4. Even Spectral Triples on Twisted Group \(C^{*}\)-algebras
We now apply the results of the previous sections to the construction of inductive limits of spectral triples, for the spectral propinquity, on twisted \(C^{*}\)-algebras of discrete groups endowed with length functions. In particular, we will prove in this section our third main theorem, Theorem (4.11). Our approach introduces new metric spectral triples on certain twisted group \(C^{*}\)-algebras, which generalize the related, though distinct, past constructions using length functions over discrete groups, such as the ones in [19]. Our main applications are the construction of new spectral triples over noncommutative solenoids and some Bunce-Deddens algebras. In particular, we shall prove that the noncommutative solenoid spectral triples are limits of spectral triples over quantum 2-tori for the spectral propinquity. We start by detailing, in the next two subsections, some background material that will be used to state and prove our main result.
### Discrete Groups, Proper Length Functions, 2-Cocycles, and Classical Spectral Triples
Let \(G_{\infty}\) be a discrete group, and let \(\sigma\) be a 2-cocycle over \(G_{\infty}\). Let \(\lambda\) be the left regular \(\sigma\)-projective representation of \(G_{\infty}\) on \(\ell^{2}(G_{\infty})\), defined by, for all \(g\in G_{\infty}\) and for all \(\xi\in\ell^{2}(G_{\infty})\):
\[\lambda(g)\xi:h\in G_{\infty}\longmapsto\sigma(g,g^{-1}h)\xi(g^{-1}h).\]
Of course, each operator \(\lambda(g)\) is unitary for each \(g\in G_{\infty}\). Let \(C^{*}_{\mathrm{red}}(G_{\infty},\sigma)\) be the reduced \(C^{*}\)-algebra of \(G_{\infty}\) twisted by \(\sigma\), i.e. the \(C^{*}\)-algebra of operators on \(\ell^{2}(G_{\infty})\) generated by \(\{\lambda(g):g\in G_{\infty}\}\). For any \(f\in\ell^{1}(G_{\infty})\), the operator \(\lambda(f)\) on \(\ell^{2}(G_{\infty})\) is defined as \(\sum_{g\in G_{\infty}}f(g)\lambda(g)\) -- it is easily checked that \(\big{\|}\lambda(f)\big{\|}_{\ell^{2}(G_{\infty})}\leqslant\big{\|}f\big{\|}_{\ell^{1}(G_{\infty})}\). The reduced group \(C^{*}\)-algebra \(C^{*}_{\mathrm{red}}(G_{\infty})\) is, in particular, \(C^{*}_{\mathrm{red}}(G_{\infty},1)\).
In [11], Connes introduced spectral triples \((C^{*}_{\mathrm{red}}(G_{\infty}),\ell^{2}(G_{\infty}),M_{\mathbb{L}})\) using any proper length function \(\mathbb{L}\) over \(G_{\infty}\), where \(M_{\mathbb{L}}\) is the operator of multiplication by \(\mathbb{L}\), defined on its natural domain in the Hilbert space \(\ell^{2}(G_{\infty})\). Connes proved that \(\big{\|}[M_{\mathbb{L}},\lambda(g)]\big{\|}_{\ell^{2}(G_{\infty})}=\mathbb{L}(g)\) -- which immediately follows from the triangle inequality and the fact that \([M_{\mathbb{L}},\lambda(g)]\delta_{e}=\mathbb{L}(g)\sigma(g,1)\delta_{g}\), where, for all \(g\in G_{\infty}\):
\[\delta_{g}:h\in G_{\infty}\mapsto\begin{cases}1\text{ if }g=h,\\ 0\text{ otherwise.}\end{cases}\]
It then follows that for the \(*\)-algebra \(C_{c}(G_{\infty})\) of \(\mathbb{C}\)-valued functions with finite support, we obtain the inequality, for all \(f\in C_{c}(G_{\infty})\):
(4.1) \[\big{\|}[M_{\mathbb{L}},\lambda(f)]\big{\|}_{\ell^{2}(G_{\infty})}\leqslant\sum_{g\in G_{\infty}}|f(g)|\,\mathbb{L}(g).\]
where:
\[\mathds{Z}\left[\frac{1}{p}\right]:=\left\{\frac{a}{p^{n}}:n\in\mathds{N},a\in \mathds{Z}\right\},\]
and where \(p\in\mathds{N}\) is prime. It is natural to regard \(\mathds{Z}\left[\frac{1}{p}\right]\) as a subgroup of \(\mathds{Q}\), and thus equip it with the length function induced by the usual absolute value on \(\mathds{Q}\) (see Figure (3)). However, this length function is not proper -- and induces a non-discrete topology. We moreover note that \(\mathds{Z}\left[\frac{1}{p}\right]=\bigcup_{n\in\mathds{N}}\frac{1}{p^{n}}\mathds{Z}\), and we would like to capture this inductive limit structure metrically; while the sequence \(\left(\frac{1}{p^{n}}\mathds{Z}\right)_{n\in\mathds{N}}\) converges to \(\mathds{Z}\left[\frac{1}{p}\right]\) for the Hausdorff distance induced by \(|\cdot|\), we cannot apply this observation directly to the associated twisted \(\mathrm{C}^{*}\)-algebras since \(|\cdot|\) does not define a spectral triple using Connes' methods.
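The failure of properness is elementary to see computationally; the following sketch (our illustration) counts the points of the closed ball \([-1,1]\) inside each \(\frac{1}{p^{n}}\mathds{Z}\): the count \(2p^{n}+1\) diverges with \(n\), so the same ball in \(\mathds{Z}\left[\frac{1}{p}\right]\) is infinite, while each \(\frac{1}{p^{n}}\mathds{Z}\) meets it in a finite set.

```python
# Sketch (our illustration): |.| is proper on each (1/p^n)Z but not on Z[1/p],
# since the ball [-1, 1] meets (1/p^n)Z in 2*p^n + 1 points, a diverging count.
from fractions import Fraction

p = 3
for n in range(6):
    ball = [Fraction(a, p**n) for a in range(-p**n, p**n + 1)]
    print(n, len(ball))   # prints 3, 7, 19, 55, 163, 487 for p = 3
```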
Let us discuss this situation by returning to a general discrete group \(G_{\infty}\) and some \(2\)-cocycle \(\sigma\) on \(G_{\infty}\). We now assume that we are given a strictly increasing sequence \((G_{n})_{n\in\mathds{N}}\) of subgroups of \(G_{\infty}\) such that \(G_{\infty}=\bigcup_{n\in\mathds{N}}G_{n}\) -- in fancier terms, \(G_{\infty}\) is the inductive limit of the sequence of groups \((G_{n})_{n\in\mathds{N}}\), which we identify with a sequence of subgroups of \(G_{\infty}\) from now on. We also identify \(\sigma\) with its restriction to \(G_{n}\) for all \(n\in\mathds{N}\).
We now have a conundrum. If we choose a proper length function \(\mathds{L}\) on \(G_{\infty}\), then, since \(G_{\infty}=\bigcup_{n\in\mathds{N}}G_{n}\) with \((G_{n})_{n\in\mathds{N}}\) increasing, any finite subset of \(G_{\infty}\) is contained in some \(G_{N}\) (and thus in all \(G_{n}\) with \(n\geqslant N\)). This implies that \((G_{n})_{n\in\mathds{N}}\) converges to \(G_{\infty}\) for the pointed Gromov-Hausdorff distance for proper metric spaces, where we always use \(1\) as our base point, and the metrics are induced by \(\mathds{L}\) (see [22]). On the other hand, as soon as \(G_{\infty}\) is infinite -- which is the only interesting case to consider when \(G_{\infty}\) is the union of countably many groups, since otherwise \(G_{\infty}\) is just \(G_{n}\) for \(n\) large enough -- not only is the diameter of \(G_{\infty}\) infinite -- \(G_{\infty}\) cannot be contained in a closed ball, as closed balls are finite -- but the subgroups \(G_{n}\) are not close to \(G_{\infty}\) for the Hausdorff distance induced by \(\mathds{L}\) in general. So, we can define the spectral triples \((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n}),M_{\mathds{L}})\) as before since \(\mathds{L}\) is proper, but in general, there is no apparent reason why \(\left\|[M_{\mathds{L}},a]\right\|_{\ell^{2}(G_{\infty})}\) is particularly close to \(\left\|[M_{\mathds{L}},a]\right\|_{\ell^{2}(G_{n})}\) for \(a\in C_{\mathrm{red}}^{*}(G_{n},\sigma)\).
On the other hand, there may be length functions on \(G_{\infty}\) for which \((G_{n})_{n\in\mathds{N}}\) does converge in the Hausdorff distance, but these length functions are not proper whenever \(G_{\infty}\) is infinite. We are thus led to build a new type of spectral triples which combines these two apparently opposite situations: one where we do not know how to build a spectral triple, using a non-proper length function with otherwise good metric properties for our purpose, and one with a proper length function which has bad metric properties. The following construction is inspired by, but different from, [19], where a proper length function is constructed as a sum of a non-proper length function with a \(p\)-norm.
### The Spectral Triples
We now define our new spectral triples on a particular type of twisted group \(\mathrm{C}^{*}\)-algebras, which are the subject of our third main theorem, Theorem (4.11), and its corollaries.
From now on, we assume that \(G_{\infty}\) is a discrete group endowed with a \(2\)-cocycle \(\sigma\) with values in \(\mathds{T}:=\{z\in\mathds{C}:|z|=1\}\), and that \(G_{\infty}\) is the union \(G_{\infty}=\bigcup_{n\in\mathds{N}}G_{n}\) of a sequence \((G_{n})_{n\in\mathds{N}}\) of subgroups which is strictly increasing for inclusion.
We also assume that we are given a length function \(\mathds{L}_{H}\) on \(G_{\infty}\), whose restriction to each \(G_{n}\) is proper for each \(n\in\mathds{N}\), and with the property that
\[\lim_{n\to\infty}\operatorname{\mathsf{Haus}}[\mathds{L}_{H}]\left(G_{\infty},G_{n}\right)=0. \tag{4.2}\]
In addition we require that we are given a strictly increasing unbounded function scale : \(\mathds{N}\to[0,\infty)\), together with \(\mathds{F}:G_{\infty}\to[0,\infty)\) such that for all \(g\notin G_{0}\):
\[\mathds{F}(g)=\operatorname{scale}(\min\{n\in\mathds{N}:g\in G_{n}\}),\]
while \(\mathds{F}\) restricted to \(G_{0}\) satisfies:
* \(\forall g\in G_{0}\quad\mathds{F}(g)=\mathds{F}(g^{-1})\),
* \(\forall g,h\in G_{0}\quad\mathds{F}(gh)\leq\max\{\mathds{F}(g),\mathds{F}(h)\}\),
* \(\forall g\in G_{0}\quad\mathds{F}(g)\in[0,\operatorname{scale}(0)]\),
* \(\mathds{F}(1)=0\).
Clearly, the above assumptions provide us with many length functions on \(G_{\infty}\) and \(G_{n}\); we will use them in our spectral triples constructions.
One of our main examples for this section will be the noncommutative solenoids, whose fundamental components are described below. We will give more details on this example later in this work.
_Example 4.1_.: Let \(d\geq 2\) and \(p\) a prime number. Let \(G_{\infty}=\left(\mathds{Z}\left[\frac{1}{p}\right]\right)^{d}\), and let \(G_{n}=\left(\frac{1}{p^{n}}\mathds{Z}\right)^{d}\) for all \(n\in\mathds{N}\). We note that \(G_{\infty}=\bigcup_{n\in\mathds{N}}G_{n}\). We can then choose \(\mathds{L}_{H}\) to be the restriction of any norm on \(\mathds{R}^{d}\), and \(\operatorname{scale}:n\in\mathds{N}\to p^{n}\in[0,\infty)\), so that:
\[\mathds{F}:g\in G_{\infty}\mapsto\operatorname{scale}\left(\min\left\{n\in \mathds{N}:g\in\left(\frac{1}{p^{n}}\mathds{Z}\right)^{d}\right\}\right).\]
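As a concrete sketch (our illustration; we pick \(p=2\), \(d=2\), \(\mathds{L}_{H}\) the euclidean norm, and, following the natural choice discussed in Remark (4.4) below, \(\mathds{F}=\operatorname{scale}(0)=1\) on \(G_{0}\setminus\{1\}\)), the function \(\mathds{F}\) can be computed from the largest power of \(p\) dividing a coordinate denominator:

```python
# Sketch (our illustration) of F and L_infty = max{L_H, F} for Example 4.1, with
# p = 2, d = 2, L_H the euclidean norm, and F = scale(0) = 1 on G_0 \ {1}.
from fractions import Fraction
import math

p = 2

def F(g):
    n = 0
    for x in g:                       # largest k with p^k dividing a denominator
        q, k = Fraction(x).denominator, 0
        while q % p == 0:
            q //= p; k += 1
        n = max(n, k)
    if n > 0:
        return float(p**n)            # scale(min{m : g in (1/p^m) Z^d})
    return 0.0 if all(x == 0 for x in g) else 1.0

def L_infty(g):
    L_H = math.sqrt(sum(float(x)**2 for x in g))
    return max(L_H, F(g))

g = (Fraction(3, 8), Fraction(1, 2))  # in G_3 \ G_2, so F(g) = 2^3 = 8
print(F(g), L_infty(g))               # 8.0 8.0
```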
Now, for any function \(f:G_{n}\to\mathds{C}\), we denote by \(M_{f}\) the operator of multiplication by \(f\) on the subspace:
\[\operatorname{dom}\left(M_{f}\right)\coloneqq\left\{\xi\in\ell^{2}(G_{n}):(h \in G_{n}\mapsto f(h)\xi(h))\in\ell^{2}(G_{n})\right\}\]
of \(\ell^{2}(G_{n})\). Of course, \(M_{f}\) is bounded by \(\left\|f\right\|_{C(G_{n})}\) if \(f\) is bounded, and unbounded otherwise; nonetheless \(\operatorname{dom}\left(M_{f}\right)\) always contains \(C_{c}(G_{n})\) and thus is always dense in \(\ell^{2}(G_{n})\).
Let \(E\) be a finite dimensional Hilbert space with inner product \(\langle\cdot,\cdot\rangle_{E}\) and \(\dim E\in 2\mathds{N}\setminus\{0\}\), and let \(\mathfrak{c}\) be a \(*\)-representation of the Clifford algebra of \(\mathds{C}^{2}\) on \(E\). Let \(\gamma_{1}=\mathfrak{c}\left(\begin{pmatrix}1\\ 0\end{pmatrix}\right)\)
and \(\gamma_{2}=\mathfrak{c}\left(\begin{pmatrix}0\\ 1\end{pmatrix}\right)\). For our purpose, we record that for all \(j,k\in\{1,2\}\):
\[\gamma_{j}\gamma_{k}+\gamma_{k}\gamma_{j}=\begin{cases}2\text{ if }j=k,\\ 0\text{ otherwise.}\end{cases}\]
_Remark 4.2_.: There is no particular reason to restrict ourselves to \(E=\mathbb{C}^{2}\), though it is the natural choice. In this case, we can choose the usual Weyl matrices:
\[\gamma_{1}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\text{ and }\gamma_{2}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\]
as the most natural choice for our construction.
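These choices are immediate to verify numerically; the following sketch (our illustration) checks the Clifford relations for the matrices above, together with the norm identity \(\left\|(a\gamma_{1}+b\gamma_{2})\xi\right\|_{E}^{2}=(a^{2}+b^{2})\left\|\xi\right\|_{E}^{2}\) established in Lemma (4.5) below.

```python
# Sketch (our illustration): Clifford relations for the matrices of Remark 4.2,
# and the norm identity of Lemma 4.5 below, checked on random data.
import numpy as np

g1 = np.array([[1.0, 0.0], [0.0, -1.0]])
g2 = np.array([[0.0, 1.0], [1.0, 0.0]])

for gj in (g1, g2):
    for gk in (g1, g2):
        expected = 2.0 * np.eye(2) if gj is gk else np.zeros((2, 2))
        assert np.allclose(gj @ gk + gk @ gj, expected)

rng = np.random.default_rng(0)
a, b = rng.normal(size=2)
xi = rng.normal(size=2) + 1j * rng.normal(size=2)
lhs = np.linalg.norm((a * g1 + b * g2) @ xi) ** 2
assert np.isclose(lhs, (a**2 + b**2) * np.linalg.norm(xi) ** 2)
print("Clifford relations and the Lemma 4.5 identity hold")
```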
For each \(n\in\overline{\mathbb{N}}:=\mathbb{N}\cup\{\infty\}\), we identify the Hilbert space \(\ell^{2}(G_{n},E)\) of \(E\)-valued functions over \(G_{n}\) (with inner product \(\langle\xi,\eta\rangle_{\ell^{2}(G_{n},E)}:=\sum_{g\in G_{n}}\langle\xi(g), \eta(g)\rangle_{E}\)) with \(\ell^{2}(G_{n})\otimes E\). We then let
\[\operatorname{dom}\left(D_{n}\right)\coloneqq\left\{\xi\in\ell^{2}(G_{n},E): (\mathbb{L}_{H}(g)\gamma_{1}\xi(g)+\mathbb{F}(g)\gamma_{2}\xi(g))_{g\in G_{n}} \in\ell^{2}(G_{n},E)\right\}\]
and on \(\operatorname{dom}\left(D_{n}\right)\), we define the Dirac operator:
\[D_{n}\coloneqq M_{\mathbb{L}_{H}}\otimes\gamma_{1}+M_{\mathbb{F}}\otimes \gamma_{2}. \tag{4.3}\]
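Before proceeding, a finite truncation gives a feel for Equation (4.3); the following sketch (our illustration; it is a matrix truncation, not a bona fide spectral triple) builds \(D_{n}\) for the \(d=1\), \(p=2\) data of Example (4.1) on a finite window of \(\frac{1}{2^{n}}\mathds{Z}\), and checks the identity \(D_{n}^{2}+1=(M_{\mathbb{L}_{H}}^{2}+M_{\mathbb{F}}^{2}+1)\otimes 1_{E}\) used in the proof below.

```python
# Sketch (our illustration): a finite matrix truncation of the Dirac operator of
# Equation (4.3) for d = 1, p = 2, on the points g = a/2^n of (1/2^n)Z with |g| <= R.
import numpy as np
from fractions import Fraction

p, n, R = 2, 3, 4
pts = [Fraction(a, p**n) for a in range(-R * p**n, R * p**n + 1)]

def F(g):
    q, k = Fraction(g).denominator, 0
    while q % p == 0:
        q //= p; k += 1
    return float(p**k) if k > 0 else (0.0 if g == 0 else 1.0)

ML = np.diag([abs(float(g)) for g in pts])     # M_{L_H}, with L_H = |.|
MF = np.diag([F(g) for g in pts])              # M_F
g1 = np.array([[1.0, 0.0], [0.0, -1.0]])
g2 = np.array([[0.0, 1.0], [1.0, 0.0]])

D = np.kron(ML, g1) + np.kron(MF, g2)          # Equation (4.3), truncated
assert np.allclose(D, D.T)                     # symmetric on the truncation
I = np.eye(len(pts))
assert np.allclose(D @ D + np.eye(D.shape[0]),
                   np.kron(ML @ ML + MF @ MF + I, np.eye(2)))
print("D^2 + 1 = (M_L^2 + M_F^2 + 1) (x) 1_E holds on the truncation")
```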
We now prove that \((C^{*}_{\mathrm{red}}(G_{n},\sigma),\ell^{2}(G_{n})\otimes E,D_{n})\), as defined above, are indeed spectral triples, for all \(n\in\overline{\mathbb{N}}\). A first step is the computation of the domain of our Dirac operators of Equation (4.3). To do so, we will need the following lemma. Recall that a norm \(\|\cdot\|_{\mathbb{R}^{2}}\) on \(\mathbb{R}^{2}\) is _monotone_ when it is increasing with respect to the product order on \(\mathbb{R}^{2}\); the most important such norm for our purpose will be the max norm \((x,y)\in\mathbb{R}^{2}\mapsto\left\|(x,y)\right\|_{\infty}=\max\{|x|,|y|\}\); we also note that we will often write elements of \(\mathbb{R}^{d}\) as simple \(d\)-tuples.
**Lemma 4.3**.: _With the notation and assumptions of this section, the following identities hold._
1. _For all_ \(g\in G_{\infty}\)_:_ \[\mathbb{F}(g^{-1})=\mathbb{F}(g)\] _;_
2. _For all_ \(g,h\in G_{\infty}\)_:_ \[\mathbb{F}(gh)\leq\max\left\{\mathbb{F}(g),\mathbb{F}(h)\right\}\leq\mathbb{F }(g)+\mathbb{F}(h).\]
_Moreover, if \(\|\cdot\|_{\mathbb{R}^{2}}\) is any monotone norm on \(\mathbb{R}^{2}\), then \(g\in G_{\infty}\mapsto\left\|(\mathbb{L}_{H}(g),\mathbb{F}(g))\right\|_{ \mathbb{R}^{2}}\) is a proper, unbounded length function over \(G_{\infty}\)._
Proof.: Let \(g\in G_{\infty}\), and let \(n\in\mathbb{N}\) be the unique natural number such that \(\mathbb{F}(g)=\operatorname{scale}(n)\), or \(n=0\) if \(\mathbb{F}(g)<\operatorname{scale}(0)\). If \(n=0\) then \(\mathbb{F}(g)=\mathbb{F}(g^{-1})\) by assumption. If \(n>0\), then \(g\in G_{n}\) and \(g\notin G_{p}\) for \(p<n\); therefore, \(g^{-1}\in G_{n}\) and \(g^{-1}\notin G_{p}\) if \(p<n\); hence, \(\mathbb{F}(g^{-1})=\operatorname{scale}(n)=\mathbb{F}(g)\).
Now, take \(h\in G_{\infty}\). Again, let \(m\in\mathbb{N}\) be uniquely defined by \(\mathbb{F}(h)=\operatorname{scale}(m)\) or \(m=0\) otherwise. Let \(k=\max\{m,n\}\). Thus \(g,h\in G_{k}\) and therefore, \(gh\in G_{k}\). First, if \(g,h,gh\in G_{0}\), then \(\mathbb{F}(gh)\leq\max\{\mathbb{F}(g),\mathbb{F}(h)\}\) by assumption on \(\mathbb{F}\). Otherwise, \(k>0\), and we simply observe that either \(gh\in G_{0}\) and then \(\mathbb{F}(gh)\leq\operatorname{scale}(0)<\operatorname{scale}(k)\), or \(gh\notin G_{0}\), and again \(\mathbb{F}(gh)\leq\operatorname{scale}(k)\); either way we observe:
\[\mathbb{F}(gh)\leq\operatorname{scale}(k)=\operatorname{scale}(\max\{n,m\})= \max\{\operatorname{scale}(n),\operatorname{scale}(m)\}=\max\{\mathbb{F}(g), \mathbb{F}(h)\}.\]
Fix a monotone norm \(\|\cdot\|_{\mathbb{R}^{2}}\) on \(\mathbb{R}^{2}\) and let
\[\mathbb{L}:g\in G_{\infty}\longmapsto\left\|(\mathbb{L}_{H}(g),\mathbb{F}(g) )\right\|_{\mathbb{R}^{2}}.\]
It is then immediate to check that if \(g,h\in G_{\infty}\), then, since \(\left\|\cdot\right\|_{\mathbb{R}^{2}}\) is monotone:
\[\left\|(\mathbb{L}_{H}(gh),\mathbb{F}(gh))\right\|_{\mathbb{R}^{2}} \leq\left\|(\mathbb{L}_{H}(g)+\mathbb{L}_{H}(h),\mathbb{F}(g)+\mathbb{F}(h)) \right\|_{\mathbb{R}^{2}}\] \[\leq\left\|(\mathbb{L}_{H}(g),\mathbb{F}(g))\right\|_{\mathbb{R}^ {2}}+\left\|(\mathbb{L}_{H}(h),\mathbb{F}(h))\right\|_{\mathbb{R}^{2}}.\]
Moreover \(\left\|(\mathbb{L}_{H}(g^{-1}),\mathbb{F}(g^{-1}))\right\|_{\mathbb{R}^{2}}= \left\|(\mathbb{L}_{H}(g),\mathbb{F}(g))\right\|_{\mathbb{R}^{2}}\) for all \(g\in G_{\infty}\).
Finally, if \(\left\|(\mathbb{L}_{H}(g),\mathbb{F}(g))\right\|_{\mathbb{R}^{2}}=0\), then \(\mathbb{L}_{H}(g)=0\), which in turns implies \(g=1\). On the other hand, \(\mathbb{F}(1)=0\) and \(\mathbb{L}_{H}(1)=0\), so \(\mathbb{L}(1)=0\). Thus as claimed, \(\mathbb{L}\) is a length function on \(G_{\infty}\).
Now, let us be more specific in our choice of \(\left\|\cdot\right\|_{\mathbb{R}^{2}}\), and fix it to be the usual max norm \(\left\|\cdot\right\|_{\infty}\); we then rename our length function \(\mathbb{L}_{\infty}\), so
\[\mathbb{L}_{\infty}(g)\coloneqq\max\left\{\mathbb{L}_{H}(g),\mathbb{F}(g) \right\}.\]
Fix \(n\in\mathbb{N}\). By definition, the following equality between closed balls holds:

\[\left\{g\in G_{\infty}:\mathbb{L}_{\infty}(g)\leq\operatorname{scale}(n)\right\}=\left\{g\in G_{n}:\mathbb{L}_{H}(g)\leq\operatorname{scale}(n)\right\}.\]

Since \(\mathbb{L}_{H}\) is proper on \(G_{n}\), this set is finite. So \(\mathbb{L}_{\infty}\) is indeed proper on \(G_{\infty}\).

By assumption, the function scale is unbounded on \(\mathbb{N}\) and, for all \(n\in\mathbb{N}\), there exists \(g\in G_{\infty}\setminus G_{n}\) (since \((G_{n})_{n\in\mathbb{N}}\) is assumed to be strictly increasing), i.e. \(\mathbb{F}(g)\geq\operatorname{scale}(n)\), so \(\mathbb{L}_{\infty}\) is unbounded.
We now return to a general monotone norm \(\left\|\cdot\right\|_{\mathbb{R}^{2}}\) on \(\mathbb{R}^{2}\). Since all norms on \(\mathbb{R}^{2}\) are equivalent, there exists \(c>0\) such that \(\frac{1}{c}\left\|\cdot\right\|_{\infty}\leq\left\|\cdot\right\|_{\mathbb{R}^{2}}\leq c\left\|\cdot\right\|_{\infty}\). Therefore,
\[\frac{1}{c}\mathbb{L}_{\infty}\leq\mathbb{L}\leq c\mathbb{L}_{\infty}.\]
It is now easy to check that \(\mathbb{L}\) is again proper and unbounded on \(G_{\infty}\). This concludes our proof.
_Remark 4.4_.: It is quite natural to simply set \(\mathbb{F}(g)=\operatorname{scale}(0)\) for all \(g\in G_{0}\setminus\{1\}\). The difference between such a choice of \(\mathbb{F}\), vs any other \(\mathbb{F}^{\prime}\), which meets our assumptions over \(G_{0}\), is a bounded perturbation. We refer to [36] for a discussion on bounded perturbations of spectral triples from the metric perspective.
As seen in the above discussion, the length function \(\mathbb{L}_{H}\) will not be proper in general, so it does not define a spectral triple by itself; however, \(\mathbb{L}\) is proper, and thus can be used to define a spectral triple on \(C^{*}_{\mathrm{red}}(G_{\infty},\sigma)\). We take a slightly different route, however, by working with what we shall prove is an even spectral triple, replacing the linear geometry of \(G_{\infty}\) with a sort of "two-dimensional" geometry (see Figure (3) for the noncommutative solenoid case).
We now prove that in the above hypotheses we can indeed define spectral triples. We begin with a computation of the domain of the proposed Dirac operators defined in Equation (4.3).
**Lemma 4.5**.: _With the notation and assumptions of this section, the following assertion holds; for all \(\xi\in E\) and for all \(a,b\in\mathbb{R}\):_
\[\left\|(a\gamma_{1}+b\gamma_{2})\xi\right\|_{E}^{2}=\left(a^{2}+b^{2}\right) \left\|\xi\right\|_{E}^{2}.\]
_In particular, for all \(n\in\overline{\mathbb{N}}\), the domain \(\operatorname{dom}\left(D_{n}\right)\) of the Dirac operator \(D_{n}\) is given by_
\[\left\{\xi\in\ell^{2}(G_{n},E):\sum_{g\in G_{n}}(\mathbb{L}_{H}(g)^{2}+ \mathbb{F}(g)^{2})\left\|\xi(g)\right\|_{E}^{2}<\infty\right\}.\]
Proof.: Let \(\xi\in E\). The following identity holds for all \(a,b\in\mathbbm{R}\):
\[\begin{split}\left\|a\gamma_{1}\xi+b\gamma_{2}\xi\right\|_{E}^{2}&=a^{2}\langle\gamma_{1}\xi,\gamma_{1}\xi\rangle_{E}+ab\langle\gamma_{1}\xi,\gamma_{2}\xi\rangle_{E}+ab\langle\gamma_{2}\xi,\gamma_{1}\xi\rangle_{E}+b^{2}\langle\gamma_{2}\xi,\gamma_{2}\xi\rangle_{E}\\&=a^{2}\langle\gamma_{1}^{2}\xi,\xi\rangle_{E}+ab\langle(\gamma_{1}\gamma_{2}+\gamma_{2}\gamma_{1})\xi,\xi\rangle_{E}+b^{2}\langle\gamma_{2}^{2}\xi,\xi\rangle_{E}\\&=\left(a^{2}+b^{2}\right)\left\|\xi\right\|_{E}^{2},\end{split}\]

using the self-adjointness of \(\gamma_{1},\gamma_{2}\) and the Clifford relations. The description of \(\operatorname{dom}\left(D_{n}\right)\) then follows directly from Equation (4.3).

We now turn to the proof that the above data indeed define spectral triples, for all \(n\in\overline{\mathbb{N}}\). First, \(\operatorname{dom}\left(D_{n}\right)\) contains \(C_{c}(G_{n})\otimes E\), and is thus dense in \(\ell^{2}(G_{n},E)\). Moreover,
if \(\xi,\eta\in\operatorname{dom}\left(\mathit{D}_{n}\right)\), it follows that:
\[\left\langle\mathit{D}_{n}\xi,\eta\right\rangle_{\ell^{2}(G_{n},E)} =\sum_{g\in G_{n}}\left\langle\left(\mathbb{L}_{H}(g)\gamma_{1}+ \mathbb{F}(g)\gamma_{2}\right)\xi,\eta\right\rangle_{E}\] \[=\sum_{g\in G_{n}}\left\langle\xi,\left(\mathbb{L}_{H}(g)\gamma_ {1}+\mathbb{F}(g)\gamma_{2}\right)\eta\right\rangle_{E}\] \[=\left\langle\xi,\mathit{D}_{n}\eta\right\rangle_{\ell^{2}(G_{n },E)},\]
so \(\mathit{D}_{n}\) is also a symmetric operator. By using Lemma (4.5), we now note that:
\[\operatorname{dom}\left(\mathit{D}_{n}^{2}\right)=\left\{\xi\in\ell^{2}(G_{n },E):\sum_{g\in G_{n}}\left(\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}\right)^{2 }\left\|\xi(g)\right\|_{E}^{2}<\infty\right\}\]
and, over \(\operatorname{dom}\left(\mathit{D}_{n}^{2}\right)\), the Clifford algebra relations imply:
\[\mathit{D}_{n}^{2}+1=\left(M_{\mathbb{L}_{H}}^{2}+M_{\mathbb{F}}^{2}+1\right) \otimes 1_{E}.\]
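This identity is simply the Clifford relations at work: since \(M_{\mathbb{L}_{H}}\) and \(M_{\mathbb{F}}\) are commuting multiplication operators, a direct expansion gives
\[\mathit{D}_{n}^{2}=M_{\mathbb{L}_{H}}^{2}\otimes\gamma_{1}^{2}+M_{\mathbb{F}}^{2}\otimes\gamma_{2}^{2}+\left(M_{\mathbb{L}_{H}}M_{\mathbb{F}}\right)\otimes\left(\gamma_{1}\gamma_{2}+\gamma_{2}\gamma_{1}\right)=\left(M_{\mathbb{L}_{H}}^{2}+M_{\mathbb{F}}^{2}\right)\otimes 1_{E}.\]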
Now define an operator \(K\) on \(\ell^{2}(G_{n},E)\) by setting, for all \(\xi\in\ell^{2}(G_{n},E)\):
\[K\xi:g\in G_{n}\mapsto\frac{1}{\sqrt{\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}+ 1}}\xi(g).\]
By construction, \(K\) is positive. Moreover, if \(n\in\mathbb{N}\), then \(\mathbb{L}_{H}\) restricted to \(G_{n}\) is proper and \(\mathbb{F}\) is bounded over \(G_{n}\) by our hypotheses, so \(K\) is compact. If \(n=\infty\), by our hypotheses, for all \(r\geq 0\), the set \(\{g\in G_{\infty}:\mathbb{F}(g)\leq r\}\) is a subset of \(G_{k}\) for some \(k\in\mathbb{N}\). Since \(\mathbb{L}_{H}\) is proper on \(G_{k}\), the set \(\{g\in G_{\infty}:\mathbb{L}_{H}^{2}(g)+\mathbb{F}^{2}(g)\leq r\}\) is finite. Thus, the eigenspaces of \(K\) are all finite dimensional. It follows easily that \(K\) is compact, as well.
In any case, i.e., for all \(n\in\overline{\mathbb{N}}\), \((\mathit{D}_{n}^{2}+1)K^{2}\xi=\xi\) for all \(\xi\in\ell^{2}(G_{n},E)\), while \(K^{2}(\mathit{D}_{n}^{2}+1)\xi=\xi\) for all \(\xi\in\operatorname{dom}\left(\mathit{D}_{n}^{2}\right)\), as seen by a direct computation; in particular, we note that \(K\ell^{2}(G_{n},E)=\operatorname{dom}\left(\mathit{D}_{n}\right)\) by construction.
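To see the compactness of \(K\) in concrete terms: \(K\) is diagonal in the canonical basis \((\delta_{g}\otimes e_{j})_{g\in G_{n},j}\) of \(\ell^{2}(G_{n},E)\), for any orthonormal basis \((e_{j})\) of \(E\), with eigenvalue \(\left(\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}+1\right)^{-1/2}\) for each \(g\in G_{n}\), repeated with multiplicity \(\dim E\); the finiteness of the sets \(\left\{g\in G_{n}:\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}\leq r\right\}\) established above is exactly the statement that these eigenvalues vanish at infinity.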
By Lemma (4.5), for all \(\xi\in\ell^{2}(G_{n},E)\), we obtain:
\[\sum_{g\in G_{n}}\left\|\mathit{D}_{n}K\xi(g)\right\|_{E}^{2}\] \[=\sum_{g\in G_{n}}\left\|\frac{\mathbb{L}_{H}(g)}{\sqrt{\mathbb{L }_{H}(g)^{2}+\mathbb{F}(g)^{2}+1}}(\gamma_{1}\xi(g))+\frac{\mathbb{F}(g)}{ \sqrt{\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}+1}}(\gamma_{2}\xi(g))\right\|_ {E}^{2}\] \[=\sum_{g\in G_{n}}\frac{\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}}{ \mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2}+1}\left\|\xi(g)\right\|_{E}^{2}\] \[\leq\left\|\xi\right\|_{\ell^{2}(G_{n},E)}^{2}.\]
Thus, \(\mathit{D}_{n}K\) is bounded, of norm at most \(1\). Consequently, \((\mathit{D}_{n}\pm i)K\) is also bounded on \(\ell^{2}(G_{n},E)\). Therefore, \((\mathit{D}_{n}\pm i)K^{2}\) is compact. It follows that \(\mathit{D}_{n}\pm i\) both have compact inverse \((\mathit{D}_{n}\mp i)K^{2}\). Specifically for our purpose, if \(\xi\in\ell^{2}(G_{n},E)\), then:
\[(\mathit{D}_{n}+i)\left((\mathit{D}_{n}-i)K^{2}\right)\xi=(\mathit{D}_{n}^{2} +1)K^{2}\xi=\xi.\]
Therefore, the range of \(\mathit{D}_{n}+i\) is \(\ell^{2}(G_{n},E)\). Similarly, the range of \(\mathit{D}_{n}-i\) is also \(\ell^{2}(G_{n},E)\). As \(\mathit{D}_{n}\) is also a symmetric operator defined on a dense domain, we conclude by [53, Sec. VIII.2] and [45, Lemma 2.48] that \(\mathit{D}_{n}\) is indeed self-adjoint, with compact resolvent (since the inverse of \(\mathit{D}_{n}+i\) is the compact \((\mathit{D}_{n}-i)K^{2}\)).
We will now verify the commutator spectral triples condition. Note that if \(g\in G_{n}\), then
\[\left\|\left[\mathit{D}_{n},\lambda_{E}(g)\right]\right\|_{\ell^{2}(G_{n},E)}\leq\mathbb{L}_{H}(g)+\mathbb{F}(g)=\mathbb{L}(g).\]
Therefore, if \(f\in C_{c}(G_{n})\), then the operator \([D_{n},\lambda_{E}(f)]\) is bounded, and in fact,
\[\big{\|}[D_{n},\lambda_{E}(f)]\big{\|}_{\ell^{2}(G_{n},E)}\leq\sum_{g\in G_{n}}|f(g)|\,\mathbb{L}(g).\]
We conclude that \((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),D_{n})\) is a spectral triple for all \(n\in\overline{\mathbb{N}}\).
We will now prove that our spectral triple is metric. Let \(a\in\mathrm{dom}\,(\mathbb{L}_{n})\) for some \(n\in\overline{\mathbb{N}}\). We then note that,
\[(1\otimes\gamma_{1})[D_{n},a^{\circ}]+[D_{n},a^{\circ}](1\otimes\gamma_{1})=2\,[M_{\mathbb{L}_{H}},a]\otimes 1_{E},\]
which implies:
\[\big{|}\big{|}[M_{\mathbb{L}_{H}},a]\big{|}\big{|}_{\ell^{2}(G_{n})} \leq\frac{1}{2}\big{|}\big{|}(1\otimes\gamma_{1})[D_{n},a^{\circ }]+[D_{n},a^{\circ}](1\otimes\gamma_{1})\big{|}\big{|}_{\ell^{2}(G_{n},E)}\] \[\leq\frac{1}{2}\Big{(}\big{|}\big{|}(1\otimes\gamma_{1})[D_{n},a^ {\circ}]\big{|}\big{|}_{\ell^{2}(G_{n},E)}+\big{|}\big{|}[D_{n},a^{\circ}](1 \otimes\gamma_{1})\big{|}\big{|}_{\ell^{2}(G_{n},E)}\Big{)}\] \[\leq\frac{1}{2}\Big{(}\big{|}\big{|}[D_{n},a^{\circ}]\big{|} \big{|}_{\ell^{2}(G_{n},E)}+\big{|}\big{|}[D_{n},a^{\circ}]\big{|}\big{|}_{ \ell^{2}(G_{n},E)}\Big{)}\] \[=\big{|}\big{|}[D_{n},a^{\circ}]\big{|}\big{|}_{\ell^{2}(G_{n},E)}.\]
The same reasoning, with \(1\otimes\gamma_{2}\) in place of \(1\otimes\gamma_{1}\), leads to
\[\big{|}\big{|}[M_{\mathbb{F}},a]\big{|}\big{|}_{\ell^{2}(G_{n})}\leq\big{|} \big{|}[D_{n},a^{\circ}]\big{|}\big{|}_{\ell^{2}(G_{n},E)}.\]
Therefore, for all \(a\in\mathrm{dom}\,(\mathbb{L}_{n})\), we obtain:
\[\big{\|}[M_{\mathbb{L}},a]\big{\|}_{\ell^{2}(G_{n})}\leq\big{\|}[M_{\mathbb{L}_{H}},a]\big{\|}_{\ell^{2}(G_{n})}+\big{\|}[M_{\mathbb{F}},a]\big{\|}_{\ell^{2}(G_{n})}\leq 2\big{\|}[D_{n},a^{\circ}]\big{\|}_{\ell^{2}(G_{n},E)}.\]
In particular, if \((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n}),M_{\mathbb{L}})\) is a metric spectral triple, then, by [55, Lemma 1.10], so is \((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),D_{n})\).
Finally, we will show that our spectral triples are in fact even, with grading given by \(1_{\ell^{2}(G_{n})}\otimes\gamma\) where \(\gamma:=i\gamma_{1}\gamma_{2}\). By construction, \(\gamma^{2}\) is the identity, and \(\gamma^{*}=\gamma\), so \(\gamma\) is a self-adjoint unitary; therefore so is \(1_{\ell^{2}(G_{n})}\otimes\gamma\), which splits \(\ell^{2}(G_{n},E)\) in its two spectral subspaces for \(1\) and \(-1\), in such a way that \(\lambda_{E}\) commutes with \(1\otimes\gamma\), while \(D_{n}(1\otimes\gamma)=-(1\otimes\gamma)D_{n}\). So \((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),D_{n})\) is an even spectral triple.
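The anticommutation \(D_{n}(1\otimes\gamma)=-(1\otimes\gamma)D_{n}\) used above is itself a one-line Clifford computation: since \(\gamma_{1}\gamma_{2}=-\gamma_{2}\gamma_{1}\) and \(\gamma_{1}^{2}=\gamma_{2}^{2}=1_{E}\),
\[\gamma\gamma_{1}=i\gamma_{1}\gamma_{2}\gamma_{1}=-i\gamma_{1}^{2}\gamma_{2}=-\gamma_{1}\gamma\quad\text{and}\quad\gamma\gamma_{2}=i\gamma_{1}\gamma_{2}^{2}=-\gamma_{2}\gamma,\]
so \(\gamma\) anticommutes with both \(\gamma_{1}\) and \(\gamma_{2}\), hence \(1\otimes\gamma\) anticommutes with each of the two terms of \(D_{n}\).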
_Remark 4.8_.: With the notation of Lemma (4.7), we note that for each finite \(n\in\mathbb{N}\), the spectral triple \((C_{\mathrm{red}}^{*}(G_{n}),\ell^{2}(G_{n},E),D_{n})\) is, in some sense, a bounded perturbation of the odd spectral triple \((C_{\mathrm{red}}^{*}(G_{n}),\ell^{2}(G_{n}),M_{\mathbb{L}})\), since \(\mathbb{F}\) is bounded on \(G_{n}\). The situation is quite different when \(n=\infty\), of course.
_Remark 4.9_.: Suppose \(\rho\) is some other \(2\)-cocycle of \(G_{\infty}\), which is equivalent to \(\sigma\), i.e., for some function \(f:G_{\infty}\to\mathbb{T}\), the following holds:
\[\forall g,h\in G_{\infty}\quad\rho(g,h)=f(g)f(h)\overline{f(gh)}\sigma(g,h).\]
The operator \(M_{f}\) is then a unitary which intertwines the left regular \(\sigma\) and \(\rho\) projective representations of \(G_{\infty}\). Thus, \((\mathrm{Ad}M_{f})^{\circ}\) implements a \(*\)-isomorphism from \(\lambda_{E}(C^{*}(G_{\infty},\rho))\) onto \(\lambda_{E}(C^{*}(G_{\infty},\sigma))\). Furthermore, \(M_{f}^{\circ}\) commutes with \(D_{\infty}\). Therefore, the spectral triples \((C^{*}(G_{\infty},\sigma),\ell^{2}(G_{\infty},E),D_{\infty})\) and \((C^{*}(G_{\infty},\rho),\ell^{2}(G_{\infty},E),D_{\infty})\) are unitarily equivalent. In particular, whenever one is metric, so is the other, and then they are at distance zero from each other for the spectral propinquity.
### Main result
We begin this section by making some basic identifications that will be used throughout the rest of the paper. We will use the notation introduced in the above sections. Fix \(n\in\mathbb{N}\); the C\({}^{*}\)-algebra \(C^{*}_{\text{red}}(G_{n},\sigma)\) is technically the closure, in the operator norm, of the linear span of the operators \(\lambda_{n}(g)\) defined on \(\ell^{2}(G_{n})\) by \(\lambda_{n}(g)\xi:h\in G_{n}\mapsto\sigma(g,g^{-1}h)\xi(g^{-1}h)\). On the other hand, since \(G_{n}\subseteq G_{\infty}\), we obtain a different unitary \(\sigma\)-projective representation of \(G_{n}\), via the restriction of the \(\sigma\)-projective representation \(\lambda\) of \(G_{\infty}\) to \(G_{n}\) on \(\ell^{2}(G_{\infty})\), giving us an alternative C\({}^{*}\)-algebra generated by \(\{\lambda(h):h\in G_{n}\}\). If \(S\subseteq G_{\infty}\) is any nonempty subset of \(G_{\infty}\), we identify the space \(\ell^{2}(S)\) with
\[\{\xi\in\ell^{2}(G_{\infty}):\forall g\in G_{\infty}\setminus S\quad\xi(g)=0\}.\]
Let \(Q_{n}\subseteq G_{\infty}\) be a subset of \(G_{\infty}\) such that every right coset of \(G_{n}\) in \(G_{\infty}\) is of the form \(G_{n}k\) for a unique \(k\in Q_{n}\). Of course,
\[\ell^{2}(G_{\infty})=\overline{\bigoplus}_{k\in Q_{n}}\ell^{2}(G_{n}k),\]
where \(\overline{\oplus}\) is the Hilbert sum, i.e. the closure of the direct sum.
Now, we set, for all \(k\in G_{\infty}\) and \(\xi\in\ell^{2}(G_{\infty})\):
\[\rho(k)\xi:h\in G_{\infty}\mapsto\sigma(hk,k^{-1})\xi(hk). \tag{4.4}\]
Thus defined, \(\rho\) is the right regular \(\tilde{\sigma}\)-projective representation of \(G_{\infty}\) on \(\ell^{2}(G_{\infty})\), where \(\tilde{\sigma}:g,h\in G_{\infty}\mapsto\sigma(h^{-1},g^{-1})\) is indeed a \(2\)-cocycle of \(G_{\infty}\).
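As a quick check that \(\rho\) is indeed \(\tilde{\sigma}\)-projective: for all \(k,k^{\prime}\in G_{\infty}\), \(\xi\in\ell^{2}(G_{\infty})\), and \(h\in G_{\infty}\),
\[\rho(k)\rho(k^{\prime})\xi(h)=\sigma(hk,k^{-1})\sigma(hkk^{\prime},k^{\prime-1})\xi(hkk^{\prime})=\sigma(k^{\prime-1},k^{-1})\sigma(hkk^{\prime},(kk^{\prime})^{-1})\xi(hkk^{\prime})=\tilde{\sigma}(k,k^{\prime})\rho(kk^{\prime})\xi(h),\]
where the middle equality is the \(2\)-cocycle identity applied with \(x=hkk^{\prime}\), \(y=k^{\prime-1}\), and \(z=k^{-1}\).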
_Remark 4.10_.: If the \(2\)-cocycle \(\sigma\) is normalized, i.e. \(\sigma(g,g^{-1})=1\) for all \(g\in G_{\infty}\), then \(\tilde{\sigma}=\overline{\sigma}\); we will however not need to work with normalized cocycles here.
Since \(\sigma\) is a \(2\)-cocycle, we obtain, for all \(g,h,k\in G_{\infty}\) and \(\xi\in\ell^{2}(G_{\infty})\):
\[\begin{split}\lambda(g)\rho(k)\xi(h)&=\sigma(g,g^{-1}h)\,\rho(k)\xi(g^{-1}h)\\ &=\sigma(g,g^{-1}h)\,\sigma(g^{-1}hk,k^{-1})\,\xi(g^{-1}hk)\\ &=\sigma(\underbrace{g}_{=:x},\underbrace{g^{-1}hk}_{=:y})\,\sigma(\underbrace{hk}_{=xy},\underbrace{k^{-1}}_{=:z})\,\xi(g^{-1}hk)\\ &=\sigma(hk,k^{-1})\,(\lambda(g)\xi)(hk)\\ &=\rho(k)\lambda(g)\xi(h).\end{split}\]
Therefore, \(\lambda(g)\) and \(\rho(k)\) commute, for all \(g,k\in G_{\infty}\). It is moreover immediate that \(\rho(k)\) maps \(\ell^{2}(G_{n}k)\) onto \(\ell^{2}(G_{n})\).
We now define the unitary \(V\) from \(\ell^{2}(G_{\infty})=\overline{\bigoplus}_{k\in Q_{n}}\ell^{2}(G_{n}k)\) to \(\overline{\bigoplus}_{k\in Q_{n}}\ell^{2}(G_{n})\) by setting, for all \(\xi=(\xi_{k})_{k\in Q_{n}}\in\overline{\bigoplus}_{k\in Q_{n}}\ell^{2}(G_{n}k)\):
\[V\xi=\left(\rho(k)\xi_{k}\right)_{k\in Q_{n}}\in\overline{\bigoplus}_{k\in Q_{n}}\ell^{2}(G_{n}).\]
By construction, \(V\) is unitary, and moreover, for any \(g\in G_{n}\):
\[V\lambda(g)V^{*}(\xi_{k})_{k\in Q_{n}}=(\lambda_{n}(g)\xi_{k})_{k\in Q_{n}}.\]
Thus, \(\operatorname{Ad}V\) is a \(*\)-isomorphism from the C\({}^{*}\)-subalgebra generated by \(\{\lambda(g):g\in G_{n}\}\) onto the C\({}^{*}\)-algebra \(C^{*}_{\text{red}}(G_{n},\sigma)\), which maps \(\lambda(g)\) to \(\lambda_{n}(g)\) for all \(g\in G_{n}\).
From now on, we thus identify \(C^{*}_{\text{red}}(G_{n},\sigma)\) with the C\({}^{*}\)-algebra generated by \(\{\lambda(g):g\in G_{n}\}\) in \(C^{*}_{\text{red}}(G_{\infty},\sigma)\) and work exclusively in the latter. We will also identify \(\ell^{2}(G_{n},E)\) with \(\{\xi\in\ell^{2}(G_{\infty},E):\forall g\in G_{\infty}\setminus G_{n}\quad \xi(g)=0\}\).
To complete our picture, we also identify \(\mathcal{D}_{n}\) with the operator defined for \(\xi=\xi_{n}+\xi_{n}^{\perp}\), with \(\xi_{n}\in\ell^{2}(G_{n},E)\) and \(\xi_{n}^{\perp}\in\ell^{2}(G_{n},E)^{\perp}=\overline{\bigoplus}_{k\in Q_{n}\setminus G_{n}}\ell^{2}(G_{n}k,E)\), by \(\mathcal{D}_{n}\xi=\mathcal{D}_{n}\xi_{n}\in\ell^{2}(G_{n},E)\). We then observe that if \(P_{n}\) is the orthogonal projection from \(\ell^{2}(G_{\infty})\) onto \(\ell^{2}(G_{n})\), we have, for all \(\xi\in\operatorname{dom}\left(\mathcal{D}_{n}\right)\) and for all \(g\in G_{\infty}\):
\[P_{n}^{\circ}\mathcal{D}_{n}\xi(g) =\begin{cases}(\mathbb{L}_{H}(g)\otimes\gamma_{1}+\mathbb{F}(g) \otimes\gamma_{2})\xi(g)\text{ if }g\in G_{n},\\ 0\text{ otherwise,}\end{cases}\] \[=\mathcal{D}_{\infty}P_{n}^{\circ}\xi(g).\]
We thus have shown that \(P_{n}^{\circ}\operatorname{dom}\left(\mathcal{D}_{n}\right)\subseteq\operatorname{dom}\left(\mathcal{D}_{\infty}\right)\) and \(P_{n}^{\circ}\mathcal{D}_{n}=\mathcal{D}_{\infty}P_{n}^{\circ}\). Moreover, \(P_{n}^{\circ}\mathcal{D}_{\infty}P_{n}^{\circ}=\mathcal{D}_{n}\), and thus, for all \(a\in\operatorname{dom}\left(\mathcal{L}_{n}\right)\), we compute, using the fact that \([P_{n},a]=0\):
\[\begin{split}P_{n}^{\circ}[\mathcal{D}_{\infty},a^{\circ}]P_{n}^{\circ}&=P_{n}^{\circ}\mathcal{D}_{\infty}a^{\circ}P_{n}^{\circ}-P_{n}^{\circ}a^{\circ}\mathcal{D}_{\infty}P_{n}^{\circ}\\ &=P_{n}^{\circ}\mathcal{D}_{\infty}P_{n}^{\circ}a^{\circ}-a^{\circ}P_{n}^{\circ}\mathcal{D}_{\infty}P_{n}^{\circ}\\ &=\mathcal{D}_{n}a^{\circ}-a^{\circ}\mathcal{D}_{n}.\end{split}\]
So we have, for all \(a\in\operatorname{dom}\left(\mathcal{L}_{\infty}\right)\cap C_{\operatorname{red}}^{*}(G_{n},\sigma)\):
\[\mathcal{L}_{n}(a)=\left\|[\mathcal{D}_{n},a^{\circ}]\right\|_{\ell^{2}(G_{n},E)}=\left\|P_{n}^{\circ}[\mathcal{D}_{\infty},a^{\circ}]P_{n}^{\circ}\right\|_{\ell^{2}(G_{\infty},E)}\leq\mathcal{L}_{\infty}(a). \tag{4.5}\]
With all of the above identifications, we thus have a natural unital \(*\)-morphism from \(C_{\operatorname{red}}^{*}(G_{n},\sigma)\) into \(C_{\operatorname{red}}^{*}(G_{\infty},\sigma)\), which is just the natural inclusion, and
\[\lambda(g)\ell^{2}(G_{n}k)\subseteq\ell^{2}(G_{n}k)\]
for each \(g\in G_{n}\) and \(k\in G_{\infty}\). By linearity and continuity, we conclude that if \(a\in C_{\operatorname{red}}^{*}(G_{n},\sigma)\), then \(a\ell^{2}(G_{n}k)\subseteq\ell^{2}(G_{n}k)\) for all \(k\in G_{\infty}\). We also note that \([\mathcal{D}_{\infty},a^{\circ}]\ell^{2}(G_{n}k,E)\subseteq\ell^{2}(G_{n}k,E)\) for all \(k\in G_{\infty}\) and \(a\in\operatorname{dom}\left(\mathcal{L}_{n}\right)\).
We will work for the rest of this section with the above identifications and their basic properties without further mention.
Our main theorem in this section involves, in particular, a strong result about the convergence of some of the quantum compact metric spaces induced by our spectral triples: namely, we obtain some convergence in the sense of the _Lipschitz distance_.
The _Lipschitz distance_\(\operatorname{LipD}\), extended to noncommutative metric geometry in [37], is defined between any two quantum compact metric spaces \((\mathfrak{A},\mathcal{L}_{\mathfrak{A}})\) and \((\mathfrak{B},\mathcal{L}_{\mathfrak{B}})\), by
\[\operatorname{LipD}((\mathfrak{A},\mathcal{L}_{\mathfrak{A}}),(\mathfrak{B}, \mathcal{L}_{\mathfrak{B}})):=\\ \inf\left\{\ln(k):\exists\pi:(\mathfrak{A},\mathcal{L}_{\mathfrak{A}}) \rightarrow(\mathfrak{B},\mathcal{L}_{\mathfrak{B}})\text{ Lipschitz *-isomorphism with }\frac{1}{k}\mathcal{L}_{\mathfrak{A}}\leq\mathcal{L}_{\mathfrak{B}}\circ\pi \leq k\mathcal{L}_{\mathfrak{A}}\right\},\]
with the convention that \(\inf\emptyset=\infty\). Thus \(\operatorname{LipD}\) is finite only between quantum compact metric spaces built over isomorphic C\({}^{*}\)-algebras. As shown in [37], the Lipschitz distance dominates the Gromov-Hausdorff propinquity; in fact, closed balls for the Lipschitz distance are compact in the propinquity.
In particular, if \(\mathfrak{A}\) is a unital C\({}^{*}\)-algebra, and if \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are two Lipschitz seminorms over \(\mathfrak{A}\) with the same domain, then the identity is bi-Lipschitz, and we do obtain, by definition:
\[\operatorname{LipD}((\mathfrak{A},\mathcal{L}_{1}),(\mathfrak{A},\mathcal{L} _{2}))\leq\ln(C)\text{ if }\frac{1}{C}\mathcal{L}_{1}\leq\mathcal{L}_{2}\leq C\mathcal{L}_{1}.\]
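A minimal illustration of this bound: for any Lipschitz seminorm \(\mathsf{L}\) over a unital C\({}^{*}\)-algebra \(\mathfrak{A}\), taking \(\mathcal{L}_{1}=\mathsf{L}\) and \(\mathcal{L}_{2}=2\mathsf{L}\) (with the same domain), the identity map witnesses
\[\operatorname{LipD}((\mathfrak{A},\mathsf{L}),(\mathfrak{A},2\mathsf{L}))\leq\ln(2),\]
so a global rescaling of a quantum metric moves it only a controlled Lipschitz distance.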
We now prove our main result about inductive limits of discrete groups and the convergence, for the spectral propinquity, of their spectral triples. Note that below we use the notation established in Definition (4.6).
**Theorem 4.11**.: _With the notation and assumptions of Subsection 4.2, if_
\[(C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),\not{D}_{n})\]
_is a metric spectral triple for all \(n\in\overline{\mathds{N}}\), and if_
\[\left\{a\in\mathrm{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(a)\leq 1 \right\}=\mathrm{cl}\left(\left\{a\in C_{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1 \right\}\right),\]
_then_
\[\lim_{n\to\infty}\Lambda^{\mathsf{spec}}\left((C_{\mathrm{red}}^{*}(G_{n}, \sigma),\ell^{2}(G_{n},E),\not{D}_{n}),(C_{\mathrm{red}}^{*}(G_{\infty}, \sigma),\ell^{2}(G_{\infty},E),\not{D}_{\infty})\right)=0.\]
_Moreover, for any fixed \(k\in\mathds{N}\), the sequence \((C_{\mathrm{red}}^{*}(G_{k},\sigma),\mathsf{L}_{n})_{n\geq k}\) converges in the Lipschitz distance \(\operatorname{LipD}\) to the quantum compact metric space \((C_{\mathrm{red}}^{*}(G_{k},\sigma),\mathsf{L}_{\infty})\)._
Proof.: We shall check that the identity automorphism of \(C_{\mathrm{red}}^{*}(G_{\infty},\sigma)\) satisfies the hypothesis of Theorem (3.17).
Obviously, the identity is a full quantum isometry of \((C_{\mathrm{red}}^{*}(G_{\infty},\sigma),\mathsf{L}_{\infty})\).
Let \(C=2\,\mathrm{qdiam}\left(C^{*}(G_{\infty},\sigma),\mathsf{L}_{\infty}\right)\) -- note that since \(G_{\infty}\neq\{1\}\), we have \(C>0\). Let \(\mathrm{tr}:a\in C_{\mathrm{red}}^{*}(G_{\infty},\sigma)\mapsto\left\langle a \delta_{1},\delta_{1}\right\rangle_{\ell^{2}(G_{\infty})}\); \(\mathrm{tr}\) is a tracial state of \(C^{*}(G_{\infty},\sigma)\) which maps \(a\in C_{c}(G_{\infty})\) to \(a(1)\).
Fix \(\varepsilon\in\left(0,\frac{C}{2}\right)\). Since \((C_{\mathrm{red}}^{*}(G_{\infty},\sigma),\mathsf{L}_{\infty})\) is a quantum compact metric space by assumption, the set \(X_{\infty}:=\{a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right):\mathsf{L}_{ \infty}(a)\leq 1,\mathrm{tr}(a)=0\}\) is compact. Thus, there exists a finite \(\varepsilon\)-dense subset \(X_{\infty}^{\varepsilon}\subseteq X_{\infty}\). Since \(X_{\infty}=\mathrm{cl}\left(\left\{a\in C_{c}(G_{\infty}):\mathsf{L}_{\infty }(a)\leq 1,\mathrm{tr}(a)=0\right\}\right)\), we can moreover assume that \(X_{\infty}^{\varepsilon}\subseteq C_{c}(G_{\infty})\) as well.
Since \(X_{\infty}^{\varepsilon}\) is finite and each of its elements has finite support, there exists a finite subset \(S\subseteq G_{\infty}\) which contains the support of all the elements in \(X_{\infty}^{\varepsilon}\). Since \(G_{\infty}=\bigcup_{n\in\mathds{N}}G_{n}\) and \((G_{n})_{n\in\mathds{N}}\) is increasing, there exists \(N_{1}\in\mathds{N}\) such that, for all \(n\geq N_{1}\), we have \(S\subseteq G_{n}\). Thus \(X_{\infty}^{\varepsilon}\subseteq C_{c}(G_{n})\). Moreover, by Expression (4.5), we also obtain \(\mathsf{L}_{n}(a)\leq\mathsf{L}_{\infty}(a)\) for all \(a\in X_{\infty}^{\varepsilon}\).
In summary,
\[\forall a\in X_{\infty}\quad\exists b\in X_{\infty}^{\varepsilon}\subseteq C_{c}(G_{n})\subseteq C_{\mathrm{red}}^{*}(G_{n},\sigma):\quad\|a-b\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)}<\varepsilon\text{ and }\mathsf{L}_{n}(b)\leq\mathsf{L}_{\infty}(b)\leq 1.\]
If \(a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\), then there exists \(b\in X_{\infty}^{\varepsilon}\) such that \(\|a-\mathrm{tr}(a)-b\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)}<\varepsilon\). Of course, \(b+\mathrm{tr}(a)\in C_{\mathrm{red}}^{*}(G_{n},\sigma)\) and \(\mathsf{L}_{n}(b+\mathrm{tr}(a))=\mathsf{L}_{n}(b)\leq 1\). By homogeneity, it follows that for all \(a\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\), and for all \(n\geq N_{1}\), there exists \(b\in\mathrm{dom}\left(\mathsf{L}_{n}\right)\) such that \(\|a-b\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)}<\varepsilon\mathsf{L}_{ \infty}(a)\) and \(\mathsf{L}_{n}(b)\leq\mathsf{L}_{\infty}(a)\).
Now, using our assumption of Equation (4.2), there exists \(N_{2}\in\mathds{N}\), with \(N_{2}\geq N_{1}\), such that
\[\mathsf{Haus}[\mathds{L}_{H}]\left(G_{\infty},G_{n}\right)<\frac{\varepsilon} {C^{2}}.\]
For each right coset \(c\) of \(G_{n}\) in \(G_{\infty}\), let \(k\in c\). Since the distance for \(\mathds{L}_{H}\) from \(k\in G_{\infty}\) to \(G_{n}\) is strictly less than \(\frac{\varepsilon}{C^{2}}\), there exists \(g\in G_{n}\) such that \(\mathds{L}_{H}(g^{-1}k)<\frac{\varepsilon}{C^{2}}\). Setting \(k_{c}=g^{-1}k\), we have by definition of right cosets that \(c=G_{n}k_{c}\). Therefore, there exists a subset \(Q_{n}\subseteq G_{\infty}\) of \(G_{\infty}\) such that, if \(k\in Q_{n}\) then \(\mathds{L}_{H}(k)<\frac{\varepsilon}{C^{2}}\), and if \(c\) is a right coset of \(G_{n}\) in \(G_{\infty}\), then there exists a unique \(k\in Q_{n}\) such that \(c=G_{n}k\).
Let \(n\geq N_{2}\) and let \(b\in C_{c}(G_{n})\subseteq C_{\mathrm{red}}^{*}(G_{n},\sigma)\) with \(b(1)=\mathrm{tr}(b)=0\). Note that \(b\in\mathrm{dom}\left(\mathsf{L}_{\infty}\right)\cap\mathrm{dom}\left(\mathsf{L }_{n}\right)\) so, in particular, both \(\mathsf{L}_{n}(b)\) and \(\mathsf{L}_{\infty}(b)\) are _finite_.
We thus have \(\ell^{2}(G_{\infty})=\overline{\oplus}_{k\in Q_{n}}\ell^{2}(G_{n}k)\), where \(\overline{\oplus}\) is the Hilbert sum (the closure of the sum). If \(h\in G_{n}\), then, by definition of a right coset, \(\lambda(h)\ell^{2}(G_{n}k)\subseteq\ell^{2}(G_{n}k)\) for all \(k\in Q_{n}\). As \(\not{D}_{\infty}\left(\operatorname{dom}\left(\not{D}_{\infty}\right)\cap\ell^{2}(G_{n}k,E)\right)\subseteq\ell^{2}(G_{n}k,E)\) as well for all \(k\in Q_{n}\), we conclude that
\[\mathsf{L}_{\infty}(b)=\big{\|}[\not{D}_{\infty},b^{\circ}]\big{\|}_{\ell^{2}(G_{\infty},E)}=\sup_{k\in Q_{n}}\big{\|}[\not{D}_{\infty},b^{\circ}]\big{\|}_{\ell^{2}(G_{n}k,E)}. \tag{4.6}\]
For the unique \(k\in Q_{n}\cap G_{n}\), we have \(\ell^{2}(G_{n}k,E)=\ell^{2}(G_{n},E)\), and the restriction of \([\not{D}_{\infty},b^{\circ}]\) to this subspace is \([\not{D}_{n},b^{\circ}]\), whose norm is \(\mathsf{L}_{n}(b)\). Now, let \(k\in Q_{n}\setminus G_{n}\). If \(h\in G_{n}\) and \(m\in\mathds{N}\), then \(hk\in G_{m}\) if, and only if, \(k\in G_{m}\); hence \(\mathbb{F}\) is constant, equal to \(\mathbb{F}(k)\), on the coset \(G_{n}k\), so \([M_{\mathbb{F}},b]=0\) on \(\ell^{2}(G_{n}k)\), and therefore
\[[\not{D}_{\infty},b^{\circ}]=[M_{\mathbb{L}_{H}},b]\otimes\gamma_{1}\text{ on }\ell^{2}(G_{n}k,E). \tag{4.7}\]
Moreover, since \(|\mathbb{L}_{H}(h)-\mathbb{L}_{H}(hk)|\leq\mathbb{L}_{H}(k)\) for all \(h\in G_{n}\), a direct computation gives
\[\big{\|}[M_{\mathbb{L}_{H}},\rho(k)]\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}\leq\mathbb{L}_{H}(k), \tag{4.8}\]
and, since \(\rho(k)\) is a unitary from \(\ell^{2}(G_{n}k)\) onto \(\ell^{2}(G_{n})\) which commutes with \(b\),
\[\rho(k)[M_{\mathbb{L}_{H}},b]=[M_{\mathbb{L}_{H}},b]\rho(k)-[M_{\mathbb{L}_{H}},\rho(k)]b+b[M_{\mathbb{L}_{H}},\rho(k)]. \tag{4.9}\]
Therefore, by Equation (4.7),
\[\begin{split}\big{\|}[\not{D}_{\infty},b^{\circ}]\big{\|}_{\ell^{2}(G_{n}k,E)}&=\big{\|}\rho(k)[M_{\mathbb{L}_{H}},b]\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}\\ &\leq\big{\|}[M_{\mathbb{L}_{H}},b]\rho(k)\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}+\big{\|}[M_{\mathbb{L}_{H}},\rho(k)]b\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}+\big{\|}b[M_{\mathbb{L}_{H}},\rho(k)]\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}\quad\text{by Eq. (4.9)}\\ &\leq\mathsf{L}_{n}(b)+2\,\big{\|}b\big{\|}_{C_{\mathrm{red}}^{*}(G_{n},\sigma)}\big{\|}[M_{\mathbb{L}_{H}},\rho(k)]\big{\|}_{\ell^{2}(G_{n})}^{\ell^{2}(G_{n}k)}\\ &\leq\mathsf{L}_{n}(b)+2\,\mathbb{L}_{H}(k)\,\big{\|}b\big{\|}_{C_{\mathrm{red}}^{*}(G_{n},\sigma)}\quad\text{by Eq. (4.8)}\\ &\leq\mathsf{L}_{n}(b)+2\,\frac{\varepsilon}{C^{2}}\cdot\frac{C}{2}\,\mathsf{L}_{\infty}(b)=\mathsf{L}_{n}(b)+\frac{\varepsilon}{C}\mathsf{L}_{\infty}(b),\end{split}\]
where we used \(\big{\|}[M_{\mathbb{L}_{H}},b]\big{\|}_{\ell^{2}(G_{n})}\leq\mathsf{L}_{n}(b)\), the bound \(\mathbb{L}_{H}(k)<\frac{\varepsilon}{C^{2}}\), and \(\big{\|}b\big{\|}_{C_{\mathrm{red}}^{*}(G_{n},\sigma)}\leq\frac{C}{2}\mathsf{L}_{\infty}(b)\), which holds since \(\mathrm{tr}(b)=0\).
By Expression (4.6), we thus get
\[\mathbb{L}_{\infty}(b)\leq\mathbb{L}_{n}(b)+\frac{\varepsilon}{C}\mathbb{L}_{ \infty}(b).\]
Therefore, we have shown that since \(\varepsilon\in\big{(}0,\frac{C}{2}\big{)}\),
\[\forall b\in C_{c}(G_{n})\quad\mathrm{tr}(b)=0\implies\mathbb{L}_{\infty}(b) \leq\frac{1}{1-\frac{\varepsilon}{C}}\mathbb{L}_{n}(b). \tag{4.10}\]
Now, let \(b\in C_{c}(G_{n})\). We then easily compute:
\[\mathbb{L}_{\infty}(b)=\mathbb{L}_{\infty}(b-\mathrm{tr}(b)1)\leq\frac{1}{1- \frac{\varepsilon}{C}}\mathbb{L}_{n}(b-\mathrm{tr}(b)1)=\frac{1}{1-\frac{ \varepsilon}{C}}\mathbb{L}_{n}(b).\]
Now, let \(a\in\mathrm{dom}\,(\mathbb{L}_{n})\) with \(\mathbb{L}_{n}(a)\leq 1\). By assumption, there exists a sequence \((a_{k})_{k\in\mathbb{N}}\) converging in \(C^{*}_{\mathrm{red}}(G_{n},\sigma)\) to \(a\) such that \(\mathbb{L}_{n}(a_{k})\leq 1\) and \(a_{k}\in C_{c}(G_{n})\) for all \(k\in\mathbb{N}\). We thus have, by lower semicontinuity of \(\mathbb{L}_{n}\), and Expression (4.10):
\[\mathbb{L}_{\infty}(a)\leq\liminf_{k\to\infty}\mathbb{L}_{\infty}(a_{k})\leq \frac{1}{1-\frac{\varepsilon}{C}}\liminf_{k\to\infty}\mathbb{L}_{n}(a_{k}) \leq\frac{1}{1-\frac{\varepsilon}{C}}.\]
Thus, we have shown that, for all \(n\geq N_{2}\), if \(a\in\mathrm{dom}\,(\mathsf{L}_{n})\), then \(a\in\mathrm{dom}\,(\mathsf{L}_{\infty})\), and moreover,
\[\forall a\in\mathrm{dom}\,(\mathbb{L}_{n})\quad\mathbb{L}_{\infty}(a)\leq \frac{1}{1-\frac{\varepsilon}{C}}\mathbb{L}_{n}(a).\]
It is immediate by construction that \(\mathsf{L}_{n}\leq\mathsf{L}_{\infty}\) on \(\mathrm{dom}\,(\mathsf{L}_{n})\). Thus we have proven that for all \(n\geq N_{2}\) and \(k\geq n\), we have \(\mathsf{L}_{k}\leq\mathsf{L}_{\infty}\leq\frac{1}{1-\frac{\varepsilon}{C}}\mathsf{L}_{k}\) on \(\mathrm{dom}\,(\mathsf{L}_{k})\). As a byproduct of this, we have shown that \(\lim_{k\to\infty}\operatorname{LipD}((C^{*}(G_{n},\sigma),\mathsf{L}_{k}),(C^{*}(G_{n},\sigma),\mathsf{L}_{\infty}))=0\).
We now pause to note that, thanks to our identifications discussed prior to this theorem, and the observation that \(\mathrm{dom}\,(\mathsf{L}_{n})\subseteq\mathrm{dom}\,(\mathsf{L}_{\infty})\) which we have just established, \((C^{*}_{\mathrm{red}}(G_{n},\sigma),\ell^{2}(G_{n})\otimes E,\not{D}_{n})_{n\in\mathds{N}}\) is an inductive sequence of spectral triples in the sense of [20], where the \(*\)-morphisms from \(C^{*}_{\mathrm{red}}(G_{n},\sigma)\) to \(C^{*}_{\mathrm{red}}(G_{n+1},\sigma)\) and the linear isometry
from \(\ell^{2}(G_{n})\) to \(\ell^{2}(G_{n+1})\) are just the inclusion maps. Moreover \((C_{\mathrm{red}}^{*}(G_{\infty},\sigma),\ell^{2}(G_{\infty},E),D\!\!\!D_{\infty})\) is indeed the inductive limit of this system.
We now note that since \(\mathsf{L}_{\infty}\leq\left(\frac{1}{1-\frac{\varepsilon}{C}}\right)\mathsf{ L}_{n}\) and \(\varepsilon\in\left(0,\frac{C}{2}\right)\), we have
\[\operatorname{qdiam}\left(C_{\mathrm{red}}^{*}(G_{n},\sigma),\mathsf{L}_{n} \right)\leq\left(\frac{1}{1-\frac{\varepsilon}{C}}\right)\operatorname{qdiam} \left(C_{\mathrm{red}}^{*}(G_{\infty},\sigma),\mathsf{L}_{\infty}\right)=\frac {C^{2}}{2(C-\varepsilon)}\leq C.\]
Let \(b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\), and let \(a=\left(1-\frac{\varepsilon}{C}\right)b\in\operatorname{dom}\left(\mathsf{L}_ {\infty}\right)\). We then compute:
\[\left\|b-a\right\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)} =\left\|b-\left(1-\frac{\varepsilon}{C}\right)b\right\|_{C_{ \mathrm{red}}^{*}(G_{\infty},\sigma)}\] \[\leq\frac{\varepsilon}{C}\left\|b\right\|_{C_{\mathrm{red}}^{*}( G_{\infty},\sigma)}\] \[\leq\frac{\varepsilon}{C}\operatorname{qdiam}\left(C_{\mathrm{red }}^{*}(G_{n},\sigma),\mathsf{L}_{n}\right)\mathsf{L}_{n}(b)\] \[\leq\frac{\varepsilon}{C}C\mathsf{L}_{n}(b)=\varepsilon\mathsf{ L}_{n}(b),\]
while
\[\mathsf{L}_{\infty}(a)=\left(1-\frac{\varepsilon}{C}\right)\mathsf{L}_{\infty}(b)\leq\left(1-\frac{\varepsilon}{C}\right)\frac{1}{1-\frac{\varepsilon}{C}}\mathsf{L}_{n}(b)=\mathsf{L}_{n}(b).\]
Hence, if \(n\geq N_{2}\), then:
* \(\forall a\in\operatorname{dom}\left(\mathsf{L}_{\infty}\right)\quad\exists b \in\operatorname{dom}\left(\mathsf{L}_{n}\right):\quad\mathsf{L}_{n}(b)\leq \mathsf{L}_{\infty}(a)\) and \(\left\|b-a\right\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)}<\varepsilon \mathsf{L}_{\infty}(a)\),
* \(\forall b\in\operatorname{dom}\left(\mathsf{L}_{n}\right)\quad\exists a\in \operatorname{dom}\left(\mathsf{L}_{\infty}\right):\quad\mathsf{L}_{\infty}(a )\leq\mathsf{L}_{n}(b)\) and \(\left\|a-b\right\|_{C_{\mathrm{red}}^{*}(G_{\infty},\sigma)}<\varepsilon \mathsf{L}_{n}(b)\).
Therefore, by Theorem (3.17), we conclude that
\[\lim_{n\to\infty}\Lambda^{\mathsf{spec}}((C_{\mathrm{red}}^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),D\!\!\!D_{n}),(C_{\mathrm{red}}^{*}(G_{\infty},\sigma),\ell^{2}(G_{\infty},E),D\!\!\!D_{\infty}))=0,\]
as claimed.
We now wish to apply Theorem (4.11) to the family in Example (4.1), as well as to the Bunce-Deddens algebras. Thus, we shall now focus on Abelian groups.
So from now on we assume that \(G_{\infty}\) is Abelian. Therefore we will employ the additive notation for the groups \(G_{n}\) (\(n\in\overline{\mathbb{N}}\)). Since Abelian groups are amenable, we will also from now on identify \(C_{\mathrm{red}}^{*}(G_{n},\sigma)\) with \(C^{*}(G_{n},\sigma)\) for all \(n\in\overline{\mathbb{N}}\).
A key condition for Theorem (4.11) is always met when working with Abelian groups, as seen in the following lemma.
**Lemma 4.12**.: _With the assumptions and notation of Subsection (4.2), for any \(n\in\overline{\mathbb{N}}\), if \(G_{n}\) is Abelian, then we have that_
\[\left\{a\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(a) \leq 1\right\}=\operatorname{cl}\left(\left\{a\in C_{c}(G_{n}):\mathsf{L}_{n} (a)\leq 1\right\}\right).\]
Proof.: Fix \(n\in\overline{\mathbb{N}}\). Since \(\mathsf{L}_{n}\) is lower semicontinuous, we get
\[\operatorname{cl}\left(\left\{a\in\operatorname{dom}\left(\mathsf{L}_{n} \right)\cap C_{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1\right\}\right)\subseteq \left\{a\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(a) \leq 1\right\}.\]
We now prove that when \(G_{n}\) is Abelian, the converse inclusion holds.
Let \(\widehat{G_{n}}\) be the Pontryagin dual of \(G_{n}\) (we will use the multiplicative notation for \(\widehat{G_{n}}\)). The dual action \(\beta\) of \(\widehat{G_{n}}\) on \(C^{*}(G_{n},\sigma)\) is unitarily implemented by defining, for each \(z\in\widehat{G_{n}}\), the unitary \(v^{z}\) of \(\ell^{2}(G_{n},E)\) which is given by, for all \(\xi\in\ell^{2}(G_{n})\otimes E\):
\[v^{z}\xi:g\in G_{n}\longmapsto\overline{z(g)}\xi(g)(=z(-g)\xi(g)).\]
It is easily checked that \(z\in\widehat{G_{n}}\mapsto v^{z}\) is a unitary representation of \(\widehat{G_{n}}\). We then note that:
\[\forall z\in\widehat{G_{n}}\quad v^{z}\lambda_{E}(g)(v^{z})^{*}=\beta^{z}(\lambda_{E}(g)).\]
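Indeed, a direct computation gives this relation: for all \(\xi\in\ell^{2}(G_{n},E)\) and \(h\in G_{n}\), using \(z(h-g)=\overline{z(g)}z(h)\),
\[v^{z}\lambda_{E}(g)(v^{z})^{*}\xi(h)=\overline{z(h)}\,\sigma(g,h-g)\,z(h-g)\,\xi(h-g)=\overline{z(g)}\,\lambda_{E}(g)\xi(h),\]
so \(\beta^{z}(\lambda_{E}(g))=\overline{z(g)}\lambda_{E}(g)\) for all \(g\in G_{n}\) and \(z\in\widehat{G_{n}}\).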
By construction, \(D_{n}\) commutes with \(v^{z}\) for all \(z\in\widehat{G_{n}}\), so \(\beta\) acts by full quantum isometries on \((C^{*}(G_{n},\sigma),\mathsf{L}_{n})\).
Let \(\mu\) be the Haar probability measure on \(\widehat{G_{n}}\). As seen in [32, Lemma 3.1], [57, Theorem 8.2], there exists a sequence \((\varphi_{k})_{k\in\mathds{N}}\) of non-negative functions over \(\widehat{G_{n}}\), each obtained as a linear combination of characters of \(\widehat{G_{n}}\) (i.e. of the form \(z\in\widehat{G_{n}}\mapsto z(g)\) for some \(g\in G_{n}\), by Pontryagin duality), such that \(\int_{\widehat{G_{n}}}\varphi_{k}\,d\mu=1\) for all \(k\in\mathds{N}\), and \((\varphi_{k})_{k\in\mathds{N}}\) converges, in the sense of distributions, to the Dirac measure at \(1\in\widehat{G_{n}}\), i.e., for all \(f\in C(\widehat{G_{n}})\),
\[\lim_{k\to\infty}\int_{\widehat{G_{n}}}f(z)\,\varphi_{k}(z)d\mu(z)=f(1).\]
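For instance, if \(G_{n}=\mathds{Z}\), so that \(\widehat{G_{n}}=\mathds{T}\), the Fejér kernels
\[\varphi_{k}:z\in\mathds{T}\mapsto\sum_{|j|\leq k}\left(1-\frac{|j|}{k+1}\right)z^{j}=\frac{1}{k+1}\left|\sum_{j=0}^{k}z^{j}\right|^{2}\]
provide such a sequence: each \(\varphi_{k}\) is non-negative (being \(\frac{1}{k+1}\) times a squared modulus), is a linear combination of the characters \(z\mapsto z^{j}\), has unit integral against the Haar probability measure, and \((\varphi_{k})_{k\in\mathds{N}}\) converges to the Dirac measure at \(1\) in the sense of distributions.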
We define, for each \(k\in\mathds{N}\), the continuous linear endomorphism:
\[\beta^{\varphi_{k}}:a\in C^{*}(G_{n},\sigma)\mapsto\int_{\widehat{G_{n}}}\beta^{z}(a)\,\varphi_{k}(z)\,d\mu(z),\]
acting on \(C^{*}(G_{n},\sigma)\). Since the dual action is strongly continuous, we conclude that, for all \(a\in C^{*}(G_{n},\sigma)\):
\[\lim_{k\to\infty}\big{\|}\beta^{\varphi_{k}}(a)-a\big{\|}_{C^{*}(G_{n})}=0.\]
Since \(\mathsf{L}_{n}\) is lower semicontinuous, \(\varphi_{k}\geq 0\) and \(\int_{\widehat{G_{n}}}\varphi_{k}\,d\mu=1\) for all \(k\in\mathds{N}\), and \(\beta\) acts by quantum isometries, we also get, for all \(a\in\operatorname{dom}(\mathsf{L}_{n})\),
\[\mathsf{L}_{n}\big{(}\beta^{\varphi_{k}}(a)\big{)}\leq\int_{\widehat{G_{n}}}\varphi_{k}(z)\mathsf{L}_{n}(a)\,d\mu(z)=\mathsf{L}_{n}(a).\]
As a quick digression, lower semicontinuity also implies that \(\mathsf{L}_{n}(a)\leq\liminf_{k\to\infty}\mathsf{L}_{n}(\beta^{\varphi_{k}}(a))\), so altogether we have shown that \(\mathsf{L}_{n}(a)=\liminf_{k\to\infty}\mathsf{L}_{n}(\beta^{\varphi_{k}}(a))\).
For each \(k\in\mathds{N}\), as \(\varphi_{k}\) is a linear combination of characters of \(\widehat{G_{n}}\), there exists a finite subset \(F\subseteq G_{n}\) and a function \(t:F\to\mathds{C}\) such that \(\varphi_{k}:z\in\widehat{G_{n}}\mapsto\sum_{g\in F}t(g)z(g)\); the range of \(\beta^{\varphi_{k}}\) is then contained in the finite dimensional subspace of \(C_{c}(G_{n})\) consisting of the functions supported on \(F\). For our purpose, the main observations here are that, given \(a\in\operatorname{dom}(\mathsf{L}_{n})\) and \(\varepsilon>0\), there exists \(K\in\mathds{N}\) such that if \(k\geq K\), then \(\big{\|}a-\beta^{\varphi_{k}}(a)\big{\|}_{C^{*}(G_{n},\sigma)}<\varepsilon\) and \(\mathsf{L}_{n}(\beta^{\varphi_{k}}(a))\leq\mathsf{L}_{n}(a)\). In particular, again since \(\mathsf{L}_{n}\) is lower semi-continuous, it follows that:
\[\left\{a\in\operatorname{dom}(\mathsf{L}_{n}):\mathsf{L}_{n}(a)\leq 1\right\}=\operatorname{cl}\left(\left\{a\in\operatorname{dom}(\mathsf{L}_{n})\cap C_{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1\right\}\right), \tag{4.11}\]
as claimed.
_Remark 4.13_.: With the notation of the proof of Lemma (4.12), fix \(\mu\in\mathcal{S}(C^{*}(G_{n},\sigma))\). Since, for all \(k\in\mathds{N}\), the function \(\varphi_{k}\) is non-negative with unit integral against the Haar probability measure of \(\widehat{G_{n}}\), we conclude that \(\beta^{\varphi_{k}}\) is a unital map, and thus
\[\sup\left\{\big{\|}a-\beta^{\varphi_{k}}(a)\big{\|}_{C^{*}(G_{n})} :a\in\operatorname{dom}(\mathsf{L}_{n}),\mathsf{L}_{n}(a)\leq 1\right\}\\ =\sup\left\{\big{\|}a-\beta^{\varphi_{k}}(a)\big{\|}_{C^{*}(G_{n })}:a\in\operatorname{dom}(\mathsf{L}_{n}),\mathsf{L}_{n}(a)\leq 1,\mu(a)=0\right\}\]
where the second supremum is indeed finite since \(X=\left\{a\in\operatorname{dom}(\mathsf{L}_{n}):\mathsf{L}_{n}(a)\leq 1,\mu(a)=0\right\}\) is compact and we take the supremum of a continuous function over this set. In fact, the Arzelà-Ascoli theorem can be applied here to prove that the convergence of \((\beta^{\varphi_{k}})_{k\in\mathds{N}}\) to
the identity on \(X\) is uniform, though we here offer a simple \(\frac{\varepsilon}{3}\)-type argument. First, note that for all \(a,b\in C^{*}(G_{\infty})\), and for all \(k\in\mathds{N}\),
\[\left\|\beta^{\varphi_{k}}(a)-\beta^{\varphi_{k}}(b)\right\|_{C^{*}(G_{\infty})}\leq\int_{\widehat{G_{\infty}}}\left\|a-b\right\|_{C^{*}(G_{\infty})}\varphi_{k}(z)\,d\mu(z)=\left\|a-b\right\|_{C^{*}(G_{\infty})}.\]
Moreover, for all \(\varepsilon>0\), there exists a finite \(\frac{\varepsilon}{3}\)-dense subset \(X_{\varepsilon}\) of \(X\); as \(X_{\varepsilon}\) is finite, there exists \(K\in\mathds{N}\) such that, for all \(k\geq K\) and for all \(a^{\prime}\in X_{\varepsilon}\), we have \(\left\|a^{\prime}-\beta^{\varphi_{k}}(a^{\prime})\right\|_{C^{*}(G_{\infty})}<\frac{\varepsilon}{3}\), as seen above; therefore, for all \(k\geq K\), all \(a\in X\), and any \(a^{\prime}\in X_{\varepsilon}\) with \(\left\|a-a^{\prime}\right\|_{C^{*}(G_{\infty})}<\frac{\varepsilon}{3}\), we have
\[\left\|a-\beta^{\varphi_{k}}(a)\right\|_{C^{*}(G_{\infty})}\leq \left\|a-a^{\prime}\right\|_{C^{*}(G_{\infty})}+\left\|a^{\prime}-\beta^{ \varphi_{k}}(a^{\prime})\right\|_{C^{*}(G_{\infty})}+\left\|\beta^{\varphi_{k} }(a^{\prime}-a)\right\|_{C^{*}(G_{\infty})}\] \[\quad<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{ \varepsilon}{3}=\varepsilon.\]
This proves that indeed, \((\beta^{\varphi_{k}})_{k\in\mathds{N}}\) converges _uniformly_ to the identity over \(X\).
We will prove that some of the spectral triples introduced in Subsection (4.2) are metric by invoking a property central to the work in [9, 50], called _bounded doubling_, which we now recall in the formulation of [50].
**Definition 4.14** ([9, 50]).: A proper length function \(\mathds{L}\) on a discrete group \(G\) satisfies the _bounded doubling property_ when there exists \(\theta>1\) and \(c>0\) such that, for all \(r\geq 1\):
\[\left|\left\{g\in G:\mathds{L}(g)\leq\theta\cdot r\right\}\right|\leq c\left| \left\{g\in G:\mathds{L}(g)\leq r\right\}\right|.\]
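For instance, \(\mathds{Z}^{d}\) with the length function induced by the max norm has the bounded doubling property: for \(r\geq 1\), the closed ball of radius \(r\) has cardinality \((2\lfloor r\rfloor+1)^{d}\), while the ball of radius \(2r\) has cardinality \((2\lfloor 2r\rfloor+1)^{d}\leq(4\lfloor r\rfloor+3)^{d}\leq 3^{d}(2\lfloor r\rfloor+1)^{d}\), so one may take \(\theta=2\) and \(c=3^{d}\).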
The bounded doubling property indeed ensures the following result.
**Lemma 4.15**.: _The spectral triples constructed in Subsection (4.2) are metric if the proper length function \(\mathds{L}\coloneqq\max\{\mathds{L}_{H},\mathds{F}\}\) has the bounded doubling property._
Proof.: We note that Lemma (4.3) proves that \(\mathds{L}\) is a proper unbounded length function.
By [9, 50], since all our groups are Abelian hence nilpotent, for any \(\mu\in\mathcal{S}(C^{*}(G_{n},\sigma))\), the set
\[\left\{a\in C_{c}(G_{n}):\left\|\left|\left[M_{\mathds{L}},a\right]\right| \right\|_{\ell^{2}(G_{n})}\leq 1,\mu(a)=0\right\}\]
is totally bounded. Since \(\left\|\left[M_{\mathds{L}},\cdot\right]\right\|_{\ell^{2}(G_{n})}\leq\mathsf{L}_{n}\) on \(C_{c}(G_{n})\), we thus conclude that
\[\left\{a\in C_{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1,\mu(a)=0\right\}\subseteq \left\{a\in C_{c}(G_{n}):\left\|\left[M_{\mathds{L}},a\right]\right\|_{\ell^{2 }(G_{n})}\leq 1,\mu(a)=0\right\}\]
and thus \(\left\{a\in C_{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1,\mu(a)=0\right\}\) is also totally bounded. By Lemma (4.12), we also have:
\[\left\{a\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(a) \leq 1,\mu(a)=0\right\}=\operatorname{cl}\left(\left\{a\in C_{c}(G_{n}): \mathsf{L}_{n}(a)\leq 1,\mu(a)=0\right\}\right)\]
so \(\left\{a\in\operatorname{dom}\left(\mathsf{L}_{n}\right):\mathsf{L}_{n}(a) \leq 1,\mu(a)=0\right\}\) is compact. Thus by Theorem (2.9), \(\mathsf{L}_{n}\) is a Lipschitz seminorm, i.e. our spectral triples are metric.
We are now ready to establish the following theorem.
**Theorem 4.16**.: _Let \(G=\bigcup_{n\in\mathds{N}}G_{n}\) be an Abelian discrete group, arising as the union of a strictly increasing sequence \((G_{n})_{n\in\mathds{N}}\) of subgroups of \(G\). Let \(\sigma\) be a \(2\)-cocycle of \(G\) and \(\mathds{L}_{H}\) a length function on \(G\) such that_
\[\lim_{n\to\infty}\operatorname{Haus}[\mathds{L}_{H}]\left(G_{n},G\right)=0,\]
_and whose restriction to \(G_{n}\) is proper for all \(n\in\mathds{N}\). Assume \(\operatorname{scale}:\mathds{N}\to[0,\infty)\) is a strictly increasing, unbounded function such that, if we set_
\[\mathds{F}:g\in G\longmapsto\operatorname{scale}(\min\{n\in\mathds{N}:g\in G _{n}\})\]
_then the proper length function \(\mathbb{L}\coloneqq\max\{\mathbb{L}_{H},\mathbb{F}\}\) has the bounded doubling property._
_Then, for any Hermitian space \(E\),_
\[\lim_{n\to\infty}\Lambda^{\mathsf{spec}}((C^{*}(G,\sigma),\ell^{2}(G)\otimes E,I\!\!D),(C^{*}(G_{n},\sigma),\ell^{2}(G_{n})\otimes E,I\!\!D_{n}))=0,\]
_where_
* \(I\!\!D=M_{\mathbb{L}_{H}}\otimes\gamma_{1}+M_{\mathbb{F}}\otimes\gamma_{2}\) _on_ \(\left\{\xi\in\ell^{2}(G)\otimes E:\sum_{g\in G}(\mathbb{L}_{H}(g)^{2}+\mathbb{F }(g)^{2})\left\|\xi(g)\right\|_{E}^{2}<\infty\right\}\)_, with_ \(\gamma_{1},\gamma_{2}\) _unitaries of_ \(E\) _such that, for all_ \(j,k\in\{1,2\}\)_:_ \[\gamma_{j}\gamma_{k}+\gamma_{k}\gamma_{j}=\begin{cases}2\text{ if }j=k,\\ 0\text{ otherwise.}\end{cases}\]
* \(\ell^{2}(G_{n})\otimes E\) _is identified with the subspace of_ \(G_{n}\)_-supported vectors in_ \(\ell^{2}(G)\otimes E\)_,_
* \(I\!\!D_{n}\) _is the restriction of_ \(I\!\!D\) _to_ \(\operatorname{dom}\left(I\!\!D\right)\cap\left(\ell^{2}(G_{n})\otimes E\right)\)_,_
* \(C^{*}(G,\sigma)\) _and_ \(C^{*}(G_{n},\sigma)\) _act via their left regular_ \(\sigma\)_-projective representations._
Proof.: Our theorem follows from Theorem (4.11). We first note that Lemma (4.15) proves that all our spectral triples are metric. By Lemma (4.12), since \(G_{\infty}\) is Abelian, we conclude that, for all \(n\in\overline{\mathbb{N}}\),
\[\left\{a\in\operatorname{dom}\left(\mathbb{L}_{n}\right):\mathbb{L}_{n}(a) \leqslant 1\right\}=\operatorname{cl}\left(\left\{a\in C_{c}(G_{n}):\mathbb{L} _{n}(a)\leqslant 1\right\}\right).\]
Since all hypotheses of Theorem (4.11) are met, the result follows.
In particular, for the noncommutative solenoids of Example (4.1), we obtain the following.
**Corollary 4.17**.: _Fix a prime number \(p\in\mathbb{N}\) and \(d\in\mathbb{N}\setminus\{0,1\}\). For each \(n\in\mathbb{N}\), let_
\[G_{n}\coloneqq\left(\frac{1}{p^{n}}\mathbb{Z}\right)^{d}\]
_and_
\[G_{\infty}\coloneqq\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{d}.\]
_Fix a \(2\)-cocycle \(\sigma\) on \(G_{\infty}\) such that \(\forall g\in G_{\infty}\quad\sigma(g,-g)=1\)._
_Let \(\mathbb{L}_{H}\) be the restriction to \(G_{\infty}\) of some norm on \(\mathbb{R}^{d}\). We define \(\mathbb{F}\) by setting, for all \(g\in G_{\infty}\):_
\[\mathbb{F}(g)\coloneqq\min\left\{p^{n}:g\in\left(\frac{1}{p^{n}}\mathbb{Z}\right)^{d}\right\}.\]
_Let \(E\) be an even dimensional hermitian space, with \(\gamma_{1},\gamma_{2}\) be two unitaries on \(E\) such that, for all \(j,k\in\{1,2\}\):_
\[\gamma_{j}\gamma_{k}+\gamma_{k}\gamma_{j}=\begin{cases}2\text{ if }j=k,\\ 0\text{ otherwise.}\end{cases}\]
_If we define, for all \(n\in\overline{\mathbb{N}}\), the operator_
\[I\!\!D_{n}:=M_{\mathbb{L}_{H}}\otimes\gamma_{1}+M_{\mathbb{F}}\otimes\gamma_ {2}\text{ on }\operatorname{dom}\left(I\!\!D_{n}\right)\]
_on the domain_
\[\operatorname{dom}\left(I\!\!D_{n}\right)\coloneqq\left\{\xi\in\ell^{2}(G_{n},E):\sum_{g\in G_{n}}(\mathbb{L}_{H}(g)^{2}+\mathbb{F}(g)^{2})\left\|\xi(g) \right\|_{E}^{2}<\infty\right\},\]
_then, for all \(n\in\overline{\mathbb{N}}\), the triple \((C^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),I\!\!D_{n})\) is a metric spectral triple, and:_
\[\lim_{n\to\infty}\Lambda^{\mathsf{spec}}((C^{*}(G_{n},\sigma),\ell^{2}(G_{n},E),I\!\!D_{n}),(C^{*}(G_{\infty},\sigma),\ell^{2}(G_{\infty},E),I\!\!D_{\infty} ))=0.\]
_Moreover, for each \(n\in\mathds{N}\), the sequence \((C^{*}(G_{n},\sigma),\mathsf{L}_{k})_{k\geq n}\) of quantum compact metric spaces converge to \((C^{*}(G_{n},\sigma),\mathsf{L}_{\infty})\) in the Lipschitz distance._
Proof.: We first establish the bounded doubling property of certain related length functions.
Fix a prime number \(p\) and \(d\geq 2\). For all \(g\in G_{\infty}\), let
\[\mathbb{L}^{\prime}(g)=\max\left\{\big{\|}g\big{\|}_{\mathbb{R}^{d}},p^{\min \left\{n\in\mathds{N}:g\in\left(\frac{1}{p^{n}}\mathbb{Z}\right)^{d}\right\}} \right\},\]
where the norm we choose on \(\mathbb{R}^{d}\) for this proof is the \(\max\) norm. By Lemma (4.3), the function \(\mathbb{L}^{\prime}\) is an unbounded proper length function. By Lemma (4.7), we have that \(\left\|[M_{\mathbb{L}^{\prime}},\cdot]\right\|_{\ell^{2}(G_{n})}\leq\mathsf{L}_{n}\) on \(C_{c}(G_{n})\) for all \(n\in\overline{\mathds{N}}\). By [11], the triple \((C^{*}(G_{n},\sigma),\ell^{2}(G_{n}),M_{\mathbb{L}^{\prime}})\) is a spectral triple.
Assume \(\mathbb{L}^{\prime}(g)\leq p^{n}\). Since \(g\in\left(\frac{1}{p^{n}}\mathbb{Z}\right)^{d}\), we can write \(g=\left(\frac{a_{j}}{p^{n}}\right)_{1\leq j\leq d}\) for \(a_{1},\ldots,a_{d}\in\mathbb{Z}\). Since \(\big{\|}g\big{\|}_{\mathbb{R}^{d}}\leq p^{n}\), we also have \(a_{1},\ldots,a_{d}\in[-p^{2n},p^{2n}]\). Conversely, if \(g=\left(\frac{a_{j}}{p^{n}}\right)_{1\leq j\leq d}\) with \(-p^{2n}\leq a_{j}\leq p^{2n}\) for all \(j\in\{1,\ldots,d\}\), then \(\mathbb{L}^{\prime}(g)\leq p^{n}\) by definition. Hence, the closed ball centered at \(0\) of radius \(p^{n}\) has cardinality \((2p^{2n}+1)^{d}\).
Consequently:
\[\left|\left\{g\in G_{\infty}:\mathbb{L}^{\prime}(g)\leq p^{n+1} \right\}\right| =(2p^{2n+2}+1)^{d}\] \[\leq(2p^{2n+2}+p^{2})^{d}\] \[=p^{2d}(2p^{2n}+1)^{d}\] \[\leq p^{2d}\left|\left\{g\in G_{\infty}:\mathbb{L}^{\prime}(g) \leq p^{n}\right\}\right|.\]
Therefore, \(\mathbb{L}^{\prime}\) is a proper unbounded length with the bounded doubling property.
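As a minimal numerical check of this doubling estimate, take \(p=2\), \(d=2\), and \(n=1\): the ball of radius \(2\) has cardinality \((2\cdot 2^{2}+1)^{2}=81\), while the ball of radius \(4\) has cardinality \((2\cdot 2^{4}+1)^{2}=1089\leq 2^{4}\cdot 81=1296\), consistent with the doubling constant \(p^{2d}=16\).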
Let \(\mathbb{L}_{H}\) be any norm on \(\mathbb{R}^{d}\). Since all the norms on \(\mathbb{R}^{d}\) are equivalent, there exists \(C>0\) such that \(\frac{1}{C}\mathbb{L}_{H}\leq\left\|\cdot\right\|_{\mathbb{R}^{d}}\leq C \mathbb{L}_{H}\). Then
\[\frac{1}{C}(\max\{\mathbb{L}_{H},\mathbb{F}\})\leq\mathbb{L}^{\prime}\leq C \max\left\{\mathbb{L}_{H},\mathbb{F}\right\}.\]
Therefore,
\[\left|\left\{g\in G_{\infty}:\max\left\{\mathbb{L}_{H}(g),\mathbb{F}(g)\right\} \leq p^{n+1}\right\}\right|\leq C^{2}p^{2d}\left|\left\{g\in G_{\infty}:\max \left\{\mathbb{L}_{H}(g),\mathbb{F}(g)\right\}\leq p^{n}\right\}\right|.\]
Write \(\mathbb{L}:=\max\{\mathbb{L}_{H},\mathbb{F}\}\) on \(G_{\infty}\). We thus have shown that \(\mathbb{L}\), which is unbounded and proper by Lemma (4.3), also has the bounded doubling property.
By Lemma (4.12), since \(G_{\infty}\) is Abelian, we conclude that
\[\forall\,n\in\overline{\mathds{N}}\quad\left\{a\in\operatorname{dom}\left( \mathsf{L}_{n}\right):\mathsf{L}_{n}(a)\leq 1\right\}=\operatorname{cl}\left(\left\{a\in C _{c}(G_{n}):\mathsf{L}_{n}(a)\leq 1\right\}\right).\]
Thus, our corollary follows from Theorem (4.11).
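To make the length data of Corollary (4.17) concrete: for \(p=2\) and \(d=2\), the element \(g=\left(\frac{3}{4},\frac{5}{2}\right)\in G_{\infty}\) lies in \(\left(\frac{1}{2^{n}}\mathbb{Z}\right)^{2}\) exactly when \(n\geq 2\) (as \(\frac{3}{4}\notin\frac{1}{2}\mathbb{Z}\)), so \(\mathbb{F}(g)=2^{2}=4\), while \(\mathbb{L}_{H}(g)\) is simply the chosen norm of \(g\) in \(\mathbb{R}^{2}\).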
We can choose somewhat different length functions over \(\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)^{d}\), by varying not only \(\mathbb{L}_{H}\), but also \(\mathbb{F}\). For instance, Corollary (4.17) remains valid if we replace \(\mathbb{F}\) by \(\mathbb{F}^{\prime}:(g_{1},\ldots,g_{d})\in G_{\infty}\mapsto\max_{j=1}^{d}|g_{j}|_{p}\), where \(|\cdot|_{p}\) is now the \(p\)-adic absolute value. The resulting length function \(\max\{\mathbb{L}_{H},\mathbb{F}^{\prime}\}\) has the bounded doubling property, as seen by applying [19, Proposition 3.17] up to an equivalence of metrics. We also note that for this construction to give us something different from Corollary (4.17), we require that \(\mathbb{L}_{H}(g)<\mathbb{F}^{\prime}(g)\) for at least one \(g\in\mathbb{Z}^{d}\setminus\{0\}\). In general, the difference is only up to a bounded perturbation of the underlying Dirac operator.
Another interesting family of \(C^{*}\)-algebras to which our work applies are certain _Bunce-Deddens algebras_.
**Notation 4.18**.: Let \(\mathcal{P}\) be the set of all sequences \((\alpha_{n})_{n\in\mathds{N}}\) of nonzero natural numbers such that \(\frac{\alpha_{n+1}}{\alpha_{n}}\) is a prime number for all \(n\in\mathds{N}\).
**Notation 4.19**.: For any nonzero integer \(m\in\mathbb{Z}\), we denote the quotient group \(\mathbb{Z}/m\mathbb{Z}\) by \(\mathbb{Z}_{m}\), which we identify with the group \(\{\zeta\in\mathbb{C}:\zeta^{m}=1\}\) of \(m\)-th roots of unity in \(\mathbb{T}\).

**Notation 4.20**.: For any \(\alpha=(\alpha_{n})_{n\in\mathbb{N}}\in\mathcal{P}\), we set \(\mathbb{Z}(\alpha)\coloneqq\{\zeta\in\mathbb{C}:\exists n\in\mathbb{N}\quad\zeta^{\alpha_{n}}=1\}=\bigcup_{n\in\mathbb{N}}\mathbb{Z}_{\alpha_{n}}\), and we write \(\mathbb{Z}_{\alpha}\coloneqq\varprojlim_{n\in\mathbb{N}}\mathbb{Z}_{\alpha_{n}}\) for the group of \(\alpha\)-adic integers, which is the Pontryagin dual of \(\mathbb{Z}(\alpha)\).

**Lemma 4.21**.: _Let \(\alpha\in\mathcal{P}\). Let \(\mathbb{L}_{Z}\) be the restriction to \(\mathbb{Z}(\alpha)\) of a continuous length function on \(\mathbb{T}\), and set \(\mathbb{F}(1)\coloneqq 0\) and, for all \(\zeta\in\mathbb{Z}(\alpha)\setminus\{1\}\):_
\[\mathbb{F}(\zeta)\coloneqq\min\{\alpha_{n}:n\in\mathbb{N},\ \zeta^{\alpha_{n}}=1\}.\]
_If \(\|\cdot\|_{\mathbb{R}^{2}}\) is a monotone norm on \(\mathbb{R}^{2}\), then the function_
\[\mathbb{L}:\zeta\in\mathbb{Z}(\alpha)\longmapsto\left\|\left(\mathbb{L}_{Z}(\zeta),\mathbb{F}(\zeta)\right)\right\|_{\mathbb{R}^{2}}\]
_is a proper unbounded length function over \(\mathbb{Z}(\alpha)\)._
_Moreover, \(\mathbb{L}\) has the bounded doubling property if, and only if, the sequence \(\left(\frac{\alpha_{n+1}}{\alpha_{n}}\right)_{n\in\mathbb{N}}\) is bounded._
Proof.: First, it is easy to see that, for all \(\zeta\in\mathbb{Z}(\alpha)\setminus\{1\}\),
\[\mathbb{F}(\zeta)=\alpha_{\min\left\{n\in\mathbb{N}:\zeta\in\mathbb{Z}_{\alpha_{n}}\right\}},\]
while \(\mathbb{F}(1)=0\). Therefore, by Lemma (4.3), we already know that \(\mathbb{L}\) is a proper unbounded length function on \(\mathbb{Z}(\alpha)\).
For now, let us assume \(\|\cdot\|_{\mathbb{R}^{2}}\) is the max norm.
For any \(\rho>0\), we write \(B[\rho]\) for the cardinality of the closed ball centered at the identity \(1\in\mathbb{Z}(\alpha)\) of radius \(\rho\). For any \(d\in\mathbb{N}\), we compute the following expression:
\[B\left[\alpha_{d}\right]=\left|\{\zeta\in\mathbb{Z}(\alpha):\mathbb{L}(\zeta)\leq\alpha_{d}\}\right|=\left|\mathbb{Z}_{\alpha_{d}}\right|=\alpha_{d}.\]
Now, let \(R\geq 1\). Then, there exists \(d\in\mathbb{N}\) such that \(\alpha_{d}\leq R\leq\alpha_{d+1}\). We note that since \(B[R]\leq B[\alpha_{d+1}]<\infty\), our length function \(\mathbb{L}\) is indeed proper; we also note that since \(B[R]\geq B[\alpha_{d}]=\alpha_{d}\geq 2^{d}\), the length function \(\mathbb{L}\) is also unbounded.
Now, assume that \(M\coloneqq\sup_{n\in\mathbb{N}}\frac{\alpha_{n+1}}{\alpha_{n}}<\infty\). We then compute:
\[B[2R]\leq B[2\alpha_{d+1}] \leq B[\alpha_{d+2}]=\alpha_{d+2}\] \[=\frac{\alpha_{d+2}}{\alpha_{d+1}}\frac{\alpha_{d+1}}{\alpha_{d}} \alpha_{d}\leq M^{2}\alpha_{d}=M^{2}B[\alpha_{d}]\leq M^{2}B[R].\]
Therefore, our length \(\mathbb{L}\) has the bounded doubling property. Now, if we allow for a different choice of monotone norm for \(\|\cdot\|_{\mathbb{R}^{2}}\), then, as all norms on \(\mathbb{R}^{2}\) are equivalent, the resulting length function still has the property of bounded doubling.
Now, assume instead that \(\sup_{n\in\mathbb{N}}\frac{\alpha_{n+1}}{\alpha_{n}}=\infty\). Let \(n\in\mathbb{N}\), and let \(r_{n}=\alpha_{n+1}/2\). We then note, using our above computation, that
\[B[2r_{n}]=\alpha_{n+1}=\frac{\alpha_{n+1}}{\alpha_{n}}\cdot B[r_{n}],\]
and thus \(\frac{\alpha_{n+1}}{\alpha_{n}}=\frac{B[2r_{n}]}{B[r_{n}]}\) for all \(n\in\mathbb{N}\); therefore, our length \(\mathbb{L}\) does not actually have the bounded doubling property.
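For instance, if \(\alpha_{n}=2^{n}\) for all \(n\in\mathbb{N}\), then \(M=2\) and the computation above gives \(B[2R]\leq 4B[R]\) for all \(R\geq 1\); if instead the ratios \(\frac{\alpha_{n+1}}{\alpha_{n}}\) enumerate the primes in increasing order, then \(\sup_{n\in\mathbb{N}}\frac{\alpha_{n+1}}{\alpha_{n}}=\infty\), and the lemma shows that bounded doubling fails.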
**Corollary 4.22**.: _Let \(\alpha=(\alpha_{n})_{n\in\mathbb{N}}\) be a sequence of nonzero natural numbers such that \(\left(\frac{\alpha_{n+1}}{\alpha_{n}}\right)_{n\in\mathbb{N}}\) is a bounded sequence of prime numbers, and let_
\[\mathbb{Z}(\alpha):=\left\{\zeta\in\mathbb{C}:\exists n\in\mathbb{N}\quad \zeta^{\alpha_{n}}=1\right\}.\]
_Define:_
\[G_{\infty}=\mathbb{Z}(\alpha)\times\mathbb{Z}\text{ and }\forall n\in\mathbb{N}\quad G_{n}\coloneqq\mathbb{Z}_{\alpha_{n}}\times\mathbb{Z},\]
_i.e. \(G_{n}=\{(\zeta,z)\in G_{\infty}:z\in\mathbb{Z},\zeta^{\alpha_{n}}=1\}\). Let \(\sigma\) be a \(2\)-cocycle of \(G_{\infty}\)._
_Let \(\mathbb{L}_{Z}\) be the restriction of any continuous length function on \(\mathbb{T}\) to \(\mathbb{Z}(\alpha)\), and define \(\mathbb{L}_{H}:(\zeta,z)\in G_{\infty}\mapsto\mathbb{L}_{Z}(\zeta)+|z|\)._
_For all \(\zeta\in\mathbb{Z}(\alpha)\), set:_
\[\mathbb{F}(\zeta)\coloneqq\min\{\alpha_{n}:n\in\mathbb{N},\ \zeta^{\alpha_{n}}=1\}.\]
_Let \(E\) be a Hermitian vector space, and let \(\gamma_{1},\gamma_{2}\) be unitaries such that \(\gamma_{1}\gamma_{2}=-\gamma_{2}\gamma_{1}\) and \(\gamma_{1}^{2}=\gamma_{2}^{2}=1_{E}\)._
_If we set, for all \(n\in\overline{\mathbb{N}}\),_
\[\mathbb{D}_{n}\coloneqq M_{\mathbb{L}_{H}}\otimes\gamma_{1}+M_{\mathbb{F}} \otimes\gamma_{2},\]
_then for all \(n\in\mathds{N}\), the spectral triple \((C^{*}(G_{n},\sigma),\ell^{2}(G_{n})\otimes E,D_{n})\) is metric, and_
\[\lim_{n\to\infty}\Lambda^{\text{spec}}\left(\left(C^{*}(G_{n},\sigma),\ell^{2}( G_{n})\otimes E,D_{n}\right),\left(C^{*}(\mathds{Z}(\alpha)\times\mathds{Z}, \sigma),\ell^{2}(\mathds{Z}(\alpha)\times\mathds{Z})\otimes E,D_{\infty} \right)\right)=0.\]
Proof.: A straightforward computation shows that \(|\cdot|\) is proper with the bounded doubling property.
By [19, Proposition 3.7] applied to the proper unbounded lengths \(\mathds{L}_{Z}+\mathds{F}\) on \(\mathds{Z}(\alpha)\) and \(|\cdot|\) on \(\mathds{Z}\), we conclude that \(\mathds{L}\coloneqq(\zeta,z)\in G_{\infty}\mapsto\mathds{L}_{Z}(\zeta)+\mathds{F}(\zeta)+|z|\) has the bounded doubling property.
Since \(\mathbb{L}_{Z}\) is the restriction of a continuous length function on \(\mathbb{T}\), it induces the usual topology on \(\mathbb{T}\) (as a subset of \(\mathbb{C}\)). Therefore, the topology of the Hausdorff distance \(\mathsf{Haus}[\mathbb{L}_{H}]\) is the Vietoris topology for the usual topology of \(\mathbb{T}\), and thus the same as the topology induced by \(\mathsf{Haus}[\mathbb{T}]\), when \(\mathbb{T}\) is endowed with the restriction of the usual metric on \(\mathbb{C}\). It then follows that:
\[\lim_{n\to\infty}\mathsf{Haus}[\mathbb{L}_{H}]\left(\mathbb{Z}(\alpha_{n}),\mathbb{Z}(\alpha)\right)=0,\]
where \(\mathbb{Z}(\alpha_{n})\coloneqq\{\zeta\in\mathbb{C}:\zeta^{\alpha_{n}}=1\}\) is the group of \(\alpha_{n}\)-th roots of unity.
As all the other assumptions are now met, we conclude that our corollary holds, by Theorem (4.16).
The map
\[\omega:z\in\mathbb{Z}\mapsto(z\bmod\alpha_{n})_{n\in\mathbb{N}}\in\mathbb{Z}_{\alpha},\]
where \(\mathbb{Z}_{\alpha}\coloneqq\varprojlim_{n}\mathbb{Z}/\alpha_{n}\mathbb{Z}\) denotes the group of \(\alpha\)-adic integers, is an injective group morphism with dense range. Now, we define the following transformation (the odometer) of \(\mathbb{Z}_{\alpha}\):
\[\tau:u\in\mathbb{Z}_{\alpha}\mapsto u+\omega(1).\]
The C\({}^{*}\)-crossed-product \(C(\mathbb{Z}_{\alpha})\rtimes_{\tau}\mathbb{Z}\) is the Bunce-Deddens algebra associated to the "supernatural" number
\[\mathbf{n}\coloneqq\left(p^{\left|\left\{n\in\mathbb{N}\,:\,\frac{\alpha_{n+1}}{\alpha_{n}}=p\right\}\right|}\right)_{p\text{ prime}}.\]
It is also \({}^{*}\)-isomorphic to \(C^{*}(\mathbb{Z}(\alpha)\times\mathbb{Z},\sigma)\), as defined above, when \(\sigma\) is the \(2\)-cocycle defined by setting, for all \((\zeta,z),(\eta,y)\in G_{\infty}\):
\[\sigma((\zeta,z),(\eta,y))\coloneqq\eta^{z}.\]
Indeed, this isomorphism can be obtained by using [52]. We begin with the observation that Bunce-Deddens algebras [8] are C\({}^{*}\)-crossed products [54, 18]. Now, let us briefly explain the construction of this isomorphism. Since the natural inclusion \(j:\mathbb{Z}(\alpha)\to\mathbb{T}\) is a character of \(\mathbb{Z}(\alpha)\), it is given by the pairing with an element of \(\mathbb{Z}_{\alpha}\); this element is precisely our \(\omega(1)\) defined above. In our case, we note that \(\lambda_{(1,1)}\lambda_{(\zeta,0)}\lambda_{(1,1)}^{*}=\zeta^{-1}\lambda_{(\zeta,0)}\) for all \(\zeta\in\mathbb{Z}(\alpha)\). If \(f\in C\left(\mathbb{Z}_{\alpha}\right)\), we denote its Fourier transform by \(\widehat{f}\); specifically
\[\widehat{f}:\zeta\in\mathbb{Z}(\alpha)\mapsto\int_{\mathbb{Z}_{\alpha}}f(z)\,\zeta^{-z}\,d\mu(z),\]
where \(\zeta^{z}\) denotes the canonical pairing between \(\mathbb{Z}(\alpha)\) and \(\mathbb{Z}_{\alpha}\), and \(\mu\) is the Haar probability measure on \(\mathbb{Z}_{\alpha}\).
A straightforward computation shows that \(\widehat{f\circ\tau}(\zeta)=\zeta^{-1}\widehat{f}(\zeta)\). Thus, we conclude that \(\lambda_{(1,1)}\lambda(\widehat{f})\lambda_{(1,1)}^{*}=\lambda\left(\widehat{f\circ\tau}\right)\). A similar computation invoking the inverse Fourier transform can be done by using the canonical generators of the C\({}^{*}\)-crossed product \(C\left(\mathbb{Z}_{\alpha}\right)\rtimes_{\tau}\mathbb{Z}\). By universality of the C\({}^{*}\)-crossed-product and of the twisted group C\({}^{*}\)-algebra (here, since our groups are Abelian, hence amenable, these algebras agree with their images under their left regular representations), we conclude the description of our isomorphism.
Thus, we have constructed metric spectral triples over the Bunce-Deddens algebras associated with bounded supernatural numbers, and these triples are limits of sequences of metric spectral triples for the spectral propinquity.
In particular, \(C^{*}(\mathbb{Z}(\alpha)\times\mathbb{Z},\sigma)\) is seen to be the inductive limit _and_ the limit for the propinquity, with the quantum metrics described here, of the C\({}^{*}\)-algebras \(C^{*}\left(\widehat{\mathbb{Z}/\alpha_{n}\mathbb{Z}}\times\mathbb{Z},\sigma\right)\) as \(n\in\mathbb{N}\) approaches \(\infty\). Notably, \(C^{*}\left(\widehat{\mathbb{Z}/\alpha_{n}\mathbb{Z}}\times\mathbb{Z},\sigma\right)\) is actually \({}^{*}\)-isomorphic to the C\({}^{*}\)-algebra of continuous sections of a bundle over the circle \(\mathbb{T}\) whose fibers are the algebras of \(\alpha_{n}\times\alpha_{n}\) complex matrices. This situation is of course reminiscent of the fact that Bunce-Deddens algebras are AT algebras. However, starting from the usual description of Bunce-Deddens algebras as AT algebras led to difficulties in [6], where the quantum metrics on the Bunce-Deddens algebra do not arise from a spectral triple, and the convergence is only proven in the sense of Rieffel's quantum Gromov-Hausdorff distance. Thus, for Bunce-Deddens algebras associated with supernatural numbers involving only finitely many prime numbers, we have now constructed metric spectral triples which actually capture their inductive limit structure within our geometric framework. We hope that Theorems (4.11) and (4.16) will prove useful in constructing other examples of metric spectral triples over twisted group C\({}^{*}\)-algebras for interesting inductive limits of groups.
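As a concrete instance (our own illustration): take \(\alpha_{n}=2^{n}\). Every ratio \(\frac{\alpha_{n+1}}{\alpha_{n}}\) equals the prime \(2\), so \(\mathbf{n}=2^{\infty}\), \(\mathbb{Z}(\alpha)\) is the group of dyadic roots of unity, and \(\mathbb{Z}_{\alpha}=\mathbb{Z}_{2}\) is the group of \(2\)-adic integers. The algebra \(C(\mathbb{Z}_{2})\rtimes_{\tau}\mathbb{Z}\) is then the classical Bunce-Deddens algebra of type \(2^{\infty}\), obtained here as the limit, for the spectral propinquity, of the algebras \(C^{*}(\widehat{\mathbb{Z}/2^{n}\mathbb{Z}}\times\mathbb{Z},\sigma)\).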
| In the framework of metric geometry, we introduce a new convergence condition for the Gromov-Hausdorff propinquity, a noncommutative analogue of the Gromov-Hausdorff distance for quantum compact metric spaces. This condition is easy to verify in many examples of quantum compact metric spaces (such as AF algebras or certain twisted group C*-algebras of discrete groups). The condition also applies to convergence for the spectral propinquity, a generalization of the Gromov-Hausdorff idea to spectral triples. In particular, we establish the convergence of the state spaces of the quantum compact metric spaces involved, as well as of the quantum dynamics induced by the Dirac operators. We apply these results to new classes of examples, including noncommutative solenoids and Bunce-Deddens C*-algebras. The construction relies on length functions with bounded doubling |
2309.04630 | Enhancing Missing Data Imputation of Non-stationary Signals with
Harmonic Decomposition | Dealing with time series with missing values, including those afflicted by
low quality or over-saturation, presents a significant signal processing
challenge. The task of recovering these missing values, known as imputation,
has led to the development of several algorithms. However, we have observed
that the efficacy of these algorithms tends to diminish when the time series
exhibit non-stationary oscillatory behavior. In this paper, we introduce a
novel algorithm, coined Harmonic Level Interpolation (HaLI), which enhances the
performance of existing imputation algorithms for oscillatory time series.
After running any chosen imputation algorithm, HaLI leverages the harmonic
decomposition based on the adaptive nonharmonic model of the initial imputation
to improve the imputation accuracy for oscillatory time series. Experimental
assessments conducted on synthetic and real signals consistently highlight that
HaLI enhances the performance of existing imputation algorithms. The algorithm
is made publicly available as a readily employable Matlab code for other
researchers to use. | Joaquin Ruiz, Hau-tieng Wu, Marcelo A. Colominas | 2023-09-08T22:45:54 | http://arxiv.org/abs/2309.04630v1 | # Enhancing missing data imputation of non-stationary signals with harmonic decomposition
###### Abstract.
Dealing with time series with missing values, including those afflicted by low quality or over-saturation, presents a significant signal processing challenge. The task of recovering these missing values, known as imputation, has led to the development of several algorithms. However, we have observed that the efficacy of these algorithms tends to diminish when the time series exhibit non-stationary oscillatory behavior. In this paper, we introduce a novel algorithm, coined Harmonic Level Interpolation (HaLI), which enhances the performance of existing imputation algorithms for oscillatory time series. After running any chosen imputation algorithm, HaLI leverages the _harmonic decomposition_ based on the _adaptive nonharmonic model_ of the initial imputation to improve the imputation accuracy for oscillatory time series. Experimental assessments conducted on synthetic and real signals consistently highlight that HaLI enhances the performance of existing imputation algorithms. The algorithm is made publicly available as a readily employable Matlab code for other researchers to use.
Imputation; missing data; adaptive nonharmonic model; harmonic decomposition.
## 1. Introduction
Missing data is a pervasive issue [1, 2] that necessitates different strategies for handling depending on the dataset type and data analysis objectives, presenting various challenges to researchers. This paper centers on the predicament of missing values within _oscillatory time series_ and the exploration of imputation methods for their retrieval. This issue frequently surfaces in digital health, particularly during extended health monitoring using biomedical sensors. Factors such as patient motion, sensor disruptions, and calibration hurdles can lead to absent values in recorded biomedical time series. While akin challenges exist in other fields, our focus predominantly rests on biomedical applications. Notably, the insights we provide can be extended to relevant time series analysis in different domains.
### Categorization of missing values
A traditional classification of missing data was proposed in [3, 4] for the general missing value problem that treats the missing data indicators as random variables. The description depends on the observed and unobserved values in a dataset. _Missing completely at random_ (MCAR) refers to cases where the data missingness does not depend on the data, either observed or unobserved. Note that it does not mean that the missing pattern is random. _Missing at random_ (MAR) refers to cases where the data missingness does not depend on unobserved data. MAR along with the assumption that the parameters of the missing data mechanism and sampling are distinct is called _ignorable_
missingness. Cases where the missingness depends on the unobserved values themselves are called _missing not at random_ (MNAR).

Over the years, a wide range of imputation algorithms has been developed for time series, including the fuzzy algorithm [12], manifold learning [13], Bayesian estimation [14], an autoregressive model based approach [15], empirical mode decomposition (EMD) [16, 17], and deep learning approaches such as BRITS [18], GP-VAE [19] and imputeGAN [20], among others. The main difference between the traditional algorithms and the deep learning methods is that the former can be applied directly to target signals without requiring lengthy training periods and large datasets.
### Our contribution
We introduce a novel algorithm, named Harmonic Level Interpolation (HaLI), for handling x-missing data imputation in non-stationary signals using the adaptive non-harmonic model (ANHM). HaLI consists of two main parts. The first part utilizes an existing imputation technique designed for oscillatory signals to initially fill in data gaps. See Table 1 for a list of considered techniques and Section 4 for a summary of these methods. In the second phase, HaLI performs harmonic decomposition on the imputed signal and subsequently interpolates the decomposed harmonics. Our results demonstrate improved performance, particularly when the amplitude and phases of the signal's harmonic components follow specific regularity conditions, across synthetic and real-world biomedical signals.
It is important to note the presence of analogous imputation algorithms rooted in non-stationary signal decomposition. For instance, in [16], a gap-filling technique based on decomposing the signal into intrinsic mode functions (IMFs) by EMD is proposed. This method takes advantage of the slow oscillations and regularity of the IMFs to improve missing data imputation. In [17], authors employ polynomial interpolation to fill in missing data on the IMF level,
Figure 1. Examples of missing data in the biomedical field. Top: Photoplethysmography signal depicting missing data resulting from sensor disconnection. Middle: Oversaturated airflow signal featuring saturated peaks highlighted by red arrows. Bottom: Airflow signal showcasing a low-quality segment emphasized by the red box.
and then reconstruct the original signal through mode superposition. The advantage of the EMD-based decomposition approach over other techniques was only demonstrated for short intervals of missing data. It is also worth noting that EMD-based techniques might face challenges like _mode mixing_ inherent to EMD. Furthermore, their theoretical underpinnings remain incomplete.
### Paper organization
The subsequent sections of this paper are structured as follows. Sec. 2 describes the ANHM, a phenomenological model for non-stationary signals. Sec. 3 introduces our main algorithm, HaLI. Sec. 4 summarizes existing time-series imputation methods, along with considerations for automating their parameter tuning. In Sec. 5, the experiments on synthetic and real-world signals are described. Finally, the relevant results of our proposal and further work are discussed in Sec. 6.
## 2. Model of Non-Stationary Oscillatory Signals
Biomedical signals often exhibit dynamic traits owing to the ever-changing nature of their underlying systems. In this study, we focus on _oscillatory_ signals with time-varying amplitude, called amplitude modulation (AM), and time-varying frequency, called frequency modulation (FM), along with oscillatory patterns. Vital (patho-)physiological insights for medical diagnosis and monitoring are embedded within AM, FM, and oscillatory patterns. Previous works, such as [21, 22], introduced the _adaptive harmonic model_ (AHM) to capture AM and FM. Expanding further to encompass non-sinusoidal oscillatory patterns, the AHM was broadened into the ANHM [23]. Furthermore, the gradual trends often carry critical information. For instance, the mean arterial blood pressure (MAP) relies on the average arterial blood pressure (ABP) signal across a cycle. The MAP's temporal progression can be modeled through the ABP signal's trend. Consequently, signal estimation methods, including missing data imputation, must consider this gradual trend. Under this consideration, the ANHM with a _fixed_ oscillatory pattern satisfies
\[x_{0}(t)=\sum_{k=1}^{K}A_{k}(t)s_{k}(\phi_{k}(t))+T(t)+\Phi(t), \tag{1}\]
where \(K\in\mathbb{N}\) is the number of oscillatory components in the signal \(x\), \(A_{k}\in C^{1}(\mathbb{R})\) is a positive function, \(\phi_{k}\in C^{2}(\mathbb{R})\) is a monotonically strictly increasing function, \(s_{k}\) is a 1-periodic smooth function with unit \(L^{2}\) norm, mean 0 and \(|\hat{s}_{k}(1)|>0\), \(T(t)\) is a slow-varying trend function, and \(\Phi\) is a random process with mean 0 and finite variance modeling the noise. We call \(A_{k}(t)s_{k}(\phi_{k}(t))\) the \(k\)-th _intrinsic mode type_ (IMT) function, \(A_{k}(t)\) (\(\phi_{k}(t)\), \(\phi_{k}^{\prime}(t)>0\) and \(s_{k}\) respectively) the AM (phase, instantaneous frequency (IF) and _wave-shape function_ (WSF) respectively) of the \(k\)-th IMT function. To avoid distraction, we focus on the assumption that \(\phi_{1}^{\prime}(t)<\phi_{2}^{\prime}(t)<\ldots<\phi_{K}^{\prime}(t)\) and \(\inf_{t}\phi_{1}^{\prime}(t)>0\); that is, we do not consider the mode-mixing setup where the IF of different IMT functions might overlap. To guarantee the identifiability of the model [24], the following slowly time-varying conditions are needed for \(A_{k}(t)\) and \(\phi_{k}^{\prime}(t)\):
\[\text{(C1)}\quad|A_{k}^{\prime}(t)|<\epsilon|\phi_{k}^{\prime}(t)|\,,\qquad\text{(C2)}\quad|\phi_{k}^{\prime\prime}(t)|<\epsilon|\phi_{k}^{\prime}(t)|\,,\]
for all \(t\in\mathbb{R}\), where \(k=1,\dots,K\) and \(\epsilon>0\) is a small constant. Moreover, \(T(t)\) is assumed to vary slowly so that its spectrum is supported in \([-\delta,\delta]\), where \(0\leq\delta<\inf\limits_{t}\,\phi_{1}^{\prime}(t)\).
In (1), each IMT function oscillates with a single WSF, indicating a consistent oscillatory pattern over time. Nonetheless, research highlights that in real-world signals, especially biomedical ones, the oscillatory pattern often varies rather than remaining static [25, 26]. Thus, the model (1) is extended to encompass this dynamic oscillatory pattern, also known as a _time-varying WSF_[25]:
\[x_{1}(t)=\sum_{k=1}^{K}\sum_{\ell=1}^{\infty}B_{k,\ell}(t)\cos(2\pi\phi_{k,\ell }(t))+T(t)+\Phi(t)\,, \tag{2}\]
with the following assumptions satisfied for a fixed small constant \(\epsilon^{\prime}\geq 0\) (stated for each fixed \(k\), whose index we drop for readability):
* (C3) \(B_{l}\in C^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) for \(l=1,2,\dots\), with \(B_{1}(t)>0\) and \(B_{l}(t)\geq 0\) for all \(t\) and \(l=2,3,\dots\), and \(B_{l}(t)\leq c(l)B_{1}(t)\) for all \(t\in\mathbb{R}\) and \(l=1,2,\dots\), where \(\{c(l)\}_{l=1}^{\infty}\) is a non-negative \(\ell^{1}\) sequence. Moreover, there exists \(N\in\mathbb{N}\) so that \(\sum_{l=N+1}^{\infty}B_{l}(t)\leq\epsilon^{\prime}\sqrt{\sum_{l=1}^{\infty}B_{l}^{2}(t)}\) and \(\sum_{l=N+1}^{\infty}lB_{l}(t)\leq D\sqrt{\sum_{l=1}^{\infty}B_{l}^{2}(t)}\) for some constant \(D>0\).
* (C4) \(\phi_{l}\in C^{2}(\mathbb{R})\) and \(|\phi_{l}^{\prime}(t)-l\phi_{1}^{\prime}(t)|\leq\epsilon^{\prime}\phi_{1}^{\prime}(t)\), for all \(t\in\mathbb{R}\) and \(l=1,\dots,\infty\).
* (C5) \(|B_{l}^{\prime}(t)|\leq\epsilon^{\prime}c(l)\phi_{1}^{\prime}(t)\) and \(|\phi_{l}^{\prime\prime}(t)|\leq\epsilon^{\prime}l\phi_{1}^{\prime}(t)\) for all \(t\in\mathbb{R}\), and \(\sup_{l;\;B_{l}\neq 0}\|\phi_{l}^{\prime\prime}\|_{\infty}=M\) for some \(M\geq 0\).
We also call (2) the ANHM model and \(\sum_{\ell=1}^{\infty}B_{k,\ell}(t)\cos(2\pi\phi_{k,\ell}(t))\) the \(k\)-th IMT function. This model introduces a level of regularity to amplitudes and phases (and, in turn, frequencies), allowing their estimation through smooth curve interpolation methods (see Sec. 3.4). We call \(B_{k,\ell}(t)\cos(2\pi\phi_{k,\ell}(t))\) the \(\ell\)_-th harmonic_ of the \(k\)-th IMT function (with \(\ell=1\) being the _fundamental component_). Here, \(B_{k,\ell}(t)\) and \(\phi_{k,\ell}(t)\) denote the associated amplitude and phase functions. An algorithm that transforms \(x_{1}(t)\) into \(\{B_{k,\ell}(t),\,\phi_{k,\ell}(t)|\,k=1,\dots,K,\,\ell=1,\dots,\infty\}\) alongside \(T(t)\) is referred to as a _harmonic decomposition_ algorithm. This model captures the time-varying WSF in the following sense. Note that (1) can be rewritten as \(x_{0}(t)=\sum_{k=1}^{K}\sum_{\ell=1}^{\infty}[A_{k}(t)a_{k,\ell}]\cos(2\pi[ \ell\phi_{k}(t)+b_{k,\ell}])+T(t)+\Phi(t)\), where \(a_{k,1}>0\), \(a_{k,\ell}\geq 0\) and \(b_{k,\ell}\in[0,1)\) come from the Fourier series expansion of \(s_{k}\). The time-varying WSF is captured by generalizing \(a_{k,\ell}\) and \(b_{k,\ell}\) to be time-varying via setting \(B_{k,\ell}(t):=A_{k}(t)a_{k,\ell}\) and \(\phi_{k,\ell}(t):=\ell\phi_{k}(t)+b_{k,\ell}\) and generalizing the associated conditions.
In many practical applications, the WSF is not spiky and can be well modeled by finite harmonics. In this study, we focus on the following simplified model of \(x_{1}(t)\):
\[x(t)=\sum_{k=1}^{K}\sum_{\ell=1}^{D_{k}}B_{k,\ell}(t)\cos(2\pi\phi_{k,\ell}(t ))+T(t)+\Phi(t), \tag{3}\]
where \(D_{k}\in\mathbb{N}\) is called the _harmonic degree_ of the \(k\)-th IMT function. To estimate \(D_{k}\), we will employ trigonometric regression model selection criteria [27, 28], which recent studies have shown to be effective in accurately determining the necessary number of harmonic components to capture non-sinusoidal oscillatory patterns in non-stationary signals [29]. From now on, we focus on the ANHM (3).
In practice, the signal adhering to ANHM is observed over a finite interval \([0,T_{s}]\), where \(T_{s}>0\), and the signal is uniformly discretized using a sampling rate
\(1/\Delta t\), where \(\Delta t>0\) represents the sampling period. The sampled signal is denoted as \(\mathbf{x}(n)=x(n\Delta t)\) and \(t_{n}=n\Delta t\), where \(n=1,\ldots,N\). Our focus centers on datasets with x-missing values. The indices of signal \(\mathbf{x}\) are partitioned into two subsets: the observed data subset \(\mathcal{O}\subset\{1,\ldots,N\}\) and the missing data subset \(\mathcal{M}\subset\{1,\ldots,N\}\). The set \(\mathcal{M}\) corresponds to the indexes \(n\in\{1,\ldots,N\}\) such that the signal \(\mathbf{x}(n)\) is given the value \(\alpha\in\mathbb{C}\), NaN or any missing data symbol chosen by the user. The objective is to estimate (impute) the values \(\mathbf{x}(n)\) for \(n\in\mathcal{M}\).
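To make this concrete, the following minimal sketch (our own illustration in Python rather than the paper's Matlab; all names and parameter values are hypothetical) synthesizes a sampled signal obeying (3) with a single IMT function and plants a missing-data interval:

```python
import numpy as np

def synthesize_anhm(fs=1000.0, dur=1.0, n_harm=3, seed=0):
    """Sample a toy ANHM signal: sum_l B_l(t) cos(2*pi*l*phi_1(t)) + T(t) + noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    # Slowly perturbed, strictly increasing phase (IF around 50 Hz).
    phi1 = 50.0 * t + np.cumsum(rng.standard_normal(t.size)) / fs
    x = np.zeros_like(t)
    for l in range(1, n_harm + 1):
        B_l = (1.0 / l) * (1.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t))  # slow AM
        x += B_l * np.cos(2 * np.pi * l * phi1)
    x += 0.5 * t                                  # slow trend T(t)
    x += 0.05 * rng.standard_normal(t.size)       # noise Phi(t)
    return t, x

t, x = synthesize_anhm()
missing = np.arange(300, 400)   # the index set M of one missing interval
x_obs = x.copy()
x_obs[missing] = np.nan         # NaN as the missing-data symbol
```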
## 3. Proposed Algorithm: Harmonic Level Interpolation (HaLI)
The proposed imputation algorithm, called Harmonic Level Interpolation (HaLI), involves three main steps. First, an initial imputation uses an existing method. Second, harmonic amplitudes and phases are acquired by decomposing the imputed signal through a time-frequency (TF) analysis or other suitable algorithm. Lastly, refined imputation occurs by interpolating the harmonic amplitudes and phases. See Fig. 2 for an overall flowchart of HaLI. Note that HaLI shares a high-level idea with MissForest [30] or MISE [31] for generic data, where initial imputation involves mean or median filling, followed by iterative machine learning-based imputation. HaLI, however, is tailored for non-stationary oscillatory time series.
Before delving into the specifics, it is essential to address some technical considerations. The initial imputation holds critical importance for HaLI. This step is pivotal because harmonic decomposition is sensitive to the boundary effect inherent in TF analysis tools. Specifically, for the x-missing signal \(f(t)(1-\chi_{I}(t))\), due to the discontinuity and irregularity on the boundary of \(I\), the TF representation is impacted near the boundary, which leads to a low-quality harmonic decomposition. While the possibility of bypassing the initial imputation exists if a harmonic decomposition algorithm immune to this boundary effect were available, such an algorithm has not been developed to our knowledge. By employing an initial imputation, the boundary effect is alleviated and we can achieve a reliable harmonic decomposition, particularly near the boundary of \(I\). Nonetheless, harmonics over \(I\) hinge extensively on this initial imputation, which is usually limited. The core innovation of the proposed HaLI algorithm lies in enhancing imputation quality over \(I\) by properly interpolating harmonics from \(\mathbb{R}\backslash I\) into \(I\). The subsequent sections provide a comprehensive breakdown of each step.
### Step 0: Preprocessing
Given an x-missing signal \(\mathbf{x}\), the first task is identifying the intervals containing missing values, unless they have been explicitly provided. Each detected missing value interval is described by its initial index \(i_{ms}\) and the interval length \(L\). Note that in cases where the missing data symbol \(\alpha\) is a finite number, we can designate a minimum interval length \(I_{-}\). This allows us to treat any run of the value \(\alpha\) longer than \(I_{-}\) as a missing interval, while short, natural \(\alpha\)-crossings of the signal are not flagged.
### Step 1: Initial Data Imputation
The first step of HaLi involves established techniques for handling missing data in time series. This step takes the incomplete data signal \(\mathbf{x}\) and missing value intervals as input. The methods explored in this study are summarized in Table 1. Each method considered may require specific parameters for model fitting, which can be predefined based on prior signal information or defaulted to preset values. Denote the resulting signal
with imputed values as \(\mathbf{x}_{1}\in\mathbb{R}^{N}\). Further details on these approaches are available in Section 4.
Among various algorithms, we emphasize Takens' embedding or Takens' Lag Map (TLM). This method consistently outperforms others, as demonstrated in Section 5. The strategy aims to locate a segment in the signal, without missing values, that closely matches the "expected data" in the missing data interval. The choice of a suitable similarity measure is crucial for effective results, enabling the identification of a segment with values that can be used to impute the missing data.
Before detailing the procedure, recall that Takens' embedding theorem [32] states that under mild conditions, the properties of the underlying dynamic system generating the time series \(y:\mathbb{Z}\to\mathbb{R}\) can be represented by an embedding of the time-lagged segments of the time series. Specifically, let the delay vector be \(\mathbf{y}(k)=\left[y(k),y(k-\tau),\ldots,y(k-(d_{e}-1)\tau)\right]^{\top}\in \mathbb{R}^{d_{e}}\), where \(d_{e}\in\mathbb{N}\) is the embedding dimension and \(\tau>0\) is the lag time. With mild conditions on the underlying manifold that hosts the dynamics, \(\tau\) and \(d_{e}\), the set \(\{\mathbf{y}(k)\}_{k}\) is diffeomorphic to the underlying manifold. In this sense, the underlying dynamic system is recovered.
Fix a missing data interval with the initial index \(i_{ms}\) and the interval length \(L\). Define the signal pattern to the left and the right of the missing data interval as \(\mathbf{x}_{l}=[\mathbf{x}(i_{ms}-d),\ldots,\mathbf{x}(i_{ms}-1)]\) and \(\mathbf{x}_{r}=[\mathbf{x}(i_{ms}+L),\ldots,\mathbf{x}(i_{ms}+L+d-1)]\), respectively. Then, we concatenate \(\mathbf{x}_{l}\) and \(\mathbf{x}_{r}\) to form the reference pattern
Figure 2. Proposed missing data imputation method based on interpolation of the harmonic decomposition of non-stationary signals. The three steps of the method are initial missing data imputation, harmonic decomposition and trend extraction of the imputed signal, and interpolation of the harmonic amplitude, phases, and trend to obtain the final imputation result.
\(\mathbf{x}_{\texttt{tmp}}=[\mathbf{x}_{l}\ \mathbf{x}_{r}]\). To estimate the missing portion of the signal, a sliding window of length \(2d+L\) is run over the signal outside the missing data interval, and the best imputation candidate \(\mathbf{x}_{p}=[\mathbf{x}(p),\ldots,\mathbf{x}(p+L-1)]\) is the one with the minimal Euclidean distance between \(\mathbf{x}_{\texttt{tmp}}\) and the vector \([\mathbf{x}_{p,l}\ \mathbf{x}_{p,r}]\), where \(\mathbf{x}_{p,l}=[\mathbf{x}(p-d),\ldots,\mathbf{x}(p-1)]\) and \(\mathbf{x}_{p,r}=[\mathbf{x}(p+L),\ldots,\mathbf{x}(p+L+d-1)]\). Then, repeat the same procedure for all missing value intervals. This way, the imputed data is chosen so that the local behavior of the signal around the candidate values is similar to the behavior around the missing data interval. From the technical perspective, we set \(d_{e}=d\) and \(\tau=1\) for the Takens' embedding, and find "neighbors" over a projected subspace. We shall mention that TLM is similar to the KNNimpute proposed in [33] for DNA microarrays.
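A minimal sketch of this matching procedure (our own Python rendition of the description above, with hypothetical names; the reference implementation is the authors' Matlab package):

```python
import numpy as np

def tlm_impute(x, i_ms, L, d):
    """Takens' Lag Map imputation: fill x[i_ms : i_ms+L] with the candidate
    segment whose left/right context best matches the context of the gap."""
    x = x.copy()
    context = np.concatenate([x[i_ms - d:i_ms], x[i_ms + L:i_ms + L + d]])
    best_err, best_p = np.inf, -1
    for p in range(d, len(x) - L - d):
        # The candidate window [p-d, p+L+d) must not touch the gap context.
        if p + L + d > i_ms - d and p - d < i_ms + L + d:
            continue
        cand = np.concatenate([x[p - d:p], x[p + L:p + L + d]])
        err = np.linalg.norm(context - cand)      # Euclidean similarity measure
        if err < best_err:
            best_err, best_p = err, p
    x[i_ms:i_ms + L] = x[best_p:best_p + L]       # copy the best-matching segment
    return x
```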
### Step 2: Trend Separation and Harmonic Decomposition of Imputed Signal
Once the initial imputation result \(\mathbf{x}_{1}\) is obtained, we apply a harmonic decomposition algorithm to decompose \(\mathbf{x}_{1}\) into its trend and harmonics. There are several harmonic decomposition algorithms. However, to avoid distracting from the focus of this paper, we apply the short-time Fourier transform (STFT) based algorithm. Denote the STFT of \(\mathbf{x}_{1}\) by \(\mathbf{F}\in\mathbb{C}^{N\times N}\), using a Gaussian window \(\mathbf{g}\) satisfying \(\mathbf{g}(n)=e^{-\sigma n^{2}}\), where \(\sigma>0\) is the window size. This parameter is chosen following the rule of thumb that the temporal support of \(\mathbf{g}\) contains about \(5\sim 8\) cycles of the oscillatory component of interest. The average cycle length is estimated locally from the average period \(\tilde{T}\) of \(\mathbf{x}_{1}\). With \(\mathbf{F}\), we find the fundamental ridge associated with each IMT function. To do this, we first compute the de-shape STFT [25], denoted as \(\mathbf{W}\in\mathbb{R}^{N\times N}\), to obtain the IF of each IMT function in \(\mathbf{x}_{1}\). We find the ridges of \(|\mathbf{W}|^{2}\) using a greedy ridge detection procedure [42]. This algorithm outputs the set of fundamental ridges \(\{\mathbf{c}_{k}^{*}(n)\}_{k=1}^{K}\), where \(\mathbf{c}_{k}^{*}(n)\approx\boldsymbol{\phi}_{k,1}(n)\). Due to the continuity assumption of IFs, and hence the associated ridges, this algorithm limits the search region for each time step to a frequency band \(\text{FB}_{n,k}=[\mathbf{c}_{k}(n-1)-\text{FB},\mathbf{c}_{k}(n-1)+\text{FB}]\) around the ridge estimation at the previous time, where \(\text{FB}>0\) defines the maximum frequency jump for consecutive steps. The maximum frequency jump FB is set to \(10f_{s}/N\).
\begin{table}
\begin{tabular}{l l} \hline \hline Name & Reference \\ \hline Takens’ Lag Map (TLM) & [32] \\ Least Square Estimation (LSE) & [34] \\ Dynamic Mode Decomposition (DMD) & [35] \\ Extended Dynamic Mode Decomposition (EDMD) & [36] \\ Gaussian Process Regression (GPR) & [37] \\ ARIMA Regression with forward forecasting & [38] \\ ARIMA Regression with backward forecasting & [38] \\ Trigonometric Box-Cox, ARMA and Seasonal & [39] \\ Modeling (TBATS) & \\ Sparse TF Non-linear Matching Pursuit (STF) & [40] \\ Locally Stationary Wavelet Process (LSW) & [41] \\ \hline \hline \end{tabular}
\end{table}
Table 1. Initial imputation methods considered in this study
Based on the estimated fundamental ridges \(\{\mathbf{c}_{k}^{*}(n)\}_{k=1}^{K}\), we perform the detrending of the signal by subtracting the trend estimate \(\hat{\mathbf{T}}_{1}\) from \(\mathbf{x}_{1}\), where
\[\hat{\mathbf{T}}_{1}(n)=\frac{2}{g(0)}\Re\sum_{1\leq j<\underline{\mathbf{c}}^{* }-\Delta}\mathbf{F}(n,j), \tag{4}\]
Here, \(\underline{\mathbf{c}}^{*}=\underset{k}{\text{min}}\ \mathbf{c}_{k}^{*}(n)\), \(\Re\) means taking the real part of the complex number, and \(\Delta\) is chosen close to the measure of the half-support of \(\hat{\mathbf{g}}(k)=\mathcal{F}(\mathbf{g}(n))\), the Fourier transform of the analysis window.
For the harmonic decomposition, we recall that the complex fundamental component of the \(k\)-th IMT function of \(\mathbf{x}_{1}\) can be recovered by integrating the STFT around the ridge \(\mathbf{c}_{k}^{*}\):
\[\mathbf{y}_{k,1}(n)=\frac{1}{\mathbf{g}(0)}\sum_{|j-\mathbf{c}_{k}^{*}(n)|< \Delta}\mathbf{F}(n,j), \tag{5}\]
where \(n=1,\ldots,N\). Then, the fundamental amplitude is estimated as \(\tilde{\mathbf{A}}_{k,1}(n)=|\mathbf{y}_{k,1}(n)|\) and the associated phase, denoted as \(\tilde{\boldsymbol{\phi}}_{k,1}\), is estimated by unwrapping the phase of \(\mathbf{y}_{k,1}\). With the fundamental component, we extract the higher harmonics of each IMT function. To do this, for \(1\leq k\leq K\), we obtain the estimate of \(D_{k}\), denoted as \(D_{k}^{*}\), using the model selection criteria based on trigonometric regression [29]. After determining \(D_{k}^{*}\), we find the ridge of the \(\ell\)-th harmonic of the \(k\)-th component \(\mathbf{c}_{k,\ell}(n)\) for \(2\leq\ell\leq D_{k}^{*}\). Given condition (C4) in Sec. 2, we know that \(\mathbf{c}_{k,\ell}(n)\approx\ell\mathbf{c}_{k}(n)\). The proper harmonic ridge \(\mathbf{c}_{k,\ell}^{*}(n)\) is found by searching the maximum energy ridge in the vicinity of \(\ell\mathbf{c}_{k}^{*}(n)\). Finally, the amplitude modulation of the \(\ell\)-th harmonic is estimated as \(\tilde{\mathbf{A}}_{k,\ell}(n)=|\mathbf{y}_{k,\ell}(n)|\) for \(n=1,\ldots,N\) and its phase, denoted as \(\tilde{\boldsymbol{\phi}}_{k,\ell}\), is estimated by unwrapping the phase of \(\mathbf{y}_{k,\ell}\), where \(\mathbf{y}_{k,\ell}(n)\) is the \(\ell\)-th complex component of the \(k\)-th component of \(\mathbf{x}_{1}\), obtained using (5) with \(\mathbf{c}_{k,\ell}^{*}(n)\) in place of \(\mathbf{c}_{k}^{*}(n)\).
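A stripped-down sketch of the ridge detection and fundamental extraction just described (our own Python illustration; it uses scipy's generic STFT with a Hann window, omits the de-shape step, and ignores the window-dependent normalization \(1/\mathbf{g}(0)\), so all names and constants are assumptions):

```python
import numpy as np
from scipy.signal import stft

def extract_fundamental(x, fs, fmin=0.5, fmax=5.0, fb_bins=3, delta=2):
    """Greedy ridge detection on |STFT|^2, then amplitude/phase of the
    fundamental by summing STFT coefficients around the ridge."""
    f, t, F = stft(x, fs=fs, window='hann', nperseg=512, noverlap=448)
    lo, hi = np.flatnonzero((f >= fmin) & (f <= fmax))[[0, -1]]
    power = np.abs(F) ** 2
    ridge = np.zeros(F.shape[1], dtype=int)
    ridge[0] = lo + np.argmax(power[lo:hi + 1, 0])
    for n in range(1, F.shape[1]):
        j0 = max(lo, ridge[n - 1] - fb_bins)      # limit the frequency jump
        j1 = min(hi, ridge[n - 1] + fb_bins)
        ridge[n] = j0 + np.argmax(power[j0:j1 + 1, n])
    # Complex fundamental, up to a window-dependent normalization.
    y = np.array([F[max(r - delta, 0):r + delta + 1, n].sum()
                  for n, r in enumerate(ridge)])
    return t, np.abs(y), np.unwrap(np.angle(y)) / (2 * np.pi)
```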
Note that we could employ advanced TF analysis algorithms like the synchrosqueezing transform [21], newer ridge detection algorithms [43, 44], or sophisticated reconstruction formulas at this stage. However, for the sake of algorithm simplicity, computational efficiency, and accuracy, we choose to follow this pragmatic approach.
### Step 3: Interpolation at the Harmonic Level
The harmonic decomposition of \(\mathbf{x}_{1}\) can enhance imputation by employing amplitude and phase clipping, followed by interpolation within the missing data interval(s). Various curve interpolation schemes are applicable in this stage. However, due to the regularity of amplitude and phase functions, we anticipate smoother curves with minimal oscillations. We consider two effective yet simple interpolation techniques: cubic spline interpolation and shape-preserving cubic Hermite interpolation (pchip) [45]. It is worth noting that splines tend to overshoot within the data gap, while pchip produces smoother curves with fewer oscillations. The specific procedure for this step is detailed below.
1. The harmonic amplitudes \(\tilde{\mathbf{A}}_{k,\ell}\) and phases \(\tilde{\boldsymbol{\phi}}_{k,\ell}\) are clipped in the missing data interval \(I_{\texttt{ms}}\).
2. The amplitude and phases are interpolated on \(I_{\texttt{ms}}\), resulting in interpolated amplitude estimate \(\overline{\mathbf{A}}_{k,\ell}\) and interpolated phase estimate \(\overline{\boldsymbol{\phi}}_{k,\ell}\).
3. The final imputation result \(\mathbf{x}_{2}\) is constructed by harmonic superposition in the missing data interval: \[\mathbf{x}_{2}(n):=\sum_{k=1}^{K}\sum_{\ell=1}^{D_{k}^{*}}\mathbf{\bar{A}}_{k, \ell}(n)\cos(2\pi\overline{\phi}_{k,\ell}(n))\] for \(n\in I_{\texttt{ms}}\). The remaining part of \(\mathbf{x}_{2}\) satisfies \(\mathbf{x}_{2}(n):=\mathbf{x}_{1}(n)\) for \(n\in\{1,\ldots,N\}\backslash I_{\texttt{ms}}\).
Missing values within the trend \(\hat{\mathbf{T}}_{1}\) are directly interpolated to produce the final estimate \(\hat{\mathbf{T}}_{2}\). This interpolation method is chosen because time-varying trends typically exhibit smooth variations and lack local oscillations at the scale of data gaps.
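A sketch of this interpolation step for a single amplitude or phase curve (our own Python illustration; scipy's PchipInterpolator and CubicSpline stand in for the pchip and spline options discussed above):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

def interpolate_curve(values, missing, use_pchip=True):
    """Clip a harmonic amplitude or phase over the gap and re-interpolate
    it from the observed samples (Step 3 of HaLI, one curve at a time)."""
    n = np.arange(values.size)
    observed = np.setdiff1d(n, missing)
    itp = PchipInterpolator if use_pchip else CubicSpline
    fit = itp(observed, values[observed])
    out = values.copy()
    out[missing] = fit(missing)
    return out

# Final superposition over the gap (amp[k][l], phase[k][l] from Step 2):
# x2[missing] = sum_{k,l} interpolate_curve(amp[k][l], missing)[missing]
#               * np.cos(2*np.pi * interpolate_curve(phase[k][l], missing)[missing])
```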
The resulting output of the HaLI algorithm is the imputed signal \(\mathbf{x}_{2}+\hat{\mathbf{T}}_{2}\). Importantly, this final imputation result represents a denoised estimate of \(\mathbf{x}\). Signal denoising is a common task in various signal-processing applications. In essence, our proposed method not only imputes missing data but also simultaneously denoises the signal, a topic we will explore further in the numerical experiments section.
## 4. Summary of existing imputation algorithms for time series
In this section, we review the existing imputation methods from Table 1, which we use as initial imputation. We also give some guidelines regarding the tuning of their parameters.
### Algorithms
#### 4.1.1. Least Square Estimation and Dynamic Mode Decomposition
Nonlinear dynamic system identification is a common approach for time series missing data imputation. Generally, a parametric model is fit to the known data and then a forecasting procedure is used to estimate the future values of the time series. For data imputation, we consider the data points left to the missing interval as a time series to be forecasted into the missing data interval. Matrices \(\mathbf{X}\in\mathbb{R}^{K\times M}\) and \(\mathbf{Y}\in\mathbb{R}^{K\times M}\), where \(\mathbf{X}\) contains the lagged values of the time series and \(\mathbf{Y}\) contains the corresponding values to be predicted, are defined as
\[\mathbf{X}=\left[\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M-1}\right], \tag{6}\]
\[\mathbf{Y}=\left[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{M}\right], \tag{7}\]
where \(\mathbf{x}_{l}\in\mathbb{R}^{K}\) is given by \([x(i_{ms}-M-K+l),\ldots,x(i_{ms}-M+l-1)]\), for \(0\leq l\leq M\); that is, each column of \(\mathbf{Y}\) is obtained by shifting the corresponding column of \(\mathbf{X}\) by one sample. Then, a forecasting model is fit to \(\mathbf{X}\) and \(\mathbf{Y}\) using a least squares estimation (LSE) algorithm
\[\min_{\mathbf{A}}\lVert\mathbf{Y}-\mathbf{A}\mathbf{X}\rVert^{2}, \tag{8}\]
where the solution is \(\tilde{\mathbf{A}}=\mathbf{Y}\mathbf{X}^{T}\left(\mathbf{X}\mathbf{X}^{T}\right)^{-1}\), provided the inverse exists. Once the forecasting model is fit, the missing values are imputed by setting \(\mathbf{x}_{1}(i_{ms}+m)=\tilde{\mathbf{A}}^{m}\mathbf{x}_{M}\), where \(m\geq 1\) is the imputed data index.
The dynamic mode decomposition (DMD) for system identification follows the same formulation as LSE but the matrix \(\mathbf{A}\) is estimated from the singular value decomposition of the data matrix \(\mathbf{X}\).
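A minimal sketch of the LSE forecaster (our own Python illustration of (6)-(8); the names and the recursive forecasting loop are our assumptions):

```python
import numpy as np

def lse_impute(x, i_ms, L, K, M):
    """Fit a linear one-step map Y = A X on delay vectors preceding the
    gap, then iterate the map to forecast into the gap (LSE imputation)."""
    x = x.copy()
    cols = [x[i_ms - M - K + l:i_ms - M + l] for l in range(M + 1)]
    X = np.stack(cols[:-1], axis=1)                  # K x M lagged windows
    Y = np.stack(cols[1:], axis=1)                   # same windows, one-sample shift
    A = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T   # least-squares fit of Y ~ A X
    state = x[i_ms - K:i_ms]                         # last observed delay vector
    for m in range(L):
        state = A @ state                            # one-step forecast
        x[i_ms + m] = state[-1]                      # newest forecast sample
    return x
```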
#### 4.1.2. Extended Dynamic Mode Decomposition
Formally, this method seeks to estimate the non-linear evolution operator, denoted as \(\mathbf{F}\), of a discrete-time dynamical system: \(\mathbf{x}(n+1)=\mathbf{F}(\mathbf{x}(n))\). To achieve this, the extended dynamic mode decomposition (EDMD) approach approximates the infinite-dimensional _linear_ Koopman operator, denoted as \(\mathcal{K}\), which governs the evolution of the system _observables_, i.e., functions defined on the state space whose time evolution encodes that of the system. While the Koopman operator provides a _global_ linearization of the dynamical system, LSE and DMD perform a _local_ linearization.
In [36], the authors propose a data-driven algorithm that approximates the eigenvalues \(\{\mu_{j}\}\) and modes \(\{\mathbf{v}_{j}\}\) of the finite-dimensional Koopman operator estimate \(\hat{\mathbf{K}}\). They demonstrate that this method extends the original DMD proposal, improving mode estimates of the Koopman operator.
Furthermore, in the study conducted by Hua et al. [46], a novel algorithm is introduced to efficiently estimate the modes. This algorithm uses a kernel function to project the data into a high-dimensional feature space, which improves the approximation of the Koopman operator. Based on this procedure, the missing data is imputed by forecasting into the missing data interval by
\[\mathbf{x}(n+1)=\sum_{j}\mu_{j}\mathbf{v}_{j}\phi_{j}(\mathbf{x}(n)), \tag{9}\]
where \(\{\phi_{j}(\mathbf{x})\}\) is a dictionary of eigenfunctions that map the observables \(\mathbf{x}\) into the state space. Like with \(\mathcal{K}\), the infinite-dimensional eigenfunctions are approximated by the linear combination of a set of finite-dimensional functions \(\boldsymbol{\psi}_{j}(\mathbf{x})\) defined in the feature space of dimension \(J\) which is in general much greater than both \(K\) and \(M\). In order to avoid a high-dimensional problem, the functions \(\boldsymbol{\psi}(\mathbf{x})\) are defined implicitly by using a kernel \(\mathbf{k}(\mathbf{x}_{i},\mathbf{x}_{j})=\langle\boldsymbol{\psi}(\mathbf{x} _{i}),\boldsymbol{\psi}(\mathbf{x}_{j})\rangle\), which is the inner-product between two \(J\)-dimensional eigenfunctions. In this work, a Gaussian kernel \(\mathbf{k}(\mathbf{x}_{i},\mathbf{x}_{j})=\exp(-\|\mathbf{x}_{i}-\mathbf{x}_{ j}\|_{2}/\gamma)\) is used, where the kernel size \(\gamma\) is chosen by the user.
#### 4.1.3. Gaussian Process Regression
Gaussian Process Regression (GPR) is a machine learning technique used for regression tasks. It models the relationship between input variables (features \(\mathbf{x}\)) and output variables (responses \(\mathbf{y}\)) in a probabilistic manner via \(\mathbf{y}=f(\mathbf{x})\). Instead of predicting a single value as in traditional linear regression, GPR provides a probability distribution over possible target values for a given input. In the imputation application, a Gaussian process (GP) is trained using the available data, which allows the trained GP to estimate missing values of a time series based on the posterior probability distributions. To implement the GPR imputation method, we used the fitrgp function from the Statistics and Machine Learning Toolbox of MATLAB(r). To train the model, we used the matrix \(\mathbf{X}\) as defined in (6) and the matrix \(\mathbf{Y}\) as the target. This approach performs dynamic model estimation using a non-parametric method, which can be beneficial in cases where the underlying system is complex and hard to model parametrically.
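The implementation above relies on MATLAB's fitrgp; as a rough, hypothetical Python analogue (a sketch, not the authors' code), scikit-learn's GaussianProcessRegressor can play the same role:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gpr_impute(x, i_ms, L, K=30, M=100):
    """GP regression as a nonparametric one-step forecaster: inputs are
    length-K delay vectors before the gap, targets the following sample."""
    a = i_ms - M - K
    X = np.stack([x[a + l:a + l + K] for l in range(M)])
    y = np.array([x[a + l + K] for l in range(M)])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
    x = x.copy()
    for m in range(L):
        window = x[i_ms + m - K:i_ms + m]             # latest K samples
        x[i_ms + m] = gp.predict(window[None, :])[0]  # recursive forecast
    return x
```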
#### 4.1.4. ARIMA Regression with forward forecasting
ARIMA (Autoregressive Integrated Moving Average) is a widely adopted time-series modeling technique employed for forecasting future values in non-stationary time series data [47]. Moreover, to capture periodic components within the signal, one can resort to the seasonal ARIMA (SARIMA) models [47]. The modeling process entails the identification of underlying data patterns by examining autocorrelations and partial autocorrelations. Once these patterns are discerned, the model becomes a valuable tool for generating future values. In the context of missing data imputation, the approach involves fitting the SARIMA model to the data preceding the missing interval, followed by imputing the missing data utilizing the fitted model. Notably, one of the pivotal parameters in this approach is the seasonality period, denoted as \(\lambda\), among other parameters.
#### 4.1.5. ARIMA Regression with backward forecasting
This method is analogous to the previous method but performs a backward prediction step. First, we fit the ARIMA model to the flipped version of the data to the right of the missing interval: \(\mathbf{x}_{pos}=[x(N),x(N-1),\ldots,x(i_{ms}+L+1)]\). Then, the missing data interval is predicted from the fitted ARIMA model and flipped back into the forward direction. As in the previous method, the periodicity \(\lambda\) needs to be set beforehand.
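As an illustration of both ARIMA variants (our own sketch; statsmodels' SARIMAX is used as a stand-in, and the orders shown are placeholders rather than the paper's settings):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sarima_impute(x, i_ms, L, season, backward=False, order=(2, 0, 1)):
    """Fit a seasonal ARIMA on one side of the gap and forecast across it;
    backward=True fits on the time-reversed right-hand side."""
    history = x[i_ms + L:][::-1] if backward else x[:i_ms]
    fitted = SARIMAX(history, order=order,
                     seasonal_order=(1, 0, 1, int(season))).fit(disp=False)
    forecast = fitted.forecast(steps=L)
    x = x.copy()
    x[i_ms:i_ms + L] = forecast[::-1] if backward else forecast
    return x
```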
#### 4.1.6. TBATS Modeling
The TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend, and Seasonality) model is a state-space model designed for time series analysis [39]. It incorporates exponential smoothing to give more weight to recent data during estimation. The trend and seasonal components are also smoothed using exponential methods to enhance local data estimation. Additionally, the Box-Cox transformation is applied to normalize the data, improving its statistical characteristics. ARMA errors are included to capture complex data dynamics. Similar to ARIMA models, TBATS can be employed for missing data imputation by fitting the model to the data before the missing interval and forecasting the missing values. In [39], an efficient algorithm is proposed for estimating model parameters using maximum likelihood optimization on the state-space representation of the model.
#### 4.1.7. Data-Driven Time-Frequency Analysis
The sparse time-frequency (TF) decomposition method [40] was used for imputing signals with missing data. Their nonlinear matching pursuit algorithm decomposes non-stationary signals into phase- and amplitude-modulated harmonic components, represented as \(A_{k}(t)\cos(\theta_{k}(t))\). This method actively seeks optimal amplitude and phase functions using an overcomplete Fourier dictionary while promoting sparsity through \(\ell_{1}\)-norm regularization of the amplitude coefficients. To address missing data, they employ an interior-point method based on preconditioned conjugate gradient [48]. This algorithm requires setting specific parameters in advance, including an initial phase estimator and the number of harmonic components denoted as \(K\).
#### 4.1.8. Locally Stationary Wavelet Process Forecasting
Fix a mother wavelet \(\psi(t)\) with a compact support. In the locally stationary wavelet (LSW) model [49], a time series \(\{X_{t,T}\}_{t=0,1,\ldots,T-1}\), where \(T\in\mathbb{N}\), is a realization of a LSW process if it satisfies
\[X_{t,T}=\sum_{j=1}^{\infty}\sum_{k=-\infty}^{\infty}w_{j,k;T}\psi_{j,k}(t) \zeta_{j,k}\,, \tag{10}\]
where \(\psi_{j,k}(t)\) represents a family of discrete non-decimated wavelets translated and dilated from \(\psi\), \(\zeta_{j,k}\) is a set of uncorrelated random variables with mean \(0\) and variance \(1\), and \(w_{j,k;T}\) are the amplitudes of the process, satisfying mild conditions so that \(X_{t,T}\) is a locally stationary random process. Within the LSW framework, a set of observations \(X_{0,T},\ldots,X_{t-1,T}\) can be used to predict the next observation \(X_{t,T}\) by a linear combination of the most recent \(p\) observations, as proposed in [50], where the weights of the linear combination are chosen by minimizing the mean square prediction error (MSPE).
### Automatic imputation parameters tuning
Parameter tuning is essential and can greatly affect algorithm performance. Typically, unless an automatic algorithm is available, parameters are manually adjusted to achieve satisfactory results. For those algorithms without standard parameter selection procedure, we provide empirical data-driven criteria that can be applied to a wide range of oscillatory time series, and leave their theoretical justification to future work.
#### 4.2.1. TLM algorithm
The TLM algorithm requires the template length \(d\) to be set beforehand. This length defines the pattern to be matched to the rest of the signal and can significantly impact the final result. Given that we focus on the study of oscillatory signals, we propose an intuitive way to define \(d\) based on the quasi-periodicity of the signal. Given the average period \(\tilde{T}=2\pi/f_{p}\), where \(f_{p}\) is the most energetic frequency of the signal, which can be estimated by the synchrosqueezing transform (SST), we set \(d=k\tilde{T}\) with \(k\in\mathbb{N}\). We empirically find that \(k=3\) leads to satisfactory results in most signals. Note that the slow-varying nature of the instantaneous frequency means that the signal can be locally approximated by a harmonic function with minimal loss in accuracy.
#### 4.2.2. Dynamic signal forecasting
Algorithms like LSE, DMD and EDMD involve two adjustable parameters: the dimension of the embedding space denoted as \(K\), which is related to the maximum shift between subsignals, and the length of the subsignals represented by \(M\), used in constructing the system matrix \(\mathbf{A}\). When dealing with nonstationary signals, it is essential to set \(M\) to be greater than the signal's average period to capture intercycle variability effectively. It is recommended to select \(M\) so that it encompasses at least three complete signal cycles within each subsignal. The embedding dimension \(K\) needs to be chosen such that it is greater than \(M\). Theoretical findings in [34] illustrate that as \(K\) approaches infinity, the variance of the forecast result asymptotically decreases, but the bias increases. As a practical compromise, we empirically opt for \(K=2.5M\) in all experiments within this study.
#### 4.2.3. ARIMA regression
To fit a SARIMA model to the data, either before or after the missing interval, for a good imputation, estimating the periodicity (or seasonality) \(\lambda\) of the signal is crucial due to the time-varying nature of the signals of interest. However, to our knowledge, there does not exist a proper approach to estimate the seasonality of signals with time-varying frequency for SARIMA. We thus consider the following empirical approach. We first estimate the length of each cycle of the signal as \(T_{k}=\#\{n:\ k-1<\hat{\phi}_{1}(n)\leq k\}\), where \(\hat{\phi}_{1}\) is the estimated phase and \(k\in\mathbb{N}\). Since the seasonality is assumed fixed in SARIMA, we estimate the seasonality for the ARIMA forward regression method by \(\lambda_{f}=\frac{1}{N_{c}}\sum_{i=a-N_{c}+1}^{a}T_{i}\), where \(a\) denotes the cycle immediately preceding the missing data interval. Similarly, for the ARIMA backward regression method, we estimate the seasonality by \(\lambda_{b}=\frac{1}{N_{c}}\sum_{i=b}^{b+N_{c}-1}T_{i}\), where \(b\) denotes the first cycle immediately after the missing data interval. In either case, \(N_{c}\) needs to be set beforehand. Due to the time-varying nature of the considered model, local estimates are preferable as they more accurately capture the dynamics of the data in the vicinity of the missing interval, but \(N_{c}\) should not be too small to avoid large estimate variation. As a general guideline, the empirical choice \(N_{c}=3\) typically leads to a good estimate for the missing interval. Other parameters, including the model orders, can be estimated following standard procedures.
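In code, this cycle-length bookkeeping might look as follows (our own sketch; `phase` stands for the unwrapped phase estimate \(\hat{\phi}_{1}\) evaluated on one side of the gap):

```python
import numpy as np

def local_seasonality(phase, side='before', n_cycles=3):
    """Average the lengths of the n_cycles cycles adjacent to the gap,
    where cycle k collects the samples with k-1 < phase[n] <= k."""
    k_max = int(np.floor(phase.max()))
    lengths = [np.sum((phase > k - 1) & (phase <= k)) for k in range(1, k_max + 1)]
    picked = lengths[-n_cycles:] if side == 'before' else lengths[:n_cycles]
    return float(np.mean(picked))
```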
#### 4.2.4. Tbats
The general procedure for estimating the state-space model involves two main steps: parameter estimation and model selection. In the first step, optimal parameters are determined by maximizing the likelihood of the data given a specific model. In the second step, the best model is selected using an information criterion. For the TBATS model, various configurations are tested, such as whether to apply Box-Cox transformation or include ARMA errors in the model. In the context of the signals studied in this work, special attention is given to the periodicity of the seasonal components and the smoothing parameter for the trend due to the nature of time-varying frequency. Estimating the periodicity of the fundamental component, linked to the longest seasonal component of the model, is crucial to ensure accurate forecasting of missing data. We follow the empirical estimation approach used for SARIMA models discussed in the previous section to handle the time-varying frequency issue.
#### 4.2.5. Data-Driven Time-Frequency Analysis
For the initial phase estimate \(\theta_{0}(t)\) we use the linear function \(\theta_{0}(t)=2\pi f_{0}t\), where \(f_{0}\) is chosen as the maximum energy frequency of the spectrum of \(x(t)\). The number of harmonic components \(K\) is estimated by finding the optimal order of the adaptive non-harmonic model fit to the data, as described in [29].
#### 4.2.6. Locally Stationary Wavelet Processes
The number \(p\) of past indexes used to estimate the forecasting coefficients is automatically estimated using the local partial autocorrelation function (LPACF), as proposed in [51]. The authors in [52] studied the choice of mother wavelet and found that the 8th order extremal phase Daubechies wavelet performs adequately in the majority of cases. Additionally, in order to compute \(\mathbf{B}_{t,T}\), a \((t+1)\times(t+1)\) matrix that approximates the covariance matrix of \(X_{0,T},\ldots,X_{t-1,T}\) used to find the minimizer of the MSPE, a smoothing of the wavelet periodogram needs to be performed. The smoothed periodogram bandwidth is estimated using the automatic procedure proposed by Nason [53].
## 5. Numerical results
Within this section, we present two sets of analyses to showcase the efficacy of our proposed HaLI algorithm. The initial experiment involves synthetic signals adhering to the ANHM, exhibiting time-varying WSF. This model is utilized to showcase our method's proficiency in restoring missing data when oscillation patterns shift within the data gap. The subsequent experiment applies our approach to real-world signals, further highlighting its enhancements over the initial imputation. Considering the multiple options for Step 1, we conduct a comparative study encompassing diverse algorithms listed in Table 1. The Matlab code of HaLI is available at [https://github.com/joaquinr-uner/MSI_HarmDecomp](https://github.com/joaquinr-uner/MSI_HarmDecomp), where the readers can
also find codes that reproduce the reported results. Likewise, the ready-to-use package is available at [https://github.com/joaquinr-uner/harmonic_imputation](https://github.com/joaquinr-uner/harmonic_imputation).
In this section, to specify which algorithm is used in the initial and third steps of the HaLI algorithm, we adopt the notation \(\mathsf{HaLI}[\mathsf{IMP}](\mathsf{int})\), wherein \([\mathsf{IMP}]\) indicates the imputation method employed in the initial step and \((\mathsf{int})\) represents the interpolation scheme utilized in Step 3. The chosen interpolation method is indicated by \(\mathsf{int}=\mathsf{s}\) for splines or \(\mathsf{int}=\mathsf{p}\) for pchip interpolation.
### Synthetic signals
We consider simulated signals satisfying the ANHM (3). We generate these signals by defining the instantaneous amplitude and phase as \(B_{1}(t)=\sqrt{t+1}\) and \(\phi_{1}(t)=50t+5/(2\pi)\cos(2\pi t)+Y(t)\), respectively, where \(Y(t)\) is a random process defined as \(Y(t)=\int_{0}^{t}\frac{R_{B}(u)}{\|R_{B}(u)\|_{\infty}}du\), where \(R_{B}(t)\) is a moving averaged standard Gaussian white noise. The harmonic phases are set to \(\phi_{\ell}(t)=e_{\ell}\phi_{1}(t)\) for \(\ell\geq 2\), where \(e_{\ell}\) is a uniform random variable supported on \([0.95\ell,1.05\ell]\). The harmonic amplitudes \(B_{\ell}(t)\) are related to \(B_{1}(t)\) through conditions (C3) and (C5), so that, for each harmonic \(\ell\geq 2\), \(\alpha_{\ell}(t)=B_{\ell}(t)/B_{1}(t)\) is slowly time-varying and independent of the other harmonics. The signals are synthesized using the sampling rate \(4000\) Hz with the duration \(T_{s}=1\) s. For each signal, we simulate missing values with various percentages of missingness \(P_{ms}\in[0.05,0.2]\) and generate three missing data intervals with different lengths such that \(L_{1}+L_{2}+L_{3}=NP_{ms}\), where \(L_{i}\), \(i=1,2,3\), is the length of the \(i\)-th missing data interval. Regarding the noise component \(\Phi(t)\), we considered a zero-mean Gaussian distribution with two levels of variance, corresponding to SNR values of \(10\) and \(20\) dB. We also conducted runs with no added noise to establish a reference. In total, we generated \(100\) signals, each with randomly placed missing values at varying levels of missing data and noise; therefore, we processed a total of \(300\) random signals for each level of missingness.

Figure 3 illustrates the effect of the initial data imputation. The top two rows display the original noisy signal (\(\text{SNR}=10\text{ dB}\)) with \(20\%\) missing data across three intervals and the modulus of its STFT (\(|\mathbf{F}|\)), revealing TF plane discontinuities and blurring of harmonic ridges. The third and fourth rows show the signal after initial imputation and the modulus of its STFT, where artifacts due to missing data are reduced. The HaLI algorithm further enhances this TF representation through harmonic decomposition and interpolation.
To evaluate each imputation method, we computed the mean absolute error (MAE) across simulated signals. First, we assess various initial imputation methods detailed in Table 1. The optimal initial imputation method is denoted as BI (Best Imputation). The MAE of the final results by HaLI are computed and compared to those of the initial imputation. The Wilcoxon signed-rank hypothesis test with Bonferroni correction was used to determine if the differences in the median values of the MAE before and after interpolation were statistically significant.
Figure 4 displays the experiment's outcomes through boxplots, and Table 2 presents median MAE values and corresponding \(p\)-values for the comparison between the best initial imputation and the complete HaLI algorithm using both interpolation methods. The results substantiate that the proposed HaLI approach consistently surpasses initial imputation across various missingness rates, interpolation tactics, and noise levels. Notably, shape-preserving cubic interpolation consistently outperforms cubic spline interpolation in all scenarios, attributed to pchip's reduced oscillations and overshooting compared to splines. Furthermore,
the Wilcoxon signed-rank test validates the statistically significant enhancement in all instances when employing either spline or pchip interpolation (\(p\)-value \(<0.0001\)).
To compare the imputation strategies in the first step, we present frequency histograms for the best-performing initial imputation approach in Fig. 5. We observe that the TLM algorithm consistently outperforms other methods across varying missing data rates. Among these, the dynamic signal forecasting methods (GPR, DMD, LSE, and EDMD) follow TLM for \(P_{ms}=5\%\) and \(10\%\). From \(P_{ms}=15\%\) onwards, both ARIMA methods demonstrate similar performance, consistently outperforming other methods except TLM and GPR. At \(P_{ms}=20\%\), TLM is still the most frequent best-performing method, followed by GPR and EDMD. Finally, the dominance of the TLM approach diminishes with longer missing data segments, potentially related to its limitation in comparing templates solely in the projection space for extended intervals. Table 3 provides average computation times for different algorithms, including the combined time of Steps 2 and 3 in HaLI for both interpolation alternatives. These findings highlight TLM's lower computational load and superior performance.
Figure 3. Top: Example noisy synthetic signal with time-varying wave-shape and \(20\%\) of missing data alongside the modulus of its STFT. Bottom: Imputed version of the signal using Takens’ Lag Map in the initial imputation step alongside the modulus of its STFT.
### Missing data imputation in physiological signals
We analyzed real-world physiological signals from various sources, including the Taiwan Integrated
Figure 4. Boxplot of the MAE for synthetic signals at different noise levels. For each missing data rate, boxplots for the best imputation method (BI) and the complete HaLI algorithm with spline (HaLI[BI](s)) and pchip (HaLI[BI](p)) interpolation schemes. In both cases, the harmonic decomposition and interpolation steps are performed on the result of the best-performing initial imputation method based on the MAE. Statistically significant differences (\(p<0.0167\)) between the median values of the three methods according to the signed-rank Wilcoxon test are indicated with ‘*’.
Database for Intelligent Sleep (TIDIS)1, the labeled raw accelerometry database2, and arterial blood pressure signals from the CHARIS database3. Our selection from the TIDIS database includes photoplethysmogram (PPG), airflow signal (AF), nasal pressure signal (NP), and thorax impedancemetry (THO) recordings. For accelerometry, we focused on the right ankle sensor's x-axis signal during walking. The CHARIS ABP signals exhibit a significant trend linked to mean arterial pressure (MAP), vital for the clinical assessment of imputed signals. In total, we evaluated 131 distinct signals, distributed as 25 PPG, 17 AF, 25 NP, 19 THO, 32 ACC, and 13 ABP recordings. We selected segments free of saturation or sensor disconnections and with a clearly visible oscillatory pattern. For CHARIS, 39 ABP segments from 3 different selections were chosen. Each segment underwent three missing intervals, mirroring the approach from the previous section, with missing data rates ranging from 5% to 20%. Notably, all signals comprise a solitary oscillatory component, excluding ABP where a nontrivial slow-varying oscillatory component is observed.
Footnote 1: [https://tidis.org/en/](https://tidis.org/en/)
Footnote 2: [https://physionet.org/content/accelerometry-walk-climb-drive/1.0.0/](https://physionet.org/content/accelerometry-walk-climb-drive/1.0.0/)
Footnote 3: [https://physionet.org/content/charisdb/1.0.0/](https://physionet.org/content/charisdb/1.0.0/)
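The missing-data protocol just described (a fixed number of disjoint missing intervals covering a target fraction \(P_{ms}\) of the samples) can be sketched as follows. This is an illustrative helper of ours, not the paper's code, and it assumes the gaps fit comfortably within the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_gaps(n, p_ms, n_gaps=3):
    """Place n_gaps disjoint missing intervals covering a fraction p_ms of
    the n samples."""
    total = int(round(p_ms * n))
    lens = np.full(n_gaps, total // n_gaps)
    lens[: total - lens.sum()] += 1          # distribute the remainder
    mask = np.zeros(n, dtype=bool)           # True marks a missing sample
    for L in lens:
        while True:
            s = int(rng.integers(1, n - L - 1))
            if not mask[max(0, s - 1): s + L + 1].any():  # keep gaps apart
                mask[s: s + L] = True
                break
    return mask

mask = make_gaps(10_000, p_ms=0.20)          # ~20% missing, as in the text
print(mask.mean())
```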
Figure 5. Imputation performance on synthetic signals. Frequency histograms of the best performing initial imputation method on synthetic signals for each missingness rate. References: TLM: Takens’ Lag Map. LSE: Least Square Estimation. DMD: Dynamic Mode Decomposition. EDMD: Extended Dynamic Mode Decomposition. GPR: Gaussian Process Regression. ARF: ARIMA Regression with Forward Forecasting. ARB: ARIMA Regression with Backward Forecasting. TBT: Trigonometric Box-Cox, ARMA, and Seasonal Forecasting. TFA: Data-Driven Time-Frequency Analysis. LSW: Locally Stationary Wavelet Process.
The most effective combination for HaLI is determined from the error metrics, using the normalized mean absolute error (NMAE) to account for the varied magnitudes of the physiological signals. The results are summarized in the boxplots in Fig. 6. Notably, the median NMAE of HaLI is lower than that of the initial imputation in the majority of instances (as summarized in Table 4).
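The exact NMAE normalization is not spelled out here; a minimal sketch, assuming normalization by the ground-truth range over the missing samples, is:

```python
import numpy as np

def nmae(x_true, x_imp, miss):
    """MAE over the missing samples, normalized by the ground-truth range.
    Dividing by the range is one common convention (our assumption); the
    paper only states that NMAE is used because the physiological signals
    have very different scales."""
    mae = np.mean(np.abs(x_true[miss] - x_imp[miss]))
    return mae / (np.max(x_true) - np.min(x_true))
```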
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & & \multicolumn{4}{c|}{\(\mathbf{P_{ms}[\%]}\)} \\ \hline & & **5** & **10** & **15** & **20** \\ \hline & MAE\({}_{I}\) & 0.1352 & 0.2505 & 0.4167 & 0.5990 \\ & MAE\({}_{S}\) & 0.0842 & 0.1948 & 0.3436 & 0.5304 \\ & MAE\({}_{P}\) & 0.0832 & 0.1768 & 0.2967 & 0.4762 \\ & \(p_{I-S}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ & \(p_{I-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ & \(p_{S-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ \hline & MAE\({}_{I}\) & 0.1971 & 0.2943 & 0.4994 & 0.6685 \\ & MAE\({}_{S}\) & 0.1135 & 0.2554 & 0.4349 & 0.6172 \\ & MAE\({}_{P}\) & 0.1093 & 0.2437 & 0.3713 & 0.5640 \\ & \(p_{I-S}\) & \(<0.0001\) & \(<0.0001\) & 0.0001 & \(<0.0001\) \\ & \(p_{I-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ & \(p_{S-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ \hline & MAE\({}_{I}\) & 0.3818 & 0.4396 & 0.5963 & 0.7445 \\ & MAE\({}_{S}\) & 0.1663 & 0.2664 & 0.4805 & 0.6094 \\ & MAE\({}_{P}\) & 0.1620 & 0.2444 & 0.4052 & 0.5652 \\ & \(p_{I-S}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ & \(p_{I-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ & \(p_{S-P}\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) & \(<0.0001\) \\ \hline \end{tabular}
\end{table}
Table 2. Median MAE values for each initial method and the HaLI procedure on synthetic signals at each noise level. I: Best initial imputation. S: HaLI[BI](s). P: HaLI[BI](p), where BI means Best Imputation.
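The paired comparisons reported in Table 2 can be reproduced, in outline, with `scipy.stats.wilcoxon`. The arrays below are placeholders standing in for the per-signal MAEs, and the Bonferroni threshold \(0.05/3\approx 0.0167\) matches the significance criterion quoted in Figure 4.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Placeholder per-signal MAEs standing in for the experiment's outputs;
# I = initial imputation, S = HaLI[BI](s), P = HaLI[BI](p).
mae_I = rng.gamma(4.0, 0.05, size=100)
mae_S = mae_I * rng.uniform(0.6, 0.95, size=100)
mae_P = mae_S * rng.uniform(0.8, 1.0, size=100)

pairs = {"I-S": (mae_I, mae_S), "I-P": (mae_I, mae_P), "S-P": (mae_S, mae_P)}
for name, (a, b) in pairs.items():
    stat, p = wilcoxon(a, b)          # paired, two-sided signed-rank test
    # Bonferroni threshold for three comparisons: 0.05/3 ~ 0.0167
    print(f"{name}: p = {p:.2e}, significant: {p < 0.05 / 3}")
```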
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{\(\mathbf{P_{ms}[\%]}\)} \\ \hline & **5** & **10** & **15** & **20** \\ \hline TLM & 0.027 & 0.0245 & 0.0235 & 0.0225 \\ \hline LSE & 0.0116 & 0.01 & 0.001 & 0.001 \\ \hline DMD & 0.025 & 0.0235 & 0.0216 & 0.0211 \\ \hline EDMD & 0.09 & 0.13 & 0.09 & 0.09 \\ \hline GPR & 1.690 & 1.531 & 1.455 & 1.412 \\ \hline ARIMAF & 5.06 & 4.180 & 4.519 & 4.467 \\ \hline ARIMAB & 4.707 & 4.477 & 4.286 & 4.010 \\ \hline TBATS & 11.192 & 10.917 & 10.866 & 11.038 \\ \hline DDTFA & 241.28 & 230.21 & 210.59 & 210.76 \\ \hline LSW & 6.112 & 9.786 & 19.021 & 37.277 \\ \hline HaLI[BI](s) (Steps 2+3) & 0.903 & 0.895 & 0.861 & 0.871 \\ \hline HaLI[BI](p) (Steps 2+3) & 0.901 & 0.893 & 0.859 & 0.870 \\ \hline \end{tabular}
\end{table}
Table 3. Average computation time (in seconds) of different algorithms for the synthetic signal experiment.
The Wilcoxon signed-rank hypothesis test confirms the statistical significance of this enhancement, as indicated by the corresponding \(p\)-values in Table 4. Across the various \(P_{ms}\) values and signals, HaLI(p) consistently improves the NMAE measure over the initial imputation, except for specific cases such as PPG and ABP at 15% and 20%, as well as AF signals at 5% missing data rates. This is attributed to the significantly higher fundamental frequencies of PPG and ABP signals compared to respiratory signals (AF,
NP, and THO), resulting in more cycles within the missing value interval(s). This underscores that the initial imputation methods considered in this study face constraints when dealing with a large number of consecutive missing signal cycles. Nevertheless, significant improvement is evident across the other scenarios when applying the proposed approach. Fig. 7 offers a visual contrast between initial imputation and HaLI. We see that HaLI mitigates amplitude overshooting through amplitude interpolation and enhances the synchronization between original and imputed cycles via phase interpolation. Subsequently, Fig. 8 presents histograms of the top-performing initial imputation method for each signal, with TLM demonstrating superiority over the other methods in the majority of cases.
## 6. Discussion and conclusions
This study introduces a novel strategy for imputing missing data in non-stationary oscillatory time series. Our proposed method combines established imputation techniques, harmonic decomposition, and harmonic-level interpolation. We evaluate various missing data imputation algorithms and provide accessible Matlab code for wider use. Our experiments, involving synthetic and real-world signals, highlight the effectiveness of the HaLI approach, especially of the harmonic-level interpolation step. We analyze real signals with different sampling rates and fundamental frequencies, demonstrating the method's robustness across signal sample counts and cycle durations. In summary, HaLI shows great promise, with future work focused on exploring its clinical applications. Importantly, the harmonic decomposition and harmonic-level interpolation can be applied as post-processing steps to enhance any initial imputation method, offering versatility for integration into existing techniques.

Figure 6. Boxplots of the normalized mean absolute error (NMAE) for each physiological signal considered at each missing data rate. BestImp: Best initial imputation; HaLI[BI](s): HaLI based on the best initial imputation and spline interpolation; HaLI[BI](p): HaLI based on the best initial imputation and shape-preserving cubic interpolation. Top, from left to right: results for PPG, ABP, and accelerometry. Bottom, from left to right: results for airflow, nasal pressure, and thorax impedanciometry.
Several technical considerations warrant discussion. First, real-world scenarios often involve mode-mixing setups, where the instantaneous frequencies (IFs) of different IMT functions overlap. Our current algorithm has limitations in handling this issue, and we plan to explore new approaches to harmonic decomposition; the synchrosqueezed chirplet transform [54] shows promise in addressing this challenge. Next, the assumption that \(B_{k,\ell}\) is a summable sequence indexed by \(\ell\) mandates a non-spiky, oscillatory pattern. However, many real-world signals, like electrocardiograms or local field potentials with stimulation artifacts, exhibit spiky patterns. Adapting the proposed imputation algorithm to accommodate such scenarios is essential. Additionally, our current model characterizes oscillatory time series and their trends purely by describing their oscillatory behavior, a framework termed a phenomenological model. Enhanced knowledge of the background could warrant a more intricate statistical modeling of the IMT functions and trends. However, such a tailored approach hinges on the application at hand and lies beyond the scope of this paper.
This study narrows its focus to univariate time series. In practice, multivariate time series with missing values are common [55, 56]. While separate imputation is feasible, a pertinent question arises: Can missing values be collectively imputed?
Figure 7. Comparison between the initial missing data imputation before and after HaLI post-processing for each physiological signal. The original data is shown in black and the best initial imputation result and our improved imputation using HaLI are superimposed in red and blue, respectively. The NMAE with and without HaLI is shown at the bottom of each plot.
Examples of joint imputation can be found in [57]. Within the statistical community, multiple imputation [58] is a well-established technique. When an oscillatory time series conforms to a statistical model akin to the framework in [58], we conjecture that HaLI could be extended accordingly. If we view a time series as a function on \(\mathbb{R}\), it is natural to ask whether the proposed imputation idea could be generalized to spatial datasets, i.e. functions on \(\mathbb{R}^{2}\) [59], or even to higher-dimensional nonlinear spaces [60]. These intriguing avenues will be the focus of our future investigations.
It is worth noting that various user-friendly software packages have been developed to facilitate efficient imputation of missing time series. Examples include R-based packages like 'imputeTS' and Python-based packages like 'impyute', among others. These platforms offer a range of imputation methods, from simple ones like mean and last-observation-carried-forward (LOCF) imputation to more advanced techniques based on non-stationary modeling, such as ARMA and structural time-series methods. Additionally, they often provide tools for data visualization and performance metric calculation. However, a readily available imputation toolbox in Matlab is currently lacking. Therefore, this work contributes a publicly accessible Matlab implementation of the proposed imputation approach, serving as a practical module for Matlab users.

Figure 8. Imputation performance on real signals. Frequency histograms of the best performing initial imputation method on the different real signals considered. References: TLM: Takens’ Lag Map. LSE: Least Square Estimation. DMD: Dynamic Mode Decomposition. EDMD: Extended Dynamic Mode Decomposition. GPR: Gaussian Process Regression. ARF: ARIMA Regression with Forward Forecasting. ARB: ARIMA Regression with Backward Forecasting. TBT: Trigonometric Box-Cox, ARMA, and Seasonal Forecasting. TFA: Data-Driven Time-Frequency Analysis. LSW: Locally Stationary Wavelet Process.
Time series may contain missing values, including segments affected by low quality or saturation. Recovering these missing values is the task known as "imputation", for which multiple algorithms have been developed. However, when the time series exhibits nonlinear oscillatory behavior, the effectiveness of these algorithms has been observed to degrade. This paper introduces a new algorithm called Harmonic-Level Interpolation (HaLI), which improves the performance of imputation algorithms for oscillatory time series. After an imputation algorithm is run, HaLI leverages a harmonic decomposition, based on the adaptive non-harmonic model of the initial imputation, to improve the imputation accuracy for oscillatory time series. Experimental evaluations based on synthetic and real signals show that HaLI improves the performance of existing imputation algorithms. The algorithm |
2309.12046 | Non-perturbative real topological strings | We study the resurgent structure of Walcher's real topological string on
general Calabi--Yau manifolds. We find all-orders trans-series solutions to the
corresponding holomorphic anomaly equations, by extending the operator
formalism of the closed topological string, and we obtain explicit formulae for
multi-instanton amplitudes. We find that the integer invariants counting disks
appear as Stokes constants in the resurgent structure, and we provide
experimental evidence for our results in the case of the real topological
string on local $\mathbb P^2$. | Marcos Marino, Maximilian Schwick | 2023-09-21T13:15:52 | http://arxiv.org/abs/2309.12046v1 | # Non-perturbative real topological strings
###### Abstract
We study the resurgent structure of Walcher's real topological string on general Calabi-Yau manifolds. We find all-orders trans-series solutions to the corresponding holomorphic anomaly equations, by extending the operator formalism of the closed topological string, and we obtain explicit formulae for multi-instanton amplitudes. We find that the integer invariants counting disks appear as Stokes constants in the resurgent structure, and we provide experimental evidence for our results in the case of the real topological string on local \(\mathbb{P}^{2}\).
###### Contents
* 1 Introduction
* 2 The real topological string
* 2.1 Free energy and holomorphic anomaly equations
* 2.2 Propagators
* 2.3 Direct integration
* 2.4 Fixing the holomorphic ambiguity
* 3 Trans-series solutions and resurgent structure
* 3.1 Master equation and warm-up example
* 3.2 Operator formalism
* 3.3 The one-instanton amplitude
* 3.4 Multi-instantons
* 3.5 Boundary conditions
* 3.6 Stokes automorphisms
* 4 Experimental evidence: the real topological string on local \(\mathbb{P}^{2}\)
* 4.1 Perturbative expansion
* 4.2 Trans-series and asymptotics
* 5 Conclusions
* A Local \(\mathbb{P}^{2}\)
* A.1 Useful formulae
* A.2 An integral representation of the domain wall tension
## 1 Introduction
Topological string theory is defined perturbatively, and it has been a longstanding problem to understand its non-perturbative structure. An important clue for this is the asymptotic behavior of its genus expansion: as in other string theories [1; 2], it grows factorially (see e.g. [3]). This usually indicates the existence of exponentially small corrections in the string coupling constant. In the case of conventional strings and minimal strings, it is believed that such effects are due to D-branes [4].
In [5; 6; 7] it was proposed that a systematic understanding of the non-perturbative effects associated to the divergence of the topological string perturbation series can be obtained by using the theory of resurgence (see [8; 9; 10; 11] for introductions and reviews). According to this theory, one can associate to the perturbative series a collection of exponentially small corrections, describing non-perturbative amplitudes, together with numerical invariants called Stokes constants (see e.g. [12] for a definition of such a "resurgent structure"). In addition, there is growing evidence that the Stokes constants characterizing the resurgent structure of the topological string free energy
are closely related to BPS invariants of the Calabi-Yau (CY) threefold [12; 13; 14; 15; 16; 17; 18; 19] (a connection between Stokes constants and BPS invariants has been also found in complex Chern-Simons theory and its supersymmetric 3d duals, see [20; 21; 22; 23; 24]).
The resurgent structure of topological string theory is therefore a very rich object, containing perhaps information about all the BPS invariants of the theory, and a complete description is still lacking. It is possible however to obtain explicit expressions for the non-perturbative corrections by using trans-series solutions to the holomorphic anomaly equations (HAEs) of [25; 26]. This idea was put forward in [27; 28] in the local CY case. It was further developed in [15], where exact trans-series solutions in closed form were obtained, and extended to compact CY manifolds in [17]. The results of [15; 17] are based on an operator formalism first considered in [29; 30]. One finds in particular that multi-instanton solutions to the HAE are given by a generalization of the eigenvalue tunneling of matrix models [2; 6; 31; 32], which suggests that flat coordinates are quantized in integer units of the string coupling constant.
All the studies done so far focused on closed topological strings. There is a very interesting and rich generalization thereof which we will call the real topological string. Its theory was developed in a series of papers [33; 34; 35; 36] by J. Walcher and collaborators. The basic idea is to introduce a real three-dimensional locus in the CY target which plays two simultaneous roles: on the one hand it is a submanifold wrapped by a D-brane, giving boundary conditions for open topological strings, and on the other hand it is an orientifold plane, leading in addition to non-orientable amplitudes. Originally, the theory was conceived as involving open and closed strings only [33; 34], but it was later realized that consistency requires the non-orientable sector as well [35].
The real topological string leads to a perturbative series in the string coupling constant due to a sum over orientable and non-orientable surfaces, with and without boundaries. It is then natural to ask what is the resurgent structure of this series and whether the results of [15; 17] can be generalized to this setting. In this paper we start a systematic exploration of these issues. First, we generalize the results of [15; 17] and we obtain trans-series solutions to Walcher's HAE [34; 35] for the real topological string. This is achieved by extending the operator formalism of [15; 17]. One consequence of our analysis is that multi-instanton solutions to the HAE of the real topological string can be again interpreted in terms of shifts of the closed string background by integer multiples of the string coupling constant. The boundary conditions for the trans-series are provided by the behavior of the real topological string amplitudes at the conifold and the large radius points. As usual in resurgence theory, our results make it possible to obtain the asymptotics of the free energies for large "genus" or Euler characteristic of the worldsheet, which we check successfully against perturbative results in the case of the real topological string on local \(\mathbb{P}^{2}\). In addition, we find indications that the connection between Stokes constants and BPS invariants extends to the real topological string, and one can explicitly show that the integer invariants counting disks appear as Stokes constants.
This paper is organized as follows. In section 2 we review relevant results about the real topological string, focusing on Walcher's HAE. Some of the results on the real propagators, like e.g. (2.58), seem however to be new. In section 3 we study the non-perturbative aspects of the theory. We construct trans-series solutions to the HAE by extending the operator formalism of [15; 17] to the real case, and we study different boundary conditions for these equations. In section 4 we test the one-instanton amplitude obtained in section 3 in the case of local \(\mathbb{P}^{2}\). Finally, section 5 contains conclusions and prospects for future studies. Appendix A collects useful results for local \(\mathbb{P}^{2}\) which are used in the paper.
## 2 The real topological string
### Free energy and holomorphic anomaly equations
The real topological string involves a CY manifold \(X\) together with a D-brane configuration and an orientifold plane. In practice, we consider an antiholomorphic involution of \(X\),
\[\sigma:X\to X. \tag{1}\]
The fixed locus \(L\) of \(\sigma\) is a Lagrangian submanifold. It can be wrapped by an A-brane, after a choice of a flat \(U(1)\) bundle on it, but we can also consider \(L\) as an orientifold plane. In the real topological string we have to include the A-brane and the orientifold plane at the same time, as explained in [35].
**Example 2.1**.: The main example in this paper will be the real topological string on local \(\mathbb{P}^{2}\), first considered in [35]. In this case, \(X\) is the total space of the bundle \(\mathcal{O}(-3)\to\mathbb{P}^{2}\). The involution \(\sigma\) acts by complex conjugation on both the fiber and the base. The fixed locus \(L\) is the total space of the orientation bundle over \(\mathbb{RP}^{2}\), and since it has \(H_{1}(L,\mathbb{Z})=\mathbb{Z}_{2}\), there are two choices for a flat \(U(1)\) bundle (or Wilson line). Another example is the quintic CY [37], together with an involution which acts again by complex conjugation. Its fixed point locus \(L\) is topologically a real projective space \(\mathbb{RP}^{3}\), and \(H_{1}(L,\mathbb{Z})=\mathbb{Z}_{2}\) as well (see e.g. [33]).
In the real topological string one has to consider contributions from all orientable and non-orientable worldsheet topologies, with and without boundaries. The topological class of the worldsheet is determined by the genus \(g\geq 0\), the number of crosscaps \(c=0,1,2\), and the number of boundaries \(h\geq 0\). The Euler characteristic of such a surface, \(\chi\), is given by
\[\chi=2g-2+h+c. \tag{2}\]
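For instance, for the topologies that dominate the discussion below, (2) gives

\[\chi_{\text{sphere}}=-2,\qquad\chi_{\text{disk}}=-1,\qquad\chi_{\text{torus}}=\chi_{\text{annulus}}=\chi_{\text{Klein bottle}}=0,\]

corresponding to \((g,h,c)=(0,0,0)\), \((0,1,0)\), \((1,0,0)\), \((0,2,0)\) and \((0,0,2)\), respectively.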
The total free energy is obtained as a sum over all possible values of \(\chi\geq-2\), and it has the form,
\[\mathcal{G}=\sum_{\chi\geq-2}\mathcal{G}_{\chi}g_{s}^{\chi}. \tag{3}\]
We will denote by \(\mathcal{F}_{g,h}\) the contribution to \(\mathcal{G}_{\chi}\) coming from orientable Riemann surfaces with genus \(g\) and \(h\) boundaries, and by \(\mathcal{R}_{g,h}\), \(\mathcal{K}_{g,h}\) the contribution from non-orientable Riemann surfaces of genus \(g\) with \(h\) boundaries, and \(c=1,2\), respectively1. We then have
Footnote 1: Our labelling of non-orientable Riemann surfaces is different from the one used in e.g. [36].
\[\mathcal{G}=\sum_{\chi=2g-2+h}\mathcal{F}_{g,h}g_{s}^{\chi}+\sum_{\chi=2g-1+ h}\mathcal{R}_{g,h}g_{s}^{\chi}+\sum_{\chi=2g+h}\mathcal{K}_{g,h}g_{s}^{\chi}. \tag{4}\]
As in [35; 36], we will focus on examples in which \(\mathcal{R}_{g,h}=0\).
The real topological string includes in particular contributions from the conventional closed topological string, corresponding to \(c=h=0\). The first new ingredient is the disk amplitude with \(g=c=0\), \(h=1\), which we will sometimes denote as
\[\mathcal{G}_{-1}=\mathcal{T}. \tag{5}\]
When \(H_{1}(L,\mathbb{Z})=\mathbb{Z}_{2}\), as in the examples we will consider, we can interpret \(\mathcal{T}\) as the tension of a BPS domain wall interpolating between the two vacua on the D-brane [33].
In the A-model, the real topological string provides a generalization of Gromov-Witten theory which "counts" holomorphic maps from open and non-orientable Riemann surfaces to the target space \(X\). We recall that, in conventional closed topological string theory, the enumerative information of the Gromov-Witten invariants can be repackaged in terms of _integer_ BPS invariants [38], and the total closed string free energy
\[\mathcal{F}(t;g_{s})=\sum_{g\geq 0}\mathcal{F}_{g}(t)g_{s}^{2g-2} \tag{6}\]
can be written as
\[\mathcal{F}(t;g_{s})=\sum_{d\in H_{2}(X,\mathbb{Z})}\sum_{r\geq 0}\sum_{k=1}^{ \infty}n_{r,d}\frac{1}{k}\left(2\sin\frac{kg_{s}}{2}\right)^{2r-2}\mathrm{e}^{ -kd\cdot t}, \tag{7}\]
up to a cubic polynomial in the \(t_{i}\)s. Here \(t=(t_{1},\cdots,t_{s})\) is the vector of complexified Kahler parameters of \(X\), and \(n_{r,d}\) are the Gopakumar-Vafa invariants, which are integers (see [39] for a detailed derivation of (7) from a physics perspective). The integrality structure in (7) can be extended to the real topological string, and it was proposed in [34, 35, 40] that
\[\mathcal{G}-\mathcal{F}=2\sum_{r\geq-1}\sum_{\begin{subarray}{c}d\in H_{2}(X,\mathbb{Z})\\ d_{i}\equiv r\,\mathrm{mod}\,2\end{subarray}}\sum_{k\,\mathrm{odd}}\tilde{n}_{ r,d}\frac{1}{k}\left(2\mathrm{i}\sin\frac{kg_{s}}{2}\right)^{r}\mathrm{e}^{-dk \cdot t/2}. \tag{8}\]
Here, \(\tilde{n}_{r,d}\) are new integer invariants counting BPS states. An M-theoretic derivation of this integrality formula has been proposed in [41]. As it was emphasized in [35], one has to combine the open and the non-orientable sector to obtain integer invariants \(\tilde{n}_{r,d}\). Note however that, when \(r=-1\), only disks contribute, and \(\tilde{n}_{-1,d}\) can be regarded as an integer counting of disks in \(X\) with boundary conditions set by \(L\).
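To make the structure of (7) concrete, the following sympy sketch expands the degree-one contributions at low genus. The truncation in \(k\) and \(r\) and the symbol names are ours, with `n0`, `n1` standing for the placeholder invariants \(n_{0,1}\), \(n_{1,1}\).

```python
import sympy as sp

gs, Q = sp.symbols('g_s Q', positive=True)  # Q = e^{-t}, one Kaehler modulus
n0, n1 = sp.symbols('n0 n1')                # placeholder GV invariants, d = 1

# Degree-one contributions to (7), truncated at k <= 2 and r <= 1
# (the truncation is ours, for illustration only).
F = sum(n / k * (2 * sp.sin(k * gs / 2))**(2 * r - 2) * Q**k
        for r, n in [(0, n0), (1, n1)] for k in (1, 2))

# Laurent expansion in g_s: the g_s^{-2} coefficient reproduces the
# genus-zero multicovering sum n0*(Q + Q**2/8); the g_s^0 term mixes n0, n1.
print(sp.series(F, gs, 0, 2))
```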
As one would expect from mirror symmetry, the calculation of \(\mathcal{G}_{\chi}\) is easier in the B-model. This was first shown in the case of \(\mathcal{T}=\mathcal{G}_{-1}\) in [33]. The domain wall tension is a holomorphic section of the Hodge line bundle \(\mathcal{L}\) over the moduli space of complex structures \(\mathcal{M}\) of \(X\). It solves an inhomogeneous Picard-Fuchs equation of the form
\[\mathfrak{LT}=f(z_{a}), \tag{9}\]
where \(\mathfrak{L}\) is the Picard-Fuchs operator which governs the closed string periods, and \(f(z_{a})\) is a known function of the complex moduli. This leads to an efficient counting of disks ending on \(L\). The question then arises of how to compute the higher order terms in the real topological string free energy. This problem was solved by Walcher by finding _extended_ HAE for the real topological string, which generalize the closed string case originally studied in [25, 26]. In order to present Walcher's HAE we need to recall some basic ingredients on the special geometry of the CY moduli space \(\mathcal{M}\) (further details can be found in e.g. [26, 42, 43]).
The moduli space \(\mathcal{M}\) of complex structures of the mirror CY is a special Kahler manifold of complex dimension \(s\). We will denote by \(z^{a}\), with \(a=1,\cdots,s\), generic complex coordinates on this moduli space. The Kahler potential will be denoted by \(K\), and
\[G_{a\bar{b}}=\partial_{a}\partial_{\bar{b}}K \tag{10}\]
is the corresponding Kahler metric. The line bundle \(\mathcal{L}\) over \(\mathcal{M}\) is endowed with a \(U(1)\) connection \(A_{a}=\partial_{a}K\equiv K_{a}\), and the covariant derivative \(D_{a}\) involves both the Levi-Civita connection associated to the metric,
\[\Gamma^{a}_{bc}=G^{a\bar{k}}\partial_{\bar{b}}G_{c\bar{k}}, \tag{11}\]
and the \(U(1)\) connection on \(\mathcal{L}\). Since \(\mathcal{M}\) is a special Kahler manifold, its Christoffel symbols satisfy the special geometry relation
\[\partial_{\bar{b}}\Gamma^{d}_{ac}=G_{a\bar{b}}\delta^{d}_{c}+G_{\bar{c}\bar{b}} \delta^{d}_{a}-C_{acm}\overline{C}^{md}_{\bar{b}}, \tag{12}\]
where \(C_{abc}\) is the Yukawa coupling, and
\[\overline{C}^{ij}_{\bar{k}}=\mathrm{e}^{2K}\,\overline{C}_{\bar{k}\bar{a} \bar{b}}G^{i\bar{a}}G^{j\bar{b}}. \tag{13}\]
We recall that, if \(\Omega\) is the holomorphic three-form of \(X\), the Yukawa coupling is defined by
\[C_{abc}=\int_{X}\Omega\wedge\frac{\partial^{3}\Omega}{\partial z^{a}\partial z ^{b}\partial z^{c}}. \tag{14}\]
In topological string theory, it corresponds to the two-sphere amplitude with three insertions.
In the B-model, the free energies \(\mathcal{G}_{\chi}\) get promoted in general to non-holomorphic quantities which we will denote by \(G_{\chi}\) (as in [15], non-holomorphic and holomorphic versions of the free energies will be written as capital Roman and capital calligraphic letters, respectively). The \(G_{\chi}\) are sections of the bundle \(\mathcal{L}^{-\chi}\), and they satisfy Walcher's HAE. The first ingredient we will need to set up these HAE is the disk two-point function \(\Delta_{ij}\), where \(i,j\) are moduli indices. Morally, we have
\[\Delta_{ij}\sim D_{i}D_{j}G_{-1}. \tag{15}\]
A more detailed analysis shows that \(\Delta_{ij}\) is given by the so-called Griffiths' infinitesimal invariant [34]. It can be written in terms of the domain wall tension \(\mathcal{T}\) as
\[\Delta_{ij}=D_{i}D_{j}\mathcal{T}-\mathrm{i}C_{ijk}\mathrm{e}^{K}G^{k\bar{m}}D _{\bar{m}}\overline{\mathcal{T}}. \tag{16}\]
The Griffiths invariant is not holomorphic, and by using (12) one finds that it satisfies the HAE
\[\partial_{\bar{a}}\Delta_{ij}=-\mathrm{i}C_{ijp}\mathrm{e}^{K}G^{p\bar{b}} \Delta_{\bar{a}\bar{b}}, \tag{17}\]
where
\[\Delta_{\bar{a}\bar{b}}=\overline{\Delta_{ab}}. \tag{18}\]
The Griffiths invariant plays, in the open string sector, the role that the Yukawa coupling plays in the closed string sector. For this reason, it is convenient to introduce its anti-holomorphic version as
\[\Delta_{\bar{a}}^{\ b}=\mathrm{i}\mathrm{e}^{K}G^{b\bar{c}}\Delta_{\bar{a} \bar{c}}. \tag{19}\]
We are now ready to consider the HAE for the real topological string in the B-model. As in the closed string case, the one-loop contribution corresponding to \(\chi=0\) is somewhat special and needs a separate treatment. There are three different contributions at \(\chi=0\): the torus amplitude \(F_{1}\), the annulus amplitude \(F_{0,2}\), and the Klein bottle amplitude \(K_{0,0}\)2. We first recall that the closed string amplitude at genus one satisfies the HAE [25],
Footnote 2: Non-holomorphic Klein bottle amplitudes \(K_{g,h}\) shouldn’t be confused with the Kähler potential \(K\) and its derivatives \(K_{i}\), \(K_{ij}\).
\[\partial_{\bar{k}}\partial_{m}F_{1}=\frac{1}{2}\overline{C}^{ij}_{\bar{k}}C_{ mij}-\left(\frac{\chi}{24}-1\right)G_{\bar{k}m}, \tag{20}\]
where \(\chi\) is the Euler number of the CY \(X\). The annulus amplitude satisfies [34; 35],
\[\partial_{\bar{a}}\partial_{j}F_{0,2}=-\Delta_{jk}\Delta_{\bar{a}}^{\,k}. \tag{21}\]
Finally, the Klein bottle amplitude satisfies, for the type of geometries we will consider [35],
\[\partial_{\bar{k}}\partial_{m}K_{0,0}=\frac{1}{2}\overline{C}^{ij}_{\bar{k}}C_ {mij}-G_{\bar{k}m}, \tag{22}\]
which is very similar to the HAE for \(F_{1}\). In the next section we will solve these equations explicitly in terms of propagators.
We can now write down Walcher's extended HAE for the real topological string free energies with \(\chi\geq 1\)[34; 35]. It is given by
\[\partial_{\bar{a}}G_{\chi}=\frac{1}{2}\overline{C}^{P\,jk}_{\bar{a}}\sum_{ \chi_{1}+\chi_{2}=\chi-2\atop\chi_{1},\chi_{2}\geq 0}D_{j}G_{\chi_{1}}D_{k}G_{ \chi_{2}}+\overline{C}^{P\,jk}_{\bar{a}}D_{j}D_{k}G_{\chi-2}-\Delta^{P\,j}_{ \bar{a}}D_{j}G_{\chi-1}, \tag{23}\]
In these equations, \(\overline{C}^{P\,jk}_{\bar{a}}\) and \(\Delta^{P\,j}_{\bar{a}}\) are defined as [35]
\[\overline{C}^{P\,jk}_{\bar{a}}=\overline{C}^{jl}_{\bar{a}}\frac{\delta^{k}_{l }+P^{k}_{l}}{2},\qquad\Delta^{P\,k}_{\bar{a}}=\Delta^{j}_{\bar{a}}\frac{\delta ^{k}_{j}+P^{k}_{j}}{2}, \tag{24}\]
and \(P^{k}_{l}\) is a projector related to the orientifold action. In the cases we will consider, \(P^{k}_{l}=\delta^{k}_{l}\), and \(\overline{C}^{P\,jk}_{\bar{a}}=\overline{C}^{jk}_{\bar{a}}\), \(\Delta^{P\,j}_{\bar{a}}=\Delta^{j}_{\bar{a}}\). As in the case of the closed topological string, the HAE can be solved recursively starting from the free energy with \(\chi=0\) and the "tree level data" given by the Yukawa coupling and Griffiths' invariant. The best procedure to solve these equations is probably the so-called "direct integration" method, developed e.g. in [44; 45; 46; 47]. To develop this method, one has to use propagators, which will also play a crucial role in understanding the non-perturbative sectors. They are the subject of the next section.
### Propagators
Propagators were introduced in [26] as a tool to solve the HAE for the closed topological string, and they were generalized in [34] to solve for (23). In this section we will provide a detailed description of the propagators in both the real and the closed case (for the closed case one might consult [17; 48]). Some of the results for the real propagators seem to be new, and they will be needed in the discussion of the trans-series solution to the extended HAE.
The two-index, closed string propagator is defined by [26]
\[\partial_{\bar{k}}S^{ij}=\mathrm{e}^{2K}\,G^{i\bar{a}}G^{j\bar{b}}\overline{C }_{\bar{k}\bar{a}\bar{b}}. \tag{25}\]
In addition to \(S^{ab}\) one introduces as well
\[\partial_{\bar{c}}S^{b}=G_{a\bar{c}}S^{ab},\qquad\partial_{\bar{c}}S=G_{a\bar {c}}S^{a}. \tag{26}\]
As in [17], we will use the propagator formalism set up in [47]. We introduce the shifted propagators
\[\begin{split}\tilde{S}^{ij}&=S^{ij},\\ \tilde{S}^{i}&=S^{i}-S^{ij}K_{j},\\ \tilde{S}&=S-S^{i}K_{i}+\frac{1}{2}S^{ij}K_{i}K_{j}. \end{split} \tag{27}\]
A fundamental result of [46; 47; 26] is that the derivatives of the propagators w.r.t. the moduli can be written as quadratic expressions in the propagators, and one has the equations
\[\partial_{i}S^{jk} =C_{imn}S^{mj}S^{nk}+\delta_{i}^{j}\tilde{S}^{k}+\delta_{i}^{k} \tilde{S}^{j}-s_{im}^{j}S^{mk}-s_{im}^{k}S^{mj}+h_{i}^{jk}, \tag{28}\] \[\partial_{i}\tilde{S}^{j} =C_{imn}S^{mj}\tilde{S}^{n}+2\delta_{i}^{j}\tilde{S}-s_{im}^{j} \tilde{S}^{m}-h_{ik}S^{kj}+h_{i}^{j},\] \[\partial_{i}\tilde{S} =\frac{1}{2}C_{imn}\tilde{S}^{m}\tilde{S}^{n}-h_{ij}\tilde{S}^{j} +h_{i},\] \[\partial_{i}K_{j} =K_{i}K_{j}-C_{ijn}S^{mn}K_{m}+s_{ij}^{m}K_{m}-C_{ijk}\tilde{S}^{ k}+h_{ij}.\]
In these equations, \(s_{ac}^{r}\), \(h_{i}^{jk}\), \(h_{ij}\), \(h_{i}^{j}\) and \(h_{i}\) are holomorphic functions or ambiguities, and they are usually chosen to be rational functions of the moduli. We also have the following important relation between the Christoffel symbols of the Kahler metric and the propagator \(S^{ab}\),
\[\Gamma_{ac}^{r}=\delta_{c}^{r}K_{a}+\delta_{a}^{r}K_{c}-C_{acp}S^{rp}+s_{ac}^ {r}. \tag{29}\]
We will need various properties of the closed string propagators. As first pointed out in [46], some of these properties are better addressed in the "big moduli space" formalism. In this space, on top of the \(s\) complex coordinates \(z^{a}\), one introduces an additional complex coordinate corresponding roughly to the string coupling constant. In what follows, lower Latin indices run over \(a=1,\cdots,s\), and lower Greek indices run over the indices of the "big" moduli space \(\alpha=0,1,\cdots,s\).
In the big moduli space, a crucial role is played by the projective coordinates for the moduli space, \(X^{I}\), \(I=0,1,\cdots,s\). We recall that a choice of frame in the topological string is equivalent to a choice of a symplectic basis of three-cycles in \(H_{3}(X,\mathbb{Z})\), \(A^{I}\), \(B_{I}\), \(I=0,1,\cdots,s\). Then, the periods of the holomorphic three-form are defined by
\[X^{I}=\int_{A^{I}}\Omega,\qquad\mathcal{F}_{I}=\int_{B_{I}}\Omega. \tag{30}\]
The first set of periods defines the projective coordinates, while the second set defines the (projective) prepotential \(\mathcal{F}\) through
\[\mathcal{F}_{I}=\frac{\partial\mathcal{F}}{\partial X^{I}}. \tag{31}\]
We now introduce the \((1+s)\times(1+s)\) matrix (see e.g. [48; 43; 46])
\[\chi_{\alpha}^{I}=\begin{cases}D_{a}X^{I},&\text{if $\alpha=a=1,2,\cdots,s$},\\ X^{I},&\text{if $\alpha=0$}.\end{cases} \tag{32}\]
This matrix is invertible, and its inverse will be denoted by \(\chi_{I}^{\alpha}\). It satisfies
\[\chi_{\alpha}^{I}\chi_{J}^{\alpha}=\delta_{J}^{I},\qquad\chi_{\alpha}^{I}\chi _{I}^{\beta}=\delta_{\alpha}^{\beta}. \tag{33}\]
It is shown in [48] that the quantities
\[h_{I}=\chi_{I}^{0}+K_{a}\chi_{I}^{a} \tag{34}\]
are holomorphic.
Let us now consider the holomorphic limit of the theory, which depends on a choice of frame. This is specified by a choice of periods \(X^{I}\), \(I=0,1,\cdots,s\), and flat coordinates
\[t^{a}=\frac{X^{a}}{X^{0}},\qquad a=1,\cdots,s. \tag{35}\]
In the holomorphic limit, one has [26]
\[\Gamma^{c}_{ab}\to\frac{\partial z^{c}}{\partial t^{p}}\frac{\partial^{2}t^{p} }{\partial z^{a}\partial z^{b}},\qquad K_{a}\to-\partial_{a}\log(X^{0}). \tag{36}\]
The holomorphic limit of the free energies is obtained by taking the holomorphic limit of the propagators. In the case of the closed string, this limit has been studied in detail in [15; 17]. We will denote the holomorphic limit of the shifted propagators by calligraphic letters \(\mathcal{S}^{ij}\), \(\tilde{\mathcal{S}}^{j}\), \(\tilde{\mathcal{S}}\). Based on the results in [48], it can be shown that these holomorphic propagators satisfy the equations
\[\begin{split} C_{ijl}\mathcal{S}^{lk}-s^{k}_{ij}&=- \partial_{ij}^{2}X^{I}\chi^{k}_{I},\\ C_{ijk}\tilde{\mathcal{S}}^{k}-h_{ij}&=h_{I} \partial_{ij}^{2}X^{I}.\end{split} \tag{37}\]
Let us now introduce propagators for the real topological string, following [34]3. They are defined by
Footnote 3: In [34; 35], the real topological string propagators are denoted by \(\Delta^{b}\), \(\Delta\).
\[\partial_{\bar{a}}R^{b}=\mathrm{i}e^{K}G^{b\bar{c}}\Delta_{\bar{a}\bar{c}}= \Delta_{\bar{a}}^{\ b}, \tag{38}\]
and
\[R^{a}=G^{a\bar{b}}\partial_{\bar{b}}R. \tag{39}\]
They are both sections of \(\mathcal{L}^{-1}\). One has also the property
\[\Delta_{\bar{a}\bar{b}}=-\mathrm{i}\,\mathrm{e}^{-K}D_{\bar{a}}D_{\bar{b}}R. \tag{40}\]
The propagator \(R^{k}\) is closely related to the Griffiths invariant. To see this, we first note that the HAE (17) can be written as
\[\partial_{\bar{a}}\Delta_{ij}=-C_{ijk}\Delta_{\bar{a}}^{\ k}, \tag{41}\]
and one has
\[C_{ijk}R^{k}+\Delta_{ij}=d_{ij}, \tag{42}\]
where \(d_{ij}\) are holomorphic functions. The relation (42) is similar to (29), and as we will see, it gives a useful expression for the holomorphic limit of the real propagators. As in the closed string case, the derivatives of \(R^{j}\), \(R\) can be written as polynomials in the propagators. It is convenient to introduce the tilded real propagators [47]
\[\begin{split}\tilde{R}^{i}&=R^{i},\\ \tilde{R}&=R-K_{i}R^{i}.\end{split} \tag{43}\]
Since \(R^{i}\) remains the same, we will omit the tilde. Then, one finds the relations [47]
\[\begin{split}\partial_{i}R^{j}&=\delta^{j}_{i} \widetilde{R}+S^{jl}\left(C_{ilk}R^{k}-d_{il}\right)-s^{j}_{ik}R^{k}+d^{j}_{i},\\ \partial_{i}\widetilde{R}&=\left(C_{ijk}R^{j}-d_{ ik}\right)\widetilde{S}^{k}-h_{ij}R^{j}+d_{i},\end{split} \tag{44}\]
where \(d_{ij}\), \(d_{i}^{j}\) and \(d_{i}\) are holomorphic ambiguities and the \(d_{ij}\) are the same quantities appearing in (42).
It is possible to obtain relations between the real topological string propagators and the closed string propagators. Indeed, by using (16), we find
\[\begin{split}\partial_{\bar{a}}R^{b}&=\mathrm{i} \mathrm{e}^{K}G^{b\bar{c}}D_{\bar{a}}D_{\bar{c}}\overline{\mathcal{T}}-\overline {C}_{\bar{a}}^{bm}D_{m}\mathcal{T}=\partial_{\bar{a}}\left(\mathrm{i}\mathrm{e} ^{K}G^{b\bar{c}}D_{\bar{c}}\overline{\mathcal{T}}\right)-\partial_{\bar{a}}S^{bm }D_{m}\mathcal{T}\\ &=\partial_{\bar{a}}\left\{\mathrm{i}\mathrm{e}^{K}G^{b\bar{c}}D_ {\bar{c}}\overline{\mathcal{T}}-S^{bm}D_{m}\mathcal{T}+S^{b}\mathcal{T}\right\},\end{split} \tag{45}\]
where we have used that
\[\partial_{\bar{a}}(D_{m}\mathcal{T})=G_{\bar{a}m}\mathcal{T}, \tag{46}\]
since \(\mathcal{T}\) is holomorphic. A similar argument can be applied to \(R\), and one finds in this way,
\[\begin{split} R^{k}&=-S^{km}\partial_{m}\mathcal{T} +\tilde{S}^{k}\mathcal{T}+\mathrm{i}\,\mathrm{e}^{K}G^{k\bar{c}}D_{\bar{c}} \overline{\mathcal{T}}+r^{k},\\ \tilde{R}&=-\tilde{S}^{m}\partial_{m}\mathcal{T}+2 \tilde{S}\mathcal{T}+\mathrm{i}\,\mathrm{e}^{K}\left(\overline{\mathcal{T}}-K _{b}G^{b\bar{c}}D_{\bar{c}}\overline{\mathcal{T}}\right)+r.\end{split} \tag{47}\]
In these expressions, \(r^{k}\), \(r\) are holomorphic functions which depend explicitly on the domain wall tension \(\mathcal{T}\). For example, one can show that the functions \(r^{k}\) satisfy
\[C_{ijk}r^{k}=-\partial_{ij}^{2}\mathcal{T}+s_{ij}^{k}\partial_{k}\mathcal{T}- h_{ij}\mathcal{T}+d_{ij}. \tag{48}\]
A similar connection between the real and the closed string propagators is implicit in the expressions for the former found in [49] in the big moduli space.
Let us now consider the holomorphic limit for the real topological string. In order to do this, we have to understand first the holomorphic limit of Griffiths' invariant. As explained in [34], in a given frame one has to choose \(\mathcal{T}\) in such a way that it vanishes at the appropriate base-point, together with all its holomorphic derivatives. When this is the case, both \(\overline{\mathcal{T}}\) and \(D_{\bar{a}}\overline{\mathcal{T}}\) vanish in the holomorphic limit. Therefore, the holomorphic limit of Griffiths' invariant, which we will denote by \(\mathcal{D}_{ij}\), is simply given by
\[\mathcal{D}_{ij}=D_{i}D_{j}\mathcal{T}, \tag{49}\]
where the covariant derivatives are calculated in the holomorphic limit. In terms of holomorphic propagators, one has
\[\mathcal{D}_{ij}=\partial_{ij}^{2}\mathcal{T}-\left(C_{ijl}\mathcal{S}^{lk}-s_ {ij}^{k}\right)\partial_{k}\mathcal{T}+\left(C_{ijk}\tilde{\mathcal{S}}^{k}-h _{ij}\right)\mathcal{T}. \tag{50}\]
Let us now consider the holomorphic limit of the real propagators \(R^{b}\), \(\tilde{R}\) in a given frame. We will use again calligraphic letters and denote these limits by \(\mathcal{R}^{b}\), \(\tilde{\mathcal{R}}\). They satisfy the holomorphic limits of the various equations that we have considered. For example, from (47) we find
\[\begin{split}\mathcal{R}^{k}&=-\mathcal{S}^{km} \partial_{m}\mathcal{T}+\tilde{\mathcal{S}}^{k}\mathcal{T}+r^{k},\\ \tilde{\mathcal{R}}&=-\tilde{\mathcal{S}}^{m}\partial _{m}\mathcal{T}+2\tilde{\mathcal{S}}\mathcal{T}+r.\end{split} \tag{51}\]
In addition, by taking the holomorphic limit of (42) we obtain,
\[C_{ijk}\mathcal{R}^{k}-d_{ij}=-\mathcal{D}_{ij}. \tag{52}\]
As we have emphasized many times, the original propagators are globally defined but non-holomorphic. Their holomorphic limit is frame dependent, and we would like to know how they transform under a change of frame. This was a crucial ingredient in [15, 17] and we will need the corresponding generalization to the real case. We recall that a change of frame is implemented by a change of symplectic basis in (30) and is therefore associated to a symplectic matrix
\[\Gamma=\begin{pmatrix}A&B\\ C&D\end{pmatrix}, \tag{53}\]
where \(A\), \(B\), \(C\), \(D\) are \((1+s)\times(1+s)\) matrices which satisfy
\[A^{\rm T}D-C^{\rm T}B=\mathbf{1}_{s+1},\ \ \ A^{\rm T}C=C^{\rm T}A,\ \ \ B^{\rm T}D=D^{\rm T}B, \tag{54}\]
and \(\mathbf{1}_{s+1}\) is the identity matrix of rank \(s+1\). The matrix \(\Gamma\) acts on the periods as
\[\begin{split} X_{\Gamma}^{J}&=C^{JI}\mathcal{F}_{I}+D^{J }_{\,I}X^{I},\\ \mathcal{F}_{J}^{\Gamma}&=A_{\,J}^{\,I}\mathcal{F}_{I}+B_{JI }X^{I}.\end{split} \tag{55}\]
One can show [17, 48] that the holomorphic shifted propagators in the frame defined by \(\Gamma\) are related to the ones in the original frame by the equations,
\[\begin{split}\mathcal{S}^{kl\,\Gamma}&=\mathcal{S}^{kl}-[(C\tau+D)^{-1}C]^{IJ}\chi_{I}^{k}\chi_{J}^{l},\\ \tilde{\mathcal{S}}^{k\,\Gamma}&=\tilde{\mathcal{S}}^{k}+[(C\tau+D)^{-1}C]^{IJ}\chi_{I}^{k}h_{J},\\ \tilde{\mathcal{S}}^{\Gamma}&=\tilde{\mathcal{S}}-\frac{1}{2}[(C\tau+D)^{-1}C]^{IJ}h_{I}h_{J},\end{split} \tag{56}\]
where \(\chi_{I}^{k}\), \(h_{I}\) were introduced in (33) and (34), respectively.
In the case of the real topological string there is an additional ingredient, since the domain wall tension \(\mathcal{T}\) is in general _not_ invariant under a change of frame. The reason is that, as we change the frame, we want to make sure that the domain wall tension in the new frame, \(\mathcal{T}^{\Gamma}\), as well as its holomorphic derivatives, vanishes at the appropriate base-point. The original domain wall tension \(\mathcal{T}\) (or its analytic continuation) does not satisfy this property. In general, \(\mathcal{T}^{\Gamma}\) differs from \(\mathcal{T}\) by a (real) linear combination of periods, i.e.
\[\mathcal{T}^{\Gamma}=\mathcal{T}+\alpha_{I}X^{I}+\beta^{I}\mathcal{F}_{I}. \tag{57}\]
Note that, since \(X^{I}\), \(\mathcal{F}_{I}\) satisfy the homogeneous version of the Picard-Fuchs equation, both \(\mathcal{T}\) and \(\mathcal{T}^{\Gamma}\) solve the defining equation (9) for the domain wall tension. We also note that the holomorphic functions \(r^{k}\), \(r\) appearing in (47) will depend on the choice of frame through their dependence on \(\mathcal{T}\).
We will need the transformation properties of the real propagators under a change of frame specified by (55) and (57). To derive these, we can first deduce the transformation properties of the holomorphic Griffiths invariant (50), and then recall the relation (52) to derive the transformation of \(\mathcal{R}^{k}\). By using the first equation in (44) one finally obtains the transformation of \(\tilde{\mathcal{R}}\). The final result is,
\[\begin{split}\mathcal{R}^{k\Gamma}-\mathcal{R}^{k}& =\left\{[(C\tau+D)^{-1}C]^{PJ}\partial_{P}\mathcal{T}^{\Gamma}- \beta^{J}\right\}\chi_{J}^{k},\\ \tilde{\mathcal{R}}^{\Gamma}-\tilde{\mathcal{R}}&=- \left\{[(C\tau+D)^{-1}C]^{PJ}\partial_{P}\mathcal{T}^{\Gamma}-\beta^{J} \right\}h_{J}.\end{split} \tag{58}\]
In deriving these equations it is useful to keep in mind that, since \(\mathcal{T}\) is a section of \(\mathcal{L}\), it is a homogeneous function of \(X^{I}\) of degree one, and Euler's theorem gives,
\[\mathcal{T}=X^{I}\partial_{I}\mathcal{T}. \tag{59}\]
As a consequence,
\[\chi_{I}^{a}\partial_{a}\mathcal{T}+h_{I}\mathcal{T}=\partial_{I}\mathcal{T}. \tag{60}\]
### Direct integration
Let us now rewrite Walcher's HAE in terms of closed and real propagators, following [47]. Since the non-holomorphic dependence of the free energies is contained in the propagators, one obtains the equations
\[\begin{split}\frac{\partial G_{\chi}}{\partial S^{jk}}& =\frac{1}{2}\sum_{\chi_{1}+\chi_{2}=\chi-2\atop\chi_{1},\chi_{2} \geq 0}D_{j}G_{\chi_{1}}D_{k}G_{\chi_{2}}+D_{j}D_{k}G_{\chi-2},\\ \frac{\partial G_{\chi}}{\partial R^{i}}-K_{i}\frac{\partial G_{ \chi}}{\partial\tilde{R}}&=-D_{i}G_{\chi-1},\end{split} \tag{61}\]
as well as the constraint
\[\frac{\partial G_{\chi}}{\partial K_{i}}=0. \tag{62}\]
It is sometimes useful to write down HAE for the individual ingredients of the total free energy. For example, the oriented open string amplitudes \(F_{g,h}\) satisfy [34]
\[\partial_{\bar{a}}F_{g,h}=\frac{1}{2}\overline{C}_{\bar{a}}^{jk}\sum_{(g_{1},h_{1})+(g_{2},h_{2})=(g,h)}D_{j}F_{g_{1},h_{1}}D_{k}F_{g_{2},h_{2}}+\frac{1}{2}\overline{C}_{\bar{a}}^{jk}D_{j}D_{k}F_{g-1,h}-\Delta_{\bar{a}}^{\ j}D_{j}F_{g,h-1}, \tag{63}\]
which leads to
\[\begin{split}\frac{\partial F_{g,h}}{\partial S^{jk}}& =\frac{1}{2}\sum_{(g_{1},h_{1})+(g_{2},h_{2})=(g,h)}D_{j}F_{g_{1},h _{1}}D_{k}F_{g_{2},h_{2}}+\frac{1}{2}D_{j}D_{k}F_{g-1,h},\\ \frac{\partial F_{g,h}}{\partial R^{i}}-K_{i}\frac{\partial F_{g, h}}{\partial\tilde{R}}&=-D_{i}F_{g,h-1},\end{split} \tag{64}\]
and the constraint
\[\frac{\partial F_{g,h}}{\partial K_{i}}=0. \tag{65}\]
Similarly, one can write HAE for the non-orientable amplitudes separately.
We can now use the propagators to write explicit expressions for the free energies. Let us first consider the exceptional case with \(\chi=0\). In the case of the genus one free energy \(F_{1}\), we can use the definition of the propagator to integrate (20) as
\[D_{i}F_{1}=\frac{1}{2}C_{ijk}S^{jk}-\left(\frac{\chi}{24}-1\right)K_{i}+f_{i} ^{(1)}(z), \tag{66}\]
where \(f_{i}^{(1)}(z)\) is a holomorphic ambiguity. A similar formula can be obtained for the Klein bottle amplitude,
\[D_{i}\mathcal{K}_{0,0}=\frac{1}{2}C_{ijk}S^{jk}-K_{i}+k_{i}^{(1)}(z), \tag{67}\]
where \(k_{i}^{(1)}(z)\) is the corresponding ambiguity. Finally, for the annulus amplitude one obtains from (2.21), in terms of real propagators,
\[\partial_{j}F_{0,2}=\frac{1}{2}C_{jkl}R^{k}R^{l}-d_{jk}R^{k}+f_{j}^{(0,2)}(z), \tag{2.68}\]
where \(f_{j}^{(0,2)}(z)\) is the holomorphic ambiguity. By adding all these results, we obtain a useful formula for the derivative of the total \(\chi=0\) amplitude,
\[\partial_{j}G_{0}=C_{jlk}S^{kl}-\frac{\chi}{24}K_{j}+\frac{1}{2}C_{jkl}R^{l}R^ {k}-d_{jk}R^{k}+g_{j}^{(0)}. \tag{2.69}\]
Let us note that, when acting on \(G_{0}\), \(D_{j}=\partial_{j}\). The result (2.69) can be used in the HAE (2.61) to calculate all the \(G_{\chi}\) recursively, as polynomials in the propagators, and up to holomorphic ambiguities. Let us work out the first case, \(G_{1}\). It satisfies the equations
\[\begin{split}\frac{\partial G_{1}}{\partial S^{ij}}& =\Delta_{ij}=-C_{ijk}R^{k}+d_{ij},\\ \frac{\partial G_{1}}{\partial R^{j}}-K_{j}\frac{\partial G_{1}} {\partial\tilde{R}}&=-D_{j}G_{0}.\end{split} \tag{2.70}\]
By using (2.69) the last equation can be split into two,
\[\begin{split}\frac{\partial G_{1}}{\partial R^{k}}& =-C_{ijk}S^{ij}-\frac{1}{2}C_{kij}R^{i}R^{j}+d_{kj}R^{j}+g_{k}^{(0 )},\\ \frac{\partial G_{1}}{\partial\tilde{R}}&=-\frac{ \chi}{24}.\end{split} \tag{2.71}\]
By integrating these equations, we obtain
\[G_{1}=-S^{ij}\left(C_{ijk}R^{k}-d_{ij}\right)-\frac{1}{6}C_{ijk}R^{i}R^{j}R^{k }+\frac{1}{2}d_{ij}R^{i}R^{j}-\frac{\chi}{24}\tilde{R}+g_{k}^{(0)}R^{k}+g_{1}, \tag{2.72}\]
where \(g_{1}\) is a new holomorphic ambiguity.
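Since direct integration treats the propagators as independent variables, the result (2.72) can be checked symbolically; a minimal sympy sketch in the one-modulus case (index structure suppressed, all holomorphic data held constant) is:

```python
import sympy as sp

# One-modulus check of (2.72) against (2.70)-(2.71): S = S^{zz}, R = R^{z},
# Rt = tilde(R), with C = C_{zzz}, d = d_{zz}, g0 = g_z^{(0)} held constant.
S, R, Rt, C, d, g0, g1, chi = sp.symbols('S R Rt C d g0 g1 chi')

G1 = (-S * (C * R - d) - C * R**3 / 6 + d * R**2 / 2
      - chi * Rt / 24 + g0 * R + g1)

assert sp.simplify(sp.diff(G1, S) - (-C * R + d)) == 0                # (2.70)
assert sp.simplify(sp.diff(G1, R)
                   - (-C * S - C * R**2 / 2 + d * R + g0)) == 0       # (2.71)
assert sp.simplify(sp.diff(G1, Rt) - (-chi / sp.Integer(24))) == 0
print("G_1 satisfies the propagator equations")
```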
### Fixing the holomorphic ambiguity
Integrating the HAE for the real topological string requires fixing the holomorphic ambiguity at each value of \(\chi\). As in the closed string case, in order to do this it is helpful to know the behavior of the real topological string amplitudes at special points in moduli space. Among these, perhaps the most important one is the conifold locus. It has been known for some time that closed topological string amplitudes have a universal behavior at this locus, given by the \(c=1\) string [50]. Let us assume for simplicity that there is a single modulus in the geometry, and let \(t_{c}\) be a vanishing flat coordinate at the conifold locus. Then, in the conifold frame, the closed string amplitudes have the following behavior,
\[\mathcal{F}_{g}^{c}(t_{c})=\frac{B_{2g}}{2g(2g-2)}t_{c}^{2-2g}+\mathcal{O}(1), \tag{2.73}\]
where \(B_{2g}\) are Bernoulli numbers, and we have normalized \(t_{c}\) appropriately (here and in the following we use inhomogeneous free energies, which differ from the homogeneous ones in an appropriate power of \(X^{0}\)). According to (2.73) there are no subleading poles in the free energy
near the conifold point. This gap condition was emphasized in [45] and it was later realized that, in many toric CY manifolds, it makes it possible to fix the holomorphic ambiguity at all genera [51].
The behavior of the real topological string amplitudes near the conifold point was studied in [34; 35; 36; 40]. It was found that all real topological string amplitudes \(\mathcal{F}_{g,h}\), \(\mathcal{K}_{g,h}\) with \(h\neq 0\) are regular at the conifold point, while \(\mathcal{K}_{g,0}\), in the conifold frame and near the conifold point, behaves as
\[\mathcal{K}^{c}_{g-1,0}(t_{c})=\frac{\Psi_{g}}{t_{c}^{2g-2}}+\mathcal{O}(1). \tag{2.74}\]
A proposal for the value of the coefficient \(\Psi_{g}\) was made in [40], and it is given by
\[\Psi_{g}=\frac{1}{2^{2g+1}g(g-1)}\left\{-(2^{2g}-1)B_{2g}+gE_{2g-2}\right\}, \tag{2.75}\]
where \(B_{2g}\) are Bernoulli numbers and \(E_{2g-2}\) are Euler numbers. We conclude that \(\mathcal{G}_{\chi}\) has a gap behavior at the conifold for \(\chi\) even, and is regular for \(\chi\) odd. This behavior helps in fixing the holomorphic ambiguity, but in contrast to what happens in the closed string case, it does not fix it completely, even in the local case, and one has to use further information. We will discuss this in more detail in section 4.
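For reference, the coefficients (2.75) are straightforward to evaluate; a short sympy sketch (the helper name `Psi` is ours) prints the first few:

```python
import sympy as sp

def Psi(g):
    """Direct evaluation of the conifold coefficient (2.75), for g >= 2."""
    B = sp.bernoulli(2 * g)          # Bernoulli number B_{2g}
    E = sp.euler(2 * g - 2)          # Euler number E_{2g-2}
    return (-(2**(2 * g) - 1) * B + g * E) / (2**(2 * g + 1) * g * (g - 1))

for g in range(2, 6):
    print(g, Psi(g))
```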
## 3 Trans-series solutions and resurgent structure
Our discussion of the real topological string has so far focused on its perturbative expansion. Following [15; 17; 27; 28; 52], we would like to consider now _trans-series solutions_ to Walcher's HAE. These will be used to obtain information on the non-perturbative sector of the real topological string, and in particular to find explicit multi-instanton amplitudes.
### Master equation and warm-up example
In order to obtain a trans-series solution we have to re-express first the HAE as a "master equation" for the full free energy \(G\). Due to the special role of the \(G_{\chi}\) with \(-2\leq\chi\leq 0\), we have to consider four different generating series, which we define as
\[\begin{split} G^{(0)}&=\sum_{\chi\geq-2}G_{\chi}g_{s}^{\chi},\qquad\widetilde{G}^{(0)}=\sum_{\chi\geq-1}G_{\chi}g_{s}^{\chi},\\ \widehat{G}^{(0)}&=\sum_{\chi\geq 0}G_{\chi}g_{s}^{\chi},\qquad\overline{G}^{(0)}=\sum_{\chi\geq 1}G_{\chi}g_{s}^{\chi}.\end{split} \tag{3.1}\]
The superscript \({}^{(0)}\) indicates that these are all perturbative amplitudes, and the overline in the last series should not be confused with complex conjugation. We will first consider the following trans-series ansatz to solve the HAE,
\[G=\sum_{\ell\geq 0}\mathcal{C}^{\ell}G^{(\ell)}, \tag{3.2}\]
where \(\mathcal{C}\) is a formal parameter keeping track of the exponential weight, and
\[G^{(\ell)}=\sum_{n\geq 0}\mathrm{e}^{-\ell\mathcal{A}/g_{s}}G_{n}^{(\ell)}g_{s}^{n},\qquad\ell\geq 1. \tag{3.3}\]
We will often refer to \(\mathcal{A}\) as an "instanton action" and to the \(G^{(\ell)}\) as \(\ell\)-th instanton amplitudes (a more general ansatz will be considered below, in section 3.4). Our goal is to find explicit expressions for the amplitudes \(G^{(\ell)}\). We define the trans-series analogue of \(\widetilde{G}^{(0)}\) as
\[\widetilde{G}=\widetilde{G}^{(0)}+\sum_{\ell\geq 1}\mathcal{C}^{\ell}G^{(\ell)}, \tag{3.4}\]
and similarly for \(\widehat{G}\) and \(\overline{G}\).
Before working out the general trans-series solution, it is instructive to solve a simple, particular case by hand, as it was done originally in [27, 28]. The simplest situation is the one considered in [15] for closed topological strings, namely toric or local CY manifolds with a single modulus. In the local case, as first noted in [53], the HAE simplify substantially: the holomorphic limit of \(K_{a}\) vanishes, and the closed string propagators \(\widetilde{\mathcal{S}}^{k}\), \(\tilde{S}\) can be set to zero by an appropriate choice of the holomorphic functions appearing in the equations (2.28). Similarly, one can set to zero the holomorphic real topological string propagator \(\widetilde{\mathcal{R}}\) by choosing \(h_{ij}=d_{i}=0\), as required by the second equation in (2.44). Therefore, if we are interested in the end in calculating holomorphic quantities, we can set \(\tilde{S}^{k}\), \(\tilde{S}\) and \(\tilde{R}\) to zero from the very beginning. Since we want to consider a single modulus case, there are only two non-zero propagators, namely \(S^{zz}\) and \(R^{z}\). Taking all this into account, the HAE (2.61) simplify to
\[\begin{split}\frac{\partial G_{\chi}}{\partial S^{zz}}&=\frac{1}{2}\sum_{\chi_{1}+\chi_{2}=\chi-2\atop\chi_{1},\chi_{2}\geq 0}D_{z}G_{\chi_{1}}D_{z}G_{\chi_{2}}+D_{z}^{2}G_{\chi-2},\\ \frac{\partial G_{\chi}}{\partial R^{z}}&=-D_{z}G_{\chi-1}.\end{split} \tag{3.5}\]
We first write down master equations for the perturbative series, and we then postulate that these equations are valid for their trans-series counterparts. It is easy to see that one obtains in this way,
\[\begin{split}\frac{\partial\overline{G}}{\partial S^{zz}}& =\frac{g_{s}^{2}}{2}\left(D_{z}\widehat{G}\right)^{2}+g_{s}^{2}D _{z}^{2}\widetilde{G},\\ \frac{\partial\overline{G}}{\partial R^{z}}&=-g_{s} D_{z}\widehat{G}.\end{split} \tag{3.6}\]
We will focus in this section on the one-instanton amplitude. It satisfies the linearized master equations
\[\begin{split}\frac{\partial G^{(1)}}{\partial S^{zz}}& =g_{s}^{2}D_{z}\widehat{G}^{(0)}D_{z}G^{(1)}+g_{s}^{2}D_{z}^{2} G^{(1)},\\ \frac{\partial G^{(1)}}{\partial R^{z}}&=-g_{s}D_{z} G^{(1)}.\end{split} \tag{3.7}\]
We now plug the ansatz (3.3) into these equations and solve order by order in \(g_{s}\). We find that, as in the closed topological string case [27, 28], \(\mathcal{A}\) is independent of the propagators. The next order gives the following equations for \(G_{0}^{(1)}\):
\[\begin{split}\frac{\partial G_{0}^{(1)}}{\partial S^{zz}}& =(\partial_{z}\mathcal{A})^{2}G_{0}^{(1)},\\ \frac{\partial G_{0}^{(1)}}{\partial R^{z}}&= \partial_{z}\mathcal{A}\,G_{0}^{(1)}.\end{split} \tag{3.8}\]
This can be immediately integrated to obtain
\[G_{0}^{(1)}=\exp\left((\partial_{z}\mathcal{A})^{2}S^{zz}+\partial_{z}\mathcal{A }\,R^{z}\right)g_{0}^{(1)}, \tag{3.9}\]
where \(g_{0}^{(1)}\) is independent of the propagators, and it is the trans-series counterpart of the holomorphic ambiguity. As first explained in [27; 28], this ambiguity is fixed by evaluating the trans-series in a special frame, called the \(\mathcal{A}\)-frame. This is a frame in which \(\mathcal{A}\) is one of the \(A\)-periods. In this frame we impose a fixed, simple form for the multi-instanton amplitude, which we will call, as in [15; 17], a _boundary condition_. We will discuss different boundary conditions in section 3.5. In the case at hand, as we will explain below, the boundary condition is simply
\[\mathcal{G}_{0,\mathcal{A}}^{(1)}=1, \tag{3.10}\]
where the subscript \(\mathcal{A}\) means evaluation in the \(\mathcal{A}\)-frame. On the other hand, from the above expression we find
\[\mathcal{G}_{0,\mathcal{A}}^{(1)}=\exp\left((\partial_{z}\mathcal{A})^{2} \mathcal{S}_{\mathcal{A}}^{zz}+\partial_{z}\mathcal{A}\,\mathcal{R}_{ \mathcal{A}}^{z}\right)g_{0}^{(1)}, \tag{3.11}\]
and this fixes \(g_{0}^{(1)}\). We obtain in the end
\[G_{0}^{(1)}=\exp\left((\partial_{z}\mathcal{A})^{2}(S^{zz}-\mathcal{S}_{ \mathcal{A}}^{zz})+\partial_{z}\mathcal{A}\,(R^{z}-\mathcal{R}_{\mathcal{A}}^ {z})\right). \tag{3.12}\]
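It is straightforward to check symbolically that (3.12) solves (3.8) and obeys the boundary condition (3.10); in the following minimal sympy sketch, `a` stands for \(\partial_{z}\mathcal{A}\) and `SA`, `RA` for the \(\mathcal{A}\)-frame propagators:

```python
import sympy as sp

S, SA, R, RA, a = sp.symbols('S S_A R R_A a')    # a stands for d_z(A)

G0 = sp.exp(a**2 * (S - SA) + a * (R - RA))      # the claimed solution (3.12)

assert sp.simplify(sp.diff(G0, S) - a**2 * G0) == 0  # first equation in (3.8)
assert sp.simplify(sp.diff(G0, R) - a * G0) == 0     # second equation in (3.8)
assert G0.subs({S: SA, R: RA}) == 1                  # boundary cond. (3.10)
print("one-instanton ansatz verified")
```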
This procedure can be pushed to calculate \(G^{(1)}\) at higher orders. At next to leading order we find, for example,
\[\begin{split}G_{1}^{(1)}=&\exp\left[(\partial_{z}\mathcal{A})^{2}(S^{zz}-\mathcal{S}_{\mathcal{A}}^{zz})+\partial_{z}\mathcal{A}(R^{z}-\mathcal{R}_{\mathcal{A}}^{z})\right]\frac{\partial_{z}\mathcal{A}(S^{zz}-\mathcal{S}_{\mathcal{A}}^{zz})}{6}\Big{[}4C_{zzz}(\partial_{z}\mathcal{A})^{2}(S^{zz}-\mathcal{S}_{\mathcal{A}}^{zz})^{2}\\ &+6C_{zzz}(S^{zz}-\mathcal{S}_{\mathcal{A}}^{zz})(1+(\partial_{z}\mathcal{A})R^{z})+3C_{zzz}\left((R^{z}-\mathcal{R}_{\mathcal{A}}^{z})^{2}+2\mathcal{S}_{\mathcal{A}}^{zz}+2(R^{z}-\mathcal{R}_{\mathcal{A}}^{z})\mathcal{R}_{\mathcal{A}}^{z}+(\mathcal{R}_{\mathcal{A}}^{z})^{2}\right)\\ &+6k_{z}^{(1)}(z)\Big{]}.\end{split} \tag{3.13}\]
The above expression is already quite complicated and pushing the calculation to higher orders immediately becomes cumbersome, although there is no conceptual difficulty. In addition, the meaning of the multi-instanton amplitudes is not clear in this language. For this reason, we will adopt the operator formalism first discussed in [29; 30].
### 3.2 Operator formalism
In order to solve the HAE for the trans-series of the real topological string, we generalize the operator formalism introduced in [15; 17]. We have to introduce first a generalized \(U(1)\) covariant derivative:
\[\mathfrak{d}_{\alpha}=(\mathfrak{d}_{0},\mathfrak{d}_{a}), \tag{3.14}\]
where
\[\mathfrak{d}_{0}=-g_{s}\frac{\partial}{\partial g_{s}},\qquad\mathfrak{d}_{a} =\mathfrak{D}_{a}-K_{a}g_{s}\frac{\partial}{\partial g_{s}}. \tag{3.15}\]
Here, \(\mathfrak{D}_{a}\) acts on a function of \(z^{a}\), the closed and real propagators \(S^{ab}\), \(\tilde{S}^{a}\), \(\tilde{S}\), \(R^{i}\), \(\tilde{R}\), and \(K_{a}\), as
\[\mathfrak{D}_{a}f=\frac{\partial f}{\partial z^{a}}+\partial_{a}S^{de}\frac{ \partial f}{\partial S^{de}}+\partial_{a}\tilde{S}^{c}\frac{\partial f}{ \partial\tilde{S}^{c}}+\partial_{a}\tilde{S}\frac{\partial f}{\partial\tilde {S}}+\partial_{a}R^{c}\frac{\partial f}{\partial R^{c}}+\partial_{a}\tilde{R} \frac{\partial f}{\partial\tilde{R}}+\partial_{a}K_{c}\frac{\partial f}{ \partial K_{c}}, \tag{3.16}\]
where the derivatives of the propagators are computed with the rules (28), (44). The basic operator of the formalism is defined in such a way that, in the holomorphic limit, it will become a derivative w.r.t. the flat projective coordinates \(X^{I}\). One defines [17]
\[\begin{split} T^{j}&=g_{s}\left\{-\mathcal{A}\left(\tilde{S}^{j}-\tilde{\mathcal{S}}^{j}_{\mathcal{A}}\right)+\partial_{m}\mathcal{A}(S^{mj}-\mathcal{S}^{mj}_{\mathcal{A}})\right\},\\ T^{0}&=g_{s}\left\{2\mathcal{A}(\tilde{S}-\tilde{\mathcal{S}}_{\mathcal{A}})-\partial_{m}\mathcal{A}(\tilde{S}^{m}-\tilde{\mathcal{S}}^{m}_{\mathcal{A}})\right\}-K_{j}T^{j}.\end{split} \tag{3.17}\]
These are used to construct the operator \(\mathsf{D}\) as
\[\mathsf{D}=T^{\alpha}\mathfrak{d}_{\alpha}=T^{j}\mathfrak{d}_{j}+T^{0}\mathfrak{d}_{0}, \tag{3.18}\]
and we will sometimes decompose it as
\[\mathsf{D}=\mathsf{D}_{0}+\mathsf{D}_{1}, \tag{3.19}\]
where
\[\mathsf{D}_{0}=T^{0}\mathfrak{d}_{0},\qquad\mathsf{D}_{1}=T^{j}\mathfrak{d}_{j}. \tag{3.20}\]
It is also convenient to introduce
\[\tilde{T}^{0}=T^{0}+K_{j}T^{j}=g_{s}\left\{2\mathcal{A}(\tilde{S}-\tilde{\mathcal{S}}_{\mathcal{A}})-\partial_{m}\mathcal{A}(\tilde{S}^{m}-\tilde{\mathcal{S}}^{m}_{\mathcal{A}})\right\}. \tag{3.21}\]
We have the crucial relations [17]
\[\begin{split}\mathfrak{d}_{i}T^{j}&=-\Gamma^{j}_{ik}T^{k}-T^{0}\delta^{j}_{i},\\ \mathfrak{d}_{i}T^{0}&=0,\end{split} \tag{3.22}\]
and we note that
\[\mathfrak{d}_{i}\tilde{T}^{0}=-K_{i}\tilde{T}^{0}-\left(C_{ijk}\tilde{S}^{k}-h_{ij}\right)T^{j}. \tag{3.23}\]
We also introduce derivative operators w.r.t. the propagators, just as in [17]:
\[\begin{split}\delta^{S}_{ij}&=\frac{1}{g_{s}^{2}}\left(\frac{\partial}{\partial S^{ij}}-K_{(i}\frac{\partial}{\partial\tilde{S}^{j)}}+\frac{1}{2}K_{i}K_{j}\frac{\partial}{\partial\tilde{S}}\right),\\ \delta^{S}_{0i}&=-\frac{1}{g_{s}^{2}}\left(\frac{\partial}{\partial\tilde{S}^{i}}-K_{i}\frac{\partial}{\partial\tilde{S}}\right),\\ \delta^{S}_{00}&=\frac{1}{2g_{s}^{2}}\frac{\partial}{\partial\tilde{S}},\end{split} \tag{3.24}\]
and we define
\[\omega_{S}=T^{i}T^{j}\delta^{S}_{ij}+T^{0}T^{i}\delta^{S}_{0i}+(T^{0})^{2}\delta^{S}_{00}, \tag{3.25}\]
as well as the operator
\[\mathsf{W}=\omega_{S}-\mathsf{D}\widehat{G}\,\mathsf{D}, \tag{3.26}\]
where \(\widehat{G}=\widehat{G}^{(0)}\) is one of the formal series in (3).
We now write the HAE in terms of these operators. The equation for the dependence on the closed string propagators is similar to what is obtained in the closed string case,
\[\mathsf{W}\overline{G}=\mathsf{D}G_{0}\mathsf{D}\widehat{G}-\frac{1}{2}\left(\mathsf{D}\widehat{G}\right)^{2}+\mathsf{D}^{2}\widetilde{G}. \tag{3.27}\]
In this equation, \(\mathsf{D}^{2}\widetilde{G}\) is a series in \(g_{s}\) and its first term has to be understood as
\[\mathsf{D}^{2}\left(\frac{G_{-1}}{g_{s}}\right)=\frac{1}{g_{s}}T^{i}T^{j} \mathfrak{D}_{i}\mathfrak{D}_{j}G_{-1}=\frac{1}{g_{s}}T^{i}T^{j}\Delta_{ij}. \tag{3.28}\]
Let us now consider the second equation in (2.61), involving the open string propagators. By considering different powers of \(K_{i}\), we obtain two different equations
\[\frac{\partial G_{\chi}}{\partial R^{i}}=-\mathfrak{D}_{i}G_{\chi-1},\qquad \frac{\partial G_{\chi}}{\partial\widetilde{R}}=(1-\chi)G_{\chi-1}, \tag{3.29}\]
which in terms of generating functions read
\[\frac{\partial\overline{G}}{\partial R^{i}}=-g_{s}\mathfrak{D}_{i}\widehat{G},\qquad\frac{\partial\overline{G}}{\partial\widetilde{R}}=g_{s}\mathfrak{d}_ {0}\widehat{G}. \tag{3.30}\]
This suggests introducing the operator
\[\mathsf{R}=\frac{1}{g_{s}}\left(T^{j}\frac{\partial}{\partial R^{j}}-\tilde{T }^{0}\frac{\partial}{\partial\widetilde{R}}\right). \tag{3.31}\]
This is the new ingredient in the operator formalism that we need in order to study the real topological string. As in [17], we will require the operators to have zero charge w.r.t. the \(U(1)\) connection on \(\mathcal{L}\). This is why we have included a factor of \(g_{s}\) in the r.h.s. of (3.31) (since \(R^{i}\), \(\widetilde{R}\) have charge one). Then, the equations (3.30) can be written as
\[\mathsf{R}\overline{G}=-\mathsf{D}\widehat{G}. \tag{3.32}\]
We then have three operators: \(\mathsf{R}\), \(\mathsf{W}\) and \(\mathsf{D}\), and we want to understand their commutator algebra. One can immediately calculate
\[\begin{split}\left[\frac{1}{g_{s}}\frac{\partial}{\partial R^{k} },\mathfrak{d}_{i}\right]&=\frac{1}{g_{s}}\left(S^{jl}C_{ilk}-s_ {ik}^{j}-K_{i}\delta_{k}^{j}\right)\frac{\partial}{\partial R^{j}}+\frac{1}{g _{s}}\left(\tilde{S}^{j}C_{ijk}-h_{ik}\right)\frac{\partial}{\partial\tilde{ R}},\\ \left[\frac{1}{g_{s}}\frac{\partial}{\partial\tilde{R}}, \mathfrak{d}_{i}\right]&=\frac{1}{g_{s}}\left(\frac{\partial}{ \partial R^{i}}-K_{i}\frac{\partial}{\partial\tilde{R}}\right),\end{split} \tag{3.33}\]
and from this result one easily verifies that \(\mathsf{R}\) and \(\mathsf{D}\) commute:
\[[\mathsf{R},\mathsf{D}]=0. \tag{3.34}\]
To compute the other commutators some extra work is needed. To obtain the commutator between \(\mathsf{W}\) and \(\mathsf{D}\), we have to calculate
\[\begin{split}\left[\frac{1}{g_{s}^{2}}\frac{\partial}{\partial S ^{ij}},\mathfrak{d}_{k}\right]&=-\frac{1}{g_{s}^{2}}\left(\Gamma _{ik}^{m}\frac{\partial}{\partial S^{mj}}+\Gamma_{jk}^{m}\frac{\partial}{ \partial S^{mi}}\right)\\ &+\frac{1}{2}\left(C_{ikb}\tilde{S}^{b}-h_{ik}\right)\frac{1}{g_{ s}^{2}}\frac{\partial}{\partial\tilde{S}^{j}}+\frac{1}{2}\left(C_{jkb}\tilde{S}^{b}-h_{ jk}\right)\frac{1}{g_{s}^{2}}\frac{\partial}{\partial\tilde{S}^{i}}\\ &-\frac{1}{2g_{s}^{2}}C_{ikm}K_{j}\frac{\partial}{\partial K_{m}} -\frac{1}{2g_{s}^{2}}C_{jkm}K_{i}\frac{\partial}{\partial K_{m}}\\ &+\frac{1}{2g_{s}^{2}}\left(C_{kjm}R^{m}-d_{kj}\right)\frac{ \partial}{\partial R^{i}}+\frac{1}{2g_{s}^{2}}\left(C_{kim}R^{m}-d_{ki}\right) \frac{\partial}{\partial R^{j}},\end{split} \tag{3.35}\]
as well as
\[\left[\frac{1}{g_{s}^{2}}\frac{\partial}{\partial\tilde{S}^{i}}, \mathfrak{d}_{j}\right] =\frac{2}{g_{s}^{2}}\frac{\partial}{\partial S^{ij}}-\frac{1}{g_{s}^ {2}}\Gamma_{ij}^{m}\frac{\partial}{\partial\tilde{S}^{m}}+\left(C_{ijk}\tilde{ S}^{k}-h_{ij}\right)\frac{1}{g_{s}^{2}}\frac{\partial}{\partial\tilde{S}}-\frac{1}{g_{s} ^{2}}C_{ijm}\frac{\partial}{\partial K_{m}} \tag{3.36}\] \[+\frac{1}{g_{s}^{2}}\left(C_{ijp}R^{p}-d_{ij}\right)\frac{\partial }{\partial\tilde{R}},\] \[\left[\frac{1}{g_{s}^{2}}\frac{\partial}{\partial\tilde{S}}, \mathfrak{d}_{i}\right] =\frac{2}{g_{s}^{2}}\left(\frac{\partial}{\partial\tilde{S}^{i}}- K_{i}\frac{\partial}{\partial\tilde{S}}\right).\]
When the real propagators are set to zero one recovers the results of [17]. Let us introduce
\[\delta_{i}^{R}=\frac{\partial}{\partial R^{i}}-K_{i}\frac{\partial}{\partial\tilde{R}}. \tag{3.37}\]
From here one obtains
\[\left[\delta_{ij}^{S},\mathfrak{d}_{k}\right] =-\Gamma_{ik}^{m}\delta_{jm}^{S}-\Gamma_{jk}^{m}\delta_{im}^{S}+ \frac{1}{2g_{s}^{2}}\left(C_{kim}R^{m}-d_{ki}\right)\delta_{j}^{R}+\frac{1}{2g _{s}^{2}}\left(C_{kjm}R^{m}-d_{kj}\right)\delta_{i}^{R},\] \[\left[\delta_{0i}^{S},\mathfrak{d}_{j}\right] =-2\delta_{ij}^{S}-\Gamma_{ij}^{m}\delta_{0m}^{S}+\frac{1}{g_{s}^ {2}}C_{ijm}\frac{\partial}{\partial K_{m}}, \tag{3.38}\] \[\left[\delta_{00}^{S},\mathfrak{d}_{i}\right] =-\delta_{0i}^{S},\]
which leads to
\[\left[\omega_{S},\mathfrak{d}_{k}\right]=\frac{1}{g_{s}}T^{i}\left(C_{ipk}R^{p }-d_{ik}\right)\mathsf{R}+\frac{1}{g_{s}^{2}}T^{0}T^{i}C_{ikm}\frac{\partial}{ \partial K_{m}}. \tag{3.39}\]
In addition, we have from [17]
\[\omega_{S}(T^{\alpha})=T^{\alpha}\mathsf{D}\left(\frac{\mathcal{A}}{g_{s}} \right), \tag{3.40}\]
and one can also deduce
\[\left[\omega_{S},\mathsf{D}\right]=\mathsf{D}\left(\frac{\mathcal{A}}{g_{s}}\right)\mathsf{D}+\frac{1}{g_{s}}T^{i}T^{k}\left(C_{ipk}R^{p}-d_{ik}\right)\mathsf{R}+\frac{1}{g_{s}^{2}}T^{0}T^{i}T^{j}C_{ijm}\frac{\partial}{\partial K_{m}}. \tag{3.41}\]
We conclude that
\[\left[\mathsf{W},\mathsf{D}\right]=\mathsf{D}H_{0}\mathsf{D}+\frac{1}{g_{s}}T^{i}T^{k}\left(C_{ipk}R^{p}-d_{ik}\right)\mathsf{R}+\frac{1}{g_{s}^{2}}T^{0}T^{i}T^{j}C_{ijm}\frac{\partial}{\partial K_{m}}, \tag{3.42}\]
where
\[H_{0}=\frac{\mathcal{A}}{g_{s}}+\mathsf{D}\widehat{G}. \tag{3.43}\]
This is the analogue of the functional \(\mathcal{G}\) introduced in [15; 17; 29].
It is convenient to write (3.42) in a slightly different form. We introduce the quantity,
\[H_{1}=-\mathfrak{D}_{i}\mathcal{A}\left(R^{i}-\mathcal{R}_{\mathcal{A}}^{i} \right)+\mathcal{A}\left(\tilde{R}-\tilde{\mathcal{R}}_{\mathcal{A}}\right). \tag{3.44}\]
By using the result of [17]
\[\partial_{ij}^{2}\mathcal{A}=-\partial_{l}\mathcal{A}\left(C_{ijm}\mathcal{S}_{\mathcal{A}}^{ml}-s_{ij}^{l}\right)+\mathcal{A}\left(C_{ijm}\tilde{\mathcal{S}}_{\mathcal{A}}^{m}-h_{ij}\right) \tag{3.45}\]
one can show that
\[\mathsf{D}H_{1}=-\frac{1}{g_{s}}T^{i}T^{k}\left(C_{ikp}R^{p}-d_{ik}\right), \tag{3.46}\]
therefore we can write (3.42) as
\[[\mathsf{W},\mathsf{D}]=\mathsf{D}H_{0}\,\mathsf{D}-\mathsf{D}H_{1}\,\mathsf{R} +\frac{1}{g_{s}^{2}}T^{0}T^{i}T^{j}C_{ijm}\frac{\partial}{\partial K_{m}}. \tag{3.47}\]
There are two other properties of \(H_{1}\) which will be needed:
\[\mathsf{D}^{2}\left(\frac{G_{-1}}{g_{s}}\right)=\mathsf{D}H_{1},\qquad\mathsf{ R}\mathsf{D}G_{0}=-\mathsf{D}H_{1}. \tag{3.48}\]
Finally, we want to calculate the commutator of \(\mathsf{W}\) and \(\mathsf{R}\). By using that
\[[\omega_{S},\mathsf{R}]=\mathsf{D}\left(\frac{\mathcal{A}}{g_{s}}\right) \mathsf{R}, \tag{3.49}\]
one finds,
\[[\mathsf{W},\mathsf{R}]=\mathsf{D}\left(\frac{\mathcal{A}}{g_{s}}\right) \mathsf{R}-\left(\mathsf{D}H_{1}+\mathsf{D}^{2}\widehat{G}\right)\mathsf{D}. \tag{3.50}\]
One of the key aspects of the operator formalism constructed in [15, 17, 29] is that, in the holomorphic limit, \(\mathsf{D}\) becomes indeed a derivative operator w.r.t. the flat coordinates of the CY. The more general case was analyzed in [17]. Let us write the action \(\mathcal{A}\) as an integer linear combination of periods,
\[\kappa^{-1}\mathcal{A}=c^{J}\mathcal{F}_{J}+d_{J}X^{J}. \tag{3.51}\]
In this formula, \(\kappa\) is a universal normalization constant which depends on a choice of normalization for the string coupling constant, see [17] for a detailed discussion. We will set it to one in most of the formulae that follow. Let \(f\) be a homogeneous function of degree \(n\) in the big moduli space coordinates. Then, in the holomorphic limit, one has [17]
\[\mathsf{D}(g_{s}^{-n}f)\to g_{s}^{-n+1}c^{I}\frac{\partial f}{\partial X^{I}}. \tag{3.52}\]
### 3.3 The one-instanton amplitude
We are now ready to use the formalism above to obtain the one-instanton amplitude in closed form. It satisfies the linearized HAE, which in the operator language that we have just developed is given by
\[\mathsf{W}G^{(1)} =\mathsf{D}^{2}G^{(1)}, \tag{3.53}\] \[\mathsf{R}G^{(1)} =-\mathsf{D}G^{(1)}.\]
To solve this equation we introduce an exponential ansatz, as in [15],
\[G^{(1)}=\mathrm{e}^{-\Phi}. \tag{3.54}\]
In terms of \(\Phi\), the equations (3.53) read
\[\mathsf{W}\Phi =\mathsf{D}^{2}\Phi-\left(\mathsf{D}\Phi\right)^{2}, \tag{3.55}\] \[\mathsf{R}\Phi =-\mathsf{D}\Phi.\]
To solve these equations we consider
\[H=H_{0}+H_{1}, \tag{3.56}\]
where \(H_{0,1}\) were defined in (3.43), (3.44), respectively. One can easily show that it satisfies the equations
\[\mathsf{W}H=\mathsf{D}^{2}H, \tag{3.57}\] \[\mathsf{R}H=-\mathsf{D}H.\]
In proving the first equation, it is useful to note that
\[\omega_{S}(\mathsf{D}G_{0})=\mathsf{D}\left(\frac{\mathcal{A}}{g_{s}}\right) \mathsf{D}G_{0}+\mathsf{D}^{2}\left(\frac{\mathcal{A}}{g_{s}}\right). \tag{3.58}\]
We now claim that (3.55) is solved by
\[\Phi=\mathsf{O}_{\lambda}H, \tag{3.59}\]
where \(\mathsf{O}_{\lambda}\) is the operator
\[\mathsf{O}_{\lambda}=\sum_{k\geq 1}\frac{(-\lambda)^{k-1}}{k!}\mathsf{D}^{k-1 }=\frac{1}{\lambda\mathsf{D}}\left(1-\mathrm{e}^{-\lambda\mathsf{D}}\right)= \frac{1}{\lambda}\int_{0}^{\lambda}\mathrm{e}^{-u\mathsf{D}}\mathrm{d}u, \tag{3.60}\]
and we will have to set
\[\lambda=2. \tag{3.61}\]
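The two closed forms of \(\mathsf{O}_{\lambda}\) in (3.60) agree with the defining series, as one can confirm by treating \(\mathsf{D}\) as a formal commuting symbol; a minimal sympy check (the script and its names are ours):

```python
import sympy as sp

lam = sp.Symbol('lambda')
D = sp.Symbol('D')   # formal, commuting placeholder for the operator D

closed_form = (1 - sp.exp(-lam*D))/(lam*D)
ser = sp.series(closed_form, D, 0, 6).removeO()
expected = sum((-lam)**(k - 1)/sp.factorial(k)*D**(k - 1) for k in range(1, 7))

print(sp.simplify(sp.expand(ser - expected)))   # 0: the representations agree
```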
The check that (3.59) satisfies the second equation in (3.55) follows immediately from the second equation in (3.57) and the commutation relation (3.34). To prove the first equation we have to work harder. We first obtain the commutator of \(\mathsf{W}\) with \(\mathsf{O}_{\lambda}\). As in [15; 17], we calculate the commutator of \(\mathsf{W}\) with \(\mathrm{e}^{-u\mathsf{D}}\) with the help of Hadamard's lemma,
\[\mathrm{e}^{A}B\mathrm{e}^{-A}=\sum_{n\geq 0}\frac{1}{n!}[A,B]_{n}, \tag{3.62}\]
where the iterated commutator \([A,B]_{n}\) is defined by
\[[A,B]_{n}=[A,[A,B]_{n-1}],\qquad[A,B]_{0}=B. \tag{3.63}\]
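Though elementary, Hadamard's lemma is easy to confirm numerically for finite-dimensional matrices, independently of the string-theoretic objects; a small Python check of ours:

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.5*rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = expm(A) @ B @ expm(-A)

# sum_{n>=0} [A, B]_n / n!, with [A, B]_{n+1} = [A, [A, B]_n]
rhs, nested = np.zeros_like(B), B.copy()
for n in range(40):
    rhs += nested/math.factorial(n)
    nested = A @ nested - nested @ A

print(np.allclose(lhs, rhs))   # True
```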
In our case, we have the simple result that
\[[\mathsf{D},\mathsf{W}]_{n}=-\left(\mathsf{D}^{n}H_{0}\right)\mathsf{D}+ \left(\mathsf{D}^{n}H_{1}\right)\mathsf{R}. \tag{3.64}\]
We conclude that
\[\mathsf{W}\,\mathrm{e}^{-u\mathsf{D}}=\mathrm{e}^{-u\mathsf{D}}\left(\mathsf{ W}-\sum_{k\geq 1}\frac{u^{k}}{k!}\left(\left(\mathsf{D}^{k}H_{0}\right) \mathsf{D}-\left(\mathsf{D}^{k}H_{1}\right)\mathsf{R}\right)\right). \tag{3.65}\]
By using now the integral formula for \(\mathsf{O}_{\lambda}\) in (3.60), we find
\[\mathsf{W}\mathsf{O}_{\lambda}=\mathsf{O}_{\lambda}\mathsf{W}-\frac{1}{ \lambda}\int_{0}^{\lambda}\mathrm{d}u\,\mathrm{e}^{-u\mathsf{D}}\left\{\left[ (\mathrm{e}^{u\mathsf{D}}-1)H_{0}\right]\mathsf{D}-\left[(\mathrm{e}^{u \mathsf{D}}-1)H_{1}\right]\mathsf{R}\right\}. \tag{3.66}\]
From this, and by taking into account the second equation in (3.57), one obtains
\[\mathsf{W}\Phi=\mathsf{D}^{2}\Phi-\frac{1}{\lambda}\int_{0}^{\lambda}\left[(1-\mathrm{e}^{-u\mathsf{D}})H\right]\left[\mathrm{e}^{-u\mathsf{D}}\mathsf{D}H\right]\mathrm{d}u. \tag{3.67}\]
On the other hand, we have that
\[\mathsf{D}\Phi=\sum_{k\geq 1}\frac{(-\lambda)^{k-1}}{k!}\mathsf{D}^{k}H=\frac{1 }{\lambda}\int_{0}^{\lambda}\mathrm{e}^{-u\mathsf{D}}\mathsf{D}H\mathrm{d}u. \tag{3.68}\]
Its square can be computed as
\[\begin{split}\left(\mathsf{D}\Phi\right)^{2}&=\frac {1}{\lambda^{2}}\int_{0}^{\lambda}\mathrm{e}^{-u\mathsf{D}}\mathsf{D}H\, \mathrm{d}u\int_{0}^{\lambda}\mathrm{e}^{-v\mathsf{D}}\mathsf{D}H\,\mathrm{d}v =\frac{2}{\lambda^{2}}\int_{0}^{\lambda}\mathrm{e}^{-u\mathsf{D}}\mathsf{D}H \,\mathrm{d}u\int_{0}^{u}\mathrm{e}^{-v\mathsf{D}}\mathsf{D}H\mathrm{d}v\\ &=\frac{2}{\lambda^{2}}\int_{0}^{\lambda}\left[\mathrm{e}^{-u \mathsf{D}}\mathsf{D}H\right]\left[(1-\mathrm{e}^{-u\mathsf{D}})H\right]\, \mathrm{d}u.\end{split} \tag{3.69}\]
In going to the last line, instead of integrating the symmetric function in \(v\), \(u\) over the square \([0,\lambda]^{2}\), we integrated it over the triangle below the diagonal, and multiplied the result by two. We conclude that
\[\frac{1}{\lambda}\int_{0}^{\lambda}\mathrm{e}^{-u\mathsf{D}}\left\{\left[( \mathrm{e}^{u\mathsf{D}}-1)H\right]\mathsf{D}H\right\}\,\mathrm{d}u=\frac{ \lambda}{2}\left(\mathsf{D}\Phi\right)^{2}. \tag{3.70}\]
Therefore,
\[\mathsf{W}\Phi=\mathsf{D}^{2}\Phi-\frac{\lambda}{2}\left(\mathsf{D}\Phi\right) ^{2}, \tag{3.71}\]
and by setting \(\lambda=2\) we obtain the sought-for result.
Since the solution to the HAE is expressed in terms of \(H\), we have to calculate its holomorphic limit \(\mathcal{H}\). To do this it is convenient to introduce a modified prepotential or \(\chi=-2\) free energy as in [15; 17], defined by
\[c^{I}\frac{\partial\widetilde{\mathcal{G}}_{-2}}{\partial X^{I}}=\mathcal{A}= c^{I}\mathcal{F}_{I}+d_{I}X^{I}. \tag{3.72}\]
It differs from the usual one by quadratic terms in \(X^{I}\). Then, the holomorphic limit of \(H_{0}\) is
\[\mathcal{H}_{0}=g_{s}^{-1}c^{I}\partial_{I}\widetilde{\mathcal{G}}_{-2}+\sum_ {\chi\geq 0}c^{I}\partial_{I}\mathcal{G}_{\chi}g_{s}^{\chi+1}. \tag{3.73}\]
We should also note that, as found in [17], in the compact CY case, the genus one closed string free energy \(\mathcal{F}_{1}\) appearing here is in fact given by
\[\mathcal{F}_{1}-\left(\frac{\chi}{24}-1\right)\log X^{0}. \tag{3.74}\]
The only thing that remains to do is to calculate the holomorphic limit of \(H_{1}\), which will be denoted by \(\mathcal{H}_{1}\). By using (2.58) we obtain
\[\mathcal{H}_{1}=[(C\tau+D)^{-1}C]^{PJ}\partial_{P}\mathcal{T}^{\Gamma}\left( \partial_{i}\mathcal{A}\chi_{J}^{i}+\mathcal{A}h_{J}\right)-\beta^{J}\left( \partial_{i}\mathcal{A}\chi_{J}^{i}+\mathcal{A}h_{J}\right), \tag{3.75}\]
where
\[c^{I}=C^{1I},\qquad d_{I}=D^{1}{}_{I}, \tag{3.76}\]
since we regard \(\mathcal{A}=X^{1}_{\mathcal{A}}\) as the first coordinate in the \(\mathcal{A}\)-frame. We now use
\[\partial_{i}\mathcal{A}\chi^{i}_{J}+\mathcal{A}h_{J}=c^{P}\tau_{PJ}+d_{J} \tag{3.77}\]
to obtain
\[\mathcal{H}_{1}=c^{P}\partial_{P}\mathcal{T}^{\Gamma}-\beta^{J}c^{P}\tau_{PJ}-\beta^{J}d_{J}=c^{P}\partial_{P}\left(\mathcal{T}^{\Gamma}-\beta^{J}\mathcal{F}_{J}\right)-\beta^{J}d_{J}, \tag{3.78}\]
or equivalently,
\[\mathcal{H}_{1}=c^{P}\partial_{P}\left(\mathcal{T}+\alpha_{J}X^{J}\right)- \beta^{J}d_{J}. \tag{3.79}\]
This can be used to define a new \(\tilde{\mathcal{G}}_{-1}\) as
\[\mathcal{H}_{1}=c^{P}\partial_{P}\tilde{\mathcal{G}}_{-1}, \tag{3.80}\]
which differs from \(\mathcal{G}_{-1}=\mathcal{T}\) in linear terms in the \(X^{I}\). We then consider a redefined total free energy,
\[\tilde{\mathcal{G}}=\frac{1}{g_{s}^{2}}\tilde{\mathcal{G}}_{-2}+\frac{1}{g_{s }}\tilde{\mathcal{G}}_{-1}+\sum_{\chi\geq 0}\mathcal{G}_{\chi}g_{s}^{\chi}, \tag{3.81}\]
in such a way that
\[\mathcal{H}=g_{s}c^{I}\partial_{I}\tilde{\mathcal{G}}. \tag{3.82}\]
We conclude that the holomorphic limit of \(\Phi\) is
\[\frac{1}{2}\left(\tilde{\mathcal{G}}(X^{I})-\tilde{\mathcal{G}}(X^{I}-2g_{s}c ^{I})\right), \tag{3.83}\]
and the holomorphic one-instanton amplitude is given by
\[\mathcal{G}^{(1)}=\exp\left[\frac{1}{2}\left(\tilde{\mathcal{G}}(X^{I}-2g_{s} c^{I})-\tilde{\mathcal{G}}(X^{I})\right)\right]. \tag{3.84}\]
This is very similar to the result in [15; 17]: the one-instanton amplitude is obtained by shifting the flat coordinates by an integer multiple of the string coupling constant (up to normalization). To illustrate the above result let us list the first few orders in the \(g_{s}\) expansion of the one-instanton amplitude (3.84). We have
\[\mathcal{G}^{(1)}=\mathrm{e}^{-\mathcal{A}/g_{s}}\exp\left[c^{I}c^{J}\tau_{IJ }-\mathcal{H}_{1}\right]\left(1+g_{s}\Upsilon_{1}+g_{s}^{2}\left(\Upsilon_{2} +\frac{1}{2}\Upsilon_{1}^{2}\right)+\dots\right) \tag{3.85}\]
where
\[\Upsilon_{1} =-\frac{2}{3}c^{I}c^{J}c^{K}C_{IJK}+c^{I}c^{J}\frac{\partial^{2} \mathcal{T}}{\partial X^{I}\partial X^{J}}-c^{I}\frac{\partial\mathcal{G}_{0 }}{\partial X^{I}}+\frac{c^{0}}{X^{0}}\left(\frac{\chi}{24}-1\right), \tag{3.86}\] \[\Upsilon_{2} =\frac{1}{3}c^{I}c^{J}c^{K}c^{L}\frac{\partial^{4}\mathcal{G}_{- 2}}{\partial X^{I}\partial X^{J}\partial X^{K}\partial X^{L}}-\frac{2}{3}c^{I }c^{J}c^{K}\frac{\partial^{3}\mathcal{T}}{\partial X^{I}\partial X^{J} \partial X^{K}}+\] \[+c^{I}c^{J}\frac{\partial^{2}\mathcal{G}_{0}}{\partial X^{I} \partial X^{J}}+\frac{(c^{0})^{2}}{(X^{0})^{2}}\left(\frac{\chi}{24}-1\right), \tag{3.87}\]
where we have denoted the second derivative of the modified prepotential as
\[\tau_{IJ}=\frac{\partial^{2}\widetilde{\mathcal{G}}_{-2}}{\partial X^{I} \partial X^{J}}. \tag{3.88}\]
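The structure of this expansion can be reproduced mechanically from (3.84). The following sympy sketch checks the \(-\mathcal{A}/g_{s}\) pole, the propagator-independent prefactor and the first three terms of \(\Upsilon_{1}\) in a single-modulus toy version, ignoring the \(X^{0}\) and \(\chi/24\) pieces (all names are ours):

```python
import sympy as sp

gs, c, X = sp.symbols('g_s c X')
Gm2, Gm1, G0 = [sp.Function(n)(X) for n in ('Gm2', 'Gm1', 'G0')]
Gt = Gm2/gs**2 + Gm1/gs + G0          # truncation of the total free energy (3.81)

# exponent of the one-instanton amplitude (3.84)
expo = sp.Rational(1, 2)*(Gt.subs(X, X - 2*gs*c) - Gt)
ser = sp.expand(sp.series(expo, gs, 0, 2).removeO().doit())

Dx = lambda f, k=1: sp.diff(f, X, k)
print(sp.simplify(ser.coeff(gs, -1) + c*Dx(Gm2)))                     # 0: -A/g_s pole, cf. (3.72)
print(sp.simplify(ser.coeff(gs, 0) - (c**2*Dx(Gm2, 2) - c*Dx(Gm1))))  # 0: prefactor in (3.85)
ups1 = -sp.Rational(2, 3)*c**3*Dx(Gm2, 3) + c**2*Dx(Gm1, 2) - c*Dx(G0)
print(sp.simplify(ser.coeff(gs, 1) - ups1))                           # 0: cf. Upsilon_1 in (3.86)
```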
So far we have just considered one-instanton solutions satisfying the boundary condition \(\mathcal{G}^{(1)}_{\mathcal{A}}=1\). We will now consider multi-instanton solutions with more general boundary conditions.
### 3.4 Multi-instantons
In analogy to the closed topological string, following [15; 17], we can derive HAEs for the partition function of the real topological string, and use them to compute multi-instanton contributions. For this purpose we introduce
\[Z=\exp\left[\frac{1}{2}\mathcal{G}\right],\quad Z^{(0)}=\exp\left[\frac{1}{2} \mathcal{G}^{(0)}\right],\quad Z_{\rm r}=\frac{Z}{Z^{(0)}}. \tag{3.89}\]
We will call \(Z_{\rm r}\) the reduced partition function. Notice the additional factor of \(1/2\) in comparison to the closed string case discussed in [15; 17], which is necessary in order to find linear equations for the reduced partition function. These equations read,
\[\mathsf{W}Z_{\rm r} =\mathsf{D}^{2}Z_{\rm r}, \tag{3.90}\] \[\mathsf{R}Z_{\rm r} =-\mathsf{D}Z_{\rm r}.\]
We use a trans-series ansatz for \(Z_{\rm r}\) with two trans-series parameters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), as in [15; 17],
\[Z_{\rm r}=1+\sum_{(n,m)\neq(0,0)}\mathcal{C}_{1}^{n}\mathcal{C}_{2}^{m}Z_{\rm r }^{(n|m)}. \tag{3.91}\]
This generalizes our previous ansatz (3.2). The sector \((n|m)\) corresponds to \(n\) instantons and \(m\) "negative" instantons, and it behaves at small \(g_{s}\) as
\[Z_{\rm r}^{(n|m)}\sim\exp\left(-(n-m)\frac{\mathcal{A}}{g_{s}}\right). \tag{3.92}\]
These mixed sectors were first considered in a related context in [54], see [55; 56] for further developments. From (3.90) we find, by linearity,
\[\mathsf{W}Z_{\rm r}^{(n|m)} =\mathsf{D}^{2}Z_{\rm r}^{(n|m)}, \tag{3.93}\] \[\mathsf{R}Z_{\rm r}^{(n|m)} =-\mathsf{D}Z_{\rm r}^{(n|m)}.\]
These equations can now be treated with the previously introduced operator formalism. We make the ansatz
\[Z_{\rm r}^{(n|m)}=\mathfrak{a}^{(n|m)}\mathrm{e}^{\Sigma^{(n|m)}}. \tag{3.94}\]
Then one finds that \(\Sigma^{(n|m)}\) satisfies the following equations:
\[\mathsf{W}\Sigma^{(n|m)} =\mathsf{D}^{2}\Sigma^{(n|m)}+\left(\mathsf{D}\Sigma^{(n|m)} \right)^{2}, \tag{3.95}\] \[\mathsf{R}\Sigma^{(n|m)} =-\mathsf{D}\Sigma^{(n|m)}, \tag{3.96}\]
while \(\mathfrak{a}^{(n|m)}\) satisfies
\[\mathsf{W}\mathfrak{a}^{(n|m)} =\mathsf{D}^{2}\mathfrak{a}^{(n|m)}+2\left(\mathsf{D}\Sigma^{(n|m)}\right)\left(\mathsf{D}\mathfrak{a}^{(n|m)}\right), \tag{3.97}\] \[\mathsf{R}\mathfrak{a}^{(n|m)} =-\mathsf{D}\mathfrak{a}^{(n|m)}.\]
The equations above only differ in numerical prefactors from their closed string siblings given in [15; 17]. Let us proceed to construct solutions to the above equations, starting with \(\Sigma^{(n|m)}\).
Consider first the object \(H\) introduced in (3.56). We claim that, in complete analogy to the construction in section 3.3, we have
\[\Sigma^{(n|m)}=\mathsf{O}^{(n-m)}H, \tag{3.98}\]
where we have introduced the operator
\[\mathsf{O}^{(\ell)}=\sum_{k=1}^{\infty}\frac{2^{k-1}}{k!}(-\ell)^{k}\mathsf{D}^ {k-1}=\frac{1}{2\mathsf{D}}\left(\mathrm{e}^{-2\ell\,\mathsf{D}}-1\right)=- \frac{1}{2}\int_{0}^{2\ell}\mathrm{e}^{-u\mathsf{D}}\mathrm{d}u. \tag{3.99}\]
Indeed, for the first equation (3.95) the proof is identical to the one for the one-instanton amplitude, whereas for the second equation (3.96) we remember that
\[[\mathsf{R},\mathsf{D}]=0, \tag{3.100}\]
which means that \(\left[\mathsf{R},\mathsf{O}^{(\ell)}\right]=0\). We conclude that equation (3.96) follows directly from equation (3.57).
We are left to investigate the prefactor \(\mathfrak{a}^{(n|m)}\). To this end it will be useful to introduce the operator
\[\mathsf{M}^{(n|m)}=\mathsf{W}-\mathsf{D}^{2}-2\left(\mathsf{D}\Sigma^{(n|m)} \right)\mathsf{D}. \tag{3.101}\]
Then we want to solve4
Footnote 4: Notice that \(\mathsf{M}^{(n|m)}\) differs from its counterpart in [17] only by factors of 2.
\[\mathsf{M}^{(n|m)}\mathfrak{a}^{(n|m)} =0, \tag{3.102}\] \[\mathsf{R}\mathfrak{a}^{(n|m)} =-\mathsf{D}\mathfrak{a}^{(n|m)}, \tag{3.103}\]
subject to the boundary condition
\[\mathfrak{a}^{(n|m)}_{\mathcal{A}}=\left(\frac{\mathcal{A}}{g_{s}}\right)^{\ell}. \tag{3.104}\]
As already explained in [15, 17], by linearity of (3.102) and (3.103), this suffices to obtain solutions whose boundary conditions are general polynomials in \(\mathcal{A}/g_{s}\). We will now construct objects \(\mathfrak{m}_{\ell}\), \(\ell=1,2,\cdots\), that fulfill equations (3.102) and (3.103), subject to the boundary condition (3.104). In complete analogy to the formalism developed in [15, 17] we introduce
\[X=H+2\,\mathsf{D}\Sigma^{(n|m)}, \tag{3.105}\]
which fulfills
\[\mathsf{M}^{(n|m)}X=0. \tag{3.106}\]
(The object \(X\) appearing in (3.105) should not be confused with the flat coordinates \(X^{I}\) introduced in (2.30)). Then we define \(\mathfrak{m}_{\ell}\) via
\[\Xi(\xi)=\exp\left(\mathcal{L}_{\xi}X\right)=\sum_{\ell=0}^{\infty}\frac{ \mathfrak{m}_{\ell}}{\ell!}\xi^{\ell}, \tag{3.107}\]
where
\[\mathcal{L}_{\xi}=\frac{1}{2}\int_{0}^{2\xi}\,\mathrm{e}^{u\mathsf{D}}\mathrm{ d}u=\sum_{k\geq 1}\frac{\xi^{k}}{k!}\left(2\mathsf{D}\right)^{k-1}. \tag{3.108}\]
First we notice that \(\Xi(\xi)\) fulfills equation (3.103), by using formulae (3.57), (3.96) and the fact that \(\mathsf{R}\) commutes with \(\mathsf{D}\). Concerning the first condition (3.102) it is sufficient to show that
\[\mathsf{M}^{(n|m)}\Xi(\xi)=0. \tag{3.109}\]
This can be proven by following exactly the procedure in [15, 17] and we will not repeat the explicit steps here. To illustrate the considerations above, we list the first few examples of \(\mathfrak{m}_{\ell}\):
\[\mathfrak{m}_{1} =X,\] \[\mathfrak{m}_{2} =X^{2}+2\mathsf{D}X, \tag{3.110}\] \[\mathfrak{m}_{3} =X^{3}+6X\mathsf{D}X+4\mathsf{D}^{2}X.\]
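These expressions follow from expanding (3.107) in \(\xi\). Since only products of \(\mathsf{D}^{k}X\) appear, one can model \(\mathsf{D}\) as \(\mathrm{d}/\mathrm{d}x\) acting on a generic function; a short sympy verification of (3.110) (our own script):

```python
import sympy as sp

x, xi = sp.symbols('x xi')
X = sp.Function('X')(x)

# L_xi X = sum_{k>=1} xi^k/k! (2 D)^{k-1} X, truncated, with D -> d/dx
LX = sum(xi**k/sp.factorial(k)*2**(k - 1)*sp.diff(X, x, k - 1) for k in range(1, 5))
Xi = sp.exp(LX).series(xi, 0, 4).removeO()

m2 = sp.expand(sp.factorial(2)*Xi.coeff(xi, 2))
m3 = sp.expand(sp.factorial(3)*Xi.coeff(xi, 3))
print(sp.simplify(m2 - (X**2 + 2*sp.diff(X, x))))                         # 0
print(sp.simplify(m3 - (X**3 + 6*X*sp.diff(X, x) + 4*sp.diff(X, x, 2))))  # 0
```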
Finally, let us consider the holomorphic limit of the above results. The multi-instanton amplitude is given by formula (3.94) and in close analogy to the discussion in subsection 3.3 we find for the exponent \(\Sigma^{(n|m)}\)
\[\frac{1}{2}\left(\tilde{\mathcal{G}}(X^{I}-2(n-m)g_{s}c^{I})-\tilde{\mathcal{ G}}(X^{I})\right). \tag{3.111}\]
Since the prefactors appearing in the multi-instanton amplitudes consist of words made out of the letters \(\mathsf{D}^{k}X\) it suffices to establish that,
\[\mathsf{D}^{k-1}X\to g_{s}^{k}c^{I_{1}}\ldots c^{I_{k}}\frac{\partial^{k}}{\partial X^{I_{1}}\ldots\partial X^{I_{k}}}\tilde{\mathcal{G}}\left(X^{I}-2(n-m)g_{s}c^{I}\right), \tag{3.112}\]
which follows immediately from (3.52) and the holomorphic limits of \(H\) and \(\Sigma^{(n|m)}\).
### 3.5 Boundary conditions
To obtain boundary conditions for the trans-series we follow [17, 27, 28, 52] and we consider the behavior of the theory near special points, like the conifold point and the large radius point. This behavior can be immediately translated into a large order behavior for the amplitudes at large genus, which leads in turn to multi-instanton amplitudes. For simplicity, we will assume that we are in a situation with a single modulus.
Let us first consider the conifold point. In the case of the conventional closed topological string, we have the behavior (2.73). The formula for the Bernoulli numbers
\[B_{2g}=(-1)^{g-1}\frac{2(2g)!}{(2\pi)^{2g}}\sum_{\ell\geq 1}\ell^{-2g} \tag{3.113}\]
gives the all-orders asymptotic behavior for the pole term in (2.73):
\[\frac{1}{2\pi^{2}}\Gamma(2g-1)\sum_{\ell\geq 1}\left(\ell\mathcal{A}_{c}\right) ^{1-2g}\left(\mu_{0,\ell}+\frac{\ell\mathcal{A}_{c}}{2g-2}\mu_{1,\ell}\right), \tag{3.114}\]
where,
\[\mathcal{A}_{c}=2\pi\mathrm{i}t_{c},\qquad\mu_{0,\ell}=\frac{\mathcal{A}_{c}} {\ell},\qquad\mu_{1,\ell}=\frac{1}{\ell^{2}}. \tag{3.115}\]
According to the correspondence between large order behavior and exponentially small corrections (see e.g. [11]), (3.114) corresponds to an \(\ell\)-th instanton amplitude of the Pasquetti-Schiappa form [57],
\[\frac{1}{2\pi}\left(\frac{1}{\ell}\frac{\mathcal{A}_{c}}{g_{s}}+\frac{1}{\ell ^{2}}\right)\mathrm{e}^{-\ell\mathcal{A}_{c}/g_{s}}. \tag{3.116}\]
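Incidentally, the Bernoulli representation (3.113), with \(\sum_{\ell\geq 1}\ell^{-2g}=\zeta(2g)\), is easily confirmed numerically; a quick mpmath check of ours:

```python
from mpmath import mp, bernoulli, factorial, pi, zeta

mp.dps = 30
for g in (3, 6, 9):
    rhs = (-1)**(g - 1)*2*factorial(2*g)/(2*pi)**(2*g)*zeta(2*g)
    print(bernoulli(2*g) - rhs)   # ~0 at working precision
```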
We recall that, since the original series is even in \(g_{s}\), we will have a trans-series with action \(-{\cal A}_{c}\) as well. They both combine to give the asymptotic behavior (3.114). In general, in the discussion below we have to consider as well the trans-series with opposite action.5
Footnote 5: The asymptotic series for the real topological string free energy is obviously not even in \(g_{s}\). However, the boundary conditions associated to the conifold point involve both actions \(\pm{\cal A}\).
In the case of the real topological string, we have an additional term in \({\cal G}_{\chi}\) when \(\chi=2g-2\) is even, given by (2.74) and (2.75). As in the closed string case, we can extract from these terms a large genus behavior. For the first term in (2.75) we can use (3.113), and we find
\[-\frac{t_{c}^{2-2g}}{2^{2g+1}g(g-1)}(2^{2g}-1)B_{2g}=-\frac{1}{\pi^{2}}\Gamma( 2g-1)\sum_{\ell\,\text{odd}}(\ell{\cal A}_{c})^{2-2g}\frac{1}{\ell^{2}}\left( 1+\frac{1}{2g-2}\right). \tag{3.117}\]
The second term in (2.75) gives
\[\frac{t_{c}^{2-2g}}{2^{2g+1}(g-1)}E_{2g-2}=\frac{1}{\pi}\Gamma(2g-2)\sum_{\ell \,\text{odd}}(\ell{\cal A}_{c}^{0})^{2-2g}\frac{(-1)^{\frac{\ell-1}{2}}}{\ell}, \tag{3.118}\]
where
\[{\cal A}_{c}^{0}=\frac{1}{2}{\cal A}_{c}. \tag{3.119}\]
The first term leads to a multi-instanton correction which is minus twice the Pasquetti-Schiappa form (3.116),
\[-\frac{1}{\pi}\left(\frac{1}{\ell}\frac{{\cal A}_{c}}{g_{s}}+\frac{1}{\ell^{2} }\right)\text{e}^{-\ell{\cal A}_{c}/g_{s}}, \tag{3.120}\]
and in addition \(\ell\) takes only positive, odd integer values. The second term corresponds to a multi-instanton correction of the form
\[\frac{(-1)^{\frac{\ell-1}{2}}}{\ell}\text{e}^{-\ell{\cal A}_{c}^{0}/g_{s}}, \tag{3.121}\]
where \(\ell\) is also odd. The appearance of half the instanton action (3.119) is probably an effect of the orientifold action. Note that the leading multi-instanton corresponds to (3.121) with \(\ell=1\).
We conclude that the instanton actions or Borel singularities include the sequence
\[(2k+1)\,{\cal A},\qquad k\in\mathbb{Z}_{\geq 0}, \tag{3.122}\]
where \({\cal A}={\cal A}_{c}^{0}\). They lead to the boundary condition
\[{\cal G}_{\cal A}^{(2k+1)}=\frac{(-1)^{k}}{2k+1}\text{e}^{-(2k+1){\cal A}/g_{ s}},\qquad k=0,1,2,\cdots. \tag{3.123}\]
This is what we used in (3.10), with \(k=0\). We also have the sequence of Borel singularities
\[2k{\cal A},\qquad k\in\mathbb{Z}_{>0}. \tag{3.124}\]
For \(k\) even they are due to the contributions from the closed string sector. When \(k\) is odd, we have contributions from both the closed and the non-orientable sector. The resulting boundary condition is
\[{\cal G}_{\cal A}^{(2k)}=\frac{(-1)^{k}}{2\pi}\left(\frac{2}{k}\frac{{\cal A} }{g_{s}}+\frac{1}{k^{2}}\right)\text{e}^{-2k{\cal A}/g_{s}},\qquad k=1,2,\cdots. \tag{3.125}\]
Let us now consider the large radius point. It was shown in [17], in the closed string case, that the GV formula (2.7) determines the large genus asymptotics near the large radius point, and one can read from it the location of a sequence of instanton actions and the corresponding multi-instanton amplitudes and Stokes constants. It turns out that this asymptotics is determined by the genus zero GV invariants. Indeed, by expanding (2.7) in powers of \(g_{s}\), we find
\[\mathcal{F}_{g}(t)=\sum_{d\geq 1}\left(\frac{(-1)^{g-1}B_{2g}}{2g(2g-2)!}n_{0,d}+\frac{2(-1)^{g}n_{2,d}}{(2g-2)!}+\cdots\right)\mathrm{Li}_{3-2g}(\mathrm{e}^{-d\cdot t}). \tag{3.126}\]
Only the first term inside the parentheses in the r.h.s. of (3.126) leads to factorial growth. By using again the asymptotics (3.113) and the identity,
\[\sum_{n\in\mathbb{Z}}\frac{1}{(2\pi\mathrm{i}n+t)^{m}}=\frac{1}{(m-1)!}\mathrm{Li}_{-m+1}(\mathrm{e}^{-t}),\qquad m\geq 2, \tag{3.127}\]
one finds that the contribution of genus zero GV invariants leads to instanton actions of the form
\[\mathcal{A}_{d,m}=2\pi d\cdot t+4\pi^{2}\mathrm{i}m,\qquad m\in\mathbb{Z}. \tag{3.128}\]
The corresponding \(\ell\)-instanton amplitudes are again of the Pasquetti-Schiappa form,
\[\mathcal{F}^{(\ell)}_{\mathcal{A}_{d,m}}=\frac{n_{0,d}}{2\pi}\left(\frac{1}{\ell}\frac{\mathcal{A}_{d,m}}{g_{s}}+\frac{1}{\ell^{2}}\right)\mathrm{e}^{-\ell\mathcal{A}_{d,m}/g_{s}}, \tag{3.129}\]
and \(n_{0,d}\) is the corresponding Stokes constant (up to normalization).
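The summation identity (3.127) behind this extraction can also be tested directly; a small mpmath sketch of ours (the sum converges absolutely for \(m\geq 2\)):

```python
from mpmath import mp, nsum, polylog, exp, factorial, pi, inf

mp.dps = 25
t, m = mp.mpf('0.7'), 3
lhs = nsum(lambda n: 1/(2j*pi*n + t)**m, [-inf, inf])
rhs = polylog(1 - m, exp(-t))/factorial(m - 1)
print(lhs, rhs)   # lhs has a ~0 imaginary part and matches rhs
```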
In the case of the real topological string one has to consider the additional contribution (2.8). The only source of factorial growth is due to the term with \(r=-1\), and one has
\[\mathcal{G}_{\chi=2s-1}=\tilde{n}_{-1,d}\frac{\mathrm{i}(-1)^{s}B_{2s}(2^{2s-1}-1)}{(2s)!}\left\{2^{2-2s}\mathrm{Li}_{2-2s}(\mathrm{e}^{-d\cdot t/2})-\mathrm{Li}_{2-2s}(\mathrm{e}^{-d\cdot t})\right\}+\cdots, \tag{3.130}\]
where the degrees \(d\) are odd positive numbers. This leads to a large \(\chi\) asymptotics which can be decoded in terms of multi-instanton amplitudes of the form
\[\mathrm{i}\,\tilde{n}_{-1,d}c_{\mathcal{A}}\frac{(-1)^{\ell-1}}{\ell}\mathrm{e}^{-\ell\mathcal{A}/g_{s}},\qquad\ell\in\mathbb{Z}_{>0}, \tag{3.131}\]
where \(\mathcal{A}\) can take the following values
\[\frac{1}{2}\mathcal{A}_{d,m},\qquad\frac{1}{2}\mathcal{A}_{d,2m}. \tag{3.132}\]
Here, \(\mathcal{A}_{d,m}\) is given in (3.128), and the coefficient \(c_{\mathcal{A}}\) in (3.131) takes the values \(1\) and \(-2\) for the actions given in (3.132), respectively. Note that when \(\ell\) is even the multi-instanton amplitude (3.131) will combine with multi-instanton amplitudes (3.129) associated to the closed string sector.
This analysis leads to two main conclusions. First, the real topological string leads to a new type of boundary condition, encoded in the new multi-instanton amplitudes of the form (3.121), (3.131). Second, in the resurgent structure of the real topological string, the disk invariants \(\tilde{n}_{-1,d}\) appear as Stokes constants associated to the new singularities (3.132).
### 3.6 Stokes automorphisms
The analysis of the conifold and large radius behavior indicates that there are two types of trans-series appearing in the resurgent structure of this theory. The first one is associated to the boundary conditions (3.123), (3.125), while the second one is associated to the boundary condition (3.131). A compact way to encode these trans-series is to obtain the corresponding Stokes automorphisms, as it was done in [19] for the closed topological string.
Let us consider the trans-series associated to the conifold behavior. They are multi-instanton amplitudes corresponding to the action \(\mathcal{A}=\mathcal{A}_{c}^{0}\). In the \(\mathcal{A}\)-frame, the \(\ell\)-instanton amplitudes with \(\ell\) odd are given by (3.123), while the ones with \(\ell\) even are given by (3.125). We first calculate the generating function
\[\sum_{\ell\geq 1}\mathcal{C}^{\ell}\mathcal{G}_{\mathcal{A}}^{(\ell)}=\mathcal{ B}_{c}\left(\mathcal{C}\mathrm{e}^{-\mathcal{A}/g_{s}},\frac{\mathcal{A}}{g_{s}} \right), \tag{3.133}\]
where
\[\mathcal{B}_{c}(x,y)=\tan^{-1}\left(x\right)+\frac{1}{2\pi}\mathrm{Li}_{2} \left(-x^{2}\right)-\frac{y}{\pi}\log\left(1+x^{2}\right). \tag{3.134}\]
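Expanding \(\mathcal{B}_{c}(x,y)\) in \(x\) indeed reproduces the boundary conditions: the odd powers come from \(\tan^{-1}(x)\) and match (3.123), while the even powers match (3.125) once \(y=\mathcal{A}/g_{s}\). A small sympy check (ours), with \(\mathrm{Li}_{2}\) truncated by hand:

```python
import sympy as sp

x, y = sp.symbols('x y')
Li2 = sum((-x**2)**m/sp.Integer(m)**2 for m in range(1, 6))   # truncated Li_2(-x^2)
Bc = sp.atan(x) + Li2/(2*sp.pi) - y/sp.pi*sp.log(1 + x**2)
ser = sp.expand(sp.series(Bc, x, 0, 8).removeO())

# odd l: (-1)^((l-1)/2)/l ; even l = 2k: (-1)^k (2y/k + 1/k^2)/(2 pi)
for l in range(1, 8):
    print(l, sp.simplify(ser.coeff(x, l)))
```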
The Stokes automorphism is defined by (see e.g. [9; 10] for background on alien derivatives and Stokes automorphism)
\[\mathfrak{S}_{\mathcal{C}}=\exp\left(\sum_{\ell=1}^{\infty}\mathcal{C}^{\ell}\Delta_{\ell\mathcal{A}}\right). \tag{3.135}\]
Let us now define the partition function \(\widetilde{Z}\) as in (3.89),
\[\widetilde{Z}=\mathrm{e}^{\tilde{\mathcal{G}}/2}, \tag{3.136}\]
and let us write the instanton action as a linear combination of periods, as in (3.51). Then, just like in [19], there are two different situations. If all the \(c^{I}\) vanish, the Stokes automorphism acts simply as a global multiplication factor,
\[\mathfrak{S}_{\mathcal{C}}(\widetilde{Z})=\exp\left[\frac{a}{2}\mathcal{B}_{c} \left(\mathcal{C}\mathrm{e}^{-\mathcal{A}/g_{s}},\frac{\mathcal{A}}{g_{s}} \right)\right]\widetilde{Z}, \tag{3.137}\]
where \(a\) is a Stokes constant (for the singularities at integer multiples of \(\mathcal{A}=\mathcal{A}_{c}^{0}\), one has \(a=1\), but one could have singularities with non-trivial Stokes constants). The factor of \(1/2\) in the exponent is inherited from the definition of \(\widetilde{Z}\) in (3.136). When not all \(c^{I}\) vanish, an argument similar to the one in [19] gives the following formula
\[\mathfrak{S}_{\mathcal{C}}(\widetilde{Z})=\exp\left[\frac{a}{2}\mathcal{B}_{c} \left(\mathcal{C}\mathrm{e}^{-2\kappa g_{s}c^{I}\partial_{I}},2\kappa g_{s}c^ {I}\partial_{I}\right)\right]\widetilde{Z}, \tag{3.138}\]
where we have re-introduced the normalization factor \(\kappa\) defined in (3.51). The expression (3.138) for the Stokes automorphism is rather different from the one obtained in [19], and from similar transformations that have appeared in the literature (see e.g. [58; 59]). However, as in [19], it is essentially determined by the conifold behavior of the free energies.
The Stokes automorphism associated to the multi-instanton amplitude (3.131) can be worked out in a similar way, and it involves the simpler generating function,
\[\mathcal{B}_{\text{LR}}(x)=\log(1+x). \tag{3.139}\]
As we noted above, this contribution has to be combined with the one due to the closed string sector, but the resulting transformation can be found in a straightforward way.
## 4 Experimental evidence: the real topological string on local \(\mathbb{P}^{2}\)
In this section we present experimental evidence for our non-perturbative results, based on the connection between instanton amplitudes and large order behavior of the perturbative series.
### 4.1 Perturbative expansion
Our basic example in this section is the real topological string on local \(\mathbb{P}^{2}\) mentioned in section 2, where the choice of involution is given by complex conjugation on both the fiber and the base. This model was studied in detail in [35; 36], which we will follow closely. Of course, the closed string sector is well known, and the closed string topological free energy can be computed systematically with the conventional HAE, see e.g. [28; 51]. Some aspects of the special geometry of the model are summarized in the Appendix A.
The first new ingredient in the real topological string is the disk amplitude or domain wall tension. At large radius it is given by,
\[\mathcal{T}(z)=\xi 2\Gamma^{2}(3/2)\sum_{m\geq 0}\frac{\Gamma\left(3m+\frac{ 3}{2}\right)}{\Gamma\left(m+\frac{3}{2}\right)^{3}}(-1)^{m}z^{m+\frac{1}{2}} =2\xi\sqrt{z}\,_{4}F_{3}\left(\frac{1}{2},\frac{5}{6},1,\frac{7}{6};\frac{3}{2 },\frac{3}{2},\frac{3}{2};-27z\right). \tag{4.1}\]
Here, \(\xi\) is a normalization factor which has to be chosen appropriately to have a coherent addition of the different sectors of the real topological string. We will usually set
\[\xi=\mathrm{i}. \tag{4.2}\]
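As a quick sanity check, the series and hypergeometric representations in (4.1) (with the overall factor \(\xi\) stripped off) can be compared numerically; a short mpmath script of ours:

```python
from mpmath import mp, mpf, gamma, hyper, sqrt

mp.dps = 25
z = mpf('0.01')
series = 2*gamma(mpf(3)/2)**2*sum(
    gamma(3*m + mpf(3)/2)/gamma(m + mpf(3)/2)**3*(-1)**m*z**(m + mpf(1)/2)
    for m in range(80))
closed = 2*sqrt(z)*hyper([mpf(1)/2, mpf(5)/6, 1, mpf(7)/6], [mpf(3)/2]*3, -27*z)
print(series - closed)   # ~0 at working precision
```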
The complex variable \(z\) parametrizes the moduli space of complex structures of local \(\mathbb{P}^{2}\). The conifold point occurs at \(z=-1/27\), while \(z=0\) is the large radius point. Note also that the sign of \(z\) is the opposite one to what is used in [35; 36]. We can now use the integrality formula following from (2.8)
\[\frac{1}{\xi}\mathcal{T}=2\sum_{k,d\text{ odd}}\frac{1}{k^{2}}\tilde{n}_{-1,d }Q^{dk/2} \tag{4.3}\]
where \(Q=\mathrm{e}^{-t}\), to obtain the counting of disks
\[\sum_{d\,\text{odd}}\tilde{n}_{-1,d}Q^{d/2}=Q^{1/2}-Q^{3/2}+5Q^{5/2}-42Q^{7/2} +429Q^{9/2}+\cdots \tag{4.4}\]
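In practice, the multi-cover sum (4.3) is inverted recursively: writing \(c_{D}\) for the coefficient of \(Q^{D/2}\) in \(\mathcal{T}/\xi\), one has \(c_{D}=2\sum_{k|D}\tilde{n}_{-1,D/k}/k^{2}\), which fixes the \(\tilde{n}_{-1,d}\) degree by degree. A small round-trip illustration in Python with the numbers of (4.4) (helper names are ours):

```python
from fractions import Fraction

disks = {1: 1, 3: -1, 5: 5, 7: -42, 9: 429}            # from (4.4)
divisors = lambda n: [k for k in range(1, n + 1) if n % k == 0]

# coefficients of Q^{D/2} via (4.3): c_D = 2 sum_{k|D} n_{D/k}/k^2
c = {D: 2*sum(Fraction(disks[D//k], k*k) for k in divisors(D)) for D in disks}

# invert the multi-cover sum, recovering the disk counts degree by degree
rec = {}
for d in sorted(c):
    rec[d] = c[d]/2 - sum(rec[d//k]/(k*k) for k in divisors(d)[1:])
print(rec)   # recovers {1: 1, 3: -1, 5: 5, 7: -42, 9: 429}
```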
One interesting observation in [35] is that the domain wall tension (4.1) can be obtained as the difference between the disk amplitudes of [60; 61], evaluated at two different values of the open moduli. We give some details of this computation in the Appendix A.2.
We will be interested in evaluating the real topological string amplitudes in different frames. A particularly important one is the conifold frame, in view of the behavior (2.73) and (2.74), (2.75). The appropriate flat coordinate in this frame is denoted by \(t_{c}(z)\) and defined in (A.8). It vanishes at the conifold point of the local \(\mathbb{P}^{2}\) geometry, and it has an expansion as a power series in the local coordinate around the conifold \(\delta=1+27z\), defined in (A.6). The domain wall tension \(\mathcal{T}\) in this frame can be simply obtained by noting that \(\mathcal{T}\) solves the inhomogeneous Picard-Fuchs equation
\[\frac{1}{\xi}\mathfrak{L}\mathcal{T}=-\frac{\sqrt{-z}}{4}. \tag{4.5}\]
According to the discussion around (49), the domain wall tension in the conifold frame should vanish quadratically at the conifold point,
\[\mathcal{T}_{c}=a\delta^{2}+\mathcal{O}(\delta^{3}). \tag{4.6}\]
One finds,
\[\mathcal{T}_{c}=\frac{\delta^{2}}{24\sqrt{3}}+\frac{121\delta^{3}}{2592\sqrt{3}}+\frac{3197\delta^{4}}{69984\sqrt{3}}+\frac{4372889\delta^{5}}{100776960\sqrt{3}}+\mathcal{O}\left(\delta^{6}\right). \tag{4.7}\]
As expected from the discussion in (57), \(\mathcal{T}_{c}\) does not agree with the large radius tension (4.1). Let us take the closed string periods to be given by \(t_{\mathbb{R}}(z)\), \(t_{c}(z)\) and \(1\), where
\[t_{\mathbb{R}}=-\log(-z)-\widetilde{\varpi}_{1}(z), \tag{4.8}\]
and \(\widetilde{\varpi}_{1}(z)\) is the series in (A.2). Then, one finds
\[\mathcal{T}_{c}-\mathcal{T}=\alpha_{0}+\alpha_{1}t_{\mathbb{R}}(z)+\beta t_{c}(z) \tag{4.9}\]
where the coefficients \(\alpha_{0,1}\) and \(\beta\) are given by
\[\alpha_{0}=V,\qquad\alpha_{1}=-\frac{\pi}{6},\qquad\beta=\frac{\log(2)}{\sqrt{3}}, \tag{4.10}\]
and \(V\) is given by
\[V=2\operatorname{Im}\operatorname{Li}_{2}\left(\mathrm{e}^{\pi\mathrm{i}/3}\right). \tag{4.11}\]
These numbers were computed numerically to very high precision and then we fitted them to conjectural exact expressions by using Number Recognition in WolframAlpha. By using the value of \(t_{\mathbb{R}}(z)\) at the conifold point (see e.g. [62; 63; 64]),
\[t_{\mathbb{R}}\left(-\frac{1}{27}\right)=\frac{9V}{2\pi}, \tag{4.12}\]
we obtain the conjecture
\[\mathcal{T}\left(-\frac{1}{27}\right)=-\frac{V}{4}. \tag{4.13}\]
It would be very interesting to prove the conjectures for the coefficients (4.10) and for the value of \(\mathcal{T}\) at the conifold (4.13) by extending the techniques and ideas of [62; 63; 65].
Once the domain wall tension is known, we can calculate the holomorphic limits of the Griffiths invariant and of the propagator \(\mathcal{R}^{z}\) for local \(\mathbb{P}^{2}\) (remember that \(\tilde{R}\) can be set to zero in the local case). For the Griffiths invariant we can use (49). If in addition we use flat coordinates, the covariant derivatives become conventional derivatives, and one has
\[\mathcal{D}_{tt}=\partial_{t}^{2}\mathcal{T}. \tag{4.14}\]
The holomorphic limit of the propagator then follows from (52), where one chooses \(d_{zz}=0\) [36]:
\[\mathcal{R}^{z}=-\frac{1}{C_{z}}\mathcal{D}_{tt}\left(t^{\prime}(z)\right)^{2}. \tag{4.15}\]
Here, as in [15], we have denoted by \(C_{z}\) the only entry of the Yukawa coupling \(C_{ijk}\) in the one-modulus case (its explicit expression in the case of local \(\mathbb{P}^{2}\) can be found in (A.7)). We recall
from [15] that the holomorphic limit of the closed string propagator in the local, one-modulus case can be written as
\[\mathcal{S}=-\frac{1}{C_{z}}\frac{t^{\prime\prime}(z)}{t^{\prime}(z)}-\mathfrak{s}, \tag{4.16}\]
where \(\mathfrak{s}=-s_{zz}^{z}/C_{z}\) and we have denoted \(\mathcal{S}\equiv\mathcal{S}^{zz}\). It follows from this expression and (51), (48) that the real propagator can also be written as
\[\mathcal{R}^{z}=\frac{1}{C_{z}}\left(\frac{t^{\prime\prime}(z)}{t^{\prime}(z)}\frac{\partial\mathcal{T}}{\partial z}-\frac{\partial^{2}\mathcal{T}}{\partial z^{2}}\right), \tag{4.17}\]
since \(h_{zz}=d_{zz}=0\). This is the real counterpart of the closed string formula (4.16), and it is valid for arbitrary frames. In the conifold frame, for example, one finds
\[\mathcal{R}^{z}_{c}=-\frac{t_{c}}{108}+\frac{53t_{c}^{2}}{1296\sqrt{3}}-\frac{817t_{c}^{3}}{23328}+\frac{346487t_{c}^{4}}{5038848\sqrt{3}}+\mathcal{O}\left(t_{c}^{5}\right). \tag{4.18}\]
This agrees with the result in [36] up to a rescaling of \(t_{c}\) by \(\sqrt{3}\).6
Footnote 6: Notice that the conventions in [15] differ from ours by the rescaling \(t_{c}\to t_{c}/\sqrt{3}\).
With these ingredients, one can calculate the higher \(G_{\chi}\) by using (2.69) for \(\chi=0\) and Walcher's HAE for higher values of \(\chi\). The only non-trivial ingredient is the fixing of the holomorphic ambiguity at each value of \(\chi\). As in [36], we do this by combining the conifold behavior (2.73), (2.75) with an explicit calculation of the real topological string free energy with the real topological vertex of [36, 66]. This calculation is time-consuming and sets a practical limit to the number of terms that we can compute. We have obtained explicit results up to \(\chi=22\). This is not such a long perturbative series, as compared e.g. to what was used in [15] for the closed string free energies, but it is enough to check the asymptotic predictions, as we will see in the next section.
### 4.2 Trans-series and asymptotics
We will now test the formulae derived in section 3 for the case of the real topological string on local \(\mathbb{P}^{2}\), by using the resurgent connection between instanton amplitudes and large order behavior of the perturbative series (see e.g. [11]). Our series will be given by the perturbative real string free energy in the large radius frame, and for simplicity we will focus on the region of moduli space where \(-1/27<z<0\).
The Borel singularity that controls the asymptotics of the perturbative sector not too far from the conifold point is set by the conifold behavior (2.73), (2.74) and (2.75). As we found in section 3.5, the smallest action associated to this behavior is given by
\[\mathcal{A}=\mathcal{A}_{c}^{0}=\pi\mathrm{i}t_{c}, \tag{4.19}\]
where \(\mathcal{A}_{c}^{0}\) was defined in (3.119). The corresponding trans-series is determined by the boundary condition (3.121) with \(\ell=1\), or (3.10), which fixes as well the overall coefficient or Stokes constant. It is given by the general expression (3.84), which we repeat here for the convenience of the reader,
\[\mathcal{G}^{(1)}=\exp\left[\frac{1}{2}\left(\tilde{\mathcal{G}}(t-2cg_{s})-\tilde{\mathcal{G}}(t)\right)\right]. \tag{4.20}\]
In this equation, the coefficient \(c\) is given by
\[c=\frac{3\mathrm{i}}{2}, \tag{4.21}\]
as it follows from (4.19) and the results in Appendix A (\(c\) is half the constant \(\alpha\) in [15]).
We are now ready to perform the explicit asymptotic checks of the perturbative series. It is more convenient to consider a real action and therefore in what follows we will rescale the real free energy as
\[\mathcal{G}_{\chi}\,\to\,(-{\rm i})^{\chi}\mathcal{G}_{\chi},\quad\mathcal{A} \to-{\rm i}\mathcal{A}.\]
From our analytical prediction, and taking into account the fact that instanton actions appear in pairs \(\mathcal{A}\), \(-\mathcal{A}\), we find the large order formula
\[\mathcal{G}_{\chi}\sim\frac{1}{\pi}\mathcal{A}^{-\chi}\Gamma(\chi)\left(\mu_{ 0}+\frac{\mu_{1}\mathcal{A}}{\chi}+\cdots\right),\qquad\chi\gg 1. \tag{4.22}\]
In this equation, \(\mu_{0}\) is given by
\[\mu_{0}=\exp\left(c^{2}\partial_{t}^{2}\tilde{\mathcal{G}}_{-2}(t)\right) \begin{cases}\cosh\left(c\partial_{t}\tilde{\mathcal{G}}_{-1}(t)\right),& \text{if $\chi$ even},\\ \sinh\left(c\partial_{t}\tilde{\mathcal{G}}_{-1}(t)\right),&\text{if $\chi$ odd},\end{cases} \tag{4.23}\]
while \(\mu_{1}\) is
\[\mu_{1}=\exp\left(c^{2}\partial_{t}^{2}\tilde{\mathcal{G}}_{-2}(t)\right) \begin{cases}\zeta_{1}\cosh(c\partial_{t}\tilde{\mathcal{G}}_{-1}(t))+\zeta_{ 2}\sinh(c\partial_{t}\tilde{\mathcal{G}}_{-1}(t)),&\chi\text{ even},\\ \zeta_{1}\sinh(c\partial_{t}\tilde{\mathcal{G}}_{-1}(t))+\zeta_{2}\cosh(c \partial_{t}\tilde{\mathcal{G}}_{-1}(t)),&\chi\text{ odd},\end{cases} \tag{4.24}\]
where
\[\zeta_{1}=c^{2}\frac{\partial^{2}\mathcal{T}}{\partial t^{2}},\qquad\zeta_{2} =-\frac{2}{3}c^{3}C_{ttt}-c\frac{\partial\mathcal{G}_{0}}{\partial t}. \tag{4.25}\]
As usual in resurgence, we can test these predictions by constructing auxiliary series that converge to the quantities \(\mathcal{A}\), \(\mu_{0,1}\) appearing in the asymptotic formula (4.22), as in e.g. [6]. For example, according to (4.22) the action \(\mathcal{A}\) should be the limiting value of the sequence
\[\left(\frac{\mathcal{G}_{\chi-2}}{\mathcal{G}_{\chi}}(\chi-1)(\chi-2)\right)^{1/2} \tag{4.26}\]
as \(\chi\to\infty\). A numerical approximation to this limit can be obtained by first calculating the first \(N\) terms in this sequence (in our case, we have computed them up to \(N=22\)), and then by using acceleration methods, like Richardson transforms (RT), to reach higher precision. In figure 1 we compare the numerical values obtained after three RTs, which we represent by points, and the theoretical value \(\mathcal{A}=\pi t_{c}\), which is represented by a continuous line. We find an agreement of 4 to 5 digits depending on the value of \(z\). Similar comparisons can be made for \(\mu_{0}\) and \(\mu_{1}\), which we show in figures 2 and Fig. 3, respectively. In all cases we use 22 terms of the auxiliary series and three RTs. We find again an agreement of 4 to 5 digits, which is an excellent one given that the numbers of terms available is rather small.
Figure 1: The continuous line is the expected value for the action \(\mathcal{A}=\pi t_{c}\), as a function of \(z\), while the dots are numerical approximations based on extrapolation and acceleration of the sequence (4.26).
Figure 3: The continuous line is the expected value (4.24) for the coefficient \(\mu_{1}\), as a function of \(z\), while the dots are numerical approximations based on extrapolation and acceleration of the appropriate auxiliary sequence. The figure on the left corresponds to even values of \(\chi\), while the figure on the right corresponds to odd values.
Figure 2: The continuous line is the expected value (4.23) for the coefficient \(\mu_{0}\), as a function of \(z\), while the dots are numerical approximations based on extrapolation and acceleration of the appropriate auxiliary sequence. The figure on the left corresponds to even values of \(\chi\), while the figure on the right corresponds to odd values.
## 5 Conclusions
The resurgent structure of the real topological string is richer than in the closed string case, since the boundary conditions coming from the large radius and the conifold behavior lead to different trans-series. However, as in the closed string case, the multi-instanton amplitudes are essentially obtained by an integer shift of the closed string background, while the D-brane and orientifold background is unaffected.
We find it remarkable that the operator formalism of [15; 17] can be extended naturally to the real case. The underlying reason might be that, as shown in [49; 67], one can relate the solutions of Walcher's extended HAE to solutions of the HAE of [26] for the closed topological string. Perhaps the observations of [49; 67] lead to a simpler derivation of the operator formalism obtained in this paper.
The full resurgent structure of the real topological string involves additional Stokes constants, as compared to the closed string case. It is natural to conjecture that the new Stokes constants are related to the counting of BPS states in the presence of D-branes and orientifold planes. Concrete evidence for this connection has been obtained in section 3.5, where we found that the integer invariants counting disks indeed appear as Stokes constants in the resurgent structure. More generally, our results indicate that the theory of Donaldson-Thomas invariants underlying BPS counting has a natural extension to the real case. It would be very interesting to develop these observations further, and to find a wall-crossing interpretation for the new Stokes automorphism formula (3.138), in the spirit of [68]. Another interesting direction is the study of the real topological string on compact CYs from the point of view of resurgence. In this endeavour, a better understanding of the free energies at high Euler characteristic would be very useful, and it is clearly an interesting problem in itself.
## Acknowledgements
We would like to thank Johannes Walcher for his insightful comments on a preliminary draft of this paper. This work has been supported in part by the ERC-SyG project "Recursive and Exact New Quantum Theory" (ReNewQuantum), which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement No. 810573.
## Appendix A Local \(\mathbb{P}^{2}\)
### A.1 Useful formulae
In this section we collect some explicit formulae for the special geometry of local \(\mathbb{P}^{2}\) which can be found in e.g. [28; 29; 51].
The periods solve the Picard-Fuchs equation \(\mathfrak{L}\varphi(z)=0\), where
\[\mathfrak{L}=\theta_{z}^{3}+3z\theta_{z}(3\theta_{z}+1)(3\theta_{z}+2)\] (A.1)
and \(\theta_{z}=z\mathrm{d}/\mathrm{d}z\). They are built from the power series
\[\begin{split}\widetilde{\varpi}_{1}(z)&=\sum_{j \geq 1}3\frac{(3j-1)!}{(j!)^{3}}(-z)^{j},\\ \widetilde{\varpi}_{2}(z)&=\sum_{j\geq 1}\frac{18}{j!} \frac{\Gamma(3j)}{\Gamma(1+j)^{2}}\left\{\psi(3j)-\psi(j+1)\right\}(-z)^{j},\end{split}\] (A.2)
so that
\[t=-\log(z)-\widetilde{\varpi}_{1}(z).\] (A.3)
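One can check order by order that \(t\) is indeed annihilated by the Picard-Fuchs operator (A.1); a short sympy script of ours:

```python
import sympy as sp

z = sp.symbols('z')
N = 8
varpi1 = sum(3*sp.factorial(3*j - 1)/sp.factorial(j)**3*(-z)**j for j in range(1, N))
t = -sp.log(z) - varpi1

theta = lambda f: z*sp.diff(f, z)
g = 3*theta(t) + 2*t                  # (3 theta_z + 2) t
h = 3*theta(g) + g                    # (3 theta_z + 1)(3 theta_z + 2) t
Lt = theta(theta(theta(t))) + 3*z*theta(h)

print([sp.simplify(sp.expand(Lt).coeff(z, j)) for j in range(1, N)])
# all coefficients vanish up to the truncation order
```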
The prepotential \(F_{0}(t)\) is defined by
\[\partial_{t}F_{0}(t)=\frac{\omega_{2}(z)}{6},\] (A.4)
where
\[\omega_{2}(z)=\log^{2}(z)+2\widetilde{\varpi}_{1}(z)\log(z)+\widetilde{\varpi} _{2}(z).\] (A.5)
The conifold discriminant is
\[\delta=1+27z.\] (A.6)
The Yukawa coupling is given by
\[C_{z}=-\frac{1}{3z^{3}\delta(z)}.\] (A.7)
The flat coordinate at the conifold is
\[t_{c}(z)=\frac{\mathfrak{c}}{6}\left(\omega_{c}(z)-\pi^{2}\right),\qquad \mathfrak{c}=\frac{3}{2\pi}\] (A.8)
where
\[\omega_{c}(z)=\log^{2}(-z)+2\log(-z)\widetilde{\varpi}_{1}(z)+\widetilde{ \varpi}_{2}(z).\] (A.9)
It has the property that
\[t_{c}\left(-\frac{1}{27}\right)=0.\] (A.10)
Let us note that
\[\omega_{c}(z)=\omega_{2}(z)\pm 2\pi\mathrm{i}\omega_{1}(z)-\pi^{2},\] (A.11)
where the sign reflects the choice of branch for \(\log(z)\). The expressions (A.8) and (A.9) define \(t_{c}(z)\) inside the region of convergence of the series \(\widetilde{\varpi}_{1,2}(z)\). Outside this region, it is convenient to use an expression for \(t_{c}\) in terms of a Meijer function:
\[t_{c}(z)=\frac{3}{2\pi}\left(\mathcal{G}(-z)-\frac{\pi^{2}}{6}\right),\] (A.12)
where
\[\mathcal{G}(z)=\frac{G_{3,3}^{3,2}\left(27z\left|\begin{array}{c}\frac{1}{3},\frac{2}{3},1\\ 0,0,0\end{array}\right.\right)}{2\sqrt{3}\pi}-\frac{5\pi^{2}}{18}.\] (A.13)
Another useful expression is [28, 52]
\[t_{c}(z)=\frac{2\pi}{3}\left(\frac{3\psi}{\Gamma\left(\frac{2}{3}\right)^{3}} \right.\left.{}_{3}F_{2}\left(\frac{1}{3},\frac{1}{3},\frac{1}{3};\frac{2}{3},\frac{4}{3}\right|\psi^{3}\right)-\frac{9\,\psi^{2}}{2\Gamma\left(\frac{1}{3 }\right)^{3}}\right.\left.{}_{3}F_{2}\left(\frac{2}{3},\frac{2}{3},\frac{2}{3} ;\frac{4}{3},\frac{5}{3}\right|\psi^{3}\right)-1\right),\] (A.14)
where
\[\psi^{3}=-\frac{1}{27z}.\] (A.15)
This flat coordinate has the following power series expansion near the conifold point,
\[t_{c}(z)=\frac{1}{\sqrt{3}}\delta+\frac{11}{18\sqrt{3}}\delta^{2}+\frac{109}{ 243\sqrt{3}}\delta^{3}+\cdots\] (A.16)
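This expansion can be compared against the definitions (A.8), (A.9) by summing the series (A.2) numerically; a sketch with mpmath (ours), assuming everything is evaluated on the branch used above:

```python
from mpmath import mp, mpf, factorial, gamma, digamma, log, pi, sqrt

mp.dps = 30

def varpi1(z, N=400):
    return sum(3*factorial(3*j - 1)/factorial(j)**3*(-z)**j for j in range(1, N))

def varpi2(z, N=400):
    return sum(18*gamma(3*j)/(factorial(j)*gamma(j + 1)**2)
               *(digamma(3*j) - digamma(j + 1))*(-z)**j for j in range(1, N))

delta = mpf('0.05')
z = (delta - 1)/27                                  # so that 1 + 27 z = delta
omega_c = log(-z)**2 + 2*log(-z)*varpi1(z) + varpi2(z)
tc = (mpf(3)/(2*pi))/6*(omega_c - pi**2)            # (A.8)

tc_ser = delta/sqrt(3) + 11*delta**2/(18*sqrt(3)) + 109*delta**3/(243*sqrt(3))
print(tc, tc_ser)   # should agree up to the omitted O(delta^4) terms
```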
### A.2 An integral representation of the domain wall tension
It was pointed out in [35] that the domain wall tension for local \(\mathbb{P}^{2}\) (4.1) can be obtained as a definite integral of the canonical Liouville one-form on the mirror curve. This is the local version of the general approach of [69, 33, 34] to domain wall tensions. Since this is not fully developed in [35], we provide here some details for completeness.
The mirror curve of local \(\mathbb{P}^{2}\) is given by
\[\mathrm{e}^{x}+\mathrm{e}^{y}+\mathrm{e}^{-y-x}+\kappa=0,\] (A.17)
where \(\kappa^{3}=z^{-1}\) and \(z\) is the modulus of local \(\mathbb{P}^{2}\). It is useful to consider the exponentiated variables
\[X=\mathrm{e}^{x},\qquad Y=\mathrm{e}^{y}.\] (A.18)
We note that the equation for the curve can be solved as
\[Y=\frac{-(X^{2}+\kappa X)+\sqrt{(X^{2}+\kappa X)^{2}-4X}}{2X}.\] (A.19)
Let us consider the points \(p=(X,Y)\) in exponentiated variables given by
\[p_{\pm}=\pm\left(\kappa^{-1/2},-\kappa^{-1/2}\right),\] (A.20)
which belong to the curve (A.17). Then, we have that
\[\frac{1}{\xi}\mathcal{T}(z)=\int_{X_{-}}^{X_{+}}\log(Y)\frac{\mathrm{d}X}{X}.\] (A.21)
To verify this, we expand in series around \(z=0\), to make contact with the expression (4.1). The expansion of \(y\) reads
\[\log(Y)=-\frac{1}{3}\log(z)+\sum_{n\geq 1}z^{n/2}p_{n}(u),\] (A.22)
where we have introduced the variable \(u\) through \(X=\kappa^{-1/2}u\), and \(p_{n}(u)\) are Laurent polynomials in \(u\). One has, for example,
\[p_{1}(u)=u-u^{-1},\qquad p_{2}(u)=-\frac{u^{2}}{2}-\frac{3}{2u^{2}}+2.\] (A.23)
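The coefficients (A.23), and the branch of (A.19) with \(Y\to-\kappa\) as \(z\to 0\), can be checked symbolically. The following short sketch (assuming sympy; the substitution \(z=t^{6}\) is ours, chosen so that \(\kappa=t^{-2}\), \(X=tu\) and \(z^{n/2}=t^{3n}\) avoid fractional powers) recovers \(p_{1}\) and \(p_{2}\):

```python
# Symbolic check of (A.23): expand log(-Y/kappa) in t and read off p_1, p_2.
import sympy as sp

t, u = sp.symbols('t u', positive=True)
kappa = t ** -2                       # kappa = z**(-1/3) with z = t**6
X = t * u                             # X = kappa**(-1/2) * u
S = X ** 2 + kappa * X
# write sqrt(S**2 - 4X) = (u/t) * sqrt(1 + e), with e a polynomial in t:
e = sp.expand((S ** 2 - 4 * X) * t ** 2 / u ** 2 - 1)
root = (u / t) * sp.sqrt(1 + e)
Y = -(S + root) / (2 * X)             # branch of (A.19) with Y -> -kappa as t -> 0
R = sp.expand(-Y * t ** 2)            # R = -Y/kappa -> 1 as t -> 0
ser = sp.expand(sp.series(sp.log(R), t, 0, 7).removeO())
print(sp.simplify(ser.coeff(t, 3)))   # expect p_1(u) = u - 1/u
print(sp.simplify(ser.coeff(t, 6)))   # expect p_2(u) = -u**2/2 - 3/(2*u**2) + 2
```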
One also notes that \(p_{n}(u)\) are even (odd) functions of \(u\) for \(n\) even (odd). To perform the integral (A.21) we have to be careful, since the integrand is singular at \(u=0\). We just consider the indefinite integral
\[\int^{u}p_{n}(u^{\prime})\frac{\mathrm{d}u^{\prime}}{u^{\prime}}=P_{n}(u),\] (A.24)
and we declare
\[\frac{1}{\xi}\mathcal{T}(z)=\int_{-1}^{1}\log(Y)\frac{\mathrm{d}u}{u}=2\sum_{ k\geq 0}z^{k+1/2}P_{2k+1}(1),\] (A.25)
so that only the integrals of odd terms in the series contribute. Then, one verifies that the expansion (4.1) is recovered, as claimed in [35]. | We study the resurgence structure of Walcher's real topological string on general Calabi–Yau manifolds. We find all-order trans-series solutions to the corresponding holomorphic anomaly equations, by extending the operator formalism of the closed topological string, and we obtain explicit formulae for multi-instanton amplitudes. We find that the integer invariants counting disks appear as Stokes constants in the resurgence structure, and we provide experimental evidence for our results in the case of the real topological string on local $\mathbb P^2$.
2309.15729 | MindGPT: Interpreting What You See with Non-invasive Brain Recordings | Decoding of seen visual contents with non-invasive brain recordings has
important scientific and practical values. Efforts have been made to recover
the seen images from brain signals. However, most existing approaches cannot
faithfully reflect the visual contents due to insufficient image quality or
semantic mismatches. Compared with reconstructing pixel-level visual images,
speaking is a more efficient and effective way to explain visual information.
Here we introduce a non-invasive neural decoder, termed as MindGPT, which
interprets perceived visual stimuli into natural languages from fMRI signals.
Specifically, our model builds upon a visually guided neural encoder with a
cross-attention mechanism, which permits us to guide latent neural
representations towards a desired language semantic direction in an end-to-end
manner by the collaborative use of the large language model GPT. By doing so,
we found that the neural representations of the MindGPT are explainable, which
can be used to evaluate the contributions of visual properties to language
semantics. Our experiments show that the generated word sequences truthfully
represented the visual information (with essential details) conveyed in the
seen stimuli. The results also suggested that with respect to language decoding
tasks, the higher visual cortex (HVC) is more semantically informative than the
lower visual cortex (LVC), and using only the HVC can recover most of the
semantic information. The code of the MindGPT model will be publicly available
at https://github.com/JxuanC/MindGPT. | Jiaxuan Chen, Yu Qi, Yueming Wang, Gang Pan | 2023-09-27T15:35:20 | http://arxiv.org/abs/2309.15729v1 | # MindGPT: Interpreting What You See with Non-invasive Brain Recordings
###### Abstract
Decoding of seen visual contents with non-invasive brain recordings has important scientific and practical values. Efforts have been made to recover the seen images from brain signals. However, most existing approaches cannot faithfully reflect the visual contents due to insufficient image quality or semantic mismatches. Compared with reconstructing pixel-level visual images, speaking is a more efficient and effective way to explain visual information. Here we introduce a non-invasive neural decoder, termed as MindGPT, which interprets perceived visual stimuli into natural languages from fMRI signals. Specifically, our model builds upon a visually guided neural encoder with a cross-attention mechanism, which permits us to guide latent neural representations towards a desired language semantic direction in an end-to-end manner by the collaborative use of the large language model GPT. By doing so, we found that the neural representations of the MindGPT are explainable, which can be used to evaluate the contributions of visual properties to language semantics. Our experiments show that the generated word sequences truthfully represented the visual information (with essential details) conveyed in the seen stimuli. The results also suggested that with respect to language decoding tasks, the higher visual cortex (HVC) is more semantically informative than the lower visual cortex (LVC), and using only the HVC can recover most of the semantic information. The code of the MindGPT model will be publicly available at [https://github.com/JxuanC/MindGPT](https://github.com/JxuanC/MindGPT).
## 1 Introduction
Humans can describe the visual objects of the world using a finite number of words, and draw analogies between the verbal and the visual when communicating with others. This flexible cognitive capacity suggests that semantic information, conveyed in language, is deeply intertwined and entangled with various types of sensory input, especially vision. Neuroscience studies (Popham et al., 2021; Tang et al., 2023; Fairhall and Caramazza, 2013; Binder and Desai, 2011) hold that amodal semantic representations are shared between visual and linguistic (V&L) perceptions, e.g., the word "cat" evokes similar conceptual content to the image of a cat in our mind. However, how the brain infers semantic relations between conceptual categories, and achieves seamless switching between V&L modalities, has rarely been quantified or implemented with computational models.
Recent neural decoders (Chen et al., 2023a,b; Takagi and Nishimoto, 2023) demonstrated that visual content can be reconstructed from visual cortex (VC) representations recorded using functional Magnetic Resonance Imaging (fMRI). Nevertheless, the reconstructed images still suffered from being blurry and semantically meaningless or mismatched. On the other hand, the neuroscience community has presented compelling evidence (Popham et al., 2021) to support the notion that semantic concepts in both V&L forms can be accessed in the brain's VC. The findings strongly encourage us to introduce a new "mind reading" technology, aiming to verbally interpret what you see. Such
an endeavor has great scientific significance in revealing cross-modal semantic integration mechanisms and may provide potential application values for restorative or augmentative brain-computer interfaces (BCIs).
Here, we introduce a non-invasive neural language decoder, termed MindGPT, which translates the blood-oxygen-level-dependent (BOLD) patterns elicited by static visual stimuli into well-formed word sequences, as shown in Fig. 1 Left. As for non-invasive language decoding, to the best of our knowledge, Tang et al. (2023) made the pioneering attempt to develop a non-invasive neural decoder for perceived speech reconstruction, which can even recover the meaning of silent videos. Due to the poor temporal resolution of fMRI, however, the method requires collecting a large amount of fMRI signals (recorded while subjects listened to spoken stories) to predict the fine-grained semantic relevance between the candidate words and the evoked brain responses. By contrast, this study focuses on whether, and to what extent, static visual sensory experiences such as a single image provide semantic labels for our amodal language maps.
Our MindGPT is designed to meet two key criteria: i) it must be capable of capturing visual semantic representations (VSRs) from brain activities, and ii) it should incorporate a mechanism to translate the acquired VSRs into well-formed word sequences. To do so, we first opt to employ the large language model GPT-2 (Radford et al., 2019), pre-trained on the WebText dataset of millions of webpages, as our text generator, thus allowing us to constrain sentence structures to resemble well-formed natural language. Then, we customize a simple yet efficient CLIP-guided (Radford et al., 2021) fMRI encoder with cross-attention layers to bridge the semantic gap between brain-visual-linguistic (B&V&L) representations in an end-to-end fashion. This formulation of neural decoding results in a highly reduced number of learnable parameters, leaving the model both light and effective.
In this study, we have demonstrated that MindGPT can serve as a bridge for robust V&L semantic transformations between the brain's VC and machines. The language generated by our MindGPT reflects the visual semantics of the observed stimuli (see Fig. 1 Right) with high accuracy, which suggests that our method successfully learned generalizable neural semantic representations and gained a broad understanding of B&V&L modalities. Furthermore, we found that the well-trained MindGPT appears to acquire the ability to capture visual cues (i.e., salient regions) of stimulus images, even from highly limited fMRI-image training data, which allows us to explore the contributions of visual properties to language semantics. With the help of visualization tools, we also observed that the latent neural representations learned by MindGPT exhibit desirable locality-sensitive properties for both low-level visual features and high-level semantic concepts, which conforms to neuroscience findings (Bellmund et al., 2018; Yamins & DiCarlo, 2016). Overall, our MindGPT,
Figure 1: Left: The overall pipeline of non-invasive language decoder MindGPT. Right: Reconstruction results of our MindGPT, image captioning model SMALLCAP (Ramos et al., 2023), and visual decoding methods VQ-fMRI (Chen et al., 2023) & MinD-Vis (Chen et al., 2023).
unlike Tang et al. (2023), indicates that the semantic relations between V&L representations can be inferred from the brain's VC without relying on the temporal resolution of fMRI.
## 2 Related Work
Neural decoding offers a unique way of advancing our understanding of human perception. With advances in deep learning (Goodfellow et al., 2014; Radford et al., 2021; Kingma & Welling, 2013; Ho et al., 2020; Rombach et al., 2022) and neuroscience (Haxby et al., 2001; Kamitani & Tong, 2005; Yamins & DiCarlo, 2016; Popham et al., 2021), the visual neural decoding community is progressing quickly. In recent decades, a large body of inspiring work has emerged, which can be broadly divided into three main paradigms based on the decoding objective (Du et al., 2023): stimulus classification (Haxby et al., 2001; Van Gerven et al., 2010; Damarla & Just, 2013; Yargholi & Hossein-Zadeh, 2016; Du et al., 2023), recognition (Haynes & Rees, 2006; Kay et al., 2008; Horikawa & Kamitani, 2017; Naselaris et al., 2009), and reconstruction (Belyi et al., 2019; Lin et al., 2022; Chen et al., 2023a;b;c). Among them, visual reconstruction, which aims to recover the overall organization of seen images, is the most challenging yet exciting. In the remainder of this section, we briefly review the background of reconstruction tasks to put our study into context.
The key to the success of image reconstruction techniques is to extract low-level image details of visual stimuli from brain activity using fMRI. Interestingly, for the target of visual reconstruction tasks, there has been a trend in recent years away from pixel-wise reconstruction and toward seeking semantically correct images (namely, allowing variance in visual structure under the same semantics), with the rise of diffusion models (Ho et al., 2020; Rombach et al., 2022). The decoded outcomes of early techniques (Shen et al., 2019b;a; Belyi et al., 2019; Ren et al., 2021; Du et al., 2022) can preserve the outlines and postures of the original stimuli, but they often fail to recover the intricate texture and rich color of natural scenes due to the limited number of fMRI-image annotations. On the other hand, high-level semantic decoding methods incorporate visual semantic information into GAN models (Mozafari et al., 2020; Ozcelik et al., 2022) or diffusion models (Lu et al., 2023; Takagi & Nishimoto, 2023; Chen et al., 2023b;c), producing realistic images owing to the inherited strong generative capabilities. However, these models lack control over low-level details such as contour and texture. More importantly, the reconstructed image usually has a large semantic gap with the actual stimulus, making it difficult to interpret what you see. For humans, remembering the details of a seen scene is tricky, since our visual system is not like a camera that stores every pixel of an image (Chen et al., 2023a; Desimone et al., 1995), but we are skilled at giving a general description of seen objects, meaning that speaking is a simple yet effective way of conveying visual semantics. Therefore, unlike the existing visual decoding paradigm, our MindGPT is designed to explore the semantic relations between vision and language using non-invasive fMRI recordings. To the best of our knowledge, generating linguistic semantic information directly from a single brain image has not been adequately explored.
## 3 The MindGPT Approach
MindGPT is a lightweight non-invasive neural decoder, which combines off-the-shelf large language model GPT-2 (Radford et al., 2019) and pre-trained CLIP (Radford et al., 2021), to describe the meaning of perceived images by natural language, as shown in Fig. 2.
### Dataset and Preprocessing
In this study, a widely used benchmark dataset designed for fMRI-based decoding, termed DIR (Shen et al., 2019b), was leveraged to evaluate our MindGPT. In natural image presentation experiments, including training and test sessions, three healthy subjects were required to view natural images selected from ImageNet (Deng et al., 2009), and fMRI signals were simultaneously collected using a 3.0-Tesla Siemens MAGNETOM Verio scanner. Each scanning session includes anatomical (inplane T2) and functional (EPI) images covering the entire brain (TR, 2 s; TE, 43 ms; voxel size, \(2\times 2\times 2\) mm; number of slices, 76). The visual stimuli (1200 training images, and 50 test images) involved in the experiment are identical to those used in another fMRI-image dataset (Horikawa & Kamitani, 2017), but the DIR dataset contains a larger number of image-fMRI pairs
(5\(\times\)1200 training samples, and 24\(\times\)50 test samples). Note that 5 and 24 represent the number of repetitions. To avoid scanner instability effects, for each run, the first 8 s of scans were discarded. All fMRI data were subjected to 3-dimensional motion correction using SPM, and then co-registered to the high-resolution anatomical images, followed by region-of-interest (ROI) selection (Shen et al., 2019). In this study, we used the voxels from the brain's visual areas including V1-V4, LOC, FFA, and PPA, where V1 to V3 are defined as the lower visual cortex (LVC), and the higher visual cortex (HVC) is formed by LOC, FFA, and PPA (Horikawa and Kamitani, 2017).
### CLIP-Guided Neural Embedding
The goal of our MindGPT is to generate a descriptive sentence for the brain activity patterns evoked by visual objects. To this end, the key is to guide our model towards the desired visual direction (i.e., the semantic information of the stimulus images) at each generation step. First, to handle fMRI signals, we split the fMRI into a sequence of voxel vectors \(z\in\mathbb{R}^{7\times H}\) covering V1-V4, LOC, FFA, and PPA, where \(H\) denotes the number of voxels; each ROI vector is flattened and padded to the same size. Next, the voxel vectors \(z\in\mathbb{R}^{7\times H}\) are fed into a trainable linear projection, followed by a Transformer encoder, to predict latent fMRI representations \(\mathcal{Z}\). During the training phase, we leverage the hidden class embedding \(X_{clip}\in\mathbb{R}^{768}\) of the CLIP visual encoder (Radford et al., 2021) as a neural proxy, seeking a joint semantic space across images and fMRI signals via fMRI-image representation alignment. Moreover, since the size of the carefully curated dataset is fairly limited, we present a simple data augmentation strategy, building virtual training examples by linear interpolation between fMRIs evoked by images of the same category. This practice shares similarities with the mixup technique (Zhang et al., 2018), but differs in that the corresponding labels are randomly sampled from the subset of ImageNet (Deng et al., 2009) annotated with the same category, rather than generated via equal-weighted interpolation. By doing so, the model is encouraged to extract the shared high-level semantic features of the augmented images.
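As an illustration of the above, here is a hypothetical minimal re-implementation in PyTorch (not the authors' released code; all names, sizes, and hyperparameters are ours):

```python
# A sketch of the ROI tokenization and the interpolation-based augmentation.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    def __init__(self, n_voxels: int, d_model: int = 768,
                 n_layers: int = 8, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(n_voxels, d_model)             # per-ROI linear projection
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # learnable class token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, z):            # z: (batch, 7, H) padded ROI voxel vectors
        x = self.proj(z)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        return self.encoder(x)       # (batch, 8, d_model); [:, 0] is the class embedding

def interpolate_fmri(z1, z2):
    """mixup-style augmentation: a virtual example interpolated between two
    fMRI samples evoked by images of the same category."""
    lam = torch.rand(1).item()
    return lam * z1 + (1.0 - lam) * z2
```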
### Vision-Language Joint Modelling
In order to restrict the decoded word sequences to well-formed language, our approach uses an autoregressive language model GPT-2, which specializes in modelling text semantic interactions between the next token \(s_{i}\) and past tokens \((s_{1},s_{2},\cdots,s_{i-1})\) at each time-step. Given any initial prompt, such as "The seen image shows", GPT-2 will infer the likelihood of words \(P(s_{i}|[s_{j}]_{j<i})\) that could come next. Nevertheless, even with the constraints imposed by the prior probability distribution \(P(S)=\prod_{i=1}^{n}P(s_{i}|[s_{j}]_{j<i})\) learned from WebText dataset (Radford et al., 2019), it may
Figure 2: Schematic diagram of the MindGPT framework. We first split an fMRI signal into fixed-size low-to-high-level ROIs (namely, V1-V4, LOC, FFA, and PPA), and feed the resulting sequence of voxel vectors to a standard ViT for fMRI representation learning guided by the CLIP visual encoder. Then, we use trainable cross-attention modules to bridge the frozen GPT-2 and the fMRI encoder. In this way, our model can generate a word sequence conditioned on the input fMRI.
be computationally problematic to formalize the visually-guided neural language decoding problem as \(P(s_{i}|[s_{j}]_{j<i},\mathcal{Z})\) directly. This is because the fMRI encoder and the GPT-2 model operate in different embedding spaces.
For coupling the V&L representations, we use multi-head cross-attention layers to bridge the fMRI encoder and the GPT decoder, so that each layer of the GPT decoder attends to the fMRI encoder outputs (Vaswani et al., 2017). Under these circumstances, our task boils down to an end-to-end multi-task optimization problem. Given an fMRI-image pair \((z,y)\), our loss function \(\mathcal{L}_{mind}\) can then be written as
\[\mathcal{L}_{mind}=\mathcal{L}_{gpt}\Big{(}\mathbf{F}_{t}(y),\mathbf{E}_{ \Phi}(z);\Theta\Big{)}+\mathcal{L}_{clip}\Big{(}\mathbf{E}_{c}(y),\mathbf{E}_{ \Phi}(z)\Big{)}, \tag{1}\]
where \(\mathbf{F}_{t}(y)=[s_{i}]_{1:M}\) is a caption of image \(y\) generated by SMALLCAP (Ramos et al., 2023), \(\mathbf{E}_{c}(\cdot)\) denotes the frozen CLIP encoder, which returns the hidden visual embedding \(\mathcal{K}_{clip}\in\mathbb{R}^{768}\), \(\mathbf{E}_{\Phi}(\cdot)\) indicates the fMRI encoder with trainable parameters \(\Phi\), and \(\Theta\) denotes the weights of the cross-attention modules. The first term uses the standard cross-entropy loss, minimizing the sum of the negative log-likelihoods conditioned on the fMRI embedding and the previous tokens, i.e.,
\[\mathcal{L}_{gpt}=-\sum_{i=1}^{M}\log P(s_{i}|s_{<i},\mathbf{E}_{\Phi}(z);\Theta). \tag{2}\]
Note that we freeze the GPT decoder and CLIP encoder, and only train the randomly-initialized fMRI encoder as well as cross-attention layers. The second term of Eq. 1 is a mean-squared loss for alignment purposes:
\[\mathcal{L}_{clip}=\lambda\Big{|}\Big{|}[\mathbf{E}_{c}(y)]_{0}-[\mathbf{E}_{ \Phi}(z)]_{0}\Big{|}\Big{|}_{2}^{2}, \tag{3}\]
where \([\cdot]_{0}\) returns the class embedding of the Transformer encoder, and \(\lambda=10\) is a trade-off hyperparameter weighting \(\mathcal{L}_{gpt}\) and \(\mathcal{L}_{clip}\). Overall, our MindGPT provides a mechanism to learn a direct mapping between brain activity and text by preserving language attributes under the guidance of visual cues, which brings desirable extensibility: our framework can easily be extended to other types of neural decoding, such as fMRI-to-sound, by an appropriate choice of decoder. Moreover, by avoiding a separate visual feature decoding step, learning in an end-to-end fashion effectively helps reduce information loss (Shen et al., 2019).
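A hedged sketch of the objective (1)–(3), with tensor names and shapes that are ours rather than the authors':

```python
# Training objective: caption cross-entropy (Eq. 2) + CLIP alignment (Eq. 3).
import torch
import torch.nn.functional as F

def mindgpt_loss(logits, tokens, fmri_cls, clip_cls, lam: float = 10.0):
    """logits: (B, M, vocab) next-token predictions from the frozen GPT-2 decoder
    (whose cross-attention layers attend to the fMRI encoder outputs);
    tokens: (B, M) SMALLCAP caption token ids; fmri_cls, clip_cls: (B, 768)
    class embeddings of the fMRI encoder and the frozen CLIP visual encoder."""
    # Eq. (2): negative log-likelihood of the caption, shifted by one position.
    l_gpt = F.cross_entropy(logits[:, :-1].transpose(1, 2), tokens[:, 1:])
    # Eq. (3): mean-squared alignment of the two class embeddings.
    l_clip = lam * F.mse_loss(fmri_cls, clip_cls)
    return l_gpt + l_clip            # Eq. (1)
```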
## 4 Experimental Results
### Implementation Details and Evaluation Metrics
In this work, the architecture of our MindGPT contains two frozen pre-trained sub-models, CLIP-ViT-B/32 and GPT-\(2_{\rm Base}\), which are provided on HuggingFace (Wolf et al., 2020). In the MindGPT model, only the parameters of the fMRI encoder and cross-attention layers are trainable. For the fMRI encoder, we use a standard ViT model with an embedding size of 768, a layer number of 8, and 8-head self-attention. A cross-attention layer with 12 heads is added to each of the 12 layers of the GPT-2 decoder. To further reduce the number of learnable parameters, following Ramos et al. (2023), we reduce the default dimension (64) of the projection matrices in the cross-attention layers to 8. During the training phase, we optimize MindGPT using the Adam solver (Kingma & Ba, 2014) with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), a learning rate of 1e-4, and a low weight decay of 1e-4 until the model converges, which we found to be useful. Our MindGPT is trained on DIR and a subset of ImageNet (Deng et al., 2009) comprising 150 categories totaling 200.7k images. Note that there is no overlap between the training and test categories. MindGPT is implemented in PyTorch and runs on 4 NVIDIA GeForce RTX 3090 GPUs.
To provide an across-the-board evaluation of MindGPT's language decoding performance, we consider the following standard metrics: BLEU-1 (B@1), BLEU-4 (B@4) (Papineni et al., 2002), ROUGE (Lin & Hovy, 2003), METEOR (Denkowski & Lavie, 2014), CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016), which are widely used in various NLP tasks, e.g., translation and image-to-text (image captioning) (Tewel et al., 2022; Ramos et al., 2023). These language similarity metrics are calculated using the COCO evaluation package.
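As a lightweight, self-contained illustration of this kind of scoring (the paper itself uses the COCO evaluation package; here we substitute NLTK's sentence-level BLEU, and the sentences are made up):

```python
# Illustrative only: sentence-level BLEU-1/BLEU-4 with NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "a yellow school bus parked in a parking lot".split()
candidate = "a yellow school bus in a parking lot".split()
smooth = SmoothingFunction().method1
b1 = sentence_bleu([reference], candidate, weights=(1, 0, 0, 0),
                   smoothing_function=smooth)
b4 = sentence_bleu([reference], candidate, weights=(0.25,) * 4,
                   smoothing_function=smooth)
print(f"B@1 = {b1:.3f}, B@4 = {b4:.3f}")
```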
### Neural Decoding across Vision and Language
**Qualitative Results.** In order to provide an intuitive understanding of the linguistic decoding capacity guided by visual stimuli, Fig. 3 reports few-shot and zero-shot generation examples from subject
3 of the DIR dataset. Note that since the default training/test split of DIR has no overlapping image categories, we randomly sampled 50 fMRI-image training pairs and added them to the test set for the few-shot evaluation. For each group of results, the right shows the linguistic decoding result of our MindGPT and provides the reference caption generated by Ramos et al. (2023). From the results, we see that MindGPT can produce semantically satisfying word sequences in both few-shot and zero-shot decoding, extracting not only the meaning of the raw visual stimuli but often even exact category names such as "airplane", "windmill", "grapes", "school bus", and "bathroom". This demonstrates that fine-grained language semantic information can be recovered from the BOLD signal evoked by visual objects. Interestingly enough, we observe that our MindGPT appears to exhibit the capability to capture color information or infer the color tones of images, e.g., "black and white photo" (col 1, row 3), "brown and white animal" (col 1, row 4), "yellow school bus" (col 2, row 1). Moreover, although our method may not consistently infer the correct classes of objects, it can still decode approximate semantic information, e.g., "beer"-"wine" (col 2, row 4), "fly"-"insect" (col 3, row 4), and "sunflower"-"flower" (col 2, row 2), which supports the assumption that V&L semantic information is well represented in the visual cortex (Popham et al., 2021).
**Quantitative Results.** Here, we report quantitative results of our MindGPT for different model configurations. For convenience, we use brief notation to indicate the model variants: for example, MindGPT-S/8 denotes the small variant with scaling factor \(N=8\) for the cross-attention layers. Note that the number of parameters is inversely proportional to the scaling factor \(N\). The main results, summarized in Tab. 1, are based on the test data of the DIR. From Tab. 1, a few patterns can be observed. Firstly, the larger model MindGPT-L outperforms MindGPT-B and MindGPT-S on a range of language similarity metrics. Specifically, on BLEU-4, which reflects the matching precision of four consecutive words (i.e., 4-grams), MindGPT-L/16 is \(21\%\) to \(27\%\) higher than MindGPT-B and MindGPT-S. On ROUGE, which is mainly designed to measure recall, MindGPT-L/16 obtains a high value of 41.7. For CIDEr, which calculates the semantic similarity between sentences and uses TF-IDF to weight word frequency, performance peaked at 116.5 with MindGPT-L/16 and decreased as the parameters of the cross-attention layers increased. Under SPICE, which computes the degree of semantic matching between generated descriptions and the reference texts, the larger model, MindGPT-L/16, achieves a high value of
Figure 3: The language decoding results of our MindGPT. Top: Reconstruction results on known visual categories. Bottom: Reconstruction results on unknown visual categories that are outside the training set (zero-shot). For each group, the left shows the raw visual stimuli, and the right reports the neural language decoding results of our MindGPT and the image captioning results of SMALLCAP.
15.2, which is \(29\%\) to \(52\%\) higher than the other model variants. Secondly, we also note that decoding performance depends not only on the size of the fMRI encoder, but also on the cross-attention layers. The reconstruction quality generally increased as the cross-attention parameters decreased, i.e., smaller cross-attention modules are good for performance, which is somewhat surprising. Our MindGPT may not have reached saturation within the range tried; we leave this to future work.
### The Impact of the Hierarchical Coding Property on Language Reconstruction
In neuroscience, a fairly well-accepted theory is that visual information propagation from the lower visual cortex (LVC) to the higher visual cortex (HVC) has a hierarchical nature (Yamins & DiCarlo, 2016; Horikawa & Kamitani, 2017). This finding has been widely studied in visual reconstruction tasks (Fang et al., 2020; Takagi & Nishimoto, 2023). However, it is unclear how the hierarchical structure of information affects our decoding at the granularity of words and phrases, and which regions are consistently engaged in language reconstruction. In other words, are the LVC and the HVC complementary or redundant for language representations?
**Performance of Different Brain Areas.** To preliminarily validate the contributions of different brain regions to the language decoding task, we repeatedly run quantitative experiments using fMRI voxels from different visual areas (VC, LVC and HVC). Here, the voxels of the LVC comprise V1, V2, and V3; voxels from FFA, PPA, and LOC form the HVC; and VC denotes the whole visual cortex. It should be noted that the default model configuration MindGPT-B/8 and the same training strategy are used for all three experiments. Tab. 2 shows the results. We find two phenomena worth exploring: (1) decoding from the HVC yielded the best performance on all language evaluation metrics; (2) the decoding performance using the complete VC is better than that of the LVC. This evidence seems to indicate that there is no complementary relationship between LVC and HVC. Does this mean that the LVC is redundant in decoding tasks? For answers, we analyze the latent neural representations in the next subsection.
**Analysis of the Latent Neural Representations.** Our MindGPT model allows us to decode linguistic semantic information, in which the latent fMRI representations play a crucial role. Therefore, examining the representation distributions of different brain regions is helpful for explaining the above phenomena. The dimensionality of the latent representations is too high to inspect directly, so we leverage the t-SNE
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**fMRI Encoder**} & \multirow{2}{*}{**Cross-Attention**} & \multirow{2}{*}{**Params**} & \multicolumn{6}{c}{**Language Similarity Metrics**} \\ \cline{2-3} \cline{6-11} & Layers & Heads & & & B@1 & B@4 & ROUGE & METEOR & CIDEr & SPICE \\ \hline MindGPT-S/4 & & & N = 4 & 38M & 34.1 & 10.7 & 32.6 & 10.5 & 39.2 & 7.2 \\ MindGPT-S/8 & 4 & 4 & N = 8 & 35M & 37.9 & 15.9 & 36.4 & 12.9 & 65.7 & 10.0 \\ MindGPT-S/16 & & & N = 16 & 33M & 37.5 & 17.0 & 36.9 & 12.9 & 89.6 & 10.0 \\ \hline MindGPT-B/4 & & & N = 4 & 67M & 38.8 & 15.4 & 37.0 & 13.1 & 64.0 & 10.4 \\ MindGPT-B/8 & 8 & 8 & N = 8 & 63M & 37.9 & 15.7 & 35.9 & 12.8 & 70.8 & 10.3 \\ MindGPT-B/16 & & & N = 16 & 61M & 39.7 & 16.2 & 39.2 & 13.8 & 77.3 & 11.8 \\ \hline MindGPT-L/4 & & & N = 4 & 123M & 35.7 & 11.5 & 34.7 & 11.3 & 55.1 & 9.5 \\ MindGPT-L/8 & 16 & 16 & N = 8 & 120M & 40.8 & 17.5 & 40.4 & 14.4 & 75.2 & 12.3 \\ MindGPT-L/16 & & & N = 16 & 118M & **42.1** & **20.5** & **41.7** & **15.5** & **116.5** & **15.2** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results of neural language reconstruction. We report the decoding performance of our MindGPT on the DIR default test set. Note that all training parameters are set to the default for different model configurations. The **best** and worst are highlighted in **bold** and red, respectively.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**ROI Variants**} & \multirow{2}{*}{**Voxel Number**} & \multicolumn{6}{c}{**Language Similarity Metrics**} \\ \cline{3-9} & & & B@1 & B@4 & ROUGE & METEOR & CIDEr & SPICE \\ \hline \multirow{4}{*}{MindGPT-B/8} & LVC (V1 + V2 + V3) & 6550 & 39.9 & 14.1 & 38.6 & 12.7 & 54.1 & 9.4 \\ & HVC (LOC + FFA + PPA) & 5633 & **40.8** & **17.8** & **39.4** & **14.6** & **91.4** & **13.0** \\ \cline{1-1} & VC (V4 + LVC + HVC) & 14034 & 37.9 & 15.7 & 35.9 & 12.8 & 70.8 & 10.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Language semantics predictions of different brain areas for perceived visual images. All results are computed by language similarity metrics between the MindGPT predictions and the corresponding image captions. The **best** and worst are highlighted in **bold** and red, respectively.
technique (Van der Maaten and Hinton, 2008), which can preserve the local structure of data in a low-dimensional embedding space, to visualize the distributions of the fMRI representations. We separately map the VC, LVC, and HVC neural representations to 2-dimensional t-SNE embedding spaces, and place the corresponding visual stimulus at its position, as shown in Fig. 4 Top. From the visualization results for VC and HVC, we can observe that our MindGPT learned a locality-sensitive embedding structure, which contains several clusters representing different high-level semantic properties or concepts, e.g., biotic, vehicle, and music. The embedding structure of the LVC, by contrast, shows no obvious clustering rule. However, we can still find that similar low-level appearance features are located at nearby positions, such as round and cube. The latent embedding space of the VC inherits the low- and high-level semantic properties of the LVC and HVC, but why is there a performance degradation when using the entire VC? The reason may be that each brain region has a non-trivial probability of decoding failure, which means that the more brain areas we use, the harder it is to guarantee that all of them are always functional within the existing learning paradigm. We can see in Fig. 4 Bottom that low-level visual features are usually insufficient for effective semantic reconstruction, which tends to generate semantically inaccurate targets that are similar in appearance. More failure examples are provided in Fig. 5. To more intuitively evaluate the semantic reconstruction deviation, on the right of each example, we use off-the-shelf Stable Diffusion (version 1.4) (Rombach et al., 2022) with the PLMS sampler to reconstruct visual stimuli (without fine-tuning) conditioned on our linguistic decoding results.
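A minimal sketch of this visualization step (assuming scikit-learn and matplotlib; the embedding array below is a random placeholder for the real fMRI class embeddings):

```python
# Project latent fMRI class embeddings of one brain region to 2-D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

embeddings = np.random.randn(200, 768)   # placeholder: (n_samples, 768) embeddings
xy = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(embeddings)
plt.scatter(xy[:, 0], xy[:, 1], s=8)
plt.title("t-SNE of latent fMRI representations (e.g., HVC)")
plt.show()
```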
### Discovering the Visual Cues that Guide Semantic Reconstruction
At present little is known about how the MindGPT encodes or infers semantic relations between V&L representations. We question whether the muted success of MindGPT in linguistic decoding
Figure 4: Top: The t-SNE visualization results of latent neural representation for different brain areas. Bottom: Examples of our textual reconstruction conditioned on different brain regions.
Figure 5: Typical imperfect cases of our MindGPT in textual reconstruction. To intuitively understand semantic bias, we also provide visual reconstruction results obtained using the decoded text of our MindGPT and off-the-shelf Stable Diffusion (Rombach et al., 2022).
can be attributed to the appropriate modeling of visual cues. This is also in line with a characteristic of the human visual system: only the part of the rich visual information contained in an image that interests us is perceived by our brain (Chen et al., 2023a).
Typically, the [CLS] token's self-attention weights in a ViT can be used to reveal what a visual model is focusing on. However, the self-attention maps of our MindGPT encoder represent dependencies between different brain regions. In order to discover the visual cues that guide semantic reconstruction, we instead use a CLIP visual encoder with a \(16\times 16\) input patch size (i.e., CLIP-ViT-B/16) to produce a sequence of image patch embeddings, and then calculate the cosine similarity between each image patch embedding and the class embedding of the fMRI. As shown qualitatively in Fig. 6, the cosine similarity matrices contain information about the salient regions of an image. Note that we do not provide any supervision signal for salient positions, in the form of labeled data or constraints, during the training phase. We observe in Fig. 6 that the semantic reconstruction process is guided by attention-driven visual cues, i.e., the masks of the similarity maps are highly related semantically to the meaning of words or phrases in the decoded language, such as "a piano", "airplane flying in the sky", and "a tall building". Even the semantic deviations of the reconstruction can be explained by the visual cues. Specifically, for the \(5^{th}\) example in Fig. 6, we can clearly see that the fMRI representation focused on the water around a whale, thus decoding the word "beach". In the \(6^{th}\) example, only the gesture of holding is captured, resulting in the decoded phrase "a person holding". As for the \(7^{th}\) example, the mask nearly covers the key part of the bicycle, except for the blue frame, which leads to the decoding bias in color information, i.e., "a black and white photo of a bicycle". Since humans often pay attention to task-related objects based on the high-level task at hand (Shi et al., 2023), it is not yet clear whether such visual attention information is present in neural signals, which motivates future decoding efforts.
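A hedged sketch of this visual-cue computation (shapes and names are ours; the patch embeddings would come from CLIP-ViT-B/16, with the [CLS] token removed):

```python
# Patch-level cosine similarity between CLIP embeddings and the fMRI class embedding.
import torch
import torch.nn.functional as F

def visual_cue_mask(patch_emb, fmri_cls, thresh=0.5):
    """patch_emb: (196, 768) patch embeddings of a 224x224 image under
    CLIP-ViT-B/16; fmri_cls: (768,) fMRI class embedding.
    Returns the normalized similarity map and its thresholded binary mask."""
    sim = F.cosine_similarity(patch_emb, fmri_cls.unsqueeze(0), dim=-1)  # (196,)
    sim = sim.reshape(14, 14)                                            # patch grid
    sim = (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)             # to [0, 1]
    return sim, (sim > thresh).float()
```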
## 5 Conclusion
In this study, we have explored coupling a non-invasive decoder with large (vision) language models to compute the modality shift from visual to linguistic representations. Our initial experimental results reveal that this simple, yet scalable, framework works surprisingly well, which suggests that there might be a rich connection between amodal semantic concepts and the visual objects of the physical world. While this hypothesis has been proposed in the neuroscience community, our study is the first to demonstrate with a computational model that vision-to-language reasoning conditioned on a single brain image is promising. In general, our MindGPT is not only beneficial for deciphering how the brain bridges different types of sensory information and then infers amodal semantic concepts, but also provides potential therapeutic value for people who are unable to communicate as a result of semantic dementia.
This work also leaves some open questions, and many challenges remain, although the potential of MindGPT is encouraging. One is whether the amount of semantic information provided to the VC can be quantified by the selective visual attention of humans, which awaits further exploration
Figure 6: Schematic illustration of the semantic reconstruction guided by visual cues. On top of each group, we show the attention map based on the cosine similarity between the fMRI and CLIP embeddings, and its masked counterpart obtained by thresholding.
and verification. Another question is how to explore the semantic relations between the VC and the anterior temporal lobe (ATL). Extensive evidence shows that ATL degeneration results in semantic dementia, and the answer to that question could help develop neuro-semantic prostheses for bypassing the ATL, thus recovering the loss of semantic signals due to ATL lesions.
#### Acknowledgments
This work was supported in part by the Science and Technology Innovation (STI) 2030 Major Projects (2021ZD0200400), the Key Research and Development Program of Zhejiang Province in China (2020C03004), the Natural Science Foundation of China (NSFC) (U1909202, 61925603, 62276228), and the Lingang Laboratory (LG-QS-202202-04).
| Decoding visual content from non-invasive brain recordings has scientific and practical value. Efforts have been made to recover seen images from brain signals, but most existing methods cannot faithfully reflect the visual content, owing to insufficient image quality or semantic mismatches. Compared with reconstructing pixel-level visual images, speaking is a more efficient and effective way to explain visual information. Here we introduce MindGPT, a non-invasive neural decoder that translates fMRI signals into natural language. Specifically, the model is built on a visually guided neural encoder with a cross-attention mechanism, which, through the collaborative use of the large language model GPT, can guide latent neural representations towards the desired linguistic semantic direction in an end-to-end manner. By doing so, we found that the neural representations of MindGPT are explainable
2310.00521 | Minimal special degenerations and duality | This paper includes the classification, in a simple Lie algebra, of the
singularities of Slodowy slices between special nilpotent orbits that are
adjacent in the partial order on nilpotent orbits. The irreducible components
of most singularities are (up to normalization) either a simple surface
singularity or the closure of a minimal special nilpotent orbit in a smaller
rank Lie algebra. Besides those cases, there are some exceptional cases that
arise as certain quotients of the closure of a minimal orbit in types $A_2$ and
$D_n$. We also consider the action on the slice of the fundamental group of the
smaller orbit. With this action, we observe that under Lusztig-Spaltenstein
duality, in most cases, a simple surface singularity is interchanged with the
closure of a minimal special orbit of Langlands dual type (or a cover of it
with action). This empirical observation generalizes an observation of Kraft
and Procesi in type $A_n$, where all nilpotent orbits are special. We also
resolve a conjecture of Lusztig that concerns the intersection cohomology of
slices between special nilpotent orbits. | Daniel Juteau, Paul Levy, Eric Sommers | 2023-09-30T23:09:06 | http://arxiv.org/abs/2310.00521v1 | # Minimal special degenerations and duality
###### Abstract.
This paper includes the classification, in a simple Lie algebra \(\mathfrak{g}\), of the singularities of Slodowy slices between special nilpotent orbits that are adjacent in the partial order on nilpotent orbits. The irreducible components of most singularities are (up to normalization) either a simple surface singularity or the closure of a minimal special nilpotent orbit in a smaller rank Lie algebra. Besides those cases, there are some exceptional cases that arise as quotients of the closure of a minimal orbit: in type \(D_{n}\) by \(V_{4}\), in type \(A_{2}\) by \(\mathfrak{S}_{2}\), or in type \(D_{4}\) by \(\mathfrak{S}_{4}\). We also consider the action on the slice of the fundamental group of the smaller orbit. With this action, we observe that under Lusztig–Spaltenstein duality, in most cases, a simple surface singularity is interchanged with the closure of a minimal special orbit of Langlands dual type (or a cover of it with action). Lusztig's canonical quotient helps explain when this duality fails. This empirical observation generalizes an observation of Kraft and Procesi in type \(A_{n}\), where all nilpotent orbits are special. We also resolve a conjecture of Lusztig that concerns the intersection cohomology of slices between special nilpotent orbits.
## 1. Introduction
### Minimal degenerations
Let \(G\) be a simple algebraic group over \(\mathbb{C}\) and \(\mathfrak{g}\) its Lie algebra. Let \(\mathcal{N}_{o}:=\mathcal{N}(\mathfrak{g})/G\) be the set of nilpotent orbits in \(\mathfrak{g}\). The partial order on \(\mathcal{N}_{o}\) is defined so that \(\mathcal{O}^{\prime}{<}\mathcal{O}\) whenever \(\mathcal{O}^{\prime}\subsetneq\overline{\mathcal{O}}\) for \(\mathcal{O}^{\prime},\mathcal{O}\in\mathcal{N}_{o}\), where \(\overline{\mathcal{O}}\) is the closure of \(\mathcal{O}\). A pair \(\mathcal{O}^{\prime}{<}\mathcal{O}\) is called a _degeneration_. If \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are adjacent in the partial order (that is, there is no orbit strictly between them), then the pair is called a _minimal degeneration_. There are two minimal degenerations at either extreme of the poset \(\mathcal{N}_{o}\): the regular and subregular nilpotent orbits give a minimal degeneration, as does the minimal nilpotent orbit and the zero orbit.
Given a nilpotent element \(e\in\mathcal{N}(\mathfrak{g})\), let \(\mathfrak{s}\subset\mathfrak{g}\) be an \(\mathfrak{sl}_{2}\)-triple \(\{e,h,f\}\) through \(e\). Then \(\mathcal{S}_{e}:=e+\mathfrak{g}^{f}\), where \(\mathfrak{g}^{f}\) is the centralizer of \(f\) in \(\mathfrak{g}\), is called a Slodowy slice. Associated to any degeneration \(\mathcal{O}^{\prime}{<}\mathcal{O}\) is a smooth equivalence class of singularities \(\operatorname{Sing}(\mathcal{O},\mathcal{O}^{\prime})\)[12], which can be represented by the intersection \(\mathcal{S}_{\mathcal{O},e}:=\mathcal{S}_{e}\cap\overline{\mathcal{O}}\), where \(e\in\mathcal{O}^{\prime}\). We call \(\mathcal{S}_{\mathcal{O},e}\) a _Slodowy slice singularity_.
The singularities \(\mathcal{S}_{\mathcal{O},e}\) of minimal degenerations are known in the classical types by [12] and [12] and in the exceptional types by [11] and [11], up to normalization for a few cases in \(E_{7}\) and \(E_{8}\). These results can be summarized as:
* the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) are pairwise isomorphic;
* if \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), then the normalization of an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\mathbb{C}^{2}/\Gamma\) where \(\Gamma\subset\operatorname{SL}_{2}(\mathbb{C})\) is a finite subgroup, possibly trivial. Such a variety is called a _simple surface singularity_ when \(\Gamma\) is non-trivial.
* if \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), then an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to the closure of a minimal nilpotent orbit in some simple Lie algebra, or else is one of four exceptional cases, denoted \(m^{\prime}\), \(\tau\), \(\chi\), or \(a_{2}/\mathfrak{S}_{2}\) in [11] and each appearing exactly one time.
### Action on slices
A simple surface singularity \(X=\mathbb{C}^{2}/\Gamma\) corresponds to the Dynkin diagram of a simply-laced Lie algebra (e.g., \(A_{n}\), \(D_{n}\), \(E_{n}\)) either by using the irreducible representations of \(\Gamma\) as done by McKay, or by looking at the exceptional fiber of the minimal resolution of \(X\), which is a union of projective lines, whose arrangement yields the Dynkin diagram. Slodowy defined an action on \(X\) by using a normalizing subgroup \(\Gamma^{\prime}\) of \(\Gamma\) in \(\mathrm{SL}_{2}(\mathbb{C})\)[16, III.6]. Looking at the image of the action of \(\Gamma^{\prime}\) on the Dynkin diagram, he introduced the notation \(B_{n}\) (resp. \(C_{n}\), \(F_{4}\), \(G_{2}\)) to denote a simple surface singularity of type \(A_{2n-1}\) (resp. \(D_{n+1}\), \(E_{6}\), \(D_{4}\)) with an "outer" action of \(\mathfrak{S}_{2}\) (resp. \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{3}\)). Here, "outer" refers to the fact that on the corresponding Lie algebra these come from outer automorphisms. It is also possible to do the same thing for the simple surface singularity \(A_{2n}\), where we used the notation \(A_{2n}^{+}\) in [14], when the outer action is included. Note, however, that this arises from a cyclic group of order four acting on \(X\).
The centralizer \(G^{e}\) of \(e\) in \(G\) has a reductive part \(C(\mathfrak{s})\), given by the centralizer of \(\mathfrak{s}\) in \(G\). Then \(C(\mathfrak{s})\) acts on \(\mathcal{S}_{\mathcal{O},e}\) and we are interested in the image of \(C(\mathfrak{s})\) in \(\mathrm{Aut}(\mathcal{S}_{\mathcal{O},e})\). Slodowy [16, IV.8] showed for the regular/subregular minimal degeneration, that \(\mathcal{S}_{\mathcal{O},e}\) with the action induced from \(C(\mathfrak{s})\) is exactly the simple surface singularity denoted by the type of \(\mathfrak{g}\). This explains his choice of notation.
Let \(a_{n},b_{n},\dots,g_{2}\) denote the closure of the minimal nilpotent orbit according to the type of \(\mathfrak{g}\). In [14], we introduced the notation \(a_{n}^{+}\), \(d_{n}^{+}\), \(e_{6}^{+}\), \(d_{4}^{++}\) to denote these varieties with the outer action of \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{2}\), \(\mathfrak{S}_{3}\), respectively, coming from the outer automorphisms of \(\mathfrak{g}\). In _op. cit._, using these two notions of action, we studied the action of \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\) for all minimal degenerations, where we found that \(C(\mathfrak{s})\) acts transitively on the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) and in some sense acts as non-trivially as possible on \(\mathcal{S}_{\mathcal{O},e}\) given the size of the component group \(A(e):=C(\mathfrak{s})/C^{\circ}(\mathfrak{s})\). In this paper one of our results is to repeat this calculation for the classical groups (see §5).
### Minimal Special Degenerations
Lusztig defined the notion of special representations of the Weyl group \(W\) of \(G\)[17], which led him to define the special nilpotent orbits, denoted \(\mathcal{N}_{o}^{sp}\), via the Springer correspondence. The regular, subregular, and zero nilpotent orbits are always special, but the minimal nilpotent orbit is only special when \(\mathfrak{g}\) is simply-laced (types \(A_{n}\), \(D_{n}\), or \(E_{n}\)). In the other types, there is always a unique minimal (nonzero) special nilpotent orbit. We denote the closure of the minimal special nilpotent orbits (which are not minimal nilpotent) by \(b_{n}^{sp},c_{n}^{sp},f_{4}^{sp}\), and \(g_{2}^{sp}\), according to the type of \(\mathfrak{g}\).
In this paper, we classify the Slodowy slice singularities \(\mathcal{S}_{\mathcal{O},e}\) when \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are adjacent special orbits, i.e., there is no special orbit strictly between them. We call these _minimal special degenerations_. Since \(\dim(\mathcal{S}_{\mathcal{O},e})=2\) implies the degeneration is already a minimal degeneration, we are left only to classify the cases where \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). Our main result on the classification of **minimal special degenerations** is summarized as:
* the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) are pairwise isomorphic;
* if \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), then the normalization of an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\mathbb{C}^{2}/\Gamma\) where \(\Gamma\subset\mathrm{SL}_{2}(\mathbb{C})\) is a finite, non-trivial subgroup.
* if \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), then an irreducible component of \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to the closure of a minimal special nilpotent orbit in some simple Lie algebra, or else is isomorphic to one of the following quotients of the closure of a minimal (special) nilpotent orbit: \(a_{2}/\mathfrak{S}_{2}\), \(d_{n+1}/V_{4}\) or \(d_{4}/\mathfrak{S}_{4}\).
The singularities \(a_{2}/\mathfrak{S}_{2}\) and \(d_{4}/\mathfrak{S}_{4}\) arose in [14] and along with \(d_{n+1}/V_{4}\), they also appear in the physics literature [14].
In the case where \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), the singularities of \(\mathcal{S}_{\mathcal{O},e}\) are mostly controlled by the simple factors of \(\mathfrak{c}(\mathfrak{s})\) (see Corollary 4.12 and the remarks after it), just as occurs for most of the minimal degenerations of dimension four or more.
For dimension two, there is a single slice where one of its irreducible components is known not to be normal, namely the \(\mu\) singularity from [10], which occurs once in \(E_{8}\) (it is irreducible). We expect the other components of slices of dimension two all to be normal in the case of minimal special degenerations, unlike the case of minimal degenerations. The components of slices of dimension at least four are all known to be normal.
The irreducible minimal special degenerations in the classical types \(B\), \(C\), \(D\) are listed in Tables 1 and 2, in analogy with the classification of Kraft and Procesi for minimal degenerations [11, Table 1]. The minimal special degenerations of codimension two are already minimal degenerations and so are contained in [11], except for the action of \(A(e)\). The notation \([2B_{n}]^{+}\) means that the image of \(C(\mathfrak{s})\) acts by a Klein 4-group \(V_{4}\) on \(\mathcal{S}_{\mathcal{O},e}\), where one generator switches the two components of the exceptional fiber and a second generator preserves both components of type \(A_{2n-1}\), but acts by an outer automorphism on each one. The table assumes that \(G\) is the orthogonal group O(2n) for type \(D_{n}\), hence making use of the outer \(\mathfrak{S}_{2}\)-action of \(D_{n}\).
In type \(D_{n}\) without this outer action, we would get these same singularities but without some or all of the action on \(\mathcal{S}_{\mathcal{O},e}\). Specifically, \(D_{k}\) and \(d_{k}\) arise, without the \(\mathfrak{S}_{2}\)-action. The singularity \([2B_{k}]^{+}\) will become \(B_{k}\) for the minimal degenerations where \(\mathcal{O}\) is a very even orbit. We discuss this further in §8.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Name of singularity & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ Lie algebra & \(\mathfrak{sp}_{2}\) & \(\mathfrak{sp}_{2n}\) & \(\mathfrak{so}_{2n+1}\) & \(\mathfrak{sp}_{4n+2}\) & \(\mathfrak{so}_{4n}\) \\ & \(n\geq 2\) & \(n\geq 1\) & \(n\geq 1\) & \(n\geq 1\) \\ \(l\) rows removed & \(l\equiv\epsilon^{\prime}\) & any & \(l\not\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) \\ \(s\) columns removed & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) \\ \hline \(\lambda\) & \([2]\) & \([2n]\) & \([2n{+}1]\) & \([2n{+}1,2n{+}1]\) & \([2n,2n]\) \\ \(\mu\) & \([1,1]\) & \([2n{-}2,2]\) & \([2n{-}1,1,1]\) & \([2n,2n,2]\) & \([2n{-}1,2n{-}1,1,1]\) \\ \hline Singularity & \(C_{1}\) & \(C_{n}\) & \(B_{n}\) & \(B_{n}\) & \([2B_{n}]^{+}\) \\ \hline \end{tabular}
\end{table}
Table 1. Minimal special degenerations of codimension two
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Name of singularity & \(g_{sp}\) & \(h\) & \(f_{sp}^{1}\) & \(f_{sp}^{2}\) & \(h_{sp}\) \\ Lie algebra & \(\mathfrak{sp}_{2n}\) & \(\mathfrak{so}_{2n}\) & \(\mathfrak{so}_{2n+1}\) & \(\mathfrak{sp}_{4n+2}\) & \(\mathfrak{sp}_{4n}\) \\ & \(n\geq 2\) & \(n\geq 3\) & \(n\geq 2\) & \(n\geq 2\) & \(n\geq 2\) \\ \(l\) rows removed & \(l\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\not\equiv\epsilon^{\prime}\) & \(l\equiv\epsilon^{\prime}\) & \(l\not\equiv\epsilon^{\prime}\) \\ \(s\) columns removed & \(s\not\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\equiv\epsilon\) & \(s\not\equiv\epsilon\) & \(s\not\equiv\epsilon\) \\ \hline \(\lambda\) & \([2^{2},1^{2n-4}]\) & \([2^{2},1^{2n-4}]\) & \([3,1^{2n-2}]\) & \([3^{2},2^{2n-2}]\) & \([4,2^{2n-2}]\) \\ \(\mu\) & \([1^{2n}]\) & \([1^{2n}]\) & \([1^{2n+1}]\) & \([2^{2n+1}]\) & \([2^{2n}]\) \\ codimension & \(4n{-}2\) & \(4n{-}6\) & \(4n{-}2\) & \(4n{-}2\) & \(4n{-}2\) \\ Singularity & \(c_{n}^{sp}\) & \(d_{n}^{+}\) & \(b_{n}^{sp}\) & \(b_{n}^{sp}\) & \(d_{n+1}/V_{4}\) \\ \hline \end{tabular}
\end{table}
Table 2. Minimal Special Degenerations of codimension 4 or more
_Remark 1.1_.: The \(h\) singularity for \(n=2\) is \(d_{2}^{+}\), which coincides with the \(e\) singularity for \(n=1\). We use \(d_{2}^{+}\) in the graphs for the classical groups since the action of \(A(e)\) for the \(e\)-singularity with \(n=1\) is actually only by \(\mathfrak{S}_{2}\).
The proof that these tables give the classification of minimal special degenerations is given in §3. In §4, we establish that the singularities in classical types are as given in Table 2, and in §4.4 we complete the story in the exceptional groups. In §5.1 and §5.7, we establish the \(A(e)\)-action both for minimal special degenerations and minimal degenerations. The graphs at the end of the paper give the results for the exceptional groups and several examples in the classical groups (§11).
### Duality
Using the Springer correspondence, Lusztig defined two maps, which are order-reversing involutions: \(d:\mathcal{N}_{o}^{sp}\to\mathcal{N}_{o}^{sp}\) and \(d_{LS}:\mathcal{N}_{o}^{sp}\to{}^{L}\mathcal{N}_{o}^{sp}\) (see [10]).
For \(G=GL_{n}\) all nilpotent orbits are special and Kraft and Procesi [11] computed the singularity type of \(\mathcal{S}_{\mathcal{O},e}\) for minimal degenerations (hence, minimal special degenerations). The singularity is either of type \(A_{k}\) or \(a_{k}\) for some \(k\). Kraft and Procesi observed that if the singularity of \((\mathcal{O},\mathcal{O}^{\prime})\) is of type \(A_{k}\) then the singularity of \((d(\mathcal{O}^{\prime}),d(\mathcal{O}))\) is of type \(a_{k}\). In the case of \(GL_{n}\), each orbit is given by a partition and the dualities \(d=d_{LS}\) are given by taking the transpose partition.
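To make the type-\(A\) statement concrete, here is a small illustration (our own code, not from the paper): \(d=d_{LS}\) is the transpose of the partition labelling the orbit, and it reverses the dominance (closure) order.

```python
# Transpose duality for nilpotent orbits of GL_n, labelled by partitions.
def transpose(p):
    """Transpose of a partition given as a weakly decreasing list of parts."""
    return [sum(1 for x in p if x >= i) for i in range(1, p[0] + 1)] if p else []

def dominates(p, q):
    """p >= q in the dominance order (both partitions of the same N)."""
    m = max(len(p), len(q))
    P = p + [0] * (m - len(p)); Q = q + [0] * (m - len(q))
    return all(sum(P[:k]) >= sum(Q[:k]) for k in range(1, m + 1))

lam, mu = [3, 1], [2, 2]                 # two orbits in gl_4 with O_mu < O_lam
print(transpose(lam), transpose(mu))     # [2, 1, 1] and [2, 2]
# d reverses the order: lam >= mu, while transpose(mu) >= transpose(lam).
print(dominates(lam, mu), dominates(transpose(mu), transpose(lam)))  # True True
```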
Our duality is a generalization of the Kraft-Procesi observation, but with some wrinkles. It says that typically an irreducible component of a simple surface singularity (with \(A(e)\)-action) is interchanged with the minimal special orbit of Langlands dual type (after taking the quotient of the \(A(e)\)-action). More explicitly, \(d_{LS}\) exchanges the following singularities.
\[A_{n} \leftrightarrow a_{n}\] \[B_{n} \leftrightarrow a_{2n-1}^{+}\text{ or }c_{n}^{sp}\] \[C_{n} \leftrightarrow d_{n+1}^{+}\text{ or }b_{n}^{sp}\] \[D_{n} \leftrightarrow d_{n}\] \[G_{2} \leftrightarrow d_{4}^{++}\text{ or }g_{2}^{sp}\] \[F_{4} \leftrightarrow e_{6}^{+}\text{ or }f_{4}^{sp}\] \[E_{n} \leftrightarrow e_{n}\]
The only interchange of dimension two with dimension two is when both slices have irreducible components of type \(A_{1}\). The fact that for each dual pair of orbits, one member of the pair yields a slice of dimension two was observed by Lusztig [14]. For the cases with two options on the right, notice that the first option arises as a cover of the second (see e.g. [12]). Indeed we expect this cover to occur intrinsically since in all these cases \(\mathcal{O}\) itself admits such a cover. We could also alternatively say that the second option is a quotient of the first by the \(A(e)\)-action.
There are three families of situations that do not obey this relationship.
1. Sometimes \[C_{n+1}\leftrightarrow c_{n}^{sp}\text{ or }a_{2n-1}^{+}\]
2. When \(d_{n+1}/V_{4}\) or \(d_{4}/\mathfrak{S}_{4}\) occurs in a dual pair of orbits, we always have \[C_{n} \leftrightarrow d_{n+1}/V_{4}\] \[G_{2} \leftrightarrow d_{4}/\mathfrak{S}_{4}\]
3. For the three exceptional special orbits in \(E_{7}\) and \(E_{8}\), \[A_{2}^{+} \leftrightarrow a_{2}^{+}\text{ or }a_{2}/\mathfrak{S}_{2}\] \[A_{4}^{+} \leftrightarrow a_{4}^{+}\]
In the first case, Lusztig's canonical quotient of \(A(e)\) is playing a role. Namely, the kernel of the map from \(A(e)\) to the canonical quotient \(\bar{A}(e)\) is acting by outer action on \(\mathcal{S}_{\mathcal{O},e}\). We denote this property by adding a \(*\) to the singularity, \(C_{n+1}^{*}\). This phenomenon is described in §6. In the second case, there is an impact of the canonical quotient; see again §6. The third case arises because the only representative of an order two element in \(A(e)\) is an order 4 element in \(C(\mathfrak{s})\) (see [10]). We gather the duality results into one theorem in §10.
### Full automorphism group of \(\mathfrak{g}\)
We also consider, building on work of Slodowy, the case where \(G=\operatorname{Aut}(\mathfrak{g})\). For \(A_{n},E_{6}\), and \(D_{4}\), we leave this for §8. We find that in type \(A_{n}\), all singularities acquire the expected outer action and thus, for example, \(A_{k}^{+}\leftrightarrow a_{k}^{+}\) for the full automorphism group of \(\mathfrak{g}\).
To get more uniform statements for type \(D_{n}\), we use \(G=\operatorname{O}(2n)\) at the beginning and then explain what changes when \(G=\operatorname{SO}(2n)\) in §5.6 and §5.8.
### Three quartets of singularities in classical types \(B\), \(C\), \(D\)
The duality of the last section has a richer structure in types \(B\), \(C\), \(D\). The internal duality \(d\) for \(B_{n}\) and \(C_{n}\), together with \(d_{LS}\) and the composition \(f:=d\circ d_{LS}\), yields four related special orbits (see Figure 1). Applying these three maps to a minimal special degeneration, we find there are only three possible outputs for the four singularities that arise (see Figure 2).
There is also a story that involves \(D_{n}\). As mentioned above, we work with \(G=\operatorname{O}(2n)\). Then there is a subset of the nilpotent orbits in type \(C_{n}\) that is a slight modification of the special orbits, obtained by changing the parity condition in the definition. We call these the alternative special nilpotent orbits in type \(C\) and denote them by \(\mathcal{N}_{o}^{C,asp}\) in §2.2. Its minimal element is the minimal orbit in type \(C_{n}\), of dimension \(2n\). There is a bijection between \(\mathcal{N}_{o}^{D,sp}\) and \(\mathcal{N}_{o}^{C,asp}\), also denoted \(f\), that preserves the partial order and codimensions (more precisely, it sends an orbit of dimension \(N\) to one of dimension \(N+2n\)). This bijection, together with \(d_{LS}\) and \(d=f\circ d_{LS}\), also gives rise to the same three quartets of singularities as in Figure 2. An example is given in Figure 3. This is also the first case where all three quartets arise.
Figure 1. Dualities
### Lusztig's Weyl group conjecture
In [12, §0.4], Lusztig attached a Weyl group \(W^{\prime}\) to each minimal special degeneration. He then made a conjecture relating the exponents of \(W^{\prime}\) to what amounts to the \(C(\mathfrak{s})\)-invariant part of the intersection homology \(\operatorname{IH}^{*}(\mathcal{S}_{\mathcal{O},e})\) when \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). In §9 we prove his conjecture, which is open in types \(B\), \(C\), \(D\), although we have to slightly modify the \(W^{\prime}\) that he attaches to those minimal degenerations \((\lambda,\mu)\) in type \(D_{n}\) where there is a single odd part in \(\mu\).
### Acknowledgments
This work, major parts of which were sketched in 2012, is a continuation of the papers [13], [14] that were jointly authored with Baohua Fu. We thank him for his vital contribution to the project from its inception.
## 2. Background material in the classical group case
### Notation on partitions
In the classical groups it will be helpful to have a description of the elements of \(\mathcal{N}_{o}\) and the map \(d\) in terms of partitions. We introduce that notation following the references [13], [12], [2].
Let \(\mathcal{P}(N)\) denote the set of partitions of \(N\). For \(\lambda\in\mathcal{P}(N)\), we write \(\lambda=[\lambda_{1},\dots,\lambda_{k}]\), where \(\lambda_{1}\geq\dots\geq\lambda_{k}>0\) and \(|\lambda|:=\sum\lambda_{j}\) is equal to \(N\). Define
\[m_{\lambda}(s)=\#\{j\ |\ \lambda_{j}=s\},\]
the multiplicity of the part \(s\) in \(\lambda\). We use \(m(s)\) if the partition is clear. Sometimes we write \([\dots,s^{m(s)},\dots]\) instead of

\[[\dots,\overbrace{s,s,\dots,s}^{m(s)},\dots]\]

for a part \(s\) in \(\lambda\). The set of nilpotent orbits \(\mathcal{N}_{o}\) in \(\mathfrak{g}=\mathfrak{sl}_{n}\) under the adjoint action of \(G=SL_{n}\) is in bijection with \(\mathcal{P}(n)\).

Figure 2. The three quartets of possible singularities in classical groups
For \(\epsilon\in\{0,1\}\), let \(V=V_{\epsilon}\) be a vector space, of dimension \(N\), with a nondegenerate bilinear form satisfying \(\langle v,v^{\prime}\rangle=(-1)^{\epsilon}\langle v^{\prime},v\rangle\) for \(v,v^{\prime}\in V\). Let \(\mathfrak{g}(V)\) be the Lie algebra associated to the form on \(V\), so that \(\mathfrak{g}(V)=\mathfrak{so}_{N}\) when \(\epsilon=0\) and \(\mathfrak{g}(V)=\mathfrak{sp}_{N}\) when \(\epsilon=1\) and \(N\) is even.
Let
\[\mathcal{P}_{\epsilon}(N):=\{\lambda\in\mathcal{P}(N)\ |\ m(s)\equiv 0\ \text{whenever}\ s\equiv\epsilon\},\]
where all congruences are modulo \(2\). Then the set of nilpotent orbits \(\mathcal{N}_{o}\) in \(\mathfrak{g}(V)\) under the group \(G=G(V)\) preserving the form is given by \(\mathcal{P}_{1}(2n)\) when \(\mathfrak{g}\) is of type \(C_{n}\); by \(\mathcal{P}_{0}(2n+1)\) when \(\mathfrak{g}\) is of type \(B_{n}\); and by \(\mathcal{P}_{0}(2n)\) when \(\mathfrak{g}\) is of type \(D_{n}\), except that those partitions with all even parts correspond to two orbits in \(\mathcal{N}_{o}\) (called the very even orbits, where there are two orbits interchanged by the orthogonal group). We will also refer to \(\mathcal{P}_{1}(2n)\) as \(\mathcal{P}_{C}(2n)\); to \(\mathcal{P}_{0}(2n+1)\) as \(\mathcal{P}_{B}(2n+1)\); and to \(\mathcal{P}_{0}(2n)\) as \(\mathcal{P}_{D}(2n)\). We sometimes call a partition \(\lambda\in\mathcal{P}_{\epsilon}(N)\) an \(\epsilon\)-partition. For \(\lambda\in\mathcal{P}(N)\) or \(\lambda\in\mathcal{P}_{\epsilon}(N)\), we denote by \(\mathcal{O}_{\lambda}\) the corresponding nilpotent orbit in \(\mathfrak{g}\).
Define the height of a part \(s\) in \(\lambda\) to be the number
\[h_{\lambda}(s):=\#\{\lambda_{j}\,|\,\lambda_{j}\geq s\}.\]
We write \(h(s)\) if the partition is clear. In terms of Young diagrams, the position \((s,h(s))\) is a corner of the diagram, writing each part \(\lambda_{i}\) as the boxes with upper right corners \((1,i),\dots,(\lambda_{i},i)\). In other words, we have \(\lambda_{h(s)}=s\) and \(\lambda_{h(s)+1}<\lambda_{h(s)}\).
The dual or transpose partition of \(\lambda\), denoted \(\lambda^{*}\), is defined by
\[(\lambda^{*})_{i}=\#\{j\ |\ \lambda_{j}\geq i\}.\]
If we set \(j=h(s)\), then \(\lambda^{*}\) is the partition with part \(h(s)\) occurring \(\lambda_{j}-\lambda_{j+1}=s-\lambda_{j+1}\) times.
The set \(\mathcal{P}(N)\) is partially ordered by the dominance order on partitions, where \(\mu\preceq\lambda\) whenever \(\sum_{i=1}^{k}\mu_{i}\leq\sum_{i=1}^{k}\lambda_{i}\) for all \(k\). This induces a partial ordering on the sets \(\mathcal{P}_{C}(2n)\), \(\mathcal{P}_{B}(2n+1)\), and \(\mathcal{P}_{D}(2n)\) and these partial orderings coincide with the partial ordering on nilpotent orbits given by the closure ordering. We will refer to nilpotent orbits and partitions interchangeably in the classical groups (with the caveat mentioned earlier for the very even orbits in type \(D\)).
Let \(X=B\), \(C\), or \(D\). Let \(N\) be even (resp. odd) if \(X\) is of type \(C\) or \(D\) (resp. \(B\)). The \(X\)-collapse of \(\lambda\in\mathcal{P}(N)\) is the partition \(\lambda_{X}\in\mathcal{P}_{X}(N)\) satisfying \(\lambda_{X}\preceq\lambda\) and such that if \(\mu\in\mathcal{P}_{X}(N)\) and \(\mu\preceq\lambda\), then \(\mu\preceq\lambda_{X}\). The \(X\)-collapse always exists and is unique.
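Since the collapse is characterized purely order-theoretically, it is easy to test by machine for small \(N\). The following brute-force sketch is our own illustration (all function names are ours, and it is not the combinatorial algorithm described later in §2.3); it computes the \(X\)-collapse directly from its defining property.

```python
# Brute-force sketch (our own) of the dominance order, eps-partitions,
# and the X-collapse, computed from its defining property as the
# dominance-largest eps-partition below lam.
from collections import Counter

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dominates(lam, mu):
    """True iff mu <= lam in the dominance order (assumes |lam| == |mu|)."""
    sl = sm = 0
    for i in range(max(len(lam), len(mu))):
        sl += lam[i] if i < len(lam) else 0
        sm += mu[i] if i < len(mu) else 0
        if sm > sl:
            return False
    return True

def is_eps_partition(lam, eps):
    """lam in P_eps(N): every part s with s = eps (mod 2) has even multiplicity."""
    return all(m % 2 == 0 for s, m in Counter(lam).items() if s % 2 == eps)

def collapse(lam, eps):
    """The X-collapse lam_X: the unique dominance-maximum of
    {mu in P_eps(|lam|) : mu <= lam}."""
    cands = [mu for mu in partitions(sum(lam))
             if is_eps_partition(mu, eps) and dominates(lam, mu)]
    return next(mu for mu in cands if all(dominates(mu, nu) for nu in cands))

# C-collapse (eps = 1) of [5,1]; this computation reappears in the
# example of an f map below.
assert collapse((5, 1), 1) == (4, 2)
```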
### Special partitions and the duality maps
The special nilpotent orbits were defined by Lusztig [10]. Denote by \(\mathcal{N}_{o}^{sp}\) the special nilpotent orbits in \(\mathfrak{g}=\mathfrak{g}(V)\). All nilpotent orbits are special in type \(A\). Here we describe the special nilpotent orbits in types \(B\), \(C\), and \(D\), and introduce a second subset of \(\mathcal{N}_{o}\) that behaves like the set of special orbits. We define four sets of partitions, with \(\epsilon^{\prime}\in\{0,1\}\), as follows
\[\mathcal{P}_{\epsilon,\epsilon^{\prime}}(N):=\{\lambda\in\mathcal{P}_{\epsilon }(N)\ |\ h(s)\equiv\epsilon^{\prime}\ \text{whenever}\ s\equiv\epsilon\}. \tag{1}\]
Because of the \(s=0\) case, for \(N\) odd, the set is nonempty only when \((\epsilon,\epsilon^{\prime})=(0,1)\). For \(N\) even, the set is nonempty for \((\epsilon,\epsilon^{\prime})\in\{(0,0),(1,0),(1,1)\}\). Then the partitions for the special orbits in type \(B_{n},C_{n},D_{n}\) are given by \(\mathcal{P}_{B}^{sp}(2n+1):=\mathcal{P}_{0,1}(2n+1)\), \(\mathcal{P}_{C}^{sp}(2n):=\mathcal{P}_{1,0}(2n)\), and \(\mathcal{P}_{D}^{sp}(2n):=\mathcal{P}_{0,0}(2n)\). The fourth case leads to a second subset of \(\mathcal{P}_{C}(2n)\), which is \(\mathcal{P}_{C}^{asp}(2n):=\mathcal{P}_{1,1}(2n)\). We refer to these nilpotent orbits in type \(C\) as the alternative special nilpotent orbits.
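Condition (1) is likewise easy to test by machine. The sketch below is our own; it encodes the \(s=0\) case by reading \(h(0)=N\) (padding with zeros up to \(N\) parts), which is our interpretation of the remark above. It recovers the familiar picture in \(\mathfrak{sp}_{4}\): the unique non-special orbit \([2,1,1]\) is also the minimal alternative special orbit.

```python
# Sketch (our own) of membership in P_{eps,eps'}(N) from condition (1);
# the s = 0 case is read as h(0) = N, i.e., pad with zeros up to N parts.
from collections import Counter

def is_in_P(lam, eps, eps_p):
    """lam in P_{eps,eps'}(N)?"""
    if any(m % 2 for s, m in Counter(lam).items() if s % 2 == eps):
        return False                          # not even in P_eps(N)
    N = sum(lam)
    tests = [s for s in set(lam) if s % 2 == eps]
    if eps == 0:
        tests.append(0)                       # the s = 0 case
    return all((sum(1 for p in lam if p >= s) if s else N) % 2 == eps_p
               for s in tests)

# Type C_2 = sp_4: orbits [4], [2,2], [2,1,1], [1^4].
orbits = [(4,), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
assert [is_in_P(l, 1, 0) for l in orbits] == [True, True, False, True]   # special
assert [is_in_P(l, 1, 1) for l in orbits] == [True, True, True, False]   # alt. special
```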
Each of \(\mathcal{P}_{B}^{sp}(2n+1)\), \(\mathcal{P}_{C}^{sp}(2n)\), \(\mathcal{P}_{D}^{sp}(2n)\), and \(\mathcal{P}_{C}^{asp}(2n)\) inherits the partial order from the set of all partitions and this agrees with the one coming from inclusion of closures of the corresponding nilpotent orbits.
The sets \(\mathcal{P}_{B}^{sp}(2n\!+\!1)\) and \(\mathcal{P}_{C}^{sp}(2n)\) are in bijection (see [10], [11]), and also the sets \(\mathcal{P}_{D}^{sp}(2n)\) and \(\mathcal{P}_{C}^{asp}(2n)\) are in bijection, as we now describe.
Given \(\lambda=[\lambda_{1}\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}>0]\), let
\[\lambda^{-}=[\lambda_{1}\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}-1]\]
and
\[\lambda^{+}=[\lambda_{1}+1\geq\cdots\geq\lambda_{k-1}\geq\lambda_{k}].\]
Then the bijections are given as follows, using \(f\) for each map:
\[\begin{split}& f_{BC}:\mathcal{P}_{B}^{sp}(2n+1)\to\mathcal{P}_{C}^{ sp}(2n)\text{ given by }f(\lambda)=(\lambda^{-})_{C}\\ & f_{CB}:\mathcal{P}_{C}^{sp}(2n)\to\mathcal{P}_{B}^{sp}(2n+1) \text{ given by }f(\lambda)=(\lambda^{+})_{B}\\ & f_{DC}:\mathcal{P}_{D}^{sp}(2n)\to\mathcal{P}_{C}^{asp}(2n) \text{ given by }f(\lambda)=((\lambda^{+})^{-})_{C}\\ & f_{CD}:\mathcal{P}_{C}^{asp}(2n)\to\mathcal{P}_{D}^{sp}(2n) \text{ given by }f(\lambda)=\lambda_{D}\end{split} \tag{2}\]
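As a quick illustration (our own example): take \(\lambda=[5,1,1]\in\mathcal{P}_{B}^{sp}(7)\). Then \(\lambda^{-}=[5,1]\) fails the parity condition for type \(C\), and collapsing gives
\[f_{BC}([5,1,1])=([5,1])_{C}=[4,2]\in\mathcal{P}_{C}^{sp}(6);\]
composing with the transpose then gives \(d_{LS}([5,1,1])=d([4,2])=[2,2,1,1]\), which is again special in type \(C\).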
Note that in general \(f\) maps \(\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) to \(\mathcal{P}_{1-\epsilon,1-\epsilon^{\prime}}\).
Each of these maps respects the partial order. The first two maps are dimension preserving (and codimension preserving). The second two maps are codimension preserving (as we shall see). More precisely, the \(f_{DC}\) map sends an orbit of dimension \(N\) to one of dimension \(N+2n\). The shift is because the minimal orbit in \(C_{n}\) is the minimal element in \(\mathcal{P}_{C}^{asp}(2n)\).
Write \(d(\lambda)\) for \(\lambda^{*}\). It is known (or easy to show) that \(d\) determines bijections between the following sets:
\[\begin{split}\mathcal{P}_{B}^{sp}(2n\!+\!1)\stackrel{{ \mathrm{d}}}{{\longrightarrow}}\mathcal{P}_{B}^{sp}(2n\!+\!1)\\ \mathcal{P}_{C}^{sp}(2n)\stackrel{{\mathrm{d}}}{{ \longrightarrow}}\mathcal{P}_{C}^{sp}(2n)\\ \mathcal{P}_{D}^{sp}(2n)\stackrel{{\mathrm{d}}}{{ \longleftarrow}}\mathcal{P}_{C}^{asp}(2n)\end{split} \tag{3}\]
It is order-reversing since this holds for all partitions. We refer to \(d\) as the _internal duality_ or just _transpose_.
It is known that \(d\circ f=f\circ d\) and the duality \(d_{LS}\) of Lusztig-Spaltenstein is given by \(d_{LS}=d\circ f=f\circ d\). We have squares relating the three kinds of maps between the orbits (or their corresponding partitions) as shown in Figure 1.
### Explicit description of \(f\) maps
We now describe more specifically how the \(f\) maps work.
Let \(X\) be one of the types \(B\), \(C\), \(D\). Let \(\lambda\in\mathcal{P}(N)\) with \(N\) even for type \(C\) and \(D\) and odd for type \(B\). We want to find \(\lambda_{X}\). Set \(\epsilon=\epsilon_{X}\). List the parts \(s\) in \(\lambda\) with \(s\equiv\epsilon\) and \(m(s)\) odd as \(a_{1}>a_{2}>\cdots>a_{2n}\ \geq 0\). For \(X=C\), these are the odd parts, so there is an even number of them. But for \(X=B\) or \(X=D\), since \(\epsilon=0\), we will add a single part equal to \(0\), if necessary, so that there is an even number of even parts with odd multiplicity. Next,
between \(a:=a_{2i-1}\) and \(c:=a_{2i}\), list the parts \(s\equiv\epsilon\) as \(b_{1}>b_{2}>\dots>b_{j}\), which necessarily have \(m(b_{i})\) even. Ignoring the parts not congruent to \(\epsilon\), then \(\lambda\) will look locally like
\[a^{m(a)},b_{1}^{m(b_{1})},\dots,b_{j}^{m(b_{j})},c^{m(c)}.\]
Then under the collapse of \(\lambda\) to \(\lambda_{X}\), these values will change to
\[a^{m(a)-1},a-1,b_{1}+1,b_{1}^{m(b_{1})-2},b_{1}-1,\dots,b_{j}+1,b_{j}^{m(b_{j} )-2},b_{j}-1,c+1,c^{m(c)-1}\]
so that the multiplicities of the parts congruent to \(\epsilon\) are now even, as required to be in \(\mathcal{P}_{X}(N)\). The other parts of \(\lambda\) are unaffected under the collapse. As a result of this rule, there is a formula for the collapse for a part \(s\) based on its height \(h(s)\) and its multiplicity \(m(s)\).
**Lemma 2.1**.: _Let \(X\) be \(B,C,\) or \(D\) and \(\epsilon=\epsilon_{X}\). Let \(\lambda\in\mathcal{P}(N)\) with \(N\) as above. Assume \(m(s)\) is even if \(s\not\equiv\epsilon\). Let \(s\equiv\epsilon\) be a part in \(\lambda\). Then \([s^{m(s)}]\) in \(\lambda\) changes to the following in \(\lambda_{X}\):_
\[\begin{aligned}
&[s^{m(s)-1},s-1] &&\text{if }h(s)\equiv 1,\ m(s)\equiv 1\\
&[s+1,s^{m(s)-1}] &&\text{if }h(s)\equiv 0,\ m(s)\equiv 1\\
&[s+1,s^{m(s)-2},s-1] &&\text{if }h(s)\equiv 1,\ m(s)\equiv 0\\
&[s^{m(s)}] &&\text{if }h(s)\equiv 0,\ m(s)\equiv 0
\end{aligned}\]
Proof.: Since \(h(s)=\sum_{j\geq s}m(j)\), it is clear that \(h(s)\equiv\#\{m(j)\ |\ m(j)\text{ is odd and }j\geq s\}\). Since any part \(s\) with \(s\not\equiv\epsilon\) has \(m(s)\) even, it follows that
\[h(s)\equiv\#\{m(j)\ |\ m(j)\equiv 1,j\geq s,\text{ and }j\equiv\epsilon\}.\]
So the four conditions in the lemma specify whether the part \(s\) plays the role of some \(a_{2i-1}\), \(a_{2i}\), \(b_{k}\), or a part between some \(a_{2i}\) and \(a_{2i+1}\), and hence is unaffected by the collapse.
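For example (our own check): take the \(C\)-collapse (\(\epsilon=1\)) of \(\lambda=[3,3,3,2,2,1]\), which satisfies the hypothesis of the lemma since the even parts have even multiplicity. For \(s=3\): \(h(3)=3\equiv 1\) and \(m(3)=3\equiv 1\), so \([3^{3}]\) becomes \([3^{2},2]\); for \(s=1\): \(h(1)=6\equiv 0\) and \(m(1)=1\), so \([1]\) becomes \([2]\). Hence
\[\lambda_{C}=[3,3,2,2,2,2],\]
in agreement with a direct search for the largest \(C\)-partition below \(\lambda\).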
Let \(X\) be one of the types \(B,C,D\) or \(C^{\prime}\). Here \(C^{\prime}\) refers to the alternative special setting. Now we can say what happens under the \(f\) maps passing from \(X\) to type \(f(X)\).
**Lemma 2.2**.: _Let \(\lambda\in\mathcal{P}_{X}^{sp}(N)\). Let \(s\not\equiv\epsilon_{X}\) be a part in \(\lambda\). Then \(f(\lambda)\) is computed by replacing each occurrence of \([s^{m(s)}]\) by_
\[\begin{aligned}
&[s^{m(s)-1},s-1] &&\text{if }h(s)\equiv\epsilon_{X}^{\prime},\ m(s)\equiv 1 &\quad(4)\\
&[s+1,s^{m(s)-1}] &&\text{if }h(s)\not\equiv\epsilon_{X}^{\prime},\ m(s)\equiv 1 &\quad(5)\\
&[s+1,s^{m(s)-2},s-1] &&\text{if }h(s)\equiv\epsilon_{X}^{\prime},\ m(s)\equiv 0 &\quad(6)\\
&[s^{m(s)}] &&\text{if }h(s)\not\equiv\epsilon_{X}^{\prime},\ m(s)\equiv 0 &\quad(7)
\end{aligned}\]
Proof.: First, if \(s\not\equiv\epsilon_{X}\), then \(s\equiv\epsilon_{f(X)}\). For \(f_{BC}\) and \(f_{CD}\) the map is just the ordinary collapse (except for the smallest two parts in type \(B\)). In these cases, \(\epsilon_{X}^{\prime}=1\) and we are in the situation of the previous lemma when performing the collapse in type \(f(X)\). In type \(B\), there are a couple of cases to check that the effect of \(\lambda_{k}\) being replaced by \(\lambda_{k}-1\) is consistent with the above cases.
On the other hand, for \(f_{DC}\) and \(f_{CB}\) we have \(\epsilon_{X}^{\prime}=0\). For the \(f\) map, we first increase \(\lambda_{1}\) by \(1\) and then perform the collapse for type \(f(X)\). This \(1\), under the collapse, moves down to the first part \(x\) with \(x\not\equiv\epsilon_{X}\). By the assumption that the parts congruent to \(\epsilon_{X}\) have even multiplicity, we have that \(h(x)\) is odd. So the rule is correct for the part \(x\). Call this new partition, where \(x\) changes to \(x+1\), \(\lambda^{\prime}\). Then
\[h_{\lambda^{\prime}}(s)\not\equiv\#\{m_{\lambda^{\prime}}(j)\ |\ m_{\lambda^{ \prime}}(j)\equiv 1,j\geq s,\text{ and }j\equiv\epsilon_{f(X)}\}.\]
Since \(\epsilon^{\prime}_{X}=0\), the previous lemma gives the result again for the collapse of \(\lambda^{\prime}\), which is \(f(\lambda)\). For \(f_{DC}\), there are a couple of cases to check that the effect of \(\lambda_{k}\) being replaced by \(\lambda_{k}-1\) is consistent with the above cases.
### Special pieces result
Spaltenstein [10] showed that each non-special orbit \(\mathcal{O}^{\prime}\) belongs to the closure of a unique special orbit \(\mathcal{O}\), which is minimal among all special orbits whose closure contains \(\mathcal{O}^{\prime}\). That is, if a special orbit contains \(\mathcal{O}^{\prime}\) in its closure, then it contains \(\mathcal{O}\) in its closure.
We now describe the process for finding the partition \(\lambda\) for \(\mathcal{O}\) given the partition \(\nu\) for \(\mathcal{O}^{\prime}\). Let \(X\) be one of the four types \(B,C,D,C^{\prime}\). Let \(\nu\in\mathcal{P}_{X}(N)\) be non-special. Let \(S\) be the collection of parts \(s\) in \(\nu\) such that \(s\equiv\epsilon_{X}\) and \(h(s)\not\equiv\epsilon^{\prime}_{X}\). These are the parts that fail the condition for \(\nu\) to be in \(\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) as required by (1). Note that \(s\equiv\epsilon_{X}\) means that \(m(s)\) is even, so \(m(s)\geq 2\). Let \(\lambda\) be obtained from \(\nu\) by replacing the subpartition \([s^{m(s)}]\) in \(\nu\) by \([s+1,s^{m(s)-2},s-1]\), for each \(s\in S\). It is clear that \(\lambda\in\mathcal{P}_{X}(N)\) and that it satisfies the special condition in (1), so lies in \(\mathcal{P}_{X}^{sp}(N)\). In fact, \(\lambda\) is the partition for \(\mathcal{O}\) by [10] for the cases of \(B,C,D\) (the case of \(C^{\prime}\) is similar).
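For example (our own check), in type \(C_{2}\) take the non-special \(\nu=[2,1,1]\): here \(S=\{1\}\) since \(h(1)=3\not\equiv 0\), and replacing \([1^{2}]\) by \([2,1^{0},0]=[2]\) gives \(\lambda=[2,2]\). So the special piece of \(\mathfrak{sp}_{4}\) containing \(\mathcal{O}_{[2,1,1]}\) has \(\mathcal{O}_{[2,2]}\) on top.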
### Removing rows and columns from a partition
In [10] and [10], Kraft and Procesi defined two operations that take a pair of partitions \((\lambda,\mu)\in\mathcal{P}(N)\) to another pair of partitions. Viewing the partitions as Young diagrams, the first operation is removing common initial rows of \(\lambda\) and \(\mu\) and the second operation is removing common initial columns.
#### 2.5.1. Type A
More precisely, we say \((\lambda,\mu)\) is _leading-row-equivalent (after removing \(r\) rows)_ to \((\lambda^{\prime},\mu^{\prime})\) if \(\lambda_{i}=\mu_{i}\) for \(i\leq r\), while \(\lambda_{i}=\lambda^{\prime}_{i-r}\) and \(\mu_{i}=\mu^{\prime}_{i-r}\) for \(i>r\). We say \((\lambda,\mu)\) is _column-equivalent (after removing \(s\) columns)_ to \((\lambda^{\prime},\mu^{\prime})\) if \(\lambda_{i}=\mu_{i}\) for \(i>\ell\) and \(\lambda_{i}=\lambda^{\prime}_{i}+s\) and \(\mu_{i}=\mu^{\prime}_{i}+s\) for \(i\leq\ell\), where \(\ell=\max\{i\ |\ \lambda_{i}>s\}\). In both cases, \(|\lambda^{\prime}|=|\mu^{\prime}|\), so \(\lambda^{\prime}\) and \(\mu^{\prime}\) are partitions of the same integer. We say \((\lambda,\mu)\) is _equivalent_ to \((\lambda^{\prime},\mu^{\prime})\) if they are related by a sequence of these two equivalences, and it follows in that case when \(\mu\preceq\lambda\) that

1. \(\mu^{\prime}\preceq\lambda^{\prime}\)
2. \(\operatorname{codim}_{\bar{\mathcal{O}}_{\lambda}}\mathcal{O}_{\mu}= \operatorname{codim}_{\bar{\mathcal{O}}_{\lambda^{\prime}}}\mathcal{O}_{\mu^ {\prime}}\)
3. The singularity of \(\bar{\mathcal{O}}_{\lambda}\) at \(\mathcal{O}_{\mu}\) is smoothly equivalent to the singularity of \(\bar{\mathcal{O}}_{\lambda^{\prime}}\) at \(\mathcal{O}_{\mu^{\prime}}\).
for the corresponding nilpotent orbits in \(\mathfrak{sl}_{n}\) [10].
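Both operations are easy to automate. The sketch below is our own (type \(A\) only; in the other classical types one must additionally respect condition (8) of the next subsection); column removal is implemented as row removal on the transposes.

```python
# Sketch (our own) of Kraft-Procesi reduction in type A: cancel common
# leading rows and common initial columns (= leading rows of the
# transposes) until the pair (lam, mu) is irreducible.
def transpose(lam):
    top = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, top + 1))

def strip_rows(lam, mu):
    """Remove the maximal number r of common leading rows."""
    r = 0
    while r < min(len(lam), len(mu)) and lam[r] == mu[r]:
        r += 1
    return lam[r:], mu[r:], r

def strip_cols(lam, mu):
    """Remove the maximal number s of common initial columns."""
    lt, mt, s = strip_rows(transpose(lam), transpose(mu))
    return transpose(lt), transpose(mt), s

def irreducible_pair(lam, mu):
    """Alternate the two cancellations until neither applies."""
    while True:
        lam, mu, r = strip_rows(lam, mu)
        lam, mu, s = strip_cols(lam, mu)
        if r == 0 and s == 0:
            return lam, mu

# ([4,2,1],[3,3,1]) reduces to ([2],[1,1]), a minimal degeneration in
# sl_2 with an A_1 singularity.
assert irreducible_pair((4, 2, 1), (3, 3, 1)) == ((2,), (1, 1))
```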
#### 2.5.2. Other classical types
For \(\epsilon\in\{0,1\}\) and \(\mathfrak{g}=\mathfrak{g}(V_{\epsilon})\) as in §2.1, similar results to the above hold when we cancel \(r\) leading rows and \(s\) columns, with an additional condition. Let \(\lambda,\mu\in\mathcal{P}_{\epsilon}(N)\) and assume when we cancel \(r\) leading rows that
\[[\lambda_{1},\ldots,\lambda_{r}]\text{ is an $\epsilon$-partition}. \tag{8}\]
This condition always holds if we choose the maximal possible number of rows to cancel between \(\lambda\) and \(\mu\). If (8) holds, then \(\lambda^{\prime}\) and \(\mu^{\prime}\) are \(\tilde{\epsilon}\)-partitions, with \(\tilde{\epsilon}\equiv\epsilon+s\), where \(s\) is the number of columns canceled. Then the above three results hold when the nilpotent orbits are considered in \(\mathfrak{g}\) [10, §13]. A pair of partitions \((\lambda,\mu)\) is _irreducible_ if no common rows or columns can be canceled.
Next, we say \((\lambda,\mu)\in\mathcal{P}_{\epsilon}(N)\) is _\(\epsilon\)-row-equivalent_ to \((\lambda^{\prime},\mu^{\prime})\in\mathcal{P}_{\epsilon}(m)\), if the latter is obtained from the former by canceling some leading and some trailing rows of the Young diagram. Namely, there exist \(r,r^{\prime}\in\mathbb{N}\) so that \(\lambda_{i}=\mu_{i}\) for \(i\leq r\) and \(i\geq r^{\prime}\), while \(\lambda_{i}=\lambda^{\prime}_{i-r}\) and \(\mu_{i}=\mu^{\prime}_{i-r}\) for \(r<i<r^{\prime}\). We pad the partitions by adding zeros so that both partitions have the same number of parts. If we set \(\nu=[\lambda_{1},\ldots,\lambda_{r},\lambda_{r^{\prime}},\lambda_{r^{\prime}+1},\ldots]\), then \(\nu\) is also an \(\epsilon\)-partition. We also say \((\lambda,\mu)\) is _locally of the form \((\lambda^{\prime},\mu^{\prime})\)._
Now suppose that \((\lambda,\mu)\) is \(\epsilon\)_-row-equivalent_ to \((\lambda^{\prime},\mu^{\prime})\). Let \(V=V_{\epsilon}\). Then, as in [10, SS13.4], there is an orthogonal decomposition \(V=V_{1}\oplus V_{2}\), with \(\dim V_{1}=|\lambda^{\prime}|=|\mu^{\prime}|\) and \(\dim V_{2}=|\nu|\) and the \(V_{i}\) carry a nondegenerate \(\epsilon\)-form by restriction from \(V\). Moreover, \(\lambda^{\prime},\mu^{\prime}\in\mathcal{P}_{\epsilon}(\dim V_{1})\) and \(\nu\in\mathcal{P}_{\epsilon}(\dim V_{2})\), so we can pick nilpotent elements \(x_{1},e_{1}\in\mathfrak{g}(V_{1})\) with partitions \(\lambda^{\prime},\mu^{\prime}\), respectively, and \(e_{2}\in\mathfrak{g}(V_{2})\) with partition \(\nu\). Then \(x=x_{1}+e_{2}\) has partition \(\lambda\) and \(e=e_{1}+e_{2}\) has partition \(\mu\). The arguments in SS13 in [10] give
**Proposition 2.3**.: _Choose an \(\mathfrak{sl}_{2}\)-triple for \(x_{1}\) in \(\mathfrak{g}(V_{1})\). Then the natural map of \(\mathfrak{g}(V_{1})\) to \(\mathfrak{g}(V_{1})+e_{2}\subset\mathfrak{g}\) gives an isomorphism of the slice \(\mathcal{S}_{\mathcal{O}_{\lambda^{\prime}},e_{1}}\) in \(\mathfrak{g}(V_{1})\) to the slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) in \(\mathfrak{g}\)._
The key ideas in the proof are that both slices have the same dimension (by the codimension result) and the fact that the closure of any nilpotent orbit in \(\mathfrak{gl}_{N}\) is normal.
We note that if \((\lambda^{\prime},\mu^{\prime})\) are obtained from \((\lambda,\mu)\) by removing \(r\) leading rows and \(s\) columns and if condition (8) holds, then \((\lambda,\mu)\) is _\(\epsilon\)-row-equivalent_ to:
\[(\lambda^{\prime\prime},\mu^{\prime\prime}):=([\lambda^{\prime}_{1}+s,\lambda^ {\prime}_{2}+s,\dots],[\mu^{\prime}_{1}+s,\mu^{\prime}_{2}+s,\dots]) \tag{9}\]
Finally, we call \((\lambda,\mu)\) and \((\lambda^{\prime\prime},\mu^{\prime\prime})\) locally equivalent, or say locally \((\lambda,\mu)\) is equal to \((\lambda^{\prime\prime},\mu^{\prime\prime})\).
In the next section, §3, we show that each pair of partitions corresponding to a minimal special degeneration in orthogonal and symplectic type is equivalent to a unique pair of partitions \((\lambda,\mu)\) of \(N\) for a unique smallest \(N\). These pairs are irreducible in the sense of Kraft and Procesi: the maximal possible number of common rows and columns has been removed from the original pair of partitions to obtain \((\lambda,\mu)\).
## 3. Combinatorial classification of minimal special degenerations in \(B\), \(C\), \(D\)
**Theorem 3.1**.: _Let \((\lambda,\mu)\in\mathcal{P}_{\epsilon,\epsilon^{\prime}}\) be partitions corresponding to a minimal special degeneration in the corresponding classical Lie algebra. Then \((\lambda,\mu)\) is equivalent to a unique entry in Table 1 or Table 2._
Proof.: If a minimal special degeneration \((\lambda,\mu)\) is not already minimal, then there exists a non-special orbit \(\mathcal{O}_{\nu}\) such that \(\mathcal{O}_{\mu}\leq\mathcal{O}_{\nu}\leq\mathcal{O}_{\lambda}\), and such that \((\nu,\mu)\) is a minimal degeneration. Hence, the latter would be one of the entries in the Kraft-Procesi list [10, §3.4]. We need to show that \((\lambda,\mu)\) must be equivalent to one of the five cases in Table 2.
First, since \(\mathcal{O}_{\nu}\) is not special, there is a unique special orbit whose closure contains \(\mathcal{O}_{\nu}\) and which is contained in the closure of all other special orbits whose closure contains \(\mathcal{O}_{\nu}\) (see §2.4). Consequently, \(\mathcal{O}_{\lambda}\) must be this orbit, as we are assuming the degeneration \((\lambda,\mu)\) is minimal among special degenerations.
Next, we will show that \((\nu,\mu)\) cannot be one of the cases in Table 1. Let \(X\) be the type of the Lie algebra and \(\epsilon=\epsilon_{X}\), \(\epsilon^{\prime}=\epsilon^{\prime}_{X}\).
If \((\nu,\mu)\) is type \(a\), then locally it is \(([s{+}2,s],[(s{+}1)^{2}])\) where \(s\not\equiv\epsilon\). Since \(s{+}1\equiv\epsilon\) and \(s{+}1\) appears exactly twice, the heights satisfy \(h_{\nu}(x)\equiv h_{\mu}(x)\) for all \(x\equiv\epsilon\). This means that \(\nu\) must be special since \(\mu\) is special. Therefore the type \(a\) minimal degeneration cannot occur between a larger orbit that is not special and a smaller orbit that is special.
If \((\nu,\mu)\) is type \(b\), then locally it is \(([s{+}2n,s],[s{+}2n{-}2,s{+}2])\) where \(s\not\equiv\epsilon\). Hence, all four of \(s{+}2n,s,s{+}2n{-}2\), and \(s{+}2\) are not congruent to \(\epsilon\). As in the previous case, \(h_{\nu}(x)\equiv h_{\mu}(x)\) for all \(x\equiv\epsilon\), and again this forces \(\nu\) to be special too, a contradiction.
If \((\nu,\mu)\) is of type \(c,d\) or \(e\), it will be possible for \(\nu\) to be non-special, but we will show that then the degeneration \((\lambda,\mu)\) is not minimal among degenerations between special orbits.
For type \(c\), the pair \((\nu,\mu)\) is locally \(([s{+}2n{+}1,s^{2}],[s{+}2n{-}1,(s{+}1)^{2}])\) where \(s\equiv\epsilon\). In this case \(\nu\) will be non-special, as noted in Table 1, exactly when the number \(l\) of rows removed is congruent to \(\epsilon^{\prime}\). This means \(h_{\nu}(s)=l+3\not\equiv\epsilon^{\prime}\) (and necessarily \(s\geq 1\)). If that is the case, then by §2.4, \(\lambda\) must locally be \([s\!+\!2n\!+\!1,s\!+\!1,s\!-\!1]\). But then \(\lambda\) degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!-\!1,s\!+\!3,s\!-\!1]\), which is also special and degenerates to \(\mu\). Hence the degeneration \((\lambda,\mu)\) is not a minimal special degeneration, which is what we wanted to show.
For type \(d\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2n\!+\!1,s\!+\!2n\!+\!1,s],[s\!+\!2n,s\!+\!2n,s\!+\!2])\]
where \(s\not\equiv\epsilon\). In this case \(\nu\) will be non-special exactly when \(h_{\nu}(s\!+\!2n\!+\!1)\not\equiv\epsilon^{\prime}\). If that is the case, then \(\lambda\) must locally be \([s\!+\!2n\!+\!2,s\!+\!2n,s]\) from §2.4. But \(\lambda\) also degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!+\!2,s\!+\!2n\!-\!2,s\!+\!2]\), which is also special and degenerates to \(\mu\). Hence the degeneration \((\lambda,\mu)\) is not a minimal special degeneration.
For type \(e\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2n,s\!+\!2n,s,s],[s\!+\!2n\!-\!1,s\!+\!2n\!-\!1,(s\!+\!1)^{2}])\]
where \(s\equiv\epsilon\). In this case \(\nu\) will be non-special exactly when \(h_{\nu}(s\!+\!2n)\not\equiv\epsilon^{\prime}\) (and \(s\geq 1\) is forced). Then \(\lambda\) must locally be \([s\!+\!2n\!+\!1,s\!+\!2n\!-\!1,s\!+\!1,s\!-\!1]\) by §2.4. But \(\lambda\) degenerates to the partition \(\nu^{\prime}\) that is locally \([s\!+\!2n\!-\!1,s\!+\!2n\!-\!1,s\!+\!3,s\!-\!1]\), whenever \(n\geq 2\). This orbit is special since \(\mu\) is. Moreover, \(\nu^{\prime}\) degenerates to \(\mu\), so \((\lambda,\mu)\) is not a minimal special degeneration.
This shows, for any minimal special degeneration \((\lambda,\mu)\) which is not already a minimal degeneration, that there exists an intermediate orbit \(\nu\) such that \((\nu,\mu)\) is a minimal degeneration of codimension at least 4, **unless** \(\nu\) is of type \(e\) with \(n=1\). In Kraft-Procesi's classification [12, Table I], the minimal degenerations of dimension at least 4 are labeled \(f\), \(g\), and \(h\), and are given by the minimal nilpotent orbit closures in types \(B\), \(C\), and \(D\), respectively.
Starting with type \(g\), where \(n\geq 2\), the pair \((\nu,\mu)\) is locally
\[([s\!+\!2,(s\!+\!1)^{2n-2},s],[(s\!+\!1)^{2n}])\]
with \(s\not\equiv\epsilon\). Then \(\nu\) is never special since \(\mu\), being special, forces \(h_{\nu}(s\!+\!1)=h_{\mu}(s\!+\!1)\!+\!1\) to fail the special condition. Then \(\lambda\) is forced, locally, to equal \([(s\!+\!2)^{2},(s\!+\!1)^{2n-4},s^{2}]\) by §2.4. Because \(\mu\) is special, the number of rows \(l\) removed is congruent to \(\epsilon^{\prime}\). After removing \(s\) columns, we see that \((\lambda,\mu)\) is of type \(g_{sp}\), and the latter is indeed a minimal special degeneration, containing \(\nu\) as the only (non-special) orbit between \(\lambda\) and \(\mu\).
For type \(f\), the pair \((\nu,\mu)\) is locally
\[([(s\!+\!2)^{2},(s\!+\!1)^{2n-3},s^{2}],[(s\!+\!1)^{2n+1}])\]
with \(s\equiv\epsilon\) and \(n\geq 2\). Here \(\nu\) is never special since \(h_{\nu}(s{+}2)\) and \(h_{\nu}(s)\) have different parities, so exactly one of them fails the special condition. If \(h_{\nu}(s{+}2)\) fails, \(\lambda\) is locally equal to \([s\!+\!3,(s\!+\!1)^{2n-2},s^{2}]\) and the degeneration is given by \(f_{sp}^{1}\) in Table 2. That this is a minimal such degeneration follows since \(\nu\) is the only (non-special) orbit between \(\lambda\) and \(\mu\). If instead \(h_{\nu}(s)\) fails, which forces \(s\geq 1\), then \(\lambda\) is locally equal to \([(s\!+\!2)^{2},(s\!+\!1)^{2n-2},s\!-\!1]\) and this is the minimal special degeneration \(f_{sp}^{2}\). Again, \(\nu\) is the only (non-special) orbit between \(\lambda\) and \(\mu\).
Finally, assume the pair \((\nu,\mu)\) is locally
\[([(s\!+\!2)^{2},(s\!+\!1)^{2n-4},s^{2}],[(s\!+\!1)^{2n}])\]
with \(s\equiv\epsilon\) for \(n\geq 2\). This is of type \(e\) if \(n=2\) (with \(n=1\) in the notation of the table entry for \(e\)) and of type \(h\) for \(n\geq 3\). Observe that the special condition \(h_{\nu}(s+2)\equiv\epsilon^{\prime}\) is satisfied if and only if \(h_{\nu}(s)\equiv\epsilon^{\prime}\), since these heights differ by the even number \(2n-4\). If both conditions are met, then \(\nu\) will be special since \(\mu\) is special and this is handled by the minimal degeneration cases.
Otherwise, both the pairs \((s{+}2,s{+}2)\) and \((s,s)\) in \(\nu\) cause \(\nu\) to fail to be special, which implies \(s\geq 1\). Then \(\lambda\) locally takes the form \([s{+}3,(s{+}1)^{2n-2},s{-}1]\) by §2.4. This is the form of \(h_{sp}\) in Table 2 after removing \(s{-}1\) columns. This is a minimal special degeneration containing 3 (non-special) orbits between \(\lambda\) and \(\mu\), among them the orbit that is locally \([(s{+}2)^{2},(s{+}1)^{2n-3},s{-}1]\) (the Hasse diagram of these orbits is omitted here). The four unlabeled edges in that diagram all have an \(A_{1}=C_{1}\) singularity (type \(a\)).
We have therefore shown that every minimal special degeneration is either minimal or takes the form in Table 2.
Next we will show that each degeneration in Table 2 has the given singularity type.
## 4. Determining the singularities in Table 2
For each type in Table 2, we need to show that the degeneration is as promised. The case of type \(h\) was done in [10]. We begin with the \(g_{sp}\) case.
### Type \(g_{sp}\) case
As discussed in §2, for the classical Lie algebras \(\mathfrak{so}_{2n+1}\), \(\mathfrak{sp}_{2n}\), and \(\mathfrak{so}_{2n}\), the nilpotent orbits under the groups \(\mathrm{O}(2n+1)\), \(\mathrm{Sp}(2n)\), and \(\mathrm{O}(2n)\) are parametrized by partitions in \(\mathcal{P}_{B}(2n+1)\), \(\mathcal{P}_{C}(2n)\), and \(\mathcal{P}_{D}(2n)\). This occurs via the Jordan canonical form of the matrix in the ambient general linear Lie algebra.
Let \(e\in\mathfrak{g}\) be nilpotent. Fix an \(\mathfrak{sl}_{2}\)-subalgebra \(\mathfrak{s}\) through \(e\) and let \(\mathfrak{c}(\mathfrak{s})\) be the centralizer of \(\mathfrak{s}\) in \(\mathfrak{g}\), which is a maximal reductive subalgebra of the centralizer of \(e\) in \(\mathfrak{g}\). Let \(C(\mathfrak{s})\) be the centralizer in \(G\). Then \(C(\mathfrak{s})\) is a product of orthogonal and symplectic groups, with each part \(s\) of \(\lambda\) contributing a factor \(G^{s}\), which is isomorphic to \(\mathrm{O}(m(s))\) when \(s\not\equiv\epsilon\) and isomorphic to \(\mathrm{Sp}(m(s))\) when \(s\equiv\epsilon\). Denote by \(\mathfrak{g}^{s}\) the Lie algebra of \(G^{s}\). See [11] for this background material.
Let \(V\) denote the defining representation of \(\mathfrak{g}\) via the ambient general linear Lie algebra. If \(\lambda\) is the partition corresponding to \(e\), then under \(\mathfrak{s}\), the representation \(V\) decomposes as a direct sum
\[\bigoplus_{s}V(s{-}1)^{\oplus m(s)}\]
over the distinct parts \(s\) of \(\lambda\). Here \(V(m)\) is the irreducible \(\mathfrak{sl}_{2}\)-representation of highest weight \(m\).
Now let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be nilpotent. Then \(e_{0}=\sum_{s}e_{0}^{(s)}\) for some nilpotent \(e_{0}^{(s)}\in\mathfrak{g}^{s}\). Choose an \(\mathfrak{sl}_{2}\)-subalgebra through \(e_{0}^{(s)}\) in \(\mathfrak{g}^{s}\) and let \(\mathfrak{s}_{0}\) be the diagonal \(\mathfrak{sl}_{2}\)-subalgebra for
\[e_{0}=\sum_{s}e_{0}^{(s)}.\]
Each \(e_{0}^{(s)}\) corresponds to a partition \(\mu^{(s)}\) of \(m(s)\), using the defining representation of \(\mathfrak{g}^{s}\).
Under the sum \(\mathfrak{s}\oplus\mathfrak{s}_{0}\), \(V\) decomposes as
\[\bigoplus_{s,j}V(s{-}1)\otimes V(\mu_{j}^{(s)}{-}1)\]
where \(s\) runs over the distinct parts of \(\lambda\) and \(j\) indexes the parts of \(\mu^{(s)}\).
Now consider the diagonal \(\mathfrak{sl}_{2}\)-subalgebra for \(e+e_{0}\) in \(\mathfrak{s}+\mathfrak{s}_{0}\). An application of the Clebsch-Gordan formula immediately gives
**Lemma 4.1**.: _The nilpotent element \(e+e_{0}\) in \(\mathfrak{g}\) has partition equal to the union of the partitions_
\[[s{+}\mu_{j}^{(s)}{-}1,\ \ s{+}\mu_{j}^{(s)}{-}3,\ldots,|s-\mu_{j}^{(s)}|{+}1]\]
_for each distinct part \(s\) in \(\lambda\) and each part \(\mu_{j}^{(s)}\) of \(\mu^{(s)}\)._
Suppose that \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) is a nilpotent element such that each \(e_{0}^{(s)}\in\mathfrak{g}^{s}\) has partition of the form
\[\mu^{(s)}=[2^{a_{s}},1^{b_{s}}] \tag{10}\]
for some positive integers \(a_{s}\) and \(b_{s}\) with \(2a_{s}+b_{s}=m(s)\). Then the partition \(\nu\) of \(e+e_{0}\) equals the union of the partitions
\[[(s+1)^{a_{s}},s^{b_{s}},(s-1)^{a_{s}}]\]
for each part \(s\) in \(\lambda\). This follows immediately from the previous lemma since the part \(s\) contributes \([s{+}1,s{-}1]\) to \(\nu\) when \(\mu_{j}^{(s)}=2\) and it contributes \([s]\) when \(\mu_{j}^{(s)}=1\).
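Lemma 4.1 and the displayed special case are easy to verify mechanically. The following sketch is our own (the function names are ours); it computes the Jordan type of \(e+e_{0}\) from \(\lambda\) and the partitions \(\mu^{(s)}\).

```python
# Sketch (our own) of Lemma 4.1: the partition of e + e_0 is the multiset
# union, over parts s of lam and parts m of mu^(s), of the Clebsch-Gordan
# string [s+m-1, s+m-3, ..., |s-m|+1].
from collections import Counter

def cg_partition(lam, mu_by_part):
    """lam: partition of e; mu_by_part[s]: partition of m(s) for e_0^(s)."""
    mult = Counter(lam)
    parts = []
    for s in mult:
        mu = mu_by_part.get(s, (1,) * mult[s])   # e_0^(s) = 0 by default
        assert sum(mu) == mult[s]
        for m in mu:
            parts.extend(range(s + m - 1, abs(s - m), -2))
    return tuple(sorted(parts, reverse=True))

# The special case (10): a part s with mu^(s) = [2^a, 1^b] contributes
# [(s+1)^a, s^b, (s-1)^a].  E.g. lam = [3^6] and mu^(3) = [2,2,1,1]:
assert cg_partition((3,) * 6, {3: (2, 2, 1, 1)}) == (4, 4, 3, 3, 2, 2)
```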
**Proposition 4.2**.: _Let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be a nilpotent element satisfying (10). Let \(\mathcal{O}\) be the orbit through \(e+e_{0}\). Then the slice \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to_
\[\prod_{s}\overline{\mathcal{O}}_{\mu^{(s)}} \tag{11}\]
_where the product is over the distinct parts \(s\) of \(\lambda\). Here, \(\mathcal{O}_{\mu^{(s)}}\) is the orbit with partition \(\mu^{(s)}\) in \(\mathfrak{sp}(m(s))\) if \(s\equiv\epsilon\) and in \(\mathfrak{so}(m(s))\) if \(s\not\equiv\epsilon\)._
Proof.: The partition of \(e_{0}^{(s)}\) in \(\mathfrak{g}\) (rather than \(\mathfrak{g}^{s}\)) is equal to
\[[2^{s{\cdot}a_{s}},1^{N-2s{\cdot}a_{s}}]\]
where \(N\) is the dimension of \(V\). Setting \(a=\sum_{s}a_{s}\), the partition of \(e_{0}\) is equal to
\[[2^{a},1^{N-2a}],\]
which is of height \(2\) in \(\mathfrak{g}\). Then Corollary 4.9 in [10] implies that \(\mathcal{S}_{\mathcal{O},e}\) and \(\prod_{s}\overline{\mathcal{O}}_{\mu^{(s)}}\) have the same dimension. The latter is isomorphic to
\[f+\overline{C(\mathfrak{s})\cdot(e+e_{0})}=f+e+\overline{C(\mathfrak{s})\cdot e _{0}},\]
which is a subvariety of \(\mathcal{S}_{\mathcal{O},e}\). The result follows from [10, Cor 13.3] if we can show that \(\overline{\mathcal{O}}\) is normal at \(e\).
Since the only minimal degeneration of \([2^{a_{s}},1^{b_{s}}]\) in \(\mathfrak{g}^{s}\), namely \([2^{a_{s}-1},1^{b_{s}+2}]\) when \(a_{s}>1\), is of minimal type (that is, of type \(a\), \(f\), \(g\), or \(h\) in [10, Table 1]), the only minimal degenerations of \(\mathcal{O}\) that contain \(e\) are also of minimal type. The argument in [10, Thm 16.2] then shows that \(\overline{\mathcal{O}}\) is normal at \(e\).
We can now prove the \(g_{sp}\) case.
**Corollary 4.3**.: _Let \(\mathcal{O}=\mathcal{O}_{\lambda}\) and \(e\in\mathcal{O}_{\mu}\), where \((\lambda,\mu)\) are of type \(g_{sp}\) in Table 2. Then \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(\overline{\mathcal{O}}_{[2^{2},1^{2n-4}]}\), the closure of the minimal special orbit in \(\mathfrak{sp}_{2n}\)._
Proof.: If \(k\) is the number of columns removed, then \(s=k+1\) is the relevant part of \(\mu\) and \(m(s)=2n\). We take \(e_{0}\) to be exactly equal to \(e_{0}^{(s)}\in\mathfrak{g}^{s}\) with partition \(\mu^{(s)}=[2^{2},1^{2n-4}]\). Then the result follows from the proposition. Since \(k\not\equiv\epsilon\) for the \(g_{sp}\) case, we have \(s\equiv\epsilon\) and \(\mathfrak{g}^{s}=\mathfrak{sp}_{2n}\).
_Remark 4.4_.: The \(h\) case in the table, already done in [10], corresponds to the situation where \(s\not\equiv\epsilon\) and \(m(s)\) is even, as well as \(\mathcal{O}_{\lambda}\) being special (so the condition on \(l\) holds). Of course, the slice is still of type \(h\) even if the degeneration is not a minimal special one.
### Type \(f_{sp}^{1}\) and \(f_{sp}^{2}\)
These cases occur when \((\lambda,\mu)\) is \(\epsilon\)-row equivalent to \((\lambda^{(i)},[a^{N}])\) where \(N\) is odd and \(a\not\equiv\epsilon\) and
\[\lambda^{(1)} =[a{+}2,a^{N-3},(a{-}1)^{2}]\quad\text{ (type $f_{sp}^{1}$)}\] \[\lambda^{(2)} =[(a{+}1)^{2},a^{N-3},a{-}2]\quad\text{ (type $f_{sp}^{2}$)}\]
with \(a\geq 1\) when \(i=1\) and \(a\geq 2\) when \(i=2\). By Proposition 2.3, it is enough to assume that \(\mu=[a^{N}]\) and \(\lambda\) equals \(\lambda^{(1)}\) or \(\lambda^{(2)}\). Since we will need it for the next section, we consider the more general case where \(N\) is any integer with \(N\geq 3\).
**Proposition 4.5**.: _Let \(\mathfrak{g}=\mathfrak{so}_{aN}\) if \(a\) is odd and \(\mathfrak{sp}_{aN}\) if \(a\) is even. Let \(e\in\mathcal{O}_{\mu}\). For \(i\in\{1,2\}\), let \(\mathcal{O}=\mathcal{O}_{\lambda^{(i)}}\). Then there is an isomorphism_
\[\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}}_{[3,1^{N-3}]}\]
_where the orbit closure is in \(\mathfrak{so}_{N}\)._
Proof.: If \(a=1\), the \(\lambda^{(1)}\) case is clear since there is equality between the slice and the orbit closure, so assume \(a\geq 2\). The situation is very similar to §11.2 in [11]. Let \(I_{N}\) be the \(N\times N\) identity matrix. Let us define the form on \(V\) explicitly to be the one defined by the \(a\times a\) block anti-diagonal matrix \(J\) with
\[J=\begin{pmatrix}0&0&\dots&0&0&I_{N}\\ 0&0&\dots&0&-I_{N}&0\\ 0&0&\dots&I_{N}&0&0\\ \dots&&&&\\ (-1)^{a-1}I_{N}&0&\dots&0&0&0\end{pmatrix}.\]
The bilinear form defined by \(J\) is nondegenerate and is symmetric if \(a\) is odd and symplectic if \(a\) is even. Since \(a\not\equiv\epsilon\), this is the correct form for defining \(\mathfrak{g}=\mathfrak{g}(V_{\epsilon})\).
The \(a\times a\)-block-matrices \(e\) and \(f\) given by
\[e=\begin{pmatrix}0&0&\dots&0&0\\ c_{1}I_{N}&0&\dots&0&0\\ 0&c_{2}I_{N}&\dots&0&0\\ &&\dots&&\\ 0&0&\dots&c_{a-1}I_{N}&0\end{pmatrix},\ \ f=\begin{pmatrix}0&0&\dots&0&0\\ I_{N}&0&\dots&0&0\\ 0&I_{N}&\dots&0&0\\ &&\dots&&\\ 0&0&\dots&I_{N}&0\end{pmatrix}^{T} \tag{12}\]

with \(c_{j}=j(a-j)\) lie in \(\mathfrak{g}\) and \(e\) and \(f\) are both nilpotent with partition \(\mu\). They complete to an \(\mathfrak{sl}_{2}\)-triple as in [11, §11.2]. The centralizer \(\mathfrak{g}^{f}\) is the set of block upper triangular
matrices of the form
\[X=\begin{pmatrix}Y_{1}&Y_{2}&Y_{3}&\dots&Y_{a-1}&Y_{a}\\ 0&Y_{1}&Y_{2}&\dots&Y_{a-2}&Y_{a-1}\\ 0&0&Y_{1}&\dots&Y_{a-3}&Y_{a-2}\\ &&&\dots&&\\ 0&0&0&\dots&Y_{1}&Y_{2}\\ 0&0&0&\dots&0&Y_{1}\end{pmatrix}.\]
Then \(X\) lies in \(\mathfrak{g}\) if and only if \(Y_{i}=(-1)^{i}Y_{i}^{T}\). Let \(\Sigma_{N}\) denote the set of \(N\times N\) symmetric matrices and let

\[\phi:\mathfrak{so}_{N}\times\Sigma_{N}\times\mathfrak{so}_{N}\times\dots\rightarrow\mathfrak{g}^{f}\]

denote the map where \(\phi(Y_{1},Y_{2},\dots)\) is given by the matrix \(X\) above. The reductive centralizer \(\mathfrak{c}(\mathfrak{s})\simeq\mathfrak{so}_{N}\) is given by \(Y_{i}=0\) for \(i\geq 2\); similarly \(C(\mathfrak{s})\simeq\operatorname{O}(N)\). An element \(g\in C(\mathfrak{s})\) acts on \(e+X\in\mathcal{S}_{e}\) by sending \(Y_{i}\) to \(gY_{i}g^{T}\). The \(\mathbb{C}^{*}\)-action on \(\mathcal{S}_{e}\) is given by \(t.Y_{i}=t^{2i}Y_{i}\) for \(t\in\mathbb{C}^{*}\).
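As a numerical sanity check (our own, not part of the argument), one can build \(e\) and \(f\) for small \(a\) and \(N\) and confirm the \(\mathfrak{sl}_{2}\)-relations and that \(e\) has partition \([a^{N}]\).

```python
# Numerical check (our own) of the block sl_2-triple above for small a, N:
# h = [e,f] is diagonal, [h,e] = 2e, [h,f] = -2f, and e has Jordan type [a^N].
import numpy as np

a, N = 4, 3
I = np.eye(N)
e = np.zeros((a * N, a * N))
f = np.zeros((a * N, a * N))
for j in range(1, a):
    e[j * N:(j + 1) * N, (j - 1) * N:j * N] = j * (a - j) * I  # c_j = j(a - j)
    f[(j - 1) * N:j * N, j * N:(j + 1) * N] = I                # f as in (12)

h = e @ f - f @ e
assert np.allclose(h, np.diag(np.diag(h)))    # h is diagonal
assert np.allclose(h @ e - e @ h, 2 * e)      # sl_2 relations
assert np.allclose(h @ f - f @ h, -2 * f)

# Jordan type from ranks: #(blocks of size >= k) = rank(e^{k-1}) - rank(e^k);
# for the partition [a^N] this difference is N for k = 1, ..., a.
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(e, k)) for k in range(a + 1)]
assert [ranks[k - 1] - ranks[k] for k in range(1, a + 1)] == [N] * a
```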
Let \(e_{0}\in\mathfrak{c}(\mathfrak{s})\) be a nilpotent element with partition \([3,1^{N-3}]\). Pick an \(\mathfrak{sl}_{2}\)-triple \(\mathfrak{s}_{0}\) through \(e_{0}\) in \(\mathfrak{c}(\mathfrak{s})\) and assume that the semisimple element \(h_{0}\) is a diagonal matrix. By Lemma 4.1 or computing \(h+h_{0}\) directly, we see that \(e+e_{0}\) has partition \(\nu:=[a{+}2,a^{N-2},a{-}2]\) since \(a\geq 2\).
Let \(N=3\). By [10] we have \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\simeq\overline{\mathcal{O}}_{[3]}\) since for \(i{=}1\) the degeneration is type \(c\) and for \(i{=}2\) it is type \(d\) in Table 1. Set \(A=e_{0}\), then \(A^{2}\in\Sigma_{N}\). Then \(\phi(0,A^{2},0,\dots)\) is an eigenvector for both \(\mathrm{ad}(h)\) and \(\mathrm{ad}(h_{0})\), with eigenvalue \(-2\) and \(4\), respectively. Since the absolute values of the eigenvalues of \(\mathrm{ad}(h_{0})\) on \(\mathfrak{z}(e)\) are at most \(4\) and the eigenvalue \(4\) only occurs once in \(\Sigma_{N}\), there are no other exceptional pairs in the sense of [10, §4]. It follows that \(e+\phi(A,z_{i}A^{2},0,\dots,0)\in\mathcal{O}_{\lambda^{(i)}}\) for a unique \(z_{i}\in\mathbb{C}^{*}\). Since \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) has dimension two and is irreducible, this means it is exactly the set of elements \(e+\phi(A,z_{i}A^{2},0,\dots,0)\) where \(A\in\mathfrak{so}_{3}\) is nilpotent, giving the isomorphism to \(\overline{\mathcal{O}}_{[3]}\) explicitly.
Now consider the general \(N\) case. We can embed \(\mathfrak{so}_{3}\) into \(\mathfrak{c}(\mathfrak{s})\simeq\mathfrak{so}_{N}\) via the first \(3\) coordinates, and similarly for the rest of the centralizer of \(e\) in the \(N=3\) case. Clearly, for \(A\in\mathcal{O}_{[3]}\), the element \(\phi(A,0,\dots,0)\in\mathfrak{c}(\mathfrak{s})\) lies in \(\mathcal{O}_{[3,1^{N-3}]}\), but also \(e+\phi(A,z_{i}A^{2},0,\dots,0)\in\mathcal{S}_{e}\) lies in \(\mathcal{O}_{\lambda^{(i)}}\), by observing the action of this element on the standard basis of \(V\). It follows, using the action of \(C(\mathfrak{s})\), that \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) contains \(e+A+z_{i}A^{2}\) for \(A\in\overline{\mathcal{O}}_{[3,1^{N-3}]}\).
Next, \(\dim\mathcal{O}_{\lambda^{(1)}}=\dim\mathcal{O}_{\lambda^{(2)}}\) since both orbits are minimal degenerations from \(\mathcal{O}_{\nu}\) of type \(a\), hence they are codimension two in \(\overline{\mathcal{O}_{\nu}}\). The pair \((\lambda^{(1)},\mu)\) is equivalent to \(([3,1^{N-3}],[1^{N}])\) after canceling \(a{-}1\) columns, thus the codimension of \(\mathcal{O}_{\mu}\) in \(\overline{\mathcal{O}_{\lambda^{(i)}}}\) equals the dimension of \(\overline{\mathcal{O}}_{[3,1^{N-3}]}\) for both \(i=1\) and \(i=2\). The only minimal degeneration from \(\mathcal{O}_{\lambda^{(i)}}\) that contains \(\mathcal{O}_{\mu}\) is to the partition \([(a{+}1)^{2},a^{N-4},(a{-}1)^{2}]\), which is an \(A_{1}\) singularity for both \(i=1\) and \(i=2\). Hence, as in §4.1, \(\overline{\mathcal{O}_{\lambda^{(i)}}}\) is unibranch at \(e\). Thus \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\simeq\overline{\mathcal{O}}_{[3,1^{N-3}]}\).
### Type \(h_{sp}\)
As in the previous subsection, we are reduced by Proposition 2.3 to the case where \(\lambda=[a{+}2,a^{N-2},a{-}2]\) and \(\mu=[a^{N}]\). We have the same description for \(e\in\mathcal{O}_{\mu}\), \(\mathfrak{s}\), etc. as above. In the previous subsection, \(\mathcal{S}_{\mathcal{O}_{\lambda^{(i)}},e}\) was the closure of a \(C(\mathfrak{s})\)-orbit. We will first show that \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is the closure of a \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit for \(N\geq 2\), where the \(\mathbb{C}^{*}\)-action is as above.
The arguments from [10, §11.2.2] apply. First, we use them to show that for \(M\in\mathcal{S}_{\mathcal{O}_{\lambda},e}\), the matrices \(Y_{3},Y_{4},\dots\) are equal to sums of products of \(Y_{1}\) and \(Y_{2}\). This follows since \(\operatorname{rank}(M^{i})\leq N(a-i)\) for \(i=1,\dots,a-2\) as in loc. cit. However, \(\operatorname{rank}(M^{a-1})\leq N+1\) unlike in loc. cit., and this implies that
\[\operatorname{rank}(Y_{2}-dY_{1}^{2})\leq 1 \tag{13}\]
for some \(d\in\mathbb{C}^{*}\). The condition \(M^{a+2}=0\) yields the equation, in the block lower left corner,
\[d_{1}Y_{3}+d_{2}(Y_{1}Y_{2}+Y_{2}Y_{1})+d_{3}Y_{1}^{3}=0 \tag{14}\]
where \(d_{1}=\frac{(a+2)!(a-1)!}{6}\) and \(d_{3}=\frac{(a-1)(a-2)}{5}\cdot d_{1}\). It follows that the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-equivariant map from \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) to the space \(e+X\) where \(Y_{i}=0\) for \(i\geq 3\) is an isomorphism.
Now for \(N\geq 3\), there are actually two equations involving a linear term for \(Y_{3}\). The one from the lower left corner of \(M^{a+2}=0\) and the one from the \(\operatorname{rank}(M^{a-2})=N\) condition:
\[c_{1}Y_{3}+c_{2}(Y_{1}Y_{2}+Y_{2}Y_{1})+c_{3}Y_{1}^{3}=0. \tag{15}\]
where \(c_{1}=\frac{a!(a-1)!^{2}(a-2)!^{2}(a-3)!}{24}\) and \(c_{3}=\frac{(a+1)(a+2)}{5}\cdot c_{1}\). The equations (14) and (15) are not multiples of each other since \(\frac{d_{3}}{d_{1}}\neq\frac{c_{3}}{c_{1}}\) for \(a>0\). It follows, by canceling the \(Y_{3}\) term, that
\[tY_{1}^{3}=Y_{1}Y_{2}+Y_{2}Y_{1} \tag{16}\]
for some nonzero \(t\).
Consider the \(N=2\) case. Conjugating in \(GL_{2}\) so that \(C(\mathfrak{s})\) becomes the diagonal torus in \(SL_{2}\), we can represent \(Y_{1}=\left(\begin{smallmatrix}x&0\\ 0&-x\end{smallmatrix}\right)\) and \(Y_{2}=\left(\begin{smallmatrix}y&z\\ w&y\end{smallmatrix}\right)\) for \(x,y,z,w\in\mathbb{C}\). Then (16) implies that either \(x\) is identically \(0\), i.e., \(Y_{1}\equiv 0\), or \(y=\frac{t}{2}x^{2}\), i.e., \(\operatorname{tr}(Y_{2}-\tfrac{t}{2}Y_{1}^{2})=0\). By (13), \(\det(Y_{2}-dY_{1}^{2})=0\). If \(Y_{1}\equiv 0\), then \(\det(Y_{2})=y^{2}-zw=0\) and \(x=0\) are the conditions defining the slice. If \(t=d\), the condition is \(zw=0\), with \(x\) arbitrary. Since \(([a+2,a-2],[a^{2}])\) is a minimal degeneration of type \(b\), the slice is isomorphic to the \(A_{3}\)-simple surface singularity, so neither of these cases holds. Instead, \(d\neq t\) and the defining equations are \(y=\frac{t}{2}x^{2}\) and \(t^{2}x^{4}=4wz\), which is indeed an \(A_{3}\)-singularity. Moreover, the points with \(x\neq 0\) form a single \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit, each of which has finite stabilizer, so this orbit is dense in the slice. Let \(v=e+\phi(v_{1},v_{2},\dots)\) be such a point in the slice.
Next, as in §4.5, we bootstrap up to the general case by embedding the slice for the \(N=2\) case into the general slice by using the first two coordinates in each block. Since the coefficients in the equations given above continue to hold for \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) in \(\mathcal{S}_{e}\), independent of \(N\), we do indeed have an \(\operatorname{SO}_{2}\times\mathbb{C}^{*}\)-equivariant embedding of the \(N=2\) case. Note that \(v_{1}\in\mathfrak{so}_{N}\) is a multiple of a \(C(\mathfrak{s})\)-conjugate of \(h_{0}\) from §4.5. Its stabilizer in \(C(\mathfrak{s})\) is \(\operatorname{SO}_{2}\times\operatorname{SO}_{N-2}\subset C(\mathfrak{s})\). From the \(N=2\) case it follows that the connected stabilizer in \(C(\mathfrak{s})\times\mathbb{C}^{*}\) of \(v\) is \(\operatorname{SO}_{N-2}\). Hence, the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \(v\) has dimension \((N(N-1)/2+1)-(N-2)(N-3)/2=2N-2\). This is also the dimension of \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\). Since \(\overline{\mathcal{O}}_{\lambda}\) is unibranch at \(e\), we conclude
**Proposition 4.6**.: _The slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is isomorphic to the closure of a \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\in\mathfrak{so}_{N}\times\Sigma_{N}\) where \(A=h_{0}\in\mathfrak{so}_{N}\), and \(B\in\Sigma_{N}\) satisfies \(\operatorname{rank}(B-dA^{2})=1\) and \(\operatorname{tr}(B-tA^{2})=0\) for some nonzero \(d,t\) with \(d\neq t\)._
Our goal now is to identify the subvariety of \(\mathfrak{so}_{N}\times\Sigma_{N}\) in the proposition with the quotient of the closure of the orbit \(\mathcal{O}_{[3,1^{N-2}]}\) in \(\mathfrak{so}_{N+1}\). First, by employing the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-equivariant isomorphism of \(\mathfrak{so}_{N}\times\Sigma_{N}\) to itself sending \((A,B)\to(A,B-dA^{2})\), it follows that the slice is isomorphic to the closure of the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\in\mathfrak{so}_{N}\times\Sigma_{N}\) where \(B\) is now a matrix of rank \(1\) and \(\operatorname{tr}(B-tA^{2})=0\) for some nonzero \(t\).
Next, we recall material from [10, §2]. Let \(X\in\mathfrak{so}_{N+1}\) be written as
\[\begin{pmatrix}M&u\\ -u^{T}&0\end{pmatrix}\]
where \(M\in\mathfrak{so}_{N}\) and \(u\in\mathbb{C}^{N}\) is a column vector. Let \(\theta\) be the involution of \(\mathfrak{so}_{N+1}\) given by conjugation by \(\operatorname{diag}(1,1,\dots,1,-1)\in\operatorname{O}_{N+1}\). Identifying \(\mathfrak{so}_{N+1}\) with the set of pairs \((M,u)\), we see that \(\theta\) maps \((M,u)\mapsto(M,-u)\). Then the map \(\varphi:\mathfrak{so}_{N+1}\to\mathfrak{so}_{N}\times\Sigma_{N}\) sending \(X\) to
\((M,uu^{T})\) induces an \(\operatorname{O}_{N}\times\mathbb{C}^{*}\)-equivariant isomorphism of \(\mathfrak{so}_{N+1}/\langle\theta\rangle\) with \(\mathfrak{so}_{N}\times\Xi\) where \(\Xi\) is the cone of elements of \(\Sigma_{N}\) of rank at most \(1\).
We can now state
**Proposition 4.7**.: _The slice \(\mathcal{S}_{\mathcal{O}_{\lambda},e}\) is isomorphic to_
\[\overline{\mathcal{O}}_{[3,1^{N-2}]}/\langle\theta\rangle.\]
Proof.: By [11, Corollary 2.2], if \(Y=\overline{\mathcal{O}}_{[3,1^{N-2}]}\), then \(Y/\langle\theta\rangle\simeq\varphi(Y)\). As before we can use the \(N=2\) case. In that case, \(X=\begin{pmatrix}0&a&b\\ -a&0&c\\ -b&-c&0\end{pmatrix}\) and
\[\varphi(X)=\left(\left(\begin{smallmatrix}0&a\\ -a&0\end{smallmatrix}\right),\left(\begin{smallmatrix}b^{2}&bc\\ bc&c^{2}\end{smallmatrix}\right)\right).\]
The condition for \(X\) to be nilpotent is \(a^{2}+b^{2}+c^{2}=0\) and so the image is exactly the matrices \((A,B)\) where \(\det(B)=0\) and \(\operatorname{tr}(2B-A^{2})=0\). In the general case, we embed \(\mathfrak{so}_{3}\) into the lower right corner. It follows from the discussion above and the proof of Proposition 4.6 that \(\varphi(Y)\) is isomorphic to the closure of the \(C(\mathfrak{s})\times\mathbb{C}^{*}\)-orbit through \((A,B)\) with \(A\neq 0\), and hence isomorphic to the slice by Proposition 4.6.
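The \(N=2\) computation can be confirmed symbolically. The sketch below is our own (using sympy); it checks that \(X\) is nilpotent exactly when \(a^{2}+b^{2}+c^{2}=0\), and that the image \((A,B)=\varphi(X)\) always satisfies \(\det(B)=0\) with \(\operatorname{tr}(2B-A^{2})=2(a^{2}+b^{2}+c^{2})\).

```python
# Symbolic check (our own) of the N = 2 case in the proof of Proposition 4.7.
import sympy as sp

a, b, c = sp.symbols('a b c')
X = sp.Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])
A = sp.Matrix([[0, a], [-a, 0]])   # the M block of X
u = sp.Matrix([b, c])
B = u * u.T                        # B = u u^T has rank <= 1

# Cayley-Hamilton: X^3 = -(a^2 + b^2 + c^2) X, so X is nilpotent
# exactly when a^2 + b^2 + c^2 = 0.
assert (X**3 + (a**2 + b**2 + c**2) * X).expand() == sp.zeros(3, 3)
# det(B) = 0 always, and tr(2B - A^2) = 2(a^2 + b^2 + c^2).
assert sp.expand(B.det()) == 0
assert sp.expand((2 * B - A * A).trace() - 2 * (a**2 + b**2 + c**2)) == 0
```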
Let the Klein four-group \(V_{4}\) act on \(\mathfrak{so}_{N+2}\) via the pair of commuting involutions \(\theta_{1},\theta_{2}\) given by conjugation by \(\mathrm{diag}(1,\ldots,1,-1)\) and \(\mathrm{diag}(1,\ldots,1,-1,1)\), respectively. Let \(\overline{\mathcal{O}}_{[2^{2},1^{N-2}]}\) be the minimal orbit in \(\mathfrak{so}_{N+2}\). Then by [11, Corollary 2.5], for example, it follows that
\[\overline{\mathcal{O}}_{[3,1^{N-2}]}\simeq\overline{\mathcal{O}}_{[2^{2},1^{ N-2}]}/\langle\theta_{1}\rangle.\]
**Corollary 4.8**.: _We have the isomorphism_
\[\mathcal{S}_{\mathcal{O}_{\lambda},e}\simeq\overline{\mathcal{O}}_{[2^{2},1^{ N-2}]}/\langle\theta_{1},\theta_{2}\rangle.\]
_and hence the minimal special degeneration \(h_{sp}\) in Table 2 is \(d_{n+1}/V_{4}\)._
Proof.: The \(h_{sp}\) degeneration is covered by the case when \(N\) is even with \(n=N/2\).
_Remark 4.9_.: a) The special case \(n=3\) was already observed in [10]. In that case we have \(\overline{\mathcal{O}_{\min}(\mathfrak{so}_{5})}\cong\mathbb{C}^{4}/\{\pm 1\}\) and so we obtain isomorphisms of \(\mathcal{S}_{\mathcal{O},e}\cap\overline{\mathcal{O}^{\prime}}\) with (i) \(\mathbb{C}^{4}/W(B_{2})\); (ii) \(\mathcal{N}(\mathfrak{so}_{4})/\mathfrak{S}_{2}=(\mathcal{N}(\mathfrak{sl}_{2} )\times\mathcal{N}(\mathfrak{sl}_{2}))/\langle\theta\rangle\) where \(\theta\) swaps the two copies of \(\mathfrak{sl}_{2}\).
b) The orbits which intersect non-trivially with \(\mathcal{S}_{\mathcal{O},e}\) are the nilpotent orbits lying between \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) in the partial order. If \(N\geq 4\) then there are five of these, as indicated by a diagram of the corresponding fixed-point subalgebras (omitted here), in which \(\mathfrak{so}_{N+1}^{(1)}\) and \(\mathfrak{so}_{N+1}^{(2)}\) denote the two fixed-point subalgebras for the two involutions \(\theta_{1}\) and \(\theta_{2}\) generating \(\mathfrak{S}_{2}\times\mathfrak{S}_{2}\). For \(N=3\) there is no orbit with partition \([3^{2},2^{N-4},1^{2}]\); equivalently, \(\mathfrak{so}_{3}\) contains no elements of \(\mathcal{O}_{\min}(\mathfrak{so}_{5})\).
c) For \(N\) odd, the singularity \(\overline{\mathcal{O}}_{[2^{2},1^{N-2}]}/\langle\theta_{1},\theta_{2}\rangle\) arises as a slice, but never for a minimal special degeneration. This is because the \(f_{sp}\) singularities arise in this case as the minimal special degenerations.
### Minimal special degenerations in the exceptional Lie algebras
There are three unexpected singularities that arise in the exceptional Lie algebras: (i) \(\mu\) (with normalization \(A_{3}\)); (ii) \(a_{2}/\mathfrak{S}_{2}\); (iii) \(d_{4}/\mathfrak{S}_{4}\), which are dealt with in [10], [10]. They appear once, once, and twice, respectively. We will show in this subsection that all remaining singularities associated to minimal special degenerations in exceptional types are unions of simple surface singularities or minimal special singularities.
The case of \(G_{2}\) is clear. Most of the minimal special degenerations are minimal degenerations and hence were dealt with in [10] or [10]. There are three (resp. three, eight, ten) minimal special degenerations which are not minimal degenerations in type \(F_{4}\) (resp. \(E_{6}\), \(E_{7}\), \(E_{8}\)). These cases, with two exceptions, are covered by the following proposition.
**Proposition 4.10**.: _Let \(\mathcal{O}^{\prime}\) be a special nilpotent orbit in an exceptional Lie algebra such that the reductive centralizer \(\mathfrak{c}(\mathfrak{s})\) contains a non-simply-laced simple component \(\mathfrak{c}_{0}=\mathrm{Lie}(C_{0})\)._
_(a) There is a unique special orbit \(\mathcal{O}>\mathcal{O}^{\prime}\) such that \(\mathrm{codim}_{\overline{\mathcal{O}}}\,\mathcal{O}^{\prime}\) is equal to the dimension of the minimal special nilpotent \(C_{0}\)-orbit \(\mathcal{O}_{0}\) in \(\mathfrak{c}_{0}\)._
_(b) If \(\mathcal{O}^{\prime}=\mathcal{O}_{2A_{2}}\) in type \(E_{8}\) then there are two such simple components \(\mathfrak{c}_{0}\), both of type \(G_{2}\), and \(\mathcal{S}_{\mathcal{O},e}\) is a union of two copies of \(\overline{\mathcal{O}_{0}}\). The two copies are interchanged by \(C(\mathfrak{s})\). Other than this case, there is exactly one such \(\mathfrak{c}_{0}\) and \(\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}_{0}}\)._
Proof.: Statement (a) is a straightforward check using the tables of nilpotent orbits and Hasse diagrams in [11].
The singularities in (b) can be classified using the arguments in [10, §4.3]. Indeed, several of these are discussed there; see [10, §11, Table 13]. Let \(e_{0}\in\mathcal{O}_{0}\). We claim that, with the sole exception of \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in type \(E_{7}\), \(e+e_{0}\in\mathcal{O}\). By unibranchness and dimensions, it follows that \(\mathcal{S}_{\mathcal{O},e}=\overline{f+C_{0}\cdot e_{0}}\cong\overline{\mathcal{O}_{0}}\). By [10, Prop. 4.8], it suffices to verify the following condition: let \(\langle h_{0},e_{0},f_{0}\rangle=\mathfrak{s}_{0}\subset\mathfrak{c}_{0}\) be an \(\mathfrak{sl}_{2}\)-subalgebra; then all irreducible \(\mathfrak{s}_{0}\)-summands in \(\mathfrak{g}^{e}(i)\) have dimension \(\leq(i+1)\). This can be checked by inspecting the tables in [11]. If \(\mathfrak{c}_{0}\) is of type \(B\), then all non-trivial simple summands for the action on the centralizer of \(e\) are natural modules or spin modules; a short root element acts with Jordan blocks of size \(2\) on the spin module and of size \(\leq 3\) on the natural module, so we only need to check that no natural modules occur in \(\mathfrak{g}^{f}(1)\). When \(\mathfrak{c}_{0}\) is of type \(G_{2}\) (excluding \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in type \(E_{7}\)), all non-trivial summands are isomorphic to the minimal faithful representation for \(\mathfrak{c}_{0}\); \(e_{0}\) acts on the minimal faithful representation with Jordan blocks of size \(\leq 3\), so we only need to check that the minimal representation doesn't appear in \(\mathfrak{g}^{e}(1)\). Finally, \(\mathfrak{c}_{0}\) of type \(C\) occurs once, when \(\mathcal{O}^{\prime}=\mathcal{O}_{D_{4}}\) in type \(E_{7}\) and \(\mathfrak{c}_{0}=\mathfrak{sp}_{6}\); here one has to check that \(e_{0}\) has no Jordan blocks of size \(>7\) on \(V(\varpi_{2})\), hence on the alternating square of the natural module, which is straightforward.
This only leaves the case \(\mathcal{O}^{\prime}=\mathcal{O}_{A_{2}+3A_{1}}\) in \(E_{7}\). Here \(\mathfrak{c}_{0}=\mathfrak{g}^{e}\cap\mathfrak{g}^{h}\) is simple of type \(G_{2}\) and the positive graded parts of \(\mathfrak{g}^{f}\) are:
\[\mathfrak{g}^{f}(2)=V(2\varpi_{1})\oplus\mathbb{C}e,\quad\mathfrak{g}^{f}(4)= V_{\mathrm{min}},\]
where \(V_{\mathrm{min}}=V(\varpi_{1})\) is the minimal faithful representation for \(\mathfrak{c}_{0}\). Note that the action of \(\mathfrak{c}_{0}\) on \(V_{\mathrm{min}}\) induces an embedding in \(\mathfrak{so}_{7}\), and \(\mathfrak{sl}_{7}\) decomposes over \(\mathfrak{c}_{0}\subset\mathfrak{so}_{7}\) as \(\mathfrak{so}_{7}\oplus V(2\varpi_{1})\). (In the notation of §4.5, \(V(2\varpi_{1})=\Sigma_{7}\).) Furthermore, the matrix square operation on \(\mathfrak{gl}_{7}\) determines a quadratic map \(\mathfrak{so}_{7}\to V(2\varpi_{1})\) which restricts to \(\mathfrak{c}_{0}\) to give a \(C_{0}\)-equivariant map \(\psi:\mathfrak{c}_{0}\to V(2\varpi_{1})\subset\mathfrak{g}^{e}(2)\). In particular, if \(x=e_{\beta_{2}}+e_{3\beta_{1}+\beta_{2}}\) then \(\psi(x)\) is non-zero of weight \(3\beta_{1}+2\beta_{2}\). We checked using GAP that (with this notation) there exists an element of \(\mathcal{O}=\mathcal{O}_{D_{4}(a_{1})}\) of the form \(e+x+\psi(x)\), where \(x\) is in the subregular nilpotent orbit in \(\mathfrak{c}_{0}\).
It follows that \(\mathcal{S}_{\mathcal{O},e}=e+\overline{C_{0}\cdot(x+\psi(x))}\), hence is isomorphic to the closure of the \(C_{0}\)-orbit through \(x\), which completes our proof.
**Proposition 4.11**.: _All minimal special degenerations in exceptional Lie algebras are either: (1) minimal degenerations, (2) covered by the above proposition, or (3) isomorphic to \(d_{4}/\mathfrak{S}_{4}\), which occurs for the two cases of \(\mathcal{O}_{F_{4}(a_{3})}>\mathcal{O}_{A_{2}}\) in \(F_{4}\) and \(\mathcal{O}_{E_{8}(a_{7})}>\mathcal{O}_{D_{4}+A_{2}}\) in \(E_{8}\)._
Proof.: We checked that all the minimal special degenerations, which are not minimal degenerations, are covered by the proposition, except for the two cases listed. The exceptional special degeneration \(\mathcal{O}_{F_{4}(a_{3})}>\mathcal{O}_{A_{2}}\) in type \(F_{4}\) was dealt with in [11, Theorem 4.11]. Hence it remains to show that the Slodowy slice singularity from \(D_{4}+A_{2}\) to \(E_{8}(a_{7})\) in \(E_{8}\) is also isomorphic to \(d_{4}/\mathfrak{S}_{4}\). To do this, we first consider \(\mathcal{O}^{\prime}=\mathcal{O}_{D_{4}}\). Let \(\mathcal{S}_{D_{4}}\) be the Slodowy slice at \(f\in\mathcal{O}^{\prime}\). Repeating the calculation in the proof of Proposition 4.10, we see that the condition of [11, Prop. 4.8] holds for an element of \(\mathfrak{c}\) of type \(F_{4}(a_{3})\). It follows that \(\mathcal{S}_{D_{4}}\cap\overline{\mathcal{O}_{E_{8}(a_{7})}}=f+\overline{C_{ 0}\cdot e_{2}}\) where \(e_{2}\) belongs to the \(F_{4}(a_{3})\) orbit in \(\mathfrak{c}_{0}\). By the same calculation (or by direct observation), \(\mathcal{S}_{D_{4}}\cap\overline{\mathcal{O}_{D_{4}+A_{2}}}=f+\overline{C_{0} \cdot e_{1}}\) where \(e_{1}\) is in the \(A_{2}\) orbit in \(\mathfrak{c}_{0}\). Now we use the following fact (which follows from equality of dimensions): if \(\{e_{1},h_{1},f_{1}\}\) is an \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{c}_{0}\) such that \(\dim C_{0}\cdot e_{1}\) equals the codimension of \(\mathcal{O}^{\prime}\) in \(\overline{G\cdot(f+f_{1})}\), then the centralizer of \(e+e_{1}\) equals \(\mathfrak{g}^{e}\cap\mathfrak{g}^{e_{1}}\). Hence the Slodowy slice at \(f+f_{1}\) is contained in the Slodowy slice at \(f\). It follows that \(\mathcal{S}_{D_{4}+A_{2}}\cap\overline{\mathcal{O}_{E_{8}(a_{7})}}\) is isomorphic to the Slodowy slice singularity in \(F_{4}\) from \(\mathcal{O}_{A_{2}}\) to \(\mathcal{O}_{F_{4}(a_{3})}\), hence is isomorphic to \(d_{4}/\mathfrak{S}_{4}\).
The following is true in both the classical and exceptional types.
**Corollary 4.12**.: _Let \(\mathcal{O}^{\prime}=\mathcal{O}^{\prime}_{e}\) be special. The action of \(C(\mathfrak{s})\) on \(\mathfrak{c}(\mathfrak{s})\) induces an action of \(A(e)\) on the set of simple components of \(\mathfrak{c}(\mathfrak{s})\). Each \(A(e)\)-orbit of simple components \(\mathfrak{c}_{0}\) corresponds to a unique special nilpotent orbit \(\mathcal{O}\) in \(\mathfrak{g}\) such that \((\mathcal{O},\mathcal{O}^{\prime})\) is a minimal special degeneration. Moreover, \(\mathcal{S}_{\mathcal{O},e}\) contains a subvariety isomorphic to the minimal special nilpotent orbit closure in \(\mathfrak{c}_{0}\). All minimal special degenerations of codimension at least \(4\) arise in this way._
Proof.: We just showed this in the exceptional types when \(\mathfrak{c}_{0}\) is not simply-laced, but it also holds when \(\mathfrak{c}_{0}\) is simply-laced where it gives a minimal degeneration. It also holds in the cases of \(d_{4}/\mathfrak{S}_{4}\) and \(a_{2}/\mathfrak{S}_{2}\) from [11]. In the classical types, we showed that each simple factor of \(\mathfrak{c}(\mathfrak{s})\) leads to a unique minimal special degeneration. The \(A(e)\)-orbits on the simple factors of \(\mathfrak{c}(\mathfrak{s})\) are singletons except for the case where \(\mathfrak{c}(\mathfrak{s})\) contains a copy of \(\mathfrak{so}_{4}\). This corresponds to the case of \([2A_{1}]^{+}=d_{2}^{+}\).
## 5. \(A(e)\)-action on slices
In this section we compute the action of \(A(e)\) on the slice \(\mathcal{S}_{\mathcal{O},e}\) for both minimal degenerations and minimal special degenerations in the classical types, and determine when the action is outer. This was done in the exceptional groups in [11] for minimal degenerations. There is only a single case of a minimal special degeneration not covered by those results: the case of \(e\in\mathcal{O}_{2A_{2}}\) in \(E_{8}\) from Proposition 4.10, which we now denote as \([2g_{2}^{sp}]^{+}\).
### Union of simple surface singularities
Recall that \(C(\mathfrak{s})\) acts on \(\mathcal{S}_{\mathcal{O},e}\). In the case of a simple surface singularity, as discussed in the introduction, we use Slodowy's notion of action, which amounts to the action on the projective lines in the exceptional fiber. Even when \(\mathcal{S}_{\mathcal{O},e}\) is not irreducible, we want to describe how \(C(\mathfrak{s})\) permutes the projective lines in the fiber, something we did in the exceptional groups. Since \(C^{\circ}(\mathfrak{s})\) acts trivially, we get a permutation action of \(A(e)\simeq C(\mathfrak{s})/C^{\circ}(\mathfrak{s})\) on the \(\mathbb{P}^{1}\)'s. We call this the outer action of \(A(e)\) on the slice.
To compute the action for \(\dim(\mathcal{S}_{\mathcal{O},e})=2\), we use [11, Lemma 5.8]. We do not assume that the orbits are special, so the set-up is a minimal degeneration \((\mathcal{O}_{\lambda},\mathcal{O}_{\mu})\) in the classical groups where \(\dim(\mathcal{S}_{\mathcal{O},e})=2\) for \(e\in\mathcal{O}_{\mu}\), and where \(\lambda,\mu\) are the appropriate partitions indexing the nilpotent orbits. Let \(\mathfrak{n}_{P}\) denote the nilradical of the Lie algebra of a parabolic subgroup \(P\) of \(G\) such that \(\mathcal{O}_{\lambda}\) is Richardson for \(\mathfrak{n}_{P}\). Then we have the proper, surjective map \(\pi:G\times^{P}\ \mathfrak{n}_{P}\to\overline{\mathcal{O}}_{\lambda}\), which is generically finite. Below, we will always choose \(\mathfrak{n}_{P}\) so that \(\pi\) is birational.
Next, assume that the reductive centralizer for an element in \(\mathcal{O}_{\lambda}\) is semisimple. Let \(\mathcal{O}_{1},\mathcal{O}_{2},\ldots,\mathcal{O}_{t}\) be the maximal orbits in the complement of \(\mathcal{O}_{\lambda}\) in its closure. Assume that all \(\mathcal{O}_{i}\) are codimension two in \(\bar{\mathcal{O}}_{\lambda}\). Let \(e_{i}\in\mathcal{O}_{i}\). Let \(r_{i}\) equal the number of \(A(e_{i})\)-orbits on \(\pi^{-1}(e_{i})\). Then as in [11, Lemma 5.8], if \(G\) is connected, \(\sum_{i}r_{i}\) equals the rank of \(\mathfrak{g}\) minus the rank of the Levi subgroup of \(P\). The quantities \(r_{i}\) will be enough to determine the outer action.
Remarkably, in types \(B\) and \(C\), the actions are as large as possible, as they were in the exceptional types (at least given the size of \(A(e)\)).
**Proposition 5.1**.: _In the classical groups \(B,C,D\) (working in the full orthogonal group for \(D\)),_
1. _If_ \(\mathcal{S}_{\mathcal{O},e}\) _is a simple surface singularity of type_ \(D_{k+1}\) _or_ \(A_{2k-1}\)_, then the_ \(A(e)\)_-action upgrades these singularities to_ \(C_{k}\) _and_ \(B_{k}\)_, respectively._
2. _If_ \(\mathcal{S}_{\mathcal{O},e}\) _is a union of two branches of type_ \(A_{2k-1}\)_, the_ \(A(e)\)_-action is_ \([2B_{k}]^{+}\) _as described in §1.3._
The proof will occupy the remainder of this section. For the moment let \(G=\mathrm{O}(V)\) or \(\mathrm{Sp}(V)\), so that, as noted in §4, a reductive subgroup of the centralizer \(G^{e}\) of \(e\) in \(G\) is \(C(\mathfrak{s})\), which is a product of orthogonal and symplectic groups.
Then the component group \(A(e):=G^{e}/(G^{e})^{\circ}\) of \(e\) with partition \(\mu\) is generated by the corners of the Young diagram corresponding to parts \(s\) with \(s\not\equiv\epsilon\). Each such part \(s\) determines a copy of an orthogonal group in \(C(\mathfrak{s})\) and we denote by \(x_{s}\) an element of determinant \(-1\) in each orthogonal group. Then \(A(e)\) is elementary abelian, \(\mathbf{Z}_{2}^{r}\), where \(r\) is the number of parts \(s\) with \(s\not\equiv\epsilon\).
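For illustration, take \(\mu=[8,8,5,3]\) in an orthogonal algebra, where \(s\not\equiv\epsilon\) picks out the odd parts: the parts \(s=5\) and \(s=3\) qualify, so \(r=2\) and \(A(e)=\langle x_{5},x_{3}\rangle\simeq\mathbf{Z}_{2}^{2}\). (This partition reappears in §5.6, where passing to the special orthogonal group changes the picture.)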
### Type \(b\) degeneration
This is the case of a simple surface singularity of type \(D_{k+1}\) and it arises whenever \((\lambda,\mu)\) is locally \((\lambda^{\prime},\mu^{\prime}):=([a+2k,a],[a+2k-2,a+2])\), by [12]. Here \(k\geq 2\). This is a valid pair of partitions when \(a\) is even if \(\mathfrak{g}\) is of type \(C\) and odd if \(\mathfrak{g}\) is of types \(B\) or \(D\). By Proposition 2.3, we can replace \((\lambda,\mu)\) by \((\lambda^{\prime},\mu^{\prime})\). We note that the centralizer of \(e_{1}\) in \(G(V_{1})\) is a subgroup of the centralizer of \(e\) in \(G\). This gives an embedding of the component group of \(e_{1}\) in \(G(V_{1})\), which is the Klein 4-group \(V_{4}\), into \(A(e)\), given by sending \(A(e_{1})\) to the subgroup of \(A(e)\) generated by \(x_{a\!+\!2k\!-\!2}\) and \(x_{a\!+\!2}\). The other parts contributing to \(A(e)\) act trivially on \(\mathfrak{g}(V_{1})\) and hence trivially on the slice.
#### 5.2.1. \(G\) is of type \(C\), \(a\) even
The weighted Dynkin diagram for \(\mathcal{O}_{\lambda}\) is
\[\overbrace{2\dots 2}^{k}\overbrace{0202\dots 02}^{a/2}\]
where the final node corresponds to the long simple root. Taking the associated parabolic subgroup \(P\), the map \(\pi\) above is birational.
If \(a=0\), we are in type \(C_{k}\) and \(\mathcal{O}_{\lambda}\) is regular. There is a unique minimal degeneration to \(\mathcal{O}_{\mu}\), the subregular orbit. Hence, using [11, Lemma 5.8], there are exactly \(k\) orbits for \(A(e)\) on the \(\mathbb{P}^{1}\)'s in the fiber, which implies the action on \(D_{k+1}\) must be \(C_{k}\). Indeed, the sole
\(A(e)\)-orbit of size two is coming from the orbital variety corresponding to the long root. (We could use knowledge of the Springer fiber in this case too).
Next if \(a>0\), which means \(a\geq 2\) since \(a\) is even, there is the degeneration of \(\lambda\) to \(\mu\) but also to \(\mu^{\prime}=[a{+}2k,a{-}2,2]\). The latter minimal degeneration is equivalent to \(([a],[a{-}2,2])\), which is a simple surface singularity of type \(D_{\frac{a}{2}+1}\) with action of \(A(e_{\mu^{\prime}})\) having \(\frac{a}{2}\) orbits, by induction. Since the total number of component group orbits on the fiber is \(k+\frac{a}{2}\), that leaves \(k\) orbits corresponding to the degeneration to \(e=e_{\mu}\). This forces the action on \(D_{k+1}\) to be non-trivial, and it must be \(C_{k}\), as desired. Indeed, we can see this explicitly by using instead the parabolic \(P\) for the diagram
\[\overbrace{0202\ldots 02}^{a/2}\overbrace{2\ldots 2}^{k},\]
which is also birational to \(\mathcal{O}_{\lambda}\). Then the orbital varieties for \(\mathcal{O}_{\mu}\) correspond to the last \(k\) nodes labeled \(2\). The last node gives the \(A(e)\)-orbit with two elements.
Finally, the element \(x_{a+2k}x_{a}\) acts trivially on the fibers, since it belongs to the center of \(G\). So both \(x_{a+2k}\) and \(x_{a}\) will yield the outer action on the slice.
#### 5.2.2. \(G\) is of type \(D\), \(a\) odd
The weighted Dynkin diagram for \(\mathcal{O}_{\lambda}\) is
\[\overbrace{2\ldots 2}^{k-1}\overbrace{0202\ldots 02}^{(a-1)/2}2\]
where the two final nodes correspond to orthogonal simple roots and the first \(k-1\) nodes form a subsystem of type \(A_{k-1}\). Taking the associated parabolic subgroup \(P\), the map \(\pi\) above is birational. This is similar to the type \(C\) case. If we work in the full orthogonal group then \(A(e)\) permutes the two \(\mathbb{P}^{1}\)'s corresponding to the tails of the Dynkin diagram. Finally, the element \(x_{a+2k}x_{a}\) acts trivially on the fiber, since it belongs to the center of \(G\). So both \(x_{a+2k}\) and \(x_{a}\) will yield the outer action on the slice.
### Type \(c\) singularity
This is a simple surface singularity of type \(A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k{+}1,a,a],[a{+}2k{-}1,a{+}1,a{+}1]).\]
Here, \(a\) is even for types \(B,D\) and odd for type \(C\). As in §5.2, using Proposition 2.3, we can first reduce to the case of \(([a{+}2k{+}1,a,a],[a{+}2k{-}1,a{+}1,a{+}1])\) where \(G\) is of type \(B\) for \(a\) even and of type \(C\) for \(a\) odd.
The \(A_{2k-1}\) simple surface singularity arises from the diagonal cyclic group \(\Gamma\) of order \(2k\) in \(\operatorname{SL}_{2}(\mathbb{C})\). The centralizer of \(\Gamma\) in \(\operatorname{SL}_{2}(\mathbb{C})\) is the diagonal one-dimensional torus, leading to an invariant of degree two for the action of \(\Gamma\) on \(\mathbb{C}^{2}\). Since the isomorphism to the slice is \(\mathbb{C}^{*}\)-equivariant, we see that the slice, upon projection to \(\mathfrak{c}(\mathfrak{s})\), must be isomorphic to the Lie algebra of the torus for \(\operatorname{O}(2)\) corresponding to the part \(a{+}1\) in \(\mu\). Since the outer automorphism on \(\mathbb{C}^{2}/\Gamma\) acts non-trivially on the diagonal torus, we see that \(x_{a+1}\) gives rise to the action, while \(x_{a{+}2k{-}1}\) acts trivially.
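For example, when \(a=0\) (so \(\mathfrak{g}\) is of type \(B\) or \(D\)) the pair becomes \(([2k{+}1],[2k{-}1,1,1])\), the regular and subregular orbits in \(\mathfrak{so}_{2k+1}\); here \(x_{1}\) realizes the outer action on the \(A_{2k-1}\) singularity, recovering Slodowy's description of the subregular singularity in type \(B_{k}\).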
### Type \(d\) degeneration
This is again a simple surface singularity of type \(A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k{+}1,a{+}2k{+}1,a],[a{+}2k,a{+}2k,a{+}2]).\]
This is a valid pair of partitions when \(a\) is even in type \(C\) and odd in types \(B\) or \(D\).
As in the previous case, it is enough to work it out for \(\lambda=[a{+}2k{+}1,a{+}2k{+}1,a]\) and \(\mu=[a{+}2k,a{+}2k,a{+}2]\), when \(G\) is of type \(C\) for \(a\) even and of type \(B\) when \(a\) is odd. As
before, we can detect the action by looking at the action of \(\mathfrak{c}(\mathfrak{s})\). Thus \(x_{a+2k}\) acts by outer action and \(x_{a+2}\) acts trivially.
### Type \(e\) degeneration
This is a union of simple surface singularities \(A_{2k-1}\cup A_{2k-1}\) and it arises whenever \((\lambda,\mu)\) is equivalent to
\[([a{+}2k,a{+}2k,a,a],[a{+}2k{-}1,a{+}2k{-}1,a{+}1,a{+}1]).\]
Here, \(a\) is odd in type \(C\) and even in types \(B\) or \(D\). As before, we are reduced to the case of \(\lambda=[a{+}2k,a{+}2k,a,a]\) and \(\mu=[a{+}2k{-}1,a{+}2k{-}1,a{+}1,a{+}1]\) in type \(D\) for \(a\) even and type \(C\) for \(a\) odd. Here \(C(\mathfrak{s})\simeq\operatorname{O}(2)\times\operatorname{O}(2)\).
The full automorphism group of the singularity is dihedral of order eight. We want to show \(A(e)\) embeds as the Klein 4-group generated by the reflections through the midpoints of edges of the square. This will follow if we show that there is at least one orbit of size 4 of \(A(e)\) on the fiber over \(e\). This will force there to be \(k-1\) orbits of size 4 on the \(4k-2\) projective lines and one orbit of size 2.
By the method of the previous two sections, the element \(x_{a+2k-1}x_{a+1}\) must fix each irreducible component and act by outer automorphism on each one individually. This is because it is acting by \(-1\) on the two-dimensional space \(\mathfrak{c}(\mathfrak{s})\). The actions of \(x_{a+2k-1}\) and \(x_{a+1}\) can be determined in each case separately. Both of them will interchange the two irreducible components.
#### 5.5.1. C case
The Dynkin diagram of \(\mathcal{O}_{\lambda}\) is
\[\overbrace{0202\dots 02}^{k}\overbrace{00020002\dots 0002}^{(a-1)/2}00.\]
Using the method of §5.2, if \(a=0\), we find there are \(k\) orbits for the unique minimal degeneration to \(\mathcal{O}_{\mu}\). At the same time, there are \(4k-2\) projective lines in the fiber over \(e\). Since \(A(e)\) is isomorphic to \(V_{4}\), the possible orbit sizes are 1, 2, and 4. The only way for this to work is for there to be \(k-1\) orbits of size 4 and one orbit of size 2. Therefore the action is as desired.
When \(a>0\), there is another minimal degeneration to \(\mathcal{O}^{\prime}_{\mu}=[(a{+}2k)^{2},(a{-}1)^{2},2]\). Then \((\lambda,\mu^{\prime})\) is equivalent to \(([a,a],[(a{-}1)^{2},2])\), which is a type \(d\) degeneration and has the form \(B_{\frac{a-1}{2}}\). This degeneration therefore accounts for \(\frac{a-1}{2}\) of the \(k+\frac{a-1}{2}\) orbits, leaving \(k\) for the minimal degeneration under study, and the result follows as in the \(a=0\) case.
#### 5.5.2. D case
If \(G\) is the full orthogonal group, there is a single orbit \(\mathcal{O}_{\lambda}\) with the given singularity. Working in the special orthogonal group, there are two very even orbits with the given partition, interchanged by the action of any element of \(\mathrm{O}(N)\) not in \(\operatorname{SO}(N)\). This is where the two irreducible components come from, as both orbits degenerate to \(\mu\), which contains an element fixed by this action. Hence, the result follows.
### \(G\) is special orthogonal
When \(G\) is special orthogonal, there are two situations where the component group action changes.
First, consider the type \(b\) singularity when \(\mu\) has exactly two odd parts (e.g., \(\mu=[8,8,5,3]\) or \(\mu=[8,8,5,5]\)). In this case the component group is trivial. If there were more than two odd parts for this degeneration, there would have to be at least 3 distinct odd parts, which would guarantee the non-trivial action of \(A(e)\).
Next, consider the type \(e\) singularity when \(\mu\) again has only the odd parts that appear in the local version of \(\mu\) in Table 2. Otherwise, \(\mu\) would have at least two additional odd parts (possibly equal), which would ensure the same action by \(V_{4}\). Now if \(\mu\) has only the odd parts, say \([(a{+}2k{-}1)^{2},(a{+}1)^{2}]\), then since its other parts are even, the partition \(\lambda\) must be very even.
Then there are two orbits corresponding to \(\lambda\), and \(A(e)\simeq\mathfrak{S}_{2}\) acts by outer automorphism on each degeneration to \(\mu\), so both are of type \(B_{k}\).
### Dimension four or greater
In [10], we studied the image of \(C(\mathfrak{s})\) in \(\operatorname{Aut}(\mathfrak{c}(\mathfrak{s}))\) via the adjoint action in the exceptional groups, and then restricted the action to orbits of simple factors of \(\mathfrak{c}(\mathfrak{s})\). We observed using [12] (also computable using [1]) that \(C(\mathfrak{s})\) tends to act by outer automorphisms of simple factors of \(\mathfrak{c}(\mathfrak{s})\) that admit outer automorphisms. As in Corollary 4.12, the minimal (and minimal special) degenerations are controlled by \(\mathfrak{c}(\mathfrak{s})\) in most cases when \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\). We then recorded this outer action on minimal singularities \(a_{n}\), \(d_{n}\), \(d_{4}\), and \(e_{6}\), when they arose.
A more intrinsic framework is to use the intersection homology \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) of \(\mathcal{S}_{\mathcal{O},e}\) under the induced action of \(A(e)\). Let \(p(X)=\sum_{i}\dim(IH^{2i}(X))\,q^{i}\). When \(\mathcal{S}_{\mathcal{O},e}\simeq\overline{\mathcal{O}_{\min.\operatorname{sp}}}\) for the minimal special orbit in the simple Lie algebra \(\mathfrak{c}_{0}\), then we have
\[p(\mathcal{S}_{\mathcal{O},e})=q^{e_{1}-1}+q^{e_{2}-1}+\cdots+q^{e_{k}-1}\]
where \(e_{i}\) are the exponents of \(\mathfrak{c}_{0}\) (see [11]).
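For instance, if \(\mathfrak{c}_{0}\) is of type \(G_{2}\), whose exponents are \(1\) and \(5\), then \(p(\mathcal{S}_{\mathcal{O},e})=1+q^{4}\); that is, \(IH^{*}(\overline{\mathcal{O}_{\min.\operatorname{sp}}})\) is one-dimensional in degrees \(0\) and \(8\) and vanishes in all other degrees.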
Let \(\mathfrak{c}_{0}\) be of type \(A_{k}\), \(D_{k}\), or \(E_{6}\), let \(\theta\) be an outer involution, and denote by \(\mathfrak{c}_{0}^{\prime}:=\mathfrak{c}_{0}^{\langle\theta\rangle}\) the fixed subalgebra. Then \(\langle\theta\rangle\) acts trivially on the part of \(IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) corresponding to exponents of \(\mathfrak{c}_{0}^{\prime}\) and by the sign representation on the remaining part. In other words,
\[IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})^{\langle\theta\rangle }=IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}}/\langle\theta\rangle).\]
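For example, take \(\mathfrak{c}_{0}\) of type \(A_{3}\) with \(\theta\) the diagram involution, so that \(\mathfrak{c}_{0}^{\prime}\) is of type \(C_{2}\): the exponents of \(A_{3}\) are \(1,2,3\) and those of \(C_{2}\) are \(1,3\), so \(\langle\theta\rangle\) fixes \(IH^{0}\) and \(IH^{4}\) and acts by the sign representation on \(IH^{2}\).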
In the case of \(\mathfrak{S}_{3}\) acting by outer automorphisms when \(\mathfrak{c}_{0}\) is of type \(D_{4}\), the \(\mathfrak{S}_{3}\)-invariants on \(IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) correspond to the exponents of \(G_{2}\) (namely, \(1\) and \(5\)) and \(\mathfrak{S}_{3}\) acts by the reflection representation on the two-dimensional space \(IH^{4}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})\) for the two exponents of \(D_{4}\) equal to \(3\) and again
\[IH^{*}(\overline{\mathcal{O}}_{\min.\operatorname{sp}})^{K}=IH^{*}(\overline{ \mathcal{O}}_{\min.\operatorname{sp}}/K).\]
Since \(C^{\circ}(\mathfrak{s})\) acts trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\), there is an action of \(A(e)\) on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) and this gives an intrinsic way to see the outer action when the slice is isomorphic to the closure of a minimal special orbit, rather than appealing to the action on \(\mathfrak{c}_{0}\) itself, when \(\mathfrak{c}_{0}\) is the relevant factor of \(\mathfrak{c}(\mathfrak{s})\) as in Corollary 4.12.
### Type \(h\) singularity
This corresponds to the closure of the minimal nilpotent orbit in type \(D_{k}\). The local action of the reductive centralizer coincides with the orthogonal group \(\mathrm{O}(2k)\), which contains an outer involution of \(\mathfrak{so}_{2k}\), and so \(A(e)\) acts by an outer action; this coincides with \(d_{k}^{+}\).
In the case of \(G=\operatorname{SO}(2N)\), the component group \(A(e)\) will still act by outer involution in this way, except for those cases where the partition \(\mu\) contains exactly one odd part (of even multiplicity \(2k\)).
### Exceptional degenerations
#### 5.9.1. The case of \(d_{n+1}/V_{4}\)
From §4.3, the group \(V_{4}\) is acting on \(d_{n+1}\) with \(\theta_{1}\) outer and \(\theta_{2}\) inner. Hence,
\[IH^{*}(d_{n+1}/V_{4})\simeq IH^{*}(d_{n+1}/\theta_{1})\simeq IH^{*}(b_{n}^{sp}).\]
Let \(\mathcal{S}_{\mathcal{O},e}\) be a slice of type \(h_{sp}\). Recall that there is a natural \(\mathrm{O}(2n)\)-action on \(d_{n+1}/V_{4}\), where \(\mathrm{O}(2n)\) arises as the fixed points of the \(V_{4}\)-action on the adjoint group of type \(D_{n+1}\). Under the isomorphism to \(\mathcal{S}_{\mathcal{O},e}\), the \(\mathrm{O}(2n)\)-action becomes the action of \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\).
Since the action of \(O(2n)\) on \(d_{n+1}\) is also inner, we find that \(O(2n)\) acts trivially on \(IH^{*}(d_{n+1}/V_{4})\), and hence \(C(\mathfrak{s})\) acts trivially on \(d_{n+1}/V_{4}\).
On the other hand, it seems relevant that if we take the minimal degeneration to \(\mu\) in \(\mathcal{S}_{\mathcal{O},e}\), which is of type \(h\), then indeed \(A(e)\) acts by outer action on this \(d_{n}\).
#### 5.9.2. The case of \(d_{4}/\mathfrak{S}_{4}\)
From the proof of [10, Theorem 4.11], \(S_{4}\) acts on \(d_{4}\) by the semi-direct product of an inner \(V_{4}\) group and an outer \(S_{3}\) group. Hence,
\[IH^{*}(d_{4}/S_{4})\simeq IH^{*}(d_{4}/S_{3})\simeq IH^{*}(g_{2}^{sp}).\]
Let \(\mathcal{S}_{\mathcal{O},e}\) be one of the two slices of type \(d_{4}/S_{4}\). There is a natural action of \(SL_{3}\rtimes\mathfrak{S}_{2}\) on \(d_{4}/S_{4}\), which is the fixed points of \(S_{4}\) on the adjoint group of type \(D_{4}\). This action, as in the previous section, corresponds to \(C(\mathfrak{s})\) on \(\mathcal{S}_{\mathcal{O},e}\) under the equivariant isomorphism. The action of the \(\mathfrak{S}_{2}\) is inner, so we again find that \(A(e)\) acts trivially on \(\mathcal{S}_{\mathcal{O},e}\).
The minimal degeneration in \(\mathcal{S}_{\mathcal{O},e}\) corresponds to an \(a_{2}\) singularity coming from \(\mathfrak{c}(\mathfrak{s})\) and we note again that the action on this singularity is outer, \(a_{2}^{+}\).
## 6. Action of the canonical quotient
Let \(\mathcal{S}_{\mathcal{O},e}\) be the slice for a minimal special degeneration. In this section we explain how the kernel \(H\) of the homomorphism from \(A(e)\) to Lusztig's canonical quotient \(\bar{A}(e)\) acts on the slice \(\mathcal{S}_{\mathcal{O},e}\). When \(H\) acts by outer action, the exchange of singularities under the duality is not as expected.
### Exceptional groups
**Proposition 6.1**.: _Assume \(G\) is connected of exceptional type and \(H\) is nontrivial for \(A(e)\). Then there exists a unique minimal special degeneration to \(e\), and its singularity is \(C_{k}\) for some \(k\geq 2\), \(\mu\), or \(d_{4}/\mathfrak{S}_{4}\)._
_In the \(C_{k}\) cases, \(H=A(e)=\mathfrak{S}_{2}\) acts by outer automorphism on \(\mathcal{S}_{\mathcal{O},e}\)._
_In the one case \((D_{7}(a_{1}),E_{8}(b_{6}))\), where the singularity is of type \(\mu\) (which is \(C_{2}\) upon normalization), \(H\) acts trivially on \(\mathcal{S}_{\mathcal{O},e}\) and the induced action of \(\bar{A}(e)\) is by outer automorphism._
_In the two cases where \(\mathcal{S}_{\mathcal{O},e}\) is \(d_{4}/\mathfrak{S}_{4}\), the action of \(H\) is trivial on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\), however the action of \(H\) on the minimal degeneration to \(e\) is outer._
Proof.: The cases in the exceptional groups where \(H\) is nontrivial for \(A(e)\) can be read off from [11]. They are \(A_{2}\) and \(F_{4}(a_{2})\) in type \(F_{4}\); \(A_{3}+A_{2}\) and \(E_{7}(a_{4})\) in type \(E_{7}\); and \(A_{3}+A_{2}\), \(D_{4}+A_{2}\), \(E_{7}(a_{4})\), \(D_{5}+A_{2}\), \(E_{8}(b_{6})\), \(D_{7}(a_{1})\), and \(E_{8}(b_{4})\) in type \(E_{8}\). In all these cases, there is a unique \(\mathcal{O}\) such that \((\mathcal{O},\mathcal{O}^{\prime}_{e})\) is a minimal (special) degeneration.
If \(e\) does not belong to the \(E_{8}(b_{6})\) orbit and is not of type \(A_{2}\) in \(F_{4}\) or \(D_{4}+A_{2}\) in \(E_{8}\), then \(A(e)\simeq\mathfrak{S}_{2}\) and \(H=A(e)\), so that \(\bar{A}(e)\) is trivial. Since we already know from [10] that \(A(e)\) is acting by outer action, we see that \(H\) does too.
If \(e\) is not of type \(E_{8}(b_{6})\), \(A_{2}\) in \(F_{4}\), or \(D_{4}+A_{2}\) in \(E_{8}\), then there exists a unique \(\mathcal{O}\) such that \(\mathcal{O}\) is a minimal (special) degeneration to \(e\), and \(A(e)\) acts non-trivially on \(\mathcal{S}_{\mathcal{O},e}\). The singularity of \(\mathcal{S}_{\mathcal{O},e}\) is a simple surface singularity \(D_{k+1}\), yielding a \(C_{k}\) singularity. It follows that \(H\) itself acts non-trivially on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\).
For the \(E_{8}(b_{6})\) orbit, we have \(A(e)\simeq\mathfrak{S}_{3}\) and \(H\) is the cyclic group of order \(3\). The slice of \(D_{7}(a_{1})\) at \(e\) is of type \(\mu\), which is not normal, but has normalization of type \(A_{3}=D_{3}\) and we previously computed that \(A(e)\) acts by outer action upon the normalization, so that the normalization is \(C_{2}\). Since the elements of \(H\) cannot give an outer action, the outer action descends to \(\bar{A}(e)\).
If \(e\) is of type \(A_{2}\) in \(F_{4}\) or \(D_{4}+A_{2}\) in \(E_{8}\), then \(\mathcal{S}_{\mathcal{O},e}\) is isomorphic to \(d_{4}/\mathfrak{S}_{4}\). This occurs for \((F_{4}(a_{3}),A_{2})\) and \((E_{8}(a_{7}),D_{4}+A_{2})\). By §5.7, the action of \(A(e)\) on \(IH^{*}(\mathcal{S}_{\mathcal{O},e})\) is trivial.
On the other hand, the action of \(A(e)\) is non-trivial on \(IH^{*}(\mathcal{S}_{\mathcal{O}^{\prime\prime},e})\), where \(\mathcal{O}^{\prime\prime}\) is the minimal non-special orbit between \(\mathcal{O}\) and \(\mathcal{O}_{e}\).
### Classical types
Let \(X\) be of type \(B,C,D\), or \(C^{\prime}\). Let \(\epsilon\) and \(\epsilon^{\prime}\) be defined for the given type. For a partition \(\mu\), define
\[R:=\{s\ |\ s\not\equiv\epsilon,m_{\mu}(s)\neq 0,h_{\mu}(s)\not\equiv\epsilon^{ \prime}\}.\]
For \(s\in R\), define \(s^{\prime}\) to satisfy \(s^{\prime}\not\equiv\epsilon\), \(m(s^{\prime})\neq 0\), and maximal for this property with \(s^{\prime}<s\). Set \(s^{\prime}=0\) if no such \(s^{\prime}\) exists and set \(x_{0}=1\) in \(A(e)\). Define \(H\) to be the subgroup of \(A(e)\) generated by the following elements of \(A(e)\):
\[H:=\langle x_{s}x_{s^{\prime}}\ |\ s\in R\rangle. \tag{17}\]
By [10], the quotient of \(A(e)\) by \(H\) gives Lusztig's canonical quotient in types \(B\), \(C\). In type \(D\) we get an extra factor of \(\mathbf{Z}_{2}\), as opposed to working in the special orthogonal group. In type \(C^{\prime}\), we get something new and we take this as the definition of the canonical quotient (we can give a definition that is similar to the characterization in _op. cit._). Let \(r=\#R\). Then the canonical quotient \(\bar{A}(e)\) is elementary abelian with \(r\) generators in types \(C,D,C^{\prime}\) and \(r-1\) in type \(B\).
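To make the recipe concrete, the following short sketch (ours, not from the paper) lists the generating pairs \((s,s^{\prime})\) of (17) for a given partition. The height convention \(h_{\mu}\) used below (the index of the first row of \(\mu\) equal to \(s\)) is our assumption; the precise convention is fixed earlier in the paper.

```python
# A sketch (ours, not the paper's) of the generators of H in (17).
# Inputs: a partition mu as a list of parts, parities eps and eps_prime
# in {0, 1}, and a height function h.  The default h below is an assumed
# convention; the paper fixes the actual definition of h_mu.

def first_row_height(mu, s):
    """Assumed convention: 1-based index of the first part of mu equal to s."""
    return next(i + 1 for i, part in enumerate(mu) if part == s)

def H_generator_pairs(mu, eps, eps_prime, h=first_row_height):
    """Pairs (s, s') with x_s x_{s'} generating H; s' = 0 encodes x_0 = 1."""
    parts = sorted(set(mu), reverse=True)
    R = [s for s in parts if s % 2 != eps and h(mu, s) % 2 != eps_prime]
    pairs = []
    for s in R:
        smaller = [t for t in parts if t % 2 != eps and t < s]
        pairs.append((s, max(smaller) if smaller else 0))
    return pairs

# Example with mu = [8, 8, 5, 3] (orthogonal, so eps is even); the choice
# eps_prime = 1 is purely illustrative.
print(H_generator_pairs([8, 8, 5, 3], eps=0, eps_prime=1))  # -> [(3, 0)]
```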
Let \(G\) be a classical group.
**Proposition 6.2**.: _If the type of the minimal special degeneration is type \(C_{n}\) for \(n\geq 2\) and \(l\equiv\epsilon^{\prime}\) in Table 1, then \(H\) acts non-trivially on the slice. Otherwise, \(H\) acts trivially on \(\mathcal{S}_{\mathcal{O},e}\)._
_When the slice \(\mathcal{S}_{\mathcal{O},e}\simeq d_{n}/V_{4}\), \(H\) acts by outer automorphism on \(\mathcal{S}_{\mathcal{O}^{\prime\prime},e}\), where \(\mathcal{O}^{\prime\prime}\) is the minimal non-special orbit between \(\mathcal{O}\) and \(\mathcal{O}_{e}\)._
Proof.: In Table 1, the element \(e\in\mathcal{O}_{\mu}\). For the type \(B_{n}\) singularities:
* Type \(c\). The elements acting non-trivially on the slice involve \(x_{a+1}\). But the part \(a{+}2n{-}1\) has height \(l+1\) and the part \(a+1\) has height \(l+3\), both of which are congruent to \(\epsilon^{\prime}\). Hence, all the elements in \(H\) do not involve \(x_{a+1}\) and \(H\) acts trivially.
* Type \(d\). The elements acting non-trivially on the slice involve \(x_{a+2n}\). But the part \(a{+}2n\) has height \(l+2\), which is congruent to \(\epsilon^{\prime}\). For \(s\) minimal for \(s>a{+}2n\) and \(s\not\equiv\epsilon\), we must have \(h(s)\) even since the parts between \(s\) and \(a{+}2n\) are congruent to \(\epsilon\) and so come with even multiplicity. Hence none of the elements generating \(H\) involve \(x_{a+2n}\) and \(H\) acts trivially.
* Type \(e\). The elements acting non-trivially on the slice involve \(x_{a+2n-1}\) or \(x_{a+1}\). Both of these parts have height congruent to \(\epsilon^{\prime}\), so as in the type \(d\), none of the elements generating \(H\) involve either part and \(H\) acts trivially.
Next, we treat the case of type \(b\). Here, \(H\) acts non-trivially if either \(x_{a+2n-2}\) or \(x_{a+2}\) is involved in a generator of \(H\), but not both. The height of \(a{+}2n{-}2\) is \(l+1\) and that of \(a+2\) is \(l+2\). If \(l\equiv\epsilon^{\prime}\), then \(x_{a+2n-2}x_{a+2}\) is in \(H\), but no other generator involves \(a{+}2n{-}2\) since \(s\) minimal for \(s>a{+}2n{-}2\) and \(s\not\equiv\epsilon\) must have \(s\equiv\epsilon^{\prime}\). So \(H\) acts trivially. But if \(l\not\equiv\epsilon^{\prime}\), then some element of the form \(x_{a+2}x_{s^{\prime}}\) is in \(H\) and \(H\) does not act trivially on the slice. This happens exactly when the second diagram in Table 2 occurs, for the upper right \(C_{n+1}\) singularity. Hence it is denoted \(C_{n+1}^{*}\).
For the exceptional type \(g_{sp}\) there is no outer \(A(e)\)-action. However, \(H\) acts non-trivially on \(\mathcal{S}_{\mathcal{O}^{\prime\prime},e}\). The proof is similar to the above cases, as is the proof that \(H\) acts trivially for type \(h\).
## 7. Combinatorial statement of duality, classical groups
Let \((\lambda,\mu)\) be a minimal special degeneration in types \(B,C,D\) or \(C^{\prime}\). Then all three of \((f(\lambda),f(\mu))\), \((d(\mu),d(\lambda))\), and \((d_{LS}(\mu),d_{LS}(\lambda))\) are minimal special degenerations. We now prove that the four types of singularities are given by Figure 2.
There is a bit more going on in the first quartet, where the partition pattern of type \(c\) is interchanged under internal duality with that of type \(f^{1}_{sp}\), and that of type \(d\) is interchanged under internal duality with that of type \(f^{2}_{sp}\). Let the internal duality map type \(X\) to \(d(X)\).
**Lemma 7.1**.: _The vertical arrows in Figure 2 are correct._
_In particular, if \(l\equiv\epsilon^{\prime}_{X}\), then the singularity of type \(b\) is interchanged with the minimal special \(g_{sp}\)._
_If \(l\not\equiv\epsilon^{\prime}_{X}\), then the singularity of type \(b\) is interchanged with the minimal special \(h\)._
_Each of the two types of \(B_{n}\) singularities switches with a corresponding type of \(b^{sp}_{n}\) singularity: type \(c\) with \(f^{1}_{sp}\) and type \(d\) with \(f^{2}_{sp}\). The type \(e\) singularity switches with type \(h_{sp}\) and type \(a\) goes to type \(a\)._
Proof.: If \((\lambda,\mu)\) becomes \((\lambda^{\prime},\mu^{\prime})\) after removing \(l\) rows and \(s\) columns, then clearly \((d(\mu),d(\lambda))\) becomes \((d(\mu^{\prime}),d(\lambda^{\prime}))\) after removing \(s\) rows and \(l\) columns. So it is sufficient to work with the irreducible forms in Tables 1 and 2 to understand how a pair of partitions behaves under \(d\).
A quick check shows that, under the transpose operation on partitions, the partition in the table of type \(c\) is interchanged with the one of type \(f^{1}_{sp}\); the type \(d\) partition is interchanged with the one of type \(f^{2}_{sp}\); the one of type \(e\) is interchanged with the one of type \(h_{sp}\); the one of type \(b\) is interchanged with the partition found in either \(g_{sp}\) or \(h\); and type \(a\) is self-dual.
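For instance, for the type \(e\) pair the transpose computation is immediate: \([a{+}2k,a{+}2k,a,a]^{t}=[4^{a},2^{2k}]\) and \([a{+}2k{-}1,a{+}2k{-}1,a{+}1,a{+}1]^{t}=[4^{a+1},2^{2k-2}]\), so the dual degeneration \((d(\mu),d(\lambda))\) is \(([4^{a+1},2^{2k-2}],[4^{a},2^{2k}])\), the local pattern of type \(h_{sp}\).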
The behavior of \(\epsilon\) and \(\epsilon^{\prime}\) under \(d\) is as follows:
\[(\epsilon,\epsilon^{\prime})_{X}+(\epsilon^{\prime},\epsilon)_{d(X)}\equiv(1,1).\]
As a result, for the interchange of the first three partition types described above, the switching of \(l\) and \(s\) upon going from \((\lambda,\mu)\) to \((d(\mu),d(\lambda))\) agrees with the restriction on \(l\) and \(s\) in the dual type \(d(X)\) given in the Tables. However, for type \(b\), when \(l\equiv\epsilon^{\prime}_{X}\), the interchange is with type \(g_{sp}\) and when \(l\not\equiv\epsilon^{\prime}_{X}\), the interchange is with type \(h\). The self-dual type \(a\) is clear.
Next we want to write down the rules for the horizontal arrows in Figure 2. First, we start with the case of type \(b\), the \(C_{n}\) singularity from [10]. In that case, we can write \((\lambda,\mu)\) locally as \(([(2n+s)^{t},s^{u}],[(2n+s)^{t-1},2n{-}2{+}s,s{+}2,s^{u-1}])\) for positive integers \(t\) and \(u\), where \(s\not\equiv\epsilon_{X}\); here \(f\) maps \(X\) to \(f(X)\).
**Lemma 7.2**.: _Assume the degeneration is of type \(b\) for a given \(n\). When \(l\equiv\epsilon^{\prime}_{X}\), under the \(f\) map, the degeneration \((\lambda,\mu)\) is carried to a singularity of type_
\[\begin{cases}e&\text{if }t\geq 2,\ u\geq 2,\\ d&\text{if }t\geq 2,\ u=1,\\ c&\text{if }t=1,\ u\geq 2,\\ C_{n+1}&\text{if }t=u=1.\end{cases}\]
_When \(l\not\equiv\epsilon^{\prime}_{X}\), then type \(b\) is exchanged with type \(b\) with \(n\) replaced by \(n{-}1\), that is \(C_{n-1}\)._
Proof.: Since \(l\) rows are removed, \(h_{\lambda}(2n+s)=l+1\) and \(h_{\lambda}(s)=l+1+m_{\lambda}(s)\). Also, \(h_{\mu}(2n{-}2+s)=l+1\) and \(h_{\mu}(s+2)=l+2\) and \(m_{\mu}(2n{-}2+s)=m_{\mu}(s{+}2)=1\) if \(n>2\); and \(h_{\mu}(2n{-}2+s)=h_{\mu}(s{+}2)=l{+}2\) and \(m_{\mu}(s{+}2)=2\) if \(n=2\). All these parts are not congruent to \(\epsilon_{X}\) since \(s\not\equiv\epsilon\).
If \(l\not\equiv\epsilon^{\prime}_{X}\), then \(h_{\lambda}(2n{+}s)\equiv\epsilon^{\prime}_{X}\) and \(h_{\lambda}(s)+m_{\lambda}(s)\equiv\epsilon^{\prime}_{X}\). In particular, in Lemma 2.2, \(2n{+}s\) obeys line 1 or 3, and \(s\) obeys lines 2 or 3. Hence the partition \([2n{+}s,s]\) in \(\lambda\) gets replaced by \([2n{+}s{-}1,s{+}1]\) in \(f(\lambda)\) regardless of the parities of \(m_{\lambda}(2n{+}s)\) and \(m_{\lambda}(s)\). Moreover, \([2n{-}2{+}s,s{+}2]\) in \(\mu\) goes to \([2n{-}3{+}s,s{+}3]\) since \(s{+}2\) obeys line 2 if \(n>2\) and line 3 if \(n=2\). Hence we end up with \(f(\lambda),f(\mu)\) locally equal to
\[([2n{-}1{+}s,s{+}1],[2n{-}3{+}s,s{+}3]),\]
which is of type \(C_{n-1}\) with \(s+1\) rows removed. Note that \(s+1\not\equiv\epsilon_{f(X)}\) since \(\epsilon_{X}\) and \(\epsilon_{f(X)}\) have different parities.
Now if \(l\equiv\epsilon^{\prime}_{X}\), there are four cases to consider depending on \(t=m_{\lambda}(2n{+}s)\) and \(u=m_{\lambda}(s)\). Here \(h_{\lambda}(2n{+}s)\not\equiv\epsilon^{\prime}_{X}\), so if \(m(2n{+}s)\geq 2\), the last two values of \(2n{+}s\) in \(\lambda\) are unchanged in \(f(\lambda)\) since lines two or four apply in Lemma 2.2. But if \(m(2n{+}s)=1\), then line two applies and \(2n{+}s\) becomes \(2n{+}s+1\). Now we have \(h_{\lambda}(s)+m_{\lambda}(s)\not\equiv\epsilon^{\prime}_{X}\), so we are in the setting of lines one and four of the lemma. Thus the first two values of \(s\) are unchanged if \(m(s)\geq 2\), and the sole value of \(s\) changes to \(s-1\) if \(m(s)=1\). Thus \(f(\lambda)\) is locally \([2n{+}s,2n{+}s,s,s]\), \([2n{+}s,2n{+}s,s{-}1]\), \([2n{+}s{+}1,s,s]\), or \([2n{+}s{+}1,s{-}1]\) when \((t,u)\) is \((\geq 2,\geq 2)\), \((\geq 2,1)\), \((1,\geq 2)\), \((1,1)\), respectively, after removing \(l{-}1\), \(l{-}1\), \(l\) or \(l\) rows, respectively.
In the four cases, \(\mu\) looks locally like
\[[2n{+}s,2n{+}s{-}2,s{+}2,s],[2n{+}s,2n{+}s{-}2,s{+}2],[2n{+}s{-}2,s{+}2,s],\ \text{and}\ [2n{+}s{-}2,s{+}2],\]
respectively. Using that \(h_{\mu}(2n{+}s{-}2)\not\equiv\epsilon^{\prime}_{X}\) and \(m_{\mu}(2n{+}s{-}2)=1\) and \(h_{\mu}(s)+m_{\mu}(s)\equiv\epsilon^{\prime}\), we get, using Lemma 2.2, that \(f(\mu)\) is locally \([2n{+}s{-}1,2n{+}s{-}1,s{+}1,s{+}1]\), \([2n{+}s{-}1,2n{+}s{-}1,s{+}1]\), \([2n{+}s{-}1,s{+}1,s{+}1]\), or \([2n{+}s{-}1,s{+}1]\), after removing \(l{-}1\), \(l{-}1\), \(l\) or \(l\) rows, respectively.
Hence, after removing \(s\), \(s{-}1\), \(s\), or \(s{-}1\) rows, respectively, we find that \((f(\lambda),f(\mu))\) is of type \(e\), \(d\), or \(c\) for the same value of \(n\), or of type \(b\) with singularity \(C_{n+1}\).
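As a concrete sanity check (ours, not the paper's), one can mechanically compare the local forms of \((f(\lambda),f(\mu))\) computed above with the local models of types \(e\), \(d\), \(c\), and \(b\) recorded earlier:

```python
# Sanity check (ours) of the case analysis in Lemma 7.2 when l = eps'_X:
# the local forms of (f(lambda), f(mu)) from the proof are matched against
# the local models of types e, d, c, b from Sections 5.2-5.5.

def local_images(n, s, t, u):
    """(f(lambda), f(mu)) in the four cases of the proof."""
    if t >= 2 and u >= 2:
        return [2*n+s, 2*n+s, s, s], [2*n+s-1, 2*n+s-1, s+1, s+1]
    if t >= 2 and u == 1:
        return [2*n+s, 2*n+s, s-1], [2*n+s-1, 2*n+s-1, s+1]
    if t == 1 and u >= 2:
        return [2*n+s+1, s, s], [2*n+s-1, s+1, s+1]
    return [2*n+s+1, s-1], [2*n+s-1, s+1]

def type_e(a, k): return [a+2*k, a+2*k, a, a], [a+2*k-1, a+2*k-1, a+1, a+1]
def type_d(a, k): return [a+2*k+1, a+2*k+1, a], [a+2*k, a+2*k, a+2]
def type_c(a, k): return [a+2*k+1, a, a], [a+2*k-1, a+1, a+1]
def type_b(a, k): return [a+2*k, a], [a+2*k-2, a+2]

n, s = 3, 4
assert local_images(n, s, 2, 2) == type_e(s, n)
assert local_images(n, s, 2, 1) == type_d(s-1, n)
assert local_images(n, s, 1, 2) == type_c(s, n)
assert local_images(n, s, 1, 1) == type_b(s-1, n+1)
print("local forms match the type e/d/c/b models")
```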
**Proposition 7.3**.: _The four singularities behave as in the three quartets of Figure 2._
Proof.: The same ideas as in the previous lemma, or the fact that \(f\circ f=\mathrm{id}\), show that the type \(c\), \(d\), \(e\) partitions for rank \(n\) are mapped to the \(C_{n}\) singularity.
Now given any of the singularities in Table 1 or 2, either the singularity is codimension two or the degeneration obtained using \(d\) is. Either this singularity is of type \(C_{n}\) or applying \(f\) to the degeneration gives a type \(C_{n}\) singularity with \(l\equiv\epsilon^{\prime}_{X}\). Putting this singularity in the upper left corner of the square of singularities, the two lemmas show that the other three corners are as shown in Figure 2 once we note that if \(C_{n}\) is carried to \(C_{n+1}\) under \(f\), the value of \(l\) stays the same and satisfies \(l\not\equiv\epsilon^{\prime}_{f(X)}\). This means that applying \(d\) leads to the \(h\) singularity according to Lemma 7.1. In effect, the three quartets (or four if we include the two different ways to obtain \(B_{n}\) and \(b_{n}^{sp}\)) are controlled by the four possibilities for \(t\) and \(u\) in the \(l\equiv\epsilon^{\prime}_{X}\) case in Lemma 7.2.
_Remark 7.4_.: We can also write the specific conditions that describe the action under \(f\) when the singularity starts as \(c_{n}^{sp}\). Writing \(\mu\) locally as \([(s{+}2)^{t},(s{+}1)^{n},s^{u}]\) where \(t=m(s{+}2)\) and \(u=m(s)\), then \(c_{n}^{sp}\) maps to \(h\), \(f_{sp}^{1}\), \(f_{sp}^{2}\), or \(h_{sp}\) when \((t,u)\) is \((\geq 1,\geq 1)\), \((0,\geq 1)\), \((\geq 1,0)\), \((0,0)\), respectively. Here, \(s=0\) is considered a part in type \(C\). These conditions are exactly the conditions from the four cases in Lemma 7.2 for \(C_{n}\), under \(d\) when \(l\equiv\epsilon^{\prime}_{X}\).
## 8. Outer automorphisms of \(\mathfrak{g}\)
### Outer automorphisms for \(A_{n}\) and \(E_{6}\)
Besides considering the automorphism group for \(D_{n}\), we can do the same for \(A_{n}\) and \(E_{6}\) (and the full automorphism group for \(D_{4}\)). The ideas follow [11, §7.5], where the cases of the regular and subregular orbits were handled. Let \(CA(\mathfrak{s})\), called the _outer reductive centralizer_ in _loc. cit._, denote the centralizer of \(\mathfrak{s}\) in \(\operatorname{Aut}(\mathfrak{g})\). There is a surjective map \(\pi:\operatorname{Aut}(\mathfrak{g})\to\operatorname{Aut}(\Delta)\) where \(\Delta\) is the Dynkin diagram of \(\mathfrak{g}\). Let \(e\in\mathcal{N}_{o}\) and let \(\mathfrak{s}\) be an \(\mathfrak{sl}_{2}\)-triple \(\{e,h,f\}\) for \(e\). Assume that the weighted Dynkin diagram for \(e\) is invariant under \(\operatorname{Aut}(\Delta)\). Then, for any \(\sigma\in\operatorname{Aut}(\mathfrak{g})\), \(\sigma(h)\) is conjugate to \(h\), that is, \(\sigma(h)=g.h\) for some \(g\in G\). This assumption holds as long as \(e\) is not very even in \(D_{2n}\) and is not \([5,1^{3}]\) or \([3,1^{5}]\) in \(D_{4}\). Then the proof of Lemma 2 in _loc. cit._ still applies.
**Lemma 8.1**.: _For all \(\mathfrak{sl}_{2}\)-triples \(\mathfrak{s}\) as above, the map \(\pi\) restricted to \(CA(\mathfrak{s})\) is surjective onto \(\operatorname{Aut}(\Delta)\). In particular, \(CA(\mathfrak{s})/C(\mathfrak{s})\simeq\operatorname{Aut}(\Delta)\)._
As before, notice that \(CA(\mathfrak{s})\) acts on \(\mathfrak{c}(\mathfrak{s})\) and we are interested in this action. Let \(\mathfrak{c}_{0}\subset\mathfrak{c}(\mathfrak{s})\) be a simple factor or the central toral subalgebra of \(\mathfrak{c}(\mathfrak{s})\). The rest of Lemma 2 in _loc. cit._ generalizes as follows:
**Lemma 8.2**.: _Suppose that \(\mathfrak{g}\) has type \(A_{n},D_{2n+1},\) or \(E_{6}\). Then there is an element \(\phi\in CA(\mathfrak{s})\) that stabilizes \(\mathfrak{c}_{0}\) and acts by \(-1\) on some maximal toral subalgebra of \(\mathfrak{c}_{0}\). In particular, if \(\mathfrak{c}_{0}\) is simple of type \(A_{k},D_{2k+1},\) or \(E_{6}\), then the image of \(CA(\mathfrak{s})\) in \(\operatorname{Aut}(\mathfrak{c}_{0})\) is an outer automorphism of order two._
Proof.: Fix a maximal toral subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{g})\) be an automorphism of \(\mathfrak{g}\) that acts by \(-1\) on \(\mathfrak{h}\). Choose \(\mathfrak{m}\) to be a standard Levi subalgebra of \(\mathfrak{g}\) relative to \(\mathfrak{h}\) so that \(e\) is distinguished in \(\mathfrak{m}\) [10]. Pick \(\mathfrak{s}\) so that \(\mathfrak{s}\subset\mathfrak{m}\). Then \(\sigma(\mathfrak{m})=\mathfrak{m}\) and since \(e\in\mathfrak{m}\) is distinguished (and so satisfies the assumption above), there exists \(g\in G\) (specifically in the subgroup \(M\) with Lie algebra \(\mathfrak{m}\)) such that \(\mathrm{Int}(g)\circ\sigma\) is the identity on \(\mathfrak{s}\). That is, \(\phi:=\mathrm{Int}(g)\circ\sigma\in CA(\mathfrak{s})\).
Next, by [10] the center \(\mathfrak{t}\) of \(\mathfrak{m}\), which lies in \(\mathfrak{h}\), is a maximal toral subalgebra of \(\mathfrak{c}(\mathfrak{s})\). Since \(M\) acts trivially on \(\mathfrak{t}\), it follows that \(\phi\) acts by \(-1\) on \(\mathfrak{t}\). In particular, \(\phi(\mathfrak{c}_{0})=\mathfrak{c}_{0}\) and \(\phi\) acts by \(-1\) on its maximal toral subalgebra \(\mathfrak{c}_{0}\cap\mathfrak{t}\). Since \(-1\) is not in the Weyl groups of \(A_{k},D_{2k+1},\) or \(E_{6}\), the induced automorphism of \(\mathfrak{c}_{0}\) must be outer (since \(-w_{0}\) is then clearly outer).
_Remark 8.3_.: We want to apply this when \(-1\) is not in the Weyl group of \(\mathfrak{g}\), but it also applies when \(-1\) is in the Weyl group and \(\mathfrak{c}(\mathfrak{s})\) has simple factors of type \(a_{k}\), where it forces \(C(\mathfrak{s})\) to contain an element of order two. So in \(F_{4}\), \(E_{7}\) and \(E_{8}\), this explains why the action is always upgraded to \(a_{k}^{+}\). There are no type \(D_{k}\) simple factors of \(\mathfrak{c}(\mathfrak{s})\) for \(k>4\) in the exceptional groups, except for the sole case of \(d_{6}\) for the minimal orbit in \(E_{7}\). It remains, along with its dual, the only case where a natural outer action exists on the slice but the induced action of \(CA(\mathfrak{s})\) does not realize it.
**Corollary 8.4**.: _Let \(G=\operatorname{Aut}(\mathfrak{g})\)._
_In type \(A_{n}\), any singularity of a minimal degeneration which is not of type \(A_{1}\) acquires an outer action (that is, it becomes \(A_{k}^{+}\) or \(a_{k}^{+}\), with \(k\geq 2\))._
_In type \(E_{6}\), the singularities with no outer action using \(C(\mathfrak{s})\) all acquire the natural outer action._
Proof.: For the minimal orbit types, we can use the previous lemma since the simple factors of \(\mathfrak{c}(\mathfrak{s})\) are all of type \(A\) or \(E_{6}\).
For the simple surface singularities, we can use the action on the center of \(\mathfrak{c}(\mathfrak{s})\) as in §5 (the subregular case already follows from Slodowy), since they are simple surface singularities of type \(A_{k}\).
Finally, the case of \([2a_{2}]^{+}\) becomes \([2a_{2}^{+}]^{+}\) since the outer automorphism preserves each simple factor of \(\mathfrak{c}(\mathfrak{s})\).
In §11 we include the diagram for the \(D_{4}\) case with the \(\mathfrak{S}_{3}\)-action.
## 9. A conjecture of Lusztig
In [10, §0.4], Lusztig associated to each minimal special degeneration in \(\mathfrak{g}\) a certain Weyl group \(W^{\prime}\). We describe it in the classical groups.
Let \(h:=\dim(\mathcal{S}_{\mathcal{O},e})/2+1\) for the slice \(\mathcal{S}_{\mathcal{O},e}\) associated to the degeneration. In types \(B\), \(C\), \(D\), the slice \(\mathcal{S}_{\mathcal{O},e}\) has either dimension two or the dimension of the minimal special orbit for a Lie algebra \(\mathfrak{c}_{0}\) of type \(B\), \(C\), \(D\). The dimension of the latter is \(2h^{\prime}-2\) where \(h^{\prime}\) is the Coxeter number of \(\mathfrak{c}_{0}\). Since \(h^{\prime}\) is even for types \(B_{k}\), \(C_{k}\), \(D_{k}\), respectively equal to \(2k\), \(2k\), \(2k-2\), the number \(h\) is even when \(\mathfrak{g}\) has type \(B,C,D\).
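Concretely, if \(\mathfrak{c}_{0}\) is of type \(B_{k}\) then \(h^{\prime}=2k\), so \(\dim(\mathcal{S}_{\mathcal{O},e})=2h^{\prime}-2=4k-2\) and \(h=\dim(\mathcal{S}_{\mathcal{O},e})/2+1=2k\), which is indeed even.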
Here is the assignment of \(W^{\prime}\) to the degeneration, which varies in type \(D\) from Lusztig's assignment. For \(\mathfrak{g}\) of type \(A\), the Weyl group \(W^{\prime}\) associated to the degeneration is the Weyl group of type \(A_{h-1}\); for \(\mathfrak{g}\) of types \(B/C\), it is of type \(B_{h/2}\); and for \(\mathfrak{g}\) of type \(D\), it is of type \(D_{h/2+1}\) when \(\mathcal{S}_{\mathcal{O},e}\) is of type \(d_{k}\) with \(G=\operatorname{SO}(2N)\), and it is of type \(B_{h/2}\) otherwise.
Lusztig worked with the associated special representations of \(W\), the Weyl group of \(\mathfrak{g}\). Our definition of \(W^{\prime}\) is slightly different in type \(D\). It is necessary to include more special representations than were given in [10] for the conjecture to hold in the case of associating \(W^{\prime}=W(D_{h/2+1})\), as the following lemma shows. In _op. cit._, only the case of \(s=0\) and \(\nu\) with parts all equal to \(1\) was considered for associating \(W^{\prime}=W(D_{h/2+1})\).
**Lemma 9.1**.: _Let \((\lambda,\mu)\) be a minimal special degeneration of type \(d_{k}\) in \(\mathfrak{g}\) for \(G=\operatorname{SO}(2N)\). By §5.8, this occurs precisely when \(\mu\) has a single odd part equal to \(2s+1\) with even multiplicity \(2j\)._
_Then the Springer representations of \(W(D_{N})\) attached to \(\mathcal{O}_{\lambda}\) and \(\mathcal{O}_{\mu}\) with the trivial local systems on \(\mathcal{O}_{\lambda}\) and \(\mathcal{O}_{\mu}\) are given respectively by the bipartitions_
\[([\nu,s{+}1,s^{j},\nu^{\prime}],[\nu,(s{+}1)^{j-1},\nu^{\prime}])\]
_and_
\[([\nu,s^{j},\nu^{\prime}],[\nu,(s{+}1)^{j},\nu^{\prime}]),\]
_where \(\nu\) is a partition with smallest part at least \(s+1\) and \(\nu^{\prime}\) is a partition with largest part at most \(s\), and such that \(2|\nu|+2|\nu^{\prime}|+j(2s+1)=N\)._
_Moreover, any such bipartition corresponds to a minimal special degeneration with one odd part in \(\mu\) (necessarily of even multiplicity)._
Proof.: We carried out the algorithm in [11].
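As a quick check of the combinatorics, take \(s=1\), \(j=1\), \(\nu=[2]\), \(\nu^{\prime}=[1]\), so that \(N=2|\nu|+2|\nu^{\prime}|+j(2s+1)=9\); the two bipartitions are \(([2,2,1,1],[2,1])\) and \(([2,1,1],[2,2,1])\), and in both cases the entries sum to \(N=9\), as they must for representations of \(W(D_{9})\).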
Let \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})=\sum_{i}\dim(IH^{2i}(\mathcal{S}_{\mathcal{O},e})^{A(e)})\,q^{i}\). In classical types, [10, Conjecture 1.4] can be interpreted as saying:
**Theorem 9.2**.: _Let \(G\) be classical and assume \(h>1\). Then_
\[p^{\prime}(\mathcal{S}_{\mathcal{O},e})=q^{e_{1}-1}+q^{e_{2}-1}+\cdots+q^{e_{k}-1}\]
_where \(e_{1},e_{2},\ldots,e_{k}\) are the exponents of \(W^{\prime}\)._
Proof.: Since \(h>1\), \(\dim(\mathcal{S}_{\mathcal{O},e})\geq 4\), and so by Table 2, the singularity of \(\mathcal{S}_{\mathcal{O},e}\) in types \(B\) and \(C\) is either \(c_{h/2}^{sp}\), \(b_{h/2}^{sp}\), \(d_{h/2+1}^{+}\), or \(d_{h/2+1}/V_{4}\). Each of these satisfies
\[p^{\prime}(\mathcal{S}_{\mathcal{O},e})=1+q^{2}+\cdots+q^{h-2},\]
by §5.7; these are precisely \(q^{e_{i}-1}\) for the exponents \(e_{i}\) of \(B_{h/2}\).
This also holds for \(\mathrm{SO}(2N)\), except in the case where the singularity is \(d_{h/2+1}\), in which case \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})=1+q^{2}+\cdots+q^{h-2}+q^{h/2-1}\), since the exponents will be those of \(D_{h/2+1}\).
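For instance, when \(\mathfrak{c}_{0}\) is of type \(B_{3}\) (so \(h^{\prime}=6\) and \(h=6\)), the slice has dimension \(10\), \(W^{\prime}=W(B_{3})\) has exponents \(1,3,5\), and Theorem 9.2 gives \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})=1+q^{2}+q^{4}\).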
In the exceptional groups a similar interpretation exists (except that a variation is needed for the \(3\) exceptional orbits in \(E_{7}\) and \(E_{8}\)). That is to say, the simple factors of \(\mathfrak{c}(\mathfrak{s})\) under the action of \(A(e)\) explain why \(p^{\prime}(\mathcal{S}_{\mathcal{O},e})\) sees the exponents that Lusztig observes in [10, §4]. We also point out a typo in _loc. cit._: the label of \(567_{46}--1400_{37}\) should be \(B_{5}\).
## 10. Duality
We can now gather up our results to state the duality result for a minimal special degeneration \((\mathcal{O},\mathcal{O}^{\prime})\) in \(\mathfrak{g}\) and its dual minimal special degeneration \((d_{LS}(\mathcal{O}^{\prime}),d_{LS}(\mathcal{O}))\) in \({}^{L}\mathfrak{g}\), the Langlands dual Lie algebra of \(\mathfrak{g}\).
Let \(X\) be the normalization of an irreducible component of the slice \(\mathcal{S}_{\mathcal{O},e}\) for \((\mathcal{O},\mathcal{O}^{\prime})\), where \(e\in\mathcal{O}^{\prime}\). Let \(Y\) be an irreducible component of the slice \(\mathcal{S}\) for \((d_{LS}(\mathcal{O}^{\prime}),d_{LS}(\mathcal{O}))\). Let \(e^{\prime}\in d_{LS}(\mathcal{O})\).
By [10], we can assume that \(\dim(\mathcal{S}_{\mathcal{O},e})=2\). This also follows from Proposition 7.3 and inspection of the graphs in §11. Hence, \(X\) is a simple surface singularity. Denote by \(\mathrm{Out}(X)\) its group of outer automorphisms, which are the graph automorphisms of the ADE diagram corresponding to \(X\) as in §5. From [11] and §5, we know that \(A(e)\) acts transitively on the irreducible components of \(\mathcal{S}_{\mathcal{O},e}\) and of \(\mathcal{S}\). Let \(J(e)\subset A(e)\) be the stabilizer of \(X\). Let \(K(e)\) be the image of \(J(e)\) in \(\mathrm{Out}(X)\).
On the dual side, let \(\mathrm{Out}(Y)\) be the outer automorphisms of the minimal symplectic leaf in \(Y\), as discussed in §5.7. Let \(J(e^{\prime})\subset A(e^{\prime})\) be the stabilizer of \(Y\) and let \(K(e^{\prime})\) be the image of \(J(e^{\prime})\) in \(\mathrm{Out}(Y)\).
The pair of minimal degenerations falls into one of three mutually exclusive cases:
1. The map from \(K(e)\) to its image in \(\bar{A}(e)\) is bijective and the map from \(K(e^{\prime})\) to its image in \(\bar{A}(e^{\prime})\) is bijective.
2. The map from \(K(e)\) to its image in \(\bar{A}(e)\) is not bijective.
3. The map from \(K(e^{\prime})\) to its image in \(\bar{A}(e^{\prime})\) is not bijective.
That these are mutually exclusive follows from §6.
**Theorem 10.1**.: _Let \(G\) be of adjoint type or \(G=\mathrm{Aut}(\mathfrak{g})\) or \(G=\mathrm{O}(8)\). We have the following duality of singularities under the Lusztig-Spaltenstein involution:_
* _In case (1):_ 1. _If_ \((X,K(e))\) _corresponds to a simple Lie algebra_ \(\mathfrak{m}\)_, in the sense of Slodowy, then_ \(Y/K(e^{\prime})\) _is isomorphic to the closure of the minimal special nilpotent orbit in the fixed subalgebra_ \(({}^{L}\mathfrak{m})^{K(e^{\prime})}\)_, where_ \({}^{L}\mathfrak{m}\) _is a simple component of the reductive centralizer of_ \(e^{\prime}\) _in_ \({}^{L}\mathfrak{g}\)_._ 2. _If the pair_ \((X,K(e))\) _is of type_ \(A_{k}^{+}\)_, then_ \(Y/K(e^{\prime})\) _is isomorphic to_ \(a_{k}/\mathfrak{S}_{2}\)_._
* _In case (2): The pair_ \((X,K(e))\) _is of type_ \(C_{n+1}\)_, and_ \(Y/K(e^{\prime})\) _is isomorphic to_ \(c_{n}^{sp}\)_._
* _In case (3): The pair_ \((X,K(e))\) _is of type_ \(C_{n}\) _or_ \(G_{2}\)_, and_ \(Y\) _is isomorphic to_ \(d_{n+1}/V_{4}\) _or_ \(d_{4}/\mathfrak{S}_{4}\)_, respectively._
Proof.: This amounts to gathering up our results. For the classical groups, the duality statements follow from §7. For the exceptional groups, it is by inspection of the graphs in §11. When \(G=\operatorname{Aut}(\mathfrak{g})\), we make use of §8.
_Remark 10.2_.: We noticed in case (2) that the simple surface singularity \(B_{n}\) is a two-fold cover of the simple surface singularity \(C_{n+1}\). In case (3) we observe that \(b_{n}^{sp}\) is a two-fold cover of \(d_{n+1}/V_{4}\) (as in §4.3) and that \(g_{2}^{sp}\) is a four-fold cover of \(d_{4}/\mathfrak{S}_{4}\). Accessing these covers would allow cases (2) and (3) to behave like the better-behaved duality in case (1).
As we were finishing this paper, the preprint [1] appeared. We expect there is some overlap with our results, but we have not had a chance yet to understand the connection.
## 11. Graphs
We include here the Hasse diagrams of the minimal special degenerations for the exceptional Lie algebras, except for the straightforward \(G_{2}\), as well as several examples in the classical types. We write \((Y)\) when we only know that the normalization of the Slodowy slice singularity is isomorphic to \(Y\). See [10, §6.2] for a discussion of the component group action and branching in the exceptional types for the more complicated cases in the graphs. We write \(C_{n+1}^{*}\) to indicate when the kernel of the map to Lusztig's canonical quotient acts by outer action on \(\mathcal{S}_{\mathcal{O},e}\) as in §6. We use the notation \(A_{1}\) in the exceptional groups, but in the classical groups we use \(B_{1}\) or \(C_{1}\) to be consistent with Table 1; these are all the same singularity.
[Hasse diagrams for the exceptional types appear here as figures; the extracted text of the diagrams is unrecoverable.]
\(D_{5}\) Minimal Special Degenerations \(C_{5}\) Alternative Minimal Special Degenerations
Figure 5. \(F_{4}\) and inner \(E_{6}\)
| This paper treats the classification of the singularities of Slodowy slices between adjacent special nilpotent orbits in simple Lie algebras. Most of the irreducible components of these singularities are (up to normalization) either simple surface singularities or closures of minimal special nilpotent orbits in simple Lie algebras of smaller rank. In the remaining cases, they are unions of closures of minimal orbits of types $A_2$ and $D_n$. Moreover, there is an action on the slices of the smaller orbits. Under this action, Lusztig-Spaltenstein duality exchanges, for most singularities, a simple surface singularity with the minimal special orbit of Langlands dual type (or a cover of it carrying the action). This builds on observations of Kraft and Procesi. Furthermore, a conjecture of Lusztig can be resolved; it concerns special nilpotent |
2301.00260 | Confidence Sets under Generalized Self-Concordance | This paper revisits a fundamental problem in statistical inference from a
non-asymptotic theoretical viewpoint -- the construction of
confidence sets. We establish a finite-sample bound for the estimator,
characterizing its asymptotic behavior in a non-asymptotic fashion. An
important feature of our bound is that its dimension dependency is captured by
the effective dimension -- the trace of the limiting sandwich
covariance -- which can be much smaller than the parameter
dimension in some regimes. We then illustrate how the bound can be used to
obtain a confidence set whose shape is adapted to the optimization landscape
induced by the loss function. Unlike previous works that rely heavily on the
strong convexity of the loss function, we only assume the Hessian is lower
bounded at the optimum and allow it to gradually become degenerate. This property
is formalized by the notion of generalized self-concordance which originated
from convex optimization. Moreover, we demonstrate how the effective dimension
can be estimated from data and characterize its estimation accuracy. We apply
our results to maximum likelihood estimation with generalized linear models,
score matching with exponential families, and hypothesis testing with Rao's
score test. | Lang Liu, Zaid Harchaoui | 2022-12-31T17:45:11 | http://arxiv.org/abs/2301.00260v1 | # Confidence Sets under Generalized Self-Concordance
###### Abstract
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint--the construction of confidence sets. We establish a finite-sample bound for the estimator, characterizing its asymptotic behavior in a non-asymptotic fashion. An important feature of our bound is that its dimension dependency is captured by the effective dimension--the trace of the limiting sandwich covariance--which can be much smaller than the parameter dimension in some regimes. We then illustrate how the bound can be used to obtain a confidence set whose shape is adapted to the optimization landscape induced by the loss function. Unlike previous works that rely heavily on the strong convexity of the loss function, we only assume the Hessian is lower bounded at the optimum and allow it to gradually become degenerate. This property is formalized by the notion of generalized self-concordance which originated from convex optimization. Moreover, we demonstrate how the effective dimension can be estimated from data and characterize its estimation accuracy. We apply our results to maximum likelihood estimation with generalized linear models, score matching with exponential families, and hypothesis testing with Rao's score test.
## 1 Introduction
The problem of statistical inference on learned parameters is regaining the importance it deserves as machine learning and data science are increasingly impacting humanity and society through a growing range of successful applications from transportation to healthcare (see, e.g., [15, 14]). The classical asymptotic theory of M-estimation is well established in a rather general setting under the assumption that the parametric model is well-specified, i.e., the underlying data distribution belongs to the parametric family. Two types of confidence sets can be constructed from this theory: (a) the Wald-type one which relies on the weighted difference between the estimator and the target parameter, and (b) the likelihood-ratio-type one based on the log-likelihood ratio between the estimator and the target parameter. The main tool is the local asymptotic normality (LAN) condition introduced by Le Cam [26]. We mention here, among many others, the monographs [22, 41, 40].
In many real problems, the parametric model is usually an approximation to the data distribution, so it is too restrictive to assume that the model is well-specified. To relax this restriction, model misspecification has been considered in the asymptotic regime; see, e.g., [19, 45, 13]. Another limitation of classical asymptotic theory is its asymptotic regime where \(n\to\infty\) and the parameter dimension \(d\) is fixed. This is inapplicable in the modern context where the data are of a rather high dimension involving a huge number of parameters.
The non-asymptotic viewpoint has been fruitful to address high dimensional problems--the results are developed for all fixed \(n\) so that it also captures the asymptotic regime where \(d\) grows with \(n\). Early works in this line of research focus on specific models such as Gaussian models [7, 8, 25, 5], ridge regression [18], logistic regression [3], and robust M-estimation [52, 12]; see Bach [4] for a survey. Spokoiny [36] addressed the finite-sample regime in full generality in a spirit similar to the classical LAN theory. The approach of [36] relies on heavy empirical process machinery and requires strong global assumptions on the deviation of the empirical risk process. More recently, Ostrovskii and Bach [32] focused on risk bounds, specializing their discussion to linear models with (pseudo) self-concordant losses and obtained a more transparent analysis under neater assumptions.
A critical tool arising from this line of research is the so-called _Dikin ellipsoid_, a geometric object identified in the theory of convex optimization [31, 6, 9, 39, 11, 10]. The Dikin ellipsoid corresponds to the distance measured by the Euclidean distance weighted by the Hessian matrix at the optimum. This weighted Euclidean distance is adapted to the geometry near the target parameter and thus leads to sharper bounds that do not depend on the minimum eigenvalue of
the Hessian. This important property has been used fruitfully in various problems of learning theory and mathematical statistics [51, 47, 16].
**Outline.** We review in Sec. 2 the empirical risk minimization framework and the two types of confidence sets from classical asymptotic theory. We establish finite-sample bounds to characterize these two confidence sets, whose sizes are controlled by the _effective dimension_, in a non-asymptotic fashion in Sec. 3. Our results hold for a general class of models characterized by the notion of _generalized self-concordance_. Along the way, we show how the effective dimension can be estimated from data and provide its estimation accuracy. This is a novel result and is of independent interest. We apply our results to compare Rao's score test, the likelihood ratio test, and the Wald test for goodness-of-fit testing in Sec. 4. Finally, in Sec. 5, we illustrate the interest of our results on synthetic data.
## 2 Problem Formulation
We briefly recall the framework of statistical inference via empirical risk minimization. Let \((\mathbb{Z},\mathcal{Z})\) be a measurable space. Let \(Z\in\mathbb{Z}\) be a random element following some unknown distribution \(\mathbb{P}\). Consider a parametric family of distributions \(\mathcal{P}_{\Theta}:=\{P_{\theta}:\theta\in\Theta\subset\mathbb{R}^{d}\}\) which may or may not contain \(\mathbb{P}\). We are interested in finding the parameter \(\theta_{\star}\) so that the model \(P_{\theta_{\star}}\) best approximates the underlying distribution \(\mathbb{P}\). For this purpose, we choose a _loss function_\(\ell\) and minimize the _population risk_\(L(\theta):=\mathbb{E}_{Z\sim\mathbb{P}}[\ell(\theta;Z)]\). Throughout this paper, we assume that
\[\theta_{\star}=\operatorname*{arg\,min}_{\theta\in\Theta}L(\theta)\]
uniquely exists and satisfies \(\theta_{\star}\in\text{int}(\Theta)\), \(\nabla_{\theta}L(\theta_{\star})=0\), and \(\nabla_{\theta}^{2}L(\theta_{\star})\succ 0\).
**Consistent loss function.** We focus on loss functions that are consistent in the following sense.
**Assumption 0**.: When the model is _well-specified_, i.e., there exists \(\theta_{0}\in\Theta\) such that \(\mathbb{P}=P_{\theta_{0}}\), it holds that \(\theta_{0}=\theta_{\star}\). We say such a loss function is _consistent_.
In the statistics literature, such loss functions are known as proper scoring rules [13]. We give below two popular choices of consistent loss functions.
**Example 1** (Maximum likelihood estimation).: A widely used loss function in statistical machine learning is the negative log-likelihood \(\ell(\theta;z):=-\log p_{\theta}(z)\) where \(p_{\theta}\) is the probability mass/density function for the discrete/continuous case. When \(\mathbb{P}=P_{\theta_{0}}\) for some \(\theta_{0}\in\Theta\), we have \(L(\theta)=\mathbb{E}[-\log p_{\theta}(Z)]=\mathrm{KL}(p_{\theta_{0}}\|p_{ \theta})-\mathbb{E}[\log p_{\theta_{0}}(Z)]\) where \(\mathrm{KL}\) is the Kullback-Leibler divergence. As a result, \(\theta_{0}\in\operatorname*{arg\,min}_{\theta\in\Theta}\mathrm{KL}(p_{\theta _{0}}\|p_{\theta})=\operatorname*{arg\,min}_{\theta\in\Theta}L(\theta)\). Moreover, if there is no \(\theta\) such that \(p_{\theta}\stackrel{{\text{a.s.}}}{{=}}p_{\theta_{0}}\), then \(\theta_{0}\) is the unique minimizer of \(L\). We give in Tab. 1 a few examples from the class of generalized linear models (GLMs) proposed by Nelder and Wedderburn [30].
**Example 2** (Score matching estimation).: Another important example appears in _score matching_[21]. Let \(\mathbb{Z}=\mathbb{R}^{p}\). Assume that \(\mathbb{P}\) and \(P_{\theta}\) have densities \(p\) and \(p_{\theta}\) w.r.t. the Lebesgue measure, respectively. Let \(p_{\theta}(z)=q_{\theta}(z)/\Lambda(\theta)\) where \(\Lambda(\theta)\) is an unknown normalizing constant. We can choose the loss
\[\ell(\theta;z):=\Delta_{z}\log q_{\theta}(z)+\frac{1}{2}\left\|\nabla_{z}\log q _{\theta}(z)\right\|^{2}+\text{const.}\]
Table 1: Loss functions of generalized linear models.

| | **Data** | **Model** | **Loss** |
| --- | --- | --- | --- |
| Linear | \((X,Y)\) | \(Y\mid X\sim\mathcal{N}(\theta^{\top}X,\sigma^{2})\) | \(\frac{1}{2}(y-\theta^{\top}x)^{2}\) |
| Logistic | \((X,Y)\) | \(Y\mid X\sim\text{Bernoulli}\big((1+e^{-\theta^{\top}X})^{-1}\big)\) | \(\log\big(1+\exp(-y\,\theta^{\top}x)\big)\) |
| Poisson | \((X,Y)\) | \(Y\mid X\sim\text{Poisson}(\exp(\theta^{\top}X))\) | \(-y\,\theta^{\top}x+\exp(\theta^{\top}x)\) |
Here \(\Delta_{z}:=\sum_{k=1}^{p}\partial^{2}/\partial z_{k}^{2}\) is the Laplace operator. Since [21, Thm. 1]
\[L(\theta)=\frac{1}{2}\operatorname{\mathbb{E}}\left[\left\|\nabla_{z}\log q_{\theta}(z)-\nabla_{z}\log p(z)\right\|^{2}\right]+\text{const},\]
we have, when \(p=p_{\theta_{0}}\), that \(\theta_{0}\in\operatorname*{arg\,min}_{\theta\in\Theta}L(\theta)\). In fact, when \(q_{\theta}>0\) and there is no \(\theta\) such that \(p_{\theta}\stackrel{{\text{a.s.}}}{{=}}p_{\theta_{0}}\), the true parameter \(\theta_{0}\) is the unique minimizer of \(L\)[21, Thm. 2].
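To make the score matching objective concrete, here is a minimal numerical sketch (not from the paper): for the 1-D Gaussian location family \(\log q_{\theta}(z)=\theta z-z^{2}/2\), the loss above reduces to \(\ell(\theta;z)=-1+\tfrac{1}{2}(\theta-z)^{2}\), whose empirical minimizer is the sample mean. The data-generating parameters and the helper name `score_matching_loss` are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=1.0, size=50_000)   # data from N(theta0=2, 1)

def score_matching_loss(theta, z):
    # For log q_theta(z) = theta*z - z**2/2: grad_z log q = theta - z and
    # Delta_z log q = -1, so ell(theta; z) = -1 + 0.5*(theta - z)**2.
    return -1.0 + 0.5 * (theta - z) ** 2

grid = np.linspace(0.0, 4.0, 401)
risk = np.array([score_matching_loss(t, z).mean() for t in grid])
print("argmin over grid:", grid[risk.argmin()])    # ~= 2.0
print("sample mean     :", z.mean())               # the closed-form minimizer
```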
**Empirical risk minimization.** Assume now that we have an i.i.d. sample \(\{Z_{i}\}_{i=1}^{n}\) from \(\mathbb{P}\). To learn the parameter \(\theta_{\star}\) from the data, we minimize the empirical risk to obtain the _empirical risk minimizer_
\[\theta_{n}\in\operatorname*{arg\,min}_{\theta\in\Theta}\left[L_{n}(\theta):= \frac{1}{n}\sum_{i=1}^{n}\ell(\theta;Z_{i})\right].\]
This applies to both maximum likelihood estimation and score matching estimation. In Sec. 3, we will prove that, with high probability, the estimator \(\theta_{n}\) exists and is unique under a generalized self-concordance assumption.
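As a minimal sketch of empirical risk minimization (our own illustration, with synthetic data and the logistic loss from Tab. 1), one can compute \(\theta_{n}\) with an off-the-shelf optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d = 2_000, 5
theta0 = np.linspace(0.0, 1.0, d)
X = rng.normal(size=(n, d))
Y = np.where(rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ theta0)), 1.0, -1.0)

def empirical_risk(theta):
    # L_n(theta) = (1/n) sum_i log(1 + exp(-y_i * theta^T x_i))
    return np.mean(np.logaddexp(0.0, -Y * (X @ theta)))

theta_n = minimize(empirical_risk, np.zeros(d), method="BFGS").x
print("theta_n:", np.round(theta_n, 3))
print("theta0 :", np.round(theta0, 3))
```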
**Confidence set.** In statistical inference, it is of great interest to quantify the uncertainty in the estimator \(\theta_{n}\). In classical asymptotic theory, this is achieved by constructing an asymptotic confidence set. We review here two commonly used ones, assuming the model is well-specified. We start with the _Wald confidence set_. It holds that \(n(\theta_{n}-\theta_{\star})^{\top}H_{n}(\theta_{n})(\theta_{n}-\theta_{\star})\rightarrow_{d}\chi_{d}^{2}\), where \(H_{n}(\theta):=\nabla^{2}L_{n}(\theta)\). Hence, one may consider a confidence set \(\{\theta:n(\theta_{n}-\theta)^{\top}H_{n}(\theta_{n})(\theta_{n}-\theta)\leq q_{\chi_{d}^{2}}(\delta)\}\) where \(q_{\chi_{d}^{2}}(\delta)\) is the upper \(\delta\)-quantile of \(\chi_{d}^{2}\). The other is the _likelihood-ratio (LR) confidence set_ constructed from the limit \(2n[L_{n}(\theta_{\star})-L_{n}(\theta_{n})]\rightarrow_{d}\chi_{d}^{2}\), which is known as Wilks' theorem [46]. These confidence sets enjoy two merits: 1) their shape is an ellipsoid (known as the _Dikin ellipsoid_) adapted to the optimization landscape induced by the population risk; 2) they are asymptotically valid, i.e., their coverage converges to exactly \(1-\delta\) as \(n\rightarrow\infty\). However, due to their asymptotic nature, it is unclear how large \(n\) should be in order for them to be valid.
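The following sketch (ours, assuming a well-specified linear model with unit noise variance so that \(G_{\star}=H_{\star}\)) checks membership of \(\theta_{\star}\) in the asymptotic Wald confidence set via the \(\chi_{d}^{2}\) quantile:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, d, delta = 5_000, 3, 0.05
theta_star = np.array([0.5, -1.0, 2.0])
X = rng.normal(size=(n, d))
Y = X @ theta_star + rng.normal(size=n)        # well-specified, noise variance 1

theta_n = np.linalg.solve(X.T @ X, X.T @ Y)    # least squares estimator
H_n = X.T @ X / n                              # empirical Hessian of the squared loss

diff = theta_n - theta_star
T = n * diff @ H_n @ diff                      # n (theta_n - theta)^T H_n (theta_n - theta)
print("Wald statistic:", T)
print("covered at level 1-delta:", T <= chi2.ppf(1 - delta, df=d))
```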
Non-asymptotic theory usually focuses on developing finite-sample bounds for the _excess risk_, i.e., \(\mathbb{P}(L(\theta_{n})-L(\theta_{\star})\leq C_{n}(\delta))\geq 1-\delta\). To obtain a confidence set, one may assume that the population risk is twice continuously differentiable and \(\lambda\)-strongly convex. Consequently, we have \(\lambda\left\|\theta_{n}-\theta_{\star}\right\|_{2}^{2}/2\leq L(\theta_{n})-L (\theta_{\star})\) and thus we can consider the confidence set \(\mathcal{C}_{\text{finite},n}(\delta):=\{\theta:\left\|\theta_{n}-\theta \right\|_{2}^{2}\leq 2C_{n}(\delta)/\lambda\}\). Since it originates from a finite-sample bound, it is valid for fixed \(n\), i.e., \(\mathbb{P}(\theta_{\star}\in\mathcal{C}_{\text{finite},n}(\delta))\geq 1-\delta\) for all \(n\); however, it is usually conservative, meaning that the coverage is strictly larger than \(1-\delta\). Another drawback is that its shape is a Euclidean ball which remains the same no matter which loss function is chosen. We illustrate this phenomenon in Fig. 1. Note that a similar observation has also been made in the bandit literature [16].
We are interested in developing finite-sample confidence sets. However, instead of using excess risk bounds and strong convexity, we construct in Sec. 3 the Wald and LR confidence sets in a non-asymptotic fashion, under a generalized self-concordance condition. These confidence sets have the same shape as their asymptotic counterparts while maintaining validity for fixed \(n\). These new results are achieved by characterizing the critical sample size needed to enter the asymptotic regime.
Figure 1: Dikin ellipsoid and Euclidean ball.
## 3 Main Results
### Preliminaries
**Notation.** We denote by \(S(\theta;z):=\nabla_{\theta}\ell(\theta;z)\) the gradient of the loss at \(z\) and \(H(\theta;z):=\nabla_{\theta}^{2}\ell(\theta;z)\) the Hessian at \(z\). Their population versions are \(S(\theta):=\mathbb{E}[S(\theta;Z)]\) and \(H(\theta):=\mathbb{E}[H(\theta;Z)]\), respectively. We assume standard regularity assumptions so that \(S(\theta)=\nabla_{\theta}L(\theta)\) and \(H(\theta)=\nabla_{\theta}^{2}L(\theta)\). We write \(H_{\star}:=H(\theta_{\star})\). Note that the two optimality conditions then read \(S(\theta_{\star})=0\) and \(H_{\star}\succ 0\). It follows that \(\lambda_{\star}:=\lambda_{\min}(H_{\star})>0\) and \(\lambda^{\star}:=\lambda_{\max}(H_{\star})>0\). Furthermore, we let \(G(\theta;z):=S(\theta;z)S(\theta;z)^{\top}\) and \(G(\theta):=\mathbb{E}[S(\theta;Z)S(\theta;Z)^{\top}]\) be the autocorrelation matrices of the gradient. We write \(G_{\star}:=G(\theta_{\star})\). We define their empirical quantities as \(L_{n}(\theta):=n^{-1}\sum_{i=1}^{n}\ell(\theta;Z_{i})\), \(S_{n}(\theta):=n^{-1}\sum_{i=1}^{n}S(\theta;Z_{i})\), \(H_{n}(\theta):=n^{-1}\sum_{i=1}^{n}H(\theta;Z_{i})\), and \(G_{n}(\theta):=n^{-1}\sum_{i=1}^{n}G(\theta;Z_{i})\). The first step of our analysis is to localize the estimator to a _Dikin ellipsoid_ at \(\theta_{\star}\) of radius \(r\), i.e.,
\[\Theta_{r}(\theta_{\star}):=\left\{\theta\in\Theta:\left\|\theta-\theta_{ \star}\right\|_{H_{\star}}<r\right\},\]
where, given a positive semi-definite matrix \(J\), we let \(\left\|x\right\|_{J}:=\left\|J^{1/2}x\right\|_{2}=\sqrt{x^{\top}Jx}\).
**Effective dimension.** A quantity that plays a central role in our analysis is the _effective dimension_.
**Definition 1**.: We define the effective dimension to be
\[d_{\star}:=\mathbf{Tr}(H_{\star}^{-1/2}G_{\star}H_{\star}^{-1/2}). \tag{1}\]
The effective dimension appears recently in non-asymptotic analyses of (penalized) M-estimation; see, e.g., [37, 32]. It better characterizes the complexity of the parameter space \(\Theta\) than the parameter dimension \(d\). When the model is well-specified, it can be shown that \(H_{\star}=G_{\star}\) and thus \(d_{\star}=d\). When the model is misspecified, it can be much smaller than \(d\) depending on the spectra of \(H_{\star}\) and \(G_{\star}\). Moreover, it is closely connected to classical asymptotic theory of M-estimation under model misspecification--it is the trace of the limiting covariance matrix of \(\sqrt{n}H_{n}(\theta_{n})^{1/2}(\theta_{n}-\theta_{\star})\); see Sec. 3.5 for a thorough discussion.
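Since \(\mathbf{Tr}(H_{\star}^{-1/2}G_{\star}H_{\star}^{-1/2})=\mathbf{Tr}(H_{\star}^{-1}G_{\star})\), the effective dimension is cheap to evaluate once \(H_{\star}\) and \(G_{\star}\) are known. A small sketch with toy matrices of our own choosing:

```python
import numpy as np

def effective_dimension(H_star, G_star):
    # d_star = Tr(H^{-1/2} G H^{-1/2}) = Tr(H^{-1} G) for H > 0
    return np.trace(np.linalg.solve(H_star, G_star))

d = 20
H = np.diag(1.0 / (1.0 + np.arange(d)))              # polynomially decaying spectrum
G_well = H                                           # well-specified: G = H
G_mis = np.diag(np.exp(-np.arange(d, dtype=float)))  # fast-decaying G

print(effective_dimension(H, G_well))                # = d = 20
print(effective_dimension(H, G_mis))                 # much smaller than d
```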
**Generalized self-concordance.** We will use the notion of _self-concordance_ from convex optimization in our analysis. Self-concordance originated from the analysis of the interior-point and Newton-type convex optimization methods [31]. It was later modified by Bach [3] into what we call _pseudo self-concordance_, to derive finite-sample bounds for the generalization properties of logistic regression. Recently, Sun and Tran-Dinh [38] proposed the _generalized self-concordance_ which unifies these two notions. For a function \(f:\mathbb{R}^{d}\to\mathbb{R}\), we define \(D_{x}f(x)[u]:=\frac{\mathrm{d}}{\mathrm{d}t}f(x+tu)|_{t=0}\), \(D_{x}^{2}f(x)[u,v]:=D_{x}(D_{x}f(x)[u])[v]\) for \(x,u,v\in\mathbb{R}^{d}\), and \(D_{x}^{3}f(x)[u,v,w]\) similarly.
**Definition 2** (Generalized self-concordance).: Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be open and \(f:\mathcal{X}\to\mathbb{R}\) be a closed convex function. For \(R>0\) and \(\nu>0\), we say \(f\) is \((R,\nu)\)-generalized self-concordant on \(\mathcal{X}\) if
\[\left|D_{x}^{3}f(x)[u,u,v]\right|\leq R\left\|u\right\|_{\nabla^{2}f(x)}^{2} \left\|v\right\|_{\nabla^{2}f(x)}^{\nu-2}\left\|v\right\|_{2}^{3-\nu}\]
Figure 2: Strong convexity vs. self-concordance. Black curve: population risk; colored dot: reference point; colored dashed curve: quadratic approximation at the corresponding reference point.
with the convention \(0/0=0\) for the cases \(\nu<2\) and \(\nu>3\). Recall that \(\|u\|_{\nabla^{2}f(x)}^{2}:=u^{\top}\nabla^{2}f(x)u\).
**Remark.** When \(\nu=2\) and \(\nu=3\), this definition recovers the pseudo self-concordance and the standard self-concordance, respectively.
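One can check Def. 2 numerically. The sketch below (ours) verifies, by finite differences, the pseudo self-concordance case \(\nu=2\) with \(R=\|x\|_{2}\) for the logistic loss at a single sample; the step size \(h\) and the helper `third_dd` are our own choices.

```python
import numpy as np

rng = np.random.default_rng(3)
d, h = 4, 1e-3
x, y = rng.normal(size=d), 1.0
f = lambda th: np.logaddexp(0.0, -y * (x @ th))    # logistic loss at one sample

def third_dd(th, u, v):
    # D^3 f(th)[u,u,v] via a central difference (in direction v) of the
    # second directional derivative g(p) ~ (f(p+hu) - 2f(p) + f(p-hu)) / h^2
    g = lambda p: (f(p + h * u) - 2 * f(p) + f(p - h * u)) / h**2
    return (g(th + h * v) - g(th - h * v)) / (2 * h)

th, u, v = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
m = y * (x @ th)
u_hess_u = (np.exp(-m) / (1 + np.exp(-m)) ** 2) * (x @ u) ** 2   # u^T grad^2 f u
lhs = abs(third_dd(th, u, v))
rhs = np.linalg.norm(x) * u_hess_u * np.linalg.norm(v)           # R ||u||_H^2 ||v||_2
print(f"{lhs:.6f} <= {rhs:.6f}:", lhs <= rhs)
```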
In contrast to strong convexity, which imposes a crude global lower bound on the Hessian, generalized self-concordance specifies the rate at which the Hessian can vary, leading to finer control of the Hessian. Concretely, it allows us to bound the Hessian in a neighborhood of \(\theta_{\star}\) with the Hessian at \(\theta_{\star}\), which is key to controlling \(H_{n}(\theta_{n})\). We illustrate the difference between them in Fig. 2. As we will see in Sec. 3.3, thanks to the generalized self-concordance, we are able to remove the direct dependency on \(\lambda_{\star}\) in our confidence set. To the best of our knowledge, this is the first work extending classical results for M-estimation to generalized self-concordant losses.
**Concentration of Hessian.** One key result towards deriving our bounds is the concentration of the empirical Hessian, i.e., \((1-c_{n}(\delta))H(\theta)\preceq H_{n}(\theta)\preceq(1+c_{n}(\delta))H(\theta)\) with probability at least \(1-\delta\). When the loss function is of the form \(\ell(\theta;z):=\ell(y,\theta^{\top}x)\) (e.g., GLMs), the empirical Hessian reads \(H_{n}(\theta)=n^{-1}\sum_{i=1}^{n}\ell^{\prime\prime}(Y_{i},\theta^{\top}X_{i})X_{i}X_{i}^{\top}\) where \(\ell^{\prime\prime}(y,\bar{y}):=\mathrm{d}^{2}\ell(y,\bar{y})/\mathrm{d}\bar{y}^{2}\), which is of the form of a sample covariance. Assuming \(X\) to be sub-Gaussian, Ostrovskii and Bach [32] obtained a concentration bound for \(H_{n}(\theta_{\star})\) with \(c_{n}(\delta)=O(\sqrt{(d+\log{(1/\delta)})/n})\) via the concentration bound for sample covariance [42, Thm. 5.39]. For general loss functions, such a special structure cannot be exploited. We overcome this challenge via the matrix Bernstein inequality [44, Thm. 6.17], obtaining a sharper concentration bound with \(c_{n}(\delta):=O(\sqrt{\log{(d/\delta)}/n})\). Note that the matrix Bernstein inequality has been used to control the empirical Hessian of kernel ridge regression with random features [34, Prop. 6] and later extended to regularized empirical risk minimization [28, Lem. 30]. However, their results require the regularization parameter to be strictly positive (otherwise the bounds are vacuous) and the sample Hessian to be bounded. On the contrary, our technique allows for zero regularization and unbounded Hessian as long as the Hessian satisfies a matrix Bernstein condition. Moreover, combining generalized self-concordance with matrix Bernstein, we are able to show the concentration of \(H_{n}(\theta_{n})\) around \(H_{\star}\) for general losses, which is itself a novel result.
### Assumptions
Our key assumption is the generalized self-concordance of the loss function.
**Assumption 1** (Generalized self-concordance).: For any \(z\in\mathcal{Z}\), the scoring rule \(\ell(\cdot;z)\) is \((R,\nu)\)-generalized self-concordant for some \(R>0\) and \(\nu\geq 2\). Moreover, \(L(\cdot)\) is also \((R,\nu)\)-generalized self-concordant.
**Remark.** If \(\ell(\cdot;z)\) is generalized self-concordant with \(\nu=2\), so is \(L(\cdot)\).
Many loss functions in statistical machine learning satisfy this assumption. We give in Sec. 4.1 examples from generalized linear models and score matching.
In order to control the empirical gradient \(S_{n}(\theta)\), we assume that the normalized gradient at \(\theta_{\star}\) is sub-Gaussian.
**Assumption 2** (Sub-Gaussian gradient).: There exists a constant \(K_{1}>0\) such that the normalized gradient at \(\theta_{\star}\) is sub-Gaussian with parameter \(K_{1}\), i.e., \(\|G_{\star}^{-1/2}S(\theta_{\star};Z)\|_{\psi_{2}}\leq K_{1}\). Here \(\left\|\cdot\right\|_{\psi_{2}}\) is the sub-Gaussian norm whose definition is recalled in Appx. C.
When the loss function is of the form \(\ell(\theta;z)=\ell(y,\theta^{\top}x)\), we have \(S(\theta;Z)=\ell^{\prime}(Y,\theta^{\top}X)X\). As a result, Asm. 2 holds true if (i) \(\ell^{\prime}(Y,\theta_{\star}^{\top}X)\) is sub-Gaussian and \(X\) is bounded or (ii) \(\ell^{\prime}(Y,\theta_{\star}^{\top}X)\) is bounded and \(X\) is sub-Gaussian. For least squares with \(\ell(y,\theta^{\top}x)=\frac{1}{2}(y-\theta^{\top}x)^{2}\), the derivative \(\ell^{\prime}(Y,\theta_{\star}^{\top}X)=\theta_{\star}^{\top}X-Y\) is the negative residual. Asm. 2 is guaranteed if the residual is sub-Gaussian and \(X\) is bounded. For logistic regression with \(\ell(y,\theta^{\top}x)=-\log\sigma(y\cdot\theta^{\top}x)\) where \(\sigma(u)=(1+e^{-u})^{-1}\), the derivative \(\ell^{\prime}(Y,\theta_{\star}^{\top}X)=[\sigma(Y\cdot\theta_{\star}^{\top}X)- 1]Y\in[-1,1]\) is bounded. Thus, Asm. 2 is guaranteed if \(X\) is sub-Gaussian.
In order to control the empirical Hessian, we assume that the Hessian of the loss function satisfies the matrix Bernstein condition in a neighborhood of \(\theta_{\star}\).
**Assumption 3** (Matrix Bernstein of Hessian).: There exist constants \(K_{2},r>0\) such that, for any \(\theta\in\Theta_{r}(\theta_{\star})\), the standardized Hessian
\[H(\theta)^{-1/2}H(\theta;Z)H(\theta)^{-1/2}-I_{d}\]
satisfies a Bernstein condition (defined in Appx. C) with parameter \(K_{2}\). Moreover,
\[\sigma_{H}^{2}:=\sup_{\theta\in\Theta_{r}(\theta_{\star})}\left\|\mathbb{V}\mathrm{ar}\left(H(\theta)^{-\frac{1}{2}}H(\theta;Z)H(\theta)^{-\frac{1}{2}}\right)\right\|_{2}<\infty,\]
where \(\left\|\cdot\right\|_{2}\) is the spectral norm and \(\mathbb{V}\mathrm{ar}(J):=\mathbb{E}[JJ^{\top}]-\mathbb{E}[J]\,\mathbb{E}[J]^ {\top}\). By convention, we let \(\Theta_{0}(\theta_{\star})=\{\theta_{\star}\}\).
### Main Results
We now give simplified versions of our main theorems. We use \(C_{\nu}\) to represent a constant depending only on \(\nu\) that may change from line to line; and \(C_{K_{1},\nu}\) similarly. We use \(\lesssim\) and \(\gtrsim\) to hide constants depending only on \(K_{1},K_{2},\sigma_{H},\nu\). The precise versions can be found in Appx. A. Recall that \(\lambda_{\star}:=\lambda_{\min}(H_{\star})\) and \(\lambda^{\star}:=\lambda_{\max}(H_{\star})\).
**Theorem 1**.: _Let \(\nu\in[2,3)\). Under Asms. 1 to 3 with \(r=0\), it holds that, whenever_
\[n\gtrsim\log\left(2d/\delta\right)+\lambda_{\star}^{-1}\left[R^{2}d_{\star} \log\left(e/\delta\right)\right]^{1/(3-\nu)},\]
_the empirical risk minimizer \(\theta_{n}\) uniquely exists and satisfies, with probability at least \(1-\delta\),_
\[\left\|\theta_{n}-\theta_{\star}\right\|_{H_{\star}}^{2}\lesssim\log\left(e/ \delta\right)\frac{d_{\star}}{n}. \tag{2}\]
With a local matrix Bernstein condition, we can replace \(H_{\star}\) by \(H_{n}(\theta_{n})\) in (2) and obtain a finite-sample version of the Wald confidence set.
**Theorem 2**.: _Let \(\nu\in[2,3)\). Suppose the same assumptions in Thm. 1 hold true. Furthermore, suppose that Asm. 3 holds with \(r=C_{\nu}\lambda_{\star}^{(3-\nu)/2}/R\). Let \(\mathcal{C}_{\text{Wald},n}(\delta)\) be_
\[\left\{\theta\in\Theta:\left\|\theta-\theta_{n}\right\|_{H_{n}(\theta_{n})}^{ 2}\leq C_{K_{1},\nu}\frac{d_{\star}}{n}\log\frac{e}{\delta}\right\}. \tag{3}\]
_Then we have \(\mathbb{P}(\theta_{\star}\in\mathcal{C}_{\text{Wald},n}(\delta))\geq 1-\delta\) whenever_
\[n\gtrsim\log\frac{2d}{\delta}+d\log n+\lambda_{\star}^{-1}\left[R^{2}d_{\star} \log\frac{e}{\delta}\right]^{\frac{1}{3-\nu}}. \tag{4}\]
**Remark.** In the precise versions of Thms. 1 and 2, the term \(d_{\star}\log\left(e/\delta\right)\) in the bounds (2) and (3) should be replaced by \(d_{\star}+\log\left(e/\delta\right)\|G_{\star}^{1/2}H_{\star}^{-1}G_{\star}^{1/2}\|_{2}\), which almost matches the misspecified Cramér-Rao lower bound (see, e.g., [17, Thm. 1]) up to a constant factor.
Thm. 2 suggests that the tail probability of \(\left\|\theta_{n}-\theta_{\star}\right\|_{H_{n}(\theta_{n})}^{2}\) is governed by a \(\chi^{2}\) distribution with \(d_{\star}\) degrees of freedom, which coincides with the asymptotic result. In fact, according to Huber [19], under suitable regularity assumptions, it holds that \(\sqrt{n}H_{n}(\theta_{n})^{1/2}(\theta_{n}-\theta_{\star})\to_{d}W\sim\mathcal{ N}(0,H_{\star}^{-1/2}G_{\star}H_{\star}^{-1/2})\) which implies that
\[n(\theta_{n}-\theta_{\star})^{\top}H_{n}(\theta_{n})(\theta_{n}-\theta_{\star} )\to_{d}W^{\top}W.\]
This induces an asymptotic confidence set with a form similar to (3) and radius \(O(\mathbb{E}[W^{\top}W]/n)=O(d_{\star}/n)\). Our result characterizes the _critical sample size_ needed to enter the asymptotic regime.
From Thm. 2 we can also derive a finite-sample version of the LR confidence set.
**Corollary 3**.: _Let \(\nu\in[2,3)\). Suppose the same assumptions in Thm. 2 hold true. Let \(\mathcal{C}_{\text{LR},n}(\delta)\) be_
\[\left\{\theta\in\Theta:2[L_{n}(\theta)-L_{n}(\theta_{n})]\leq C_{K_{1},\nu} \frac{d_{\star}}{n}\log\frac{e}{\delta}\right\}. \tag{5}\]
_Then we have \(\mathbb{P}(\theta_{\star}\in\mathcal{C}_{\text{LR},n}(\delta))\geq 1-\delta\) whenever_
\[n\gtrsim\log\frac{2d}{\delta}+d\log n+\lambda_{\star}^{-1}\left[R^{2}d_{\star} \log\frac{e}{\delta}\right]^{\frac{1}{3-\nu}}.\]
We give the proof sketches of Thm. 1, Thm. 2, and Cor. 3 here and defer their full proofs to Appx. A. We discuss in Sec. 3.5 how our proof techniques and theoretical results complement and improve on previous works.
We start by showing the existence and uniqueness of \(\theta_{n}\). The next result shows that \(\theta_{n}\) exists and is unique whenever the quadratic form \(S_{n}(\theta_{\star})^{\top}H_{n}^{-1}(\theta_{\star})S_{n}(\theta_{\star})\) is small. Note that this quantity is also known as Rao's score statistic for goodness-of-fit testing. This result also localizes \(\theta_{n}\) to a neighborhood of the target parameter \(\theta_{\star}\).
**Proposition 4**.: _Under Asm. 1, if \(\left\|S_{n}(\theta_{\star})\right\|_{H_{n}^{-1}(\theta_{\star})}\leq C_{\nu} [\lambda_{\min}(H_{n}(\theta_{\star}))]^{(3-\nu)/2}/(Rn^{\nu/2-1})\), then the estimator \(\theta_{n}\) uniquely exists and satisfies_
\[\left\|\theta_{n}-\theta_{\star}\right\|_{H_{n}(\theta_{\star})}\leq 4\left\|S_{ n}(\theta_{\star})\right\|_{H_{n}^{-1}(\theta_{\star})}.\]
The main tool used in the proof of Prop. 4 is a strong convexity type result for generalized self-concordant functions recalled in Appx. C. In order to apply Prop. 4, we need to control \(\left\|S_{n}(\theta_{\star})\right\|_{H_{n}^{-1}(\theta_{\star})}\). This result is summarized in the following proposition.
**Proposition 5**.: _Under Asms. 2 and 3 with \(r=0\), it holds that, with probability at least \(1-\delta\),_
\[\left\|S_{n}(\theta_{\star})\right\|_{H_{n}^{-1}(\theta_{\star})}^{2}\lesssim \frac{d_{\star}}{n}\log\left(e/\delta\right)\]
_whenever \(n\gtrsim\log\left(2d/\delta\right)\)._
The proof of Prop. 5 consists of two steps: (a) lower bound \(H_{n}(\theta_{\star})\) by \(H_{\star}\) up to a constant using the Bernstein inequality and (b) upper bound \(\left\|S_{n}(\theta_{\star})\right\|_{H^{-1}(\theta_{\star})}\) using a concentration inequality for isotropic random vectors, where the tools are recalled in Appx. C. Combining them implies that \(\left\|S_{n}(\theta_{\star})\right\|_{H^{-1}(\theta_{\star})}\) can be arbitrarily small and thus satisfies the requirement in Prop. 4 for sufficiently large \(n\). This not only proves the existence and uniqueness of the empirical risk minimizer \(\theta_{n}\) but also provides an upper bound for \(\left\|\theta_{n}-\theta_{\star}\right\|_{H_{n}(\theta_{\star})}\) through \(\left\|S_{n}(\theta_{\star})\right\|_{H_{n}^{-1}(\theta_{\star})}\).
In order to prove Thm. 2, it remains to upper bound \(H_{n}(\theta_{n})\) by \(H_{\star}\) up to a constant factor. This can be achieved by the following result.
**Proposition 6**.: _Under Asms. 1 and 3 with \(r=C_{\nu}\lambda_{\star}^{(3-\nu)/2}/R\), it holds that, with probability at least \(1-\delta\),_
\[\frac{1}{2C_{\nu}}H_{\star}\preceq H_{n}(\theta)\preceq\frac{3}{2}C_{\nu}H_{ \star},\text{ for all }\theta\in\Theta_{r}(\theta_{\star}),\]
_whenever \(n\gtrsim\log\left(2d/\delta\right)+d(\nu/2-1)\log n\)._
Finally, Cor. 3 follows from Thm. 2 and the Taylor expansion: there exists \(\bar{\theta}_{n}\in\text{Conv}\{\theta_{n},\theta_{\star}\}\) such that
\[2[L_{n}(\theta_{\star})-L_{n}(\theta_{n})]=\left\|\theta_{n}-\theta_{\star}\right\|_{H_{n}(\bar{\theta}_{n})}^{2},\]
where we have used \(\nabla L_{n}(\theta_{n})=0\).
### Approximating the effective dimension
One downside of Thm. 2 and Cor. 3 is that \(d_{\star}\) depends on the unknown data distribution. Alternatively, we use the following empirical counterpart
\[d_{n}:=\mathbf{Tr}\left(H_{n}(\theta_{n})^{-1/2}G_{n}(\theta_{n})H_{n}(\theta_ {n})^{-1/2}\right).\]
The next result implies that we do not lose much if we replace \(d_{\star}\) by \(d_{n}\). This result is novel and of independent interest since one also needs to estimate \(d_{\star}\) in order to construct asymptotic confidence sets under model misspecification.
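For intuition, a minimal plug-in computation of \(d_{n}\) for logistic regression (our own sketch with synthetic, well-specified data, so \(d_{n}\) should be close to \(d_{\star}=d\)):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, d = 10_000, 5
theta0 = np.ones(d) / np.sqrt(d)
X = rng.normal(size=(n, d))
Y = np.where(rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ theta0))), 1.0, -1.0)

risk = lambda th: np.mean(np.logaddexp(0.0, -Y * (X @ th)))
theta_n = minimize(risk, np.zeros(d), method="BFGS").x

m = Y * (X @ theta_n)
grads = (-Y / (1.0 + np.exp(m)))[:, None] * X    # rows: S(theta_n; z_i)
w = np.exp(-m) / (1.0 + np.exp(-m)) ** 2         # per-sample Hessian weights
H_n = (X * w[:, None]).T @ X / n
G_n = grads.T @ grads / n
d_n = np.trace(np.linalg.solve(H_n, G_n))
print(f"d_n = {d_n:.2f}  (well-specified: d_star = d = {d})")
```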
**Assumption 2'.** There exist constants \(r,K_{1}>0\) such that, for any \(\theta\in\Theta_{r}(\theta_{\star})\), we have \(\left\|G(\theta)^{-1/2}S(\theta;Z)\right\|_{\psi_{2}}\leq K_{1}\).
**Assumption 4**.: There exists \(r>0\) such that \(M:=\mathbb{E}[M(Z)]<\infty\), where \(M(z)\) is defined as
\[\sup_{\theta_{1}\neq\theta_{2}\in\Theta_{r}(\theta_{\star})}\frac{\left\|G_{ \star}^{-1/2}[G(\theta_{1};z)-G(\theta_{2};z)]G_{\star}^{-1/2}\right\|_{2}}{ \left\|\theta_{1}-\theta_{2}\right\|_{H_{\star}}}.\]
**Remark**.: Asm. 4 is a Lipschitz-type condition for \(G(\theta;z)\). This assumption was previously used by [29, Assumption 3] to analyze non-convex risk landscapes.
**Proposition 7**.: _Let \(\nu\in[2,3)\). Under Asms. 1, 2', 3 and 4 with \(r=C_{\nu}\lambda_{\star}^{(3-\nu)/2}/R\), it holds that_
\[\frac{1}{C_{\nu}}d_{\star}\leq d_{n}\leq C_{\nu}d_{\star},\]
_with probability at least \(1-\delta\), whenever \(n\) is large enough (see Appx. A.3 for the precise condition)._
**Remark**.: The precise version of Prop. 7 in Appx. A.3 implies that \(d_{n}\) is a consistent estimator of \(d_{\star}\).
With Prop. 7 at hand, we can obtain finite-sample confidence sets involving \(d_{n}\), which can be computed from data. We illustrate it with the Wald confidence set.
**Corollary 8**.: _Suppose the same assumptions in Prop. 7 hold true. Let \(\mathcal{C}^{\prime}_{\text{Wald},n}(\delta)\) be_
\[\left\{\theta\in\Theta:\|\theta-\theta_{n}\|_{H_{n}(\theta_{n})}^{2}\leq C_{K_{1},\nu}\log\left(e/\delta\right)\frac{d_{n}}{n}\right\}.\]
_Then we have \(\mathbb{P}(\theta_{\star}\in\mathcal{C}^{\prime}_{\text{Wald},n}(\delta))\geq 1-\delta\) whenever \(n\) satisfies the same condition as in Prop. 7._
### Discussion
**Fisher information and model misspecification.** When the model is well-specified, the autocorrelation matrix \(G(\theta)\) coincides with the well-known Fisher information \(\mathcal{I}(\theta):=\mathbb{E}_{Z\sim P_{\theta}}[S(\theta;Z)S(\theta;Z)^{ \top}]\) at \(\theta_{\star}\). The Fisher information plays a central role in mathematical statistics and, in particular, M-estimation; see [33, 23, 2, 35] for recent developments in this line of research. It quantifies the amount of information a random variable carries about the model parameter. Under a well-specified model, it also coincides with the Hessian matrix \(H(\theta)\) at the optimum which captures the local curvature of the population risk. When the model is misspecified, the Fisher information deviates from the Hessian matrix. In the asymptotic regime, this discrepancy is reflected in the limiting covariance of the weighted M-estimator which admits a sandwich form \(H_{\star}^{-1/2}G_{\star}H_{\star}^{-1/2}\); see, e.g., [19, Sec. 4].
**Effective dimension.** The counterpart of the sandwich covariance in the non-asymptotic regime is the effective dimension \(d_{\star}\); see, e.g., [37, 32]. Our bounds also enjoy the same merit--their dimension dependency enters via the effective dimension. When the model is well-specified, the effective dimension reduces to \(d\), recovering the same rate of convergence \(O(d/n)\) as in classical linear regression; see, e.g., [4, Prop. 3.5]. When the model is misspecified, the effective dimension provides a characterization of the problem complexity which is adapted to both the data distribution and the loss function via the matrix \(H_{\star}^{-1/2}G_{\star}H_{\star}^{-1/2}\). To gain a better understanding of the effective dimension \(d_{\star}\), we summarize it in Tab. 3 in Appx. A under different regimes of eigendecay, assuming that \(G_{\star}\) and \(H_{\star}\) share the same eigenvectors. It is clear that, when the spectrum of \(G_{\star}\) decays faster than the one of \(H_{\star}\), the dimension dependency can be better than \(O(d)\). In fact, it can be as good as \(O(1)\) when the spectrum of \(G_{\star}\) and \(H_{\star}\) decay exponentially and polynomially, respectively.
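A quick numeric illustration of these regimes (ours), assuming \(G_{\star}\) and \(H_{\star}\) share eigenvectors so that \(d_{\star}\) is a ratio of eigenvalue sums:

```python
import numpy as np

def d_star(h_eigs, g_eigs):
    # effective dimension when H_star and G_star share eigenvectors
    return float(np.sum(g_eigs / h_eigs))

d = 50
k = np.arange(1, d + 1, dtype=float)
poly, expo = 1.0 / k**2, np.exp(-k)
print("G poly, H poly:", d_star(poly, poly))   # = d (well-specified-like)
print("G expo, H poly:", d_star(poly, expo))   # O(1): G decays faster than H
print("G poly, H expo:", d_star(expo, poly))   # enormous: H decays faster than G
```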
**Comparison to classical asymptotic theory.** Classical asymptotic theory of M-estimation is usually based on two assumptions: (a) the model is well-specified and (b) the sample size \(n\) is much larger than the parameter dimension \(d\). These assumptions prevent it from being applicable to many real applications where the parametric family is only an approximation to the unknown data distribution and the data is of high dimension involving a large number of parameters. On the contrary, our results do not require a well-specified model, and the dimension dependency is replaced by the effective dimension \(d_{\star}\) which captures the complexity of the parameter space. Moreover, they are of
non-asymptotic nature--they hold true for any \(n\) as long as it exceeds some constant factor of \(d_{\star}\). This allows the number of parameters to potentially grow with the sample size.
**Comparison to recent non-asymptotic theory.** Recently, Spokoiny [36] achieved a breakthrough in finite-sample analysis of parametric M-estimation. Although fully general, their results require strong global assumptions on the deviation of the empirical risk process and are built upon advanced tools from empirical process theory. Restricting ourselves to generalized self-concordant losses, we are able to provide a more transparent analysis with neater assumptions only in a neighborhood of the optimum parameter \(\theta_{\star}\). Moreover, our results maintain some generality, covering several interesting examples in statistical machine learning as provided in Sec. 4.1.
Ostrovskii and Bach [32] also considered self-concordant losses for M-estimation. However, their results are limited to generalized linear models whose loss is (pseudo) self-concordant and admits the form \(\ell(\theta;Z):=\ell(Y,\theta^{\top}X)\). While sharing the same rate \(O(d_{\star}/n)\), our results are more general than theirs in two aspects. First, the loss need not be of the form \(\ell(Y,\theta^{\top}X)\), encompassing the score matching loss in Ex. 4 below. Second, we go beyond pseudo self-concordance via the notion of generalized self-concordance. Moreover, they focus on bounding the excess risk rather than providing confidence sets, and they do not study the estimation of \(d_{\star}\).
Pseudo self-concordant losses have been considered for semi-parametric models [27]. However, they focus on bounding excess risk and require a localization assumption on \(\theta_{n}\). Here we prove the localization result in Prop. 4 and we focus on confidence sets.
**Regularization.** Our results can also be applied to regularized empirical risk minimization by including the regularization term in the loss function. Let \(\theta_{n}^{\lambda}\) and \(\theta_{\star}^{\lambda}\) be the minimizers of the _regularized_ empirical and population risk, respectively. Let \(d_{\star}^{\lambda}:=\mathbf{Tr}\left((H_{\star}^{\lambda})^{-1/2}G_{\star}^{ \lambda}(H_{\star}^{\lambda})^{-1/2}\right)\) where \(H_{\star}^{\lambda}\) and \(G_{\star}^{\lambda}\) are the regularized Hessian and the autocorrelation matrix of the regularized gradient at \(\theta_{\star}^{\lambda}\), respectively. Then our results characterize the concentration of \(\theta_{n}^{\lambda}\) around \(\theta_{\star}^{\lambda}\):
\[\left\|\theta_{n}^{\lambda}-\theta_{\star}^{\lambda}\right\|_{H_{ \star}^{\lambda}}^{2}\leq O(d_{\star}^{\lambda}/n).\]
This result coincides with Spokoiny [37, Thm. 2.1]. If the goal is to estimate the unregularized population risk minimizer \(\theta_{\star}\), then we need to pay an additional error \(\left\|\theta_{\star}^{\lambda}-\theta_{\star}\right\|_{H_{\star}^{\lambda}}^{2}\) which is referred to as the modeling bias [37, Sec. 2.5]. One can invoke a so-called _source condition_ to bound the modeling bias and a _capacity condition_ to bound \(d_{\star}^{\lambda}\). An optimal value of \(\lambda\) can be obtained by balancing between these two terms [see, e.g., 28].
For instance, let \(Z:=(X,Y)\) where \(X\in\mathbb{R}^{d}\) with \(\mathbb{E}[XX^{\top}]=I_{d}\) and \(Y\in\mathbb{R}\). Consider the regularized squared loss \(\ell^{\lambda}(\theta;z):=1/2\,(y-\theta^{\top}x)^{2}+1/2\,\theta^{\top}U\theta\) where \(U=\mathbf{diag}\{\mu_{1},\ldots,\mu_{d}\}\). The regularized effective dimension is then [37, Sec. 2.1] of order \(O\big{(}\sum_{k=1}^{d}1/(1+\mu_{k})\big{)}\) which can be much smaller than \(d\) if \(\{\mu_{k}\}\) is increasing.
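A one-liner (ours) confirming the \(O\big(\sum_{k}1/(1+\mu_{k})\big)\) behavior for increasing ridge weights:

```python
import numpy as np

d = 100
mu = np.arange(d, dtype=float) ** 2              # increasing weights mu_k = k^2
print(np.sum(1.0 / (1.0 + mu)), "vs d =", d)     # ~2.07, far below d = 100
```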
## 4 Examples and Applications
We give several examples whose loss function is generalized self-concordant so that our results can be applied. We also provide finite-sample analysis for Rao's score test, the likelihood ratio test, and the Wald test in goodness-of-fit testing. All the proofs and derivations are deferred to Appx. B.
### Examples
**Example 3** (Generalized linear models).: Let \(Z:=(X,Y)\) be a pair of input and output, where \(X\in\mathcal{X}\subset\mathbb{R}^{d}\) and \(Y\in\mathcal{Y}\subset\mathbb{R}\). Let \(t:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^{d}\) and \(\mu\) be a measure on \(\mathcal{Y}\). Consider the statistical model with density, w.r.t. \(\mu\),
\[p_{\theta}(y\mid x)=\frac{\exp(\theta^{\top}t(x,y))}{\int\exp(\theta^{\top}t(x,\bar{y}))\,\mathrm{d}\mu(\bar{y})}\]
with \(\left\|t(X,Y)\right\|_{2}\leq_{a.s.}M\). It induces the loss function
\[\ell(\theta;z):=-\theta^{\top}t(x,y)+\log\int\exp(\theta^{\top}t (x,\bar{y}))\mathrm{d}\mu(\bar{y}),\]
which is generalized self-concordant for \(\nu=2\) and \(R=2M\). Moreover, this model satisfies Asms. 2 to 4 and 2'.
**Example 4** (Score matching with exponential families).: Assume that \(\mathbb{Z}=\mathbb{R}^{p}\). Consider an exponential family on \(\mathbb{R}^{p}\) with densities
\[\log p_{\theta}(z)=\theta^{\top}t(z)+h(z)-\Lambda(\theta).\]
The non-normalized density \(q_{\theta}\) then reads \(\log q_{\theta}(z)=\theta^{\top}t(z)+h(z)\). As a result, the score matching loss becomes
\[\ell(\theta;z)=\frac{1}{2}\theta^{\top}A(z)\theta+b(z)^{\top}\theta+c(z)+\text{const},\]
where \(A(z):=\sum_{k=1}^{p}\frac{\partial t(z)}{\partial z_{k}}\left(\frac{\partial t(z)}{\partial z_{k}}\right)^{\top}\) is positive semi-definite, \(b(z):=\sum_{k=1}^{p}\left[\frac{\partial^{2}t(z)}{\partial z_{k}^{2}}+\frac{\partial h(z)}{\partial z_{k}}\frac{\partial t(z)}{\partial z_{k}}\right]\), and \(c(z):=\sum_{k=1}^{p}\left[\frac{\partial^{2}h(z)}{\partial z_{k}^{2}}+\frac{1}{2}\left(\frac{\partial h(z)}{\partial z_{k}}\right)^{2}\right]\). Therefore, the score matching loss \(\ell(\theta;z)\) is convex. Moreover, since the third derivative of \(\ell(\cdot;z)\) in \(\theta\) vanishes, the score matching loss is generalized self-concordant for all \(\nu\geq 2\) and any \(R>0\).
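Because the loss is an explicit quadratic in \(\theta\), the score matching estimator has the closed form \(\theta_{n}=-\bar{A}^{-1}\bar{b}\) with \(\bar{A}:=n^{-1}\sum_{i}A(z_{i})\) and \(\bar{b}:=n^{-1}\sum_{i}b(z_{i})\) (under the sign convention above). A sketch of ours for the 1-D Gaussian location family \(t(z)=z\), \(h(z)=-z^{2}/2\), where this recovers the sample mean:

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(loc=1.5, scale=1.0, size=100_000)

# t(z) = z, h(z) = -z**2/2 give A(z) = 1 and b(z) = t'' + h'(z) t'(z) = -z.
A_bar = np.mean(np.ones_like(z))
b_bar = np.mean(-z)
theta_n = -b_bar / A_bar        # argmin of (1/2) A theta^2 + b theta
print(theta_n)                  # ~1.5, i.e. the sample mean
```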
### Rao's Score Test and Its Relatives
We discuss how our results can be applied to analyze three classical goodness-of-fit tests. In this subsection, we will assume that the model is well-specified. Due to Asm. 0, we will use \(\theta_{\star}\) to denote the true parameter of \(\mathbb{P}\) and reserve \(\theta_{0}\) for the parameter under the null hypothesis.
Given a subset \(\Theta_{0}\subset\Theta\), a goodness-of-fit testing problem is to test the hypotheses
\[\mathbf{H}_{0}:\theta_{\star}\in\Theta_{0}\leftrightarrow\mathbf{H}_{1}: \theta_{\star}\notin\Theta_{0}.\]
We focus on a simple null hypothesis where \(\Theta_{0}:=\{\theta_{0}\}\) is a singleton. A statistical test consists of a test statistic \(T:=T(Z_{1},\ldots,Z_{n})\) and a prescribed critical value \(t\), and we reject the null hypothesis if \(T>t\). Its performance is quantified by the _type I error rate_ \(\mathbb{P}(T>t\mid\mathbf{H}_{0})\) and _statistical power_ \(\mathbb{P}(T>t\mid\mathbf{H}_{1})\). Classical goodness-of-fit tests include Rao's score test, the likelihood ratio test (LRT), and the Wald test. Their test statistics are \(T_{\text{Rao}}:=\left\|S_{n}(\theta_{0})\right\|_{H_{n}^{-1}(\theta_{0})}^{2}\), \(T_{\text{LR}}:=2[L_{n}(\theta_{0})-L_{n}(\theta_{n})]\), and \(T_{\text{Wald}}:=\left\|\theta_{n}-\theta_{0}\right\|_{H_{n}(\theta_{n})}^{2}\), respectively.
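The three statistics are straightforward to compute. A self-contained sketch of ours for logistic regression with data drawn under \(\mathbf{H}_{0}\) (all three should scale like \(d/n\), as Prop. 9 below states):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, d = 5_000, 3
theta_null = np.zeros(d)
X = rng.normal(size=(n, d))
Y = np.where(rng.uniform(size=n) < 0.5, 1.0, -1.0)   # H0: theta_star = 0

risk = lambda th: np.mean(np.logaddexp(0.0, -Y * (X @ th)))

def grad_hess(th):
    m = Y * (X @ th)
    s = -Y / (1.0 + np.exp(m))                       # per-sample gradient weights
    w = np.exp(-m) / (1.0 + np.exp(-m)) ** 2         # per-sample Hessian weights
    return (s[:, None] * X).mean(axis=0), (X * w[:, None]).T @ X / n

theta_n = minimize(risk, np.zeros(d), method="BFGS").x
S0, H0 = grad_hess(theta_null)
_, Hn = grad_hess(theta_n)

T_rao = S0 @ np.linalg.solve(H0, S0)
T_lr = 2.0 * (risk(theta_null) - risk(theta_n))
T_wald = (theta_n - theta_null) @ Hn @ (theta_n - theta_null)
print(f"Rao={T_rao:.5f}  LR={T_lr:.5f}  Wald={T_wald:.5f}  d/n={d/n:.5f}")
```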
Our approach can be applied to analyze the type I error rate of these tests as summarized in the following proposition.
**Proposition 9** (Type I error rate).: _Suppose that Asms. 2 and 3 with \(r=0\) hold true. Under \(\mathbf{H}_{0}\), we have, with probability at least \(1-\delta\),_
\[T_{\text{Rao}}\lesssim\log{(e/\delta)}\frac{d}{n}\]
_whenever \(n\gtrsim\log{(2d/\delta)}\). Furthermore, if Asms. 1 to 3 with \(r=C_{\nu}\lambda_{\star}^{(3-\nu)/2}/R\) hold true, we have, with probability at least \(1-\delta\),_
\[T_{\text{LR}},T_{\text{Wald}}\lesssim\log{(e/\delta)}\frac{d}{n}\]
_whenever \(n\) satisfies (4)._
This result implies that the three test statistics all scale as \(O(d/n)\) under the null hypothesis. Consequently, for a fixed significance level \(\alpha\in(0,1)\), we can choose the critical value \(t=t_{n}(\alpha)=O(d/n)\) so that their type I error rates are below \(\alpha\). With this choice, we can then characterize the statistical powers of these tests under alternative hypotheses \(\theta_{\star}\neq\theta_{0}\) where \(\theta_{\star}\) may depend on \(n\). Let \(\Omega(\theta):=G(\theta)^{1/2}H(\theta)^{-1}G(\theta)^{1/2}\) and \(h(\tau):=\min\{\tau^{2},\tau\}\).
**Proposition 10** (Statistical power).: _Let \(\theta_{\star}\neq\theta_{0}\). The following statements are true for sufficiently large \(n\)._
1. _Suppose that Asms. 1 to 3 hold true with \(r=0\). When \(\theta_{\star}-\theta_{0}=O(n^{-1/2})\) and \(\tau_{n}:=t_{n}(\alpha)/4-\left\|S(\theta_{0})\right\|_{H(\theta_{0})^{-1}}^{2}-\mathbf{Tr}(\Omega(\theta_{0}))/n>0\), we have_ \[\mathbb{P}(T_{\text{Rao}}>t_{n}(\alpha))\leq 2de^{-C_{K_{2},\sigma_{H}}n}+e^{-C_{K_{1}}h(n\tau_{n}/\left\|\Omega(\theta_{0})\right\|_{2})}.\] _When \(\theta_{\star}-\theta_{0}=\omega(n^{-1/2})\), we have_ \[\mathbb{P}(T_{\text{Rao}}>t_{n}(\alpha))\geq 1-2de^{-C_{K_{2},\sigma_{H}}n}-e^{-C_{K_{1}}n\bar{\tau}_{n}/\left\|\Omega(\theta_{0})\right\|_{2}},\] _where \(\bar{\tau}_{n}=\Theta(\left\|\theta_{\star}-\theta_{0}\right\|^{2})\)._
2. _Suppose that the assumptions in Thm. 2 hold true. When \(\theta_{\star}-\theta_{0}=O(n^{-1/2})\) and \(\tau_{n}^{\prime}:=t_{n}(\alpha)/384-\left\|\theta_{\star}-\theta_{0}\right\|_{H(\theta_{\star})}^{2}/64-d/n>0\), we have_ \[\mathbb{P}(T_{\text{LR}}>t_{n}(\alpha))\leq e^{-C_{K_{1}}h(n\tau_{n}^{\prime}/\left\|\Omega(\theta_{\star})\right\|_{2})}+e^{-C_{K_{1},\nu}(\lambda_{\star}n)^{3-\nu}/(R^{2}d)}.\] _When \(\theta_{\star}-\theta_{0}=\omega(n^{-1/2})\), we have_ \[\mathbb{P}(T_{\text{LR}}>t_{n}(\alpha))\geq 1-e^{-C_{K_{1}}n\bar{\tau}_{n}^{\prime}/\left\|\Omega(\theta_{\star})\right\|_{2}}-e^{-C_{K_{1},\nu}(\lambda_{\star}n)^{3-\nu}/(R^{2}d)},\] _where \(\bar{\tau}_{n}^{\prime}=\Theta(\left\|\theta_{\star}-\theta_{0}\right\|^{2})\)._
3. _The same statements hold with \(T_{\text{LR}}\) replaced by \(T_{\text{Wald}}\)._
According to Prop. 10, when \(\theta_{\star}-\theta_{0}=O(n^{-1/2})\), the powers of the three tests are asymptotically bounded away from one; when \(\theta_{\star}-\theta_{0}=\omega(n^{-1/2})\), the power of Rao's score test tends to one at rate \(O(e^{-n\left\|\theta_{\star}-\theta_{0}\right\|^{2}})\) and the ones of the other two tests tend to one at rate \(O(e^{-(n\left\|\theta_{\star}-\theta_{0}\right\|^{2}\wedge n^{3-\nu})})\).
## 5 Numerical Studies
We run simulation studies to illustrate our theoretical results. We start by demonstrating the consistency of \(d_{n}\) and the shape of the Wald confidence set defined in Cor. 8, i.e.,
\[\mathcal{C}^{\prime}_{\text{Wald},n}(\delta)=\left\{\theta\in\Theta:\left\|\theta-\theta_{n}\right\|_{H_{n}(\theta_{n})}^{2}\leq C_{K_{1},\nu}\frac{d_{n}}{n}\log\left(e/\delta\right)\right\}.\]
Figure 3: Absolute error of the empirical effective dimension. **(Left)**: least squares; **(Right)**: logistic regression.
Note that the oracle Wald confidence set should be constructed from \(\left\|\theta_{n}-\theta_{\star}\right\|_{H_{\star}}\) and \(d_{\star}\); however, Cor. 8 suggests that we can replace \(H_{\star}\) and \(d_{\star}\) by \(H_{n}(\theta_{n})\) and \(d_{n}\) without losing too much. To empirically verify our theoretical results, we calibrate the Wald confidence set based on \(\left\|\theta_{n}-\theta_{\star}\right\|_{H_{n}(\theta_{n})}\) with the threshold from the oracle Wald confidence set and compare its coverage with the one calibrated by the multiplier bootstrap--a popular resampling-based approach for calibration. Finally, we compare the coverage of the Wald and LR confidence sets calibrated by the multiplier bootstrap. In all the experiments, we generate \(n\) i.i.d. pairs by sampling \(X\) and then sampling \(Y\mid X\).
### Numerical Illustrations
**Approximation of the effective dimension.** By Prop. 7, we know that \(d_{n}\) is a consistent estimator of \(d_{\star}\). We verify it with simulations. We consider two models. For least squares, the data are generated from \(X\sim\mathcal{N}(0,I_{d})\) and \(Y|X\sim\mathcal{N}(\mathbf{1}^{\top}X,1)\). For logistic regression, the data are generated from \(X\sim\mathcal{N}(0,I_{d})\) and \(Y\mid X\sim p(Y\mid X)=\sigma(Y\ \mathbf{1}^{\top}X)\) for \(Y\in\{-1,1\}\) where \(\sigma(u):=(1+e^{-u})^{-1}\). We then estimate \(d_{\star}=d\) (since the model is well-specified) by \(d_{n}\) and quantify its estimation error by \(\mathbb{E}\left|d_{n}/d_{\star}-1\right|\). We vary \(n\in[2000,10000]\) and \(d\in\{5,10,15,20\}\), and give the plots in Fig. 3. For a fixed \(d\), the absolute error decays to zero as the sample size increases, as predicted by Prop. 7. For a fixed \(n\), the absolute error rises as the dimension becomes larger in logistic regression, but it remains similar in least squares.
**Shape of the Wald confidence set.** Recall that the Wald confidence set in Thm. 2 is an ellipsoid whose shape is determined by the empirical Hessian \(H_{n}(\theta_{n})\) and can thus adapt to the local curvature of the empirical risk. We illustrate this feature on a logistic regression example. We generate data from \(X\sim\mathcal{N}(0,\Sigma)\) with different \(\Sigma\)'s and \(Y\mid X\sim p(Y\mid X)=\sigma(Y\theta_{0}^{\top}X)\) for \(Y\in\{-1,1\}\) where \(\theta_{0}=(-1,2)^{\top}\). We then construct the confidence set with \(d_{\star}=d\). As shown in Fig. 4, the shape of the confidence set varies with \(\Sigma\) and captures the curvature of the empirical risk at \(\theta_{0}\).
### Calibration
We investigate two calibration schemes. Inspired by the setting in Chen and Zhou [12, Sec. 5.1], we generate \(n=100\) i.i.d. observations from three models with true parameter \(\theta_{0}\) whose elements are equally spaced in \([0,1]\): 1) _well-specified least squares_ with \(X\sim\mathcal{N}(0,I_{d})\) and \(Y\mid X\sim\mathcal{N}(\theta_{0}^{\top}X,1)\), 2) _misspecified least squares_ with \(X\sim\mathcal{N}(0,I_{d})\) and \(Y\mid X\sim\theta_{0}^{\top}X+t_{3.5}\), and 3) _well-specified logistic regression_ with \(X\sim\mathcal{N}(0,I_{d})\) and \(Y\mid X\sim p(Y\mid X)=\sigma(Y\theta_{0}^{\top}X)\) for \(Y\in\{-1,1\}\). For each \(\delta\in\{0.95,0.9,0.85,0.8,0.75\}\), we construct a confidence set using either _oracle calibration_ or _multiplier bootstrap_. We repeat the whole process \(1000\) times and report the coverage of each confidence set in Tab. 2.
**Oracle calibration.** According to Thm. 1, if we have access to \(H_{\star}\) and \(d_{\star}\), we can construct a confidence set of the form \(\mathcal{C}_{\star}(\delta):=\{\theta:\left\|\theta_{n}-\theta\right\|_{H_{\star}}^{2}\leq d_{\star}/n+c_{n}(\delta)\}\). Now Cor. 8 suggests that \(H_{\star}\) and \(d_{\star}\) can be accurately estimated by \(H_{n}(\theta_{n})\) and \(d_{n}\), respectively, leading to the confidence set \(\mathcal{C}_{n}(\delta):=\{\theta:\|\theta_{n}-\theta\|_{H_{n}(\theta_{n})}^{2}\leq d_{n}/n+c_{n}(\delta)\}\). To calibrate \(\mathcal{C}_{n}(\delta)\), we use the data generating distribution to estimate \(c_{n}(\delta)\) so that \(\mathbb{P}(\theta_{\star}\in\mathcal{C}_{\star}(\delta))\approx 1-\delta\), and then plug it into \(\mathcal{C}_{n}(\delta)\). We call it the _oracle Wald confidence set_. As shown in Tab. 2, its coverage is very close to the prescribed confidence level in the well-specified case and it tends to be more conservative in the misspecified case.
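A minimal Monte Carlo sketch of the oracle calibration (ours, for well-specified least squares where \(H_{\star}=I_{d}\), so the oracle statistic is just the squared Euclidean error; the replication count is our choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, delta, reps = 100, 5, 0.1, 2_000
theta_star = np.linspace(0.0, 1.0, d)

stats = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(n, d))
    Y = X @ theta_star + rng.normal(size=n)
    theta_n = np.linalg.solve(X.T @ X, X.T @ Y)
    diff = theta_n - theta_star
    stats[r] = diff @ diff                    # H_star = E[XX^T] = I_d here

threshold = np.quantile(stats, 1.0 - delta)   # calibrated oracle radius
print(f"oracle threshold at coverage {1-delta:.2f}: {threshold:.4f} (d/n = {d/n:.2f})")
```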
**Multiplier bootstrap.** To further evaluate the oracle calibration, we compare its coverage with the one calibrated by the multiplier bootstrap [e.g., 12]--a popular resampling-based calibration approach that is widely used in practice. We construct a _bootstrap Wald confidence set_ (BootWald) with \(B=2000\) bootstrap samples in the following steps. For each \(b\in\{1,\ldots,B\}\), we 1) generate weights \(\{W_{i}^{b}\}_{i=1}^{n}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(1,1)\), 2) compute the bootstrap estimator
\[\theta_{n}^{b}=\operatorname*{arg\,min}_{\theta}\left[L_{n}^{b}(\theta):= \frac{1}{n}\sum_{i=1}^{n}W_{i}^{b}\ell(\theta;Z_{i})\right],\]
3) compute the bootstrap Wald statistic \(T_{\text{Wald}}^{b}:=\left\|\theta_{n}^{b}-\theta_{n}\right\|_{H_{n}^{b}(\theta_{n}^{b})}^{2}\) where \(H_{n}^{b}(\theta):=\nabla_{\theta}^{2}L_{n}^{b}(\theta)\). Finally, we compare \(\left\|\theta_{n}-\theta_{0}\right\|_{H_{n}(\theta_{n})}^{2}\) with the upper \(\delta\) quantile of \(\{T_{\text{Wald}}^{b}\}_{b=1}^{B}\) to decide if the Wald confidence set covers the true parameter. It is clear that the bootstrap Wald confidence set performs similarly to the oracle Wald confidence set in least squares, but it is more liberal in logistic regression.
For comparison purposes, we also describe the procedure to construct a _bootstrap likelihood ratio confidence set_ (BootLR). The first two steps are the same as for the bootstrap Wald confidence set, while the third step is to compute the bootstrap LR statistic \(T_{\text{LR}}^{b}:=2[L_{n}^{b}(\theta_{n})-L_{n}^{b}(\theta_{n}^{b})]\). And we compare \(2[L_{n}(\theta_{0})-L_{n}(\theta_{n})]\) with the upper \(\delta\) quantile of \(\{T_{\text{LR}}^{b}\}_{b=1}^{B}\) to decide if the bootstrap LR confidence set covers the true parameter. For the well-specified least squares, the two bootstrap confidence sets perform similarly, with coverages close to the target ones. However, when the target coverage is small (i.e., \(0.75\)), they tend to be liberal. For the misspecified least squares, the two bootstrap confidence sets perform similarly. When the target coverage is large, they tend to be conservative; when the target coverage is small, they tend to be liberal. For the well-specified logistic regression, the bootstrap Wald confidence set tends to be liberal and the bootstrap LR one tends to be conservative.
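A compact sketch (ours) of the multiplier bootstrap Wald procedure for least squares, where the weighted fit has a closed form; we use a smaller \(B\) than in the experiments, and rely on \(n\) being large enough that the weighted Hessian stays positive definite despite occasionally negative \(\mathcal{N}(1,1)\) weights:

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, B, delta = 100, 3, 500, 0.1
theta0 = np.linspace(0.0, 1.0, d)
X = rng.normal(size=(n, d))
Y = X @ theta0 + rng.normal(size=n)              # well-specified least squares

def weighted_fit(w):
    # argmin (1/n) sum_i w_i * 0.5 * (y_i - theta^T x_i)^2  (closed form)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ Y)

theta_n = weighted_fit(np.ones(n))
H_n = X.T @ X / n

T_b = np.empty(B)
for b in range(B):
    w = rng.normal(loc=1.0, scale=1.0, size=n)   # multiplier weights ~ N(1,1)
    theta_b = weighted_fit(w)
    H_b = (X * w[:, None]).T @ X / n             # bootstrap empirical Hessian
    T_b[b] = (theta_b - theta_n) @ H_b @ (theta_b - theta_n)

diff = theta_n - theta0
print("BootWald covers theta0:", diff @ H_n @ diff <= np.quantile(T_b, 1.0 - delta))
```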
### Acknowledgements
The authors would like to thank K. Jamieson, L. Jain, and V. Roulet for fruitful discussions. L. Liu is supported by NSF CCF-2019844 and NSF DMS-2023166 and NSF DMS-2133244. Z. Harchaoui is supported by NSF CCF-2019844, NSF DMS-2134012, NSF DMS-2023166, CIFAR-LMB, and faculty research awards. Part of this work was done while Z. Harchaoui was visiting the Simons Institute for the Theory of Computing.
\begin{table}
\begin{tabular}{c l c c c c c} \hline \hline
**Model** & **Confidence set** & \(\delta=0.95\) & \(\delta=0.9\) & \(\delta=0.85\) & \(\delta=0.8\) & \(\delta=0.75\) \\ \hline \multirow{3}{*}{Well-specified least squares} & Oracle & 0.957 & 0.908 & 0.868 & 0.792 & 0.770 \\ & BootWald & 0.947 & 0.908 & 0.855 & 0.791 & 0.735 \\ & BootLR & 0.949 & 0.906 & 0.852 & 0.792 & 0.737 \\ \hline \multirow{3}{*}{Misspecified least squares} & Oracle & 0.972 & 0.916 & 0.882 & 0.841 & 0.764 \\ & BootWald & 0.968 & 0.924 & 0.865 & 0.779 & 0.727 \\ & BootLR & 0.972 & 0.923 & 0.865 & 0.784 & 0.727 \\ \hline \multirow{3}{*}{Well-specified logistic regression} & Oracle & 0.961 & 0.915 & 0.868 & 0.809 & 0.776 \\ & BootWald & 0.938 & 0.885 & 0.826 & 0.781 & 0.706 \\ \cline{1-1} & BootLR & 0.976 & 0.948 & 0.901 & 0.866 & 0.791 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coverage of the oracle and bootstrap confidence sets. | This paper revisits a fundamental problem in statistical estimation from a non-asymptotic theoretical viewpoint: the construction of confidence sets. We establish a finite-sample bound for the estimator and characterize its asymptotic behavior in a non-asymptotic fashion. An important feature of the bound is that its dimension dependence is expressed through an effective dimension, the trace of a restricted sandwich covariance, which can be smaller than the parameter dimension in some regimes. Using this bound, we construct confidence sets whose shape adapts to the optimization landscape. While previous works rely heavily on the strong convexity of the loss function, here we allow the smallest eigenvalue of the Hessian of the loss to degenerate gradually at the optimum.
2309.14437 | Universally Robust Quantum Control | We study the robustness of the evolution of a quantum system against small
uncontrolled variations in parameters in the Hamiltonian. We show that the
fidelity susceptibility, which quantifies the perturbative error to leading
order, can be expressed in superoperator form and use this to derive control
pulses which are robust to any class of systematic unknown errors. The proposed
optimal control protocol is equivalent to searching for a sequence of unitaries
that mimics the first-order moments of the Haar distribution, i.e. it
constitutes a 1-design. We highlight the power of our results for error
resistant single- and two-qubit gates. | Pablo M. Poggi, Gabriele De Chiara, Steve Campbell, Anthony Kiely | 2023-09-25T18:00:34 | http://arxiv.org/abs/2309.14437v2 | # Universally Robust Quantum Control
###### Abstract
We study the robustness of the evolution of a quantum system against small uncontrolled variations in parameters in the Hamiltonian. We show that the fidelity susceptibility, which quantifies the perturbative error to leading order, can be expressed in superoperator form and use this to derive control pulses which are robust to any class of systematic unknown errors. The proposed optimal control protocol is equivalent to searching for a sequence of unitaries that mimics the first-order moments of the Haar distribution, i.e. it constitutes a 1-design. We highlight the power of our results for error resistant single- and two-qubit gates.
_Introduction.-_ Tremendous advances in the ability to manipulate states of light and matter are ushering in the new generation of quantum-enhanced devices. As recently remarked [1], it is precisely the ability to develop schemes to control a system that endows scientific knowledge with the potential to revolutionise technological landscapes [2; 3]. However, while exquisite levels of control are now routinely applied in a variety of platforms [4; 5; 6], there will always be systematic errors due to imperfect fabrication and incomplete knowledge of the parameters, either in relation to the model itself or the ambient conditions under which it is operating. Thus, several strategies to explicitly mitigate such errors have been devised, e.g. shortcuts to adiabaticity [7; 8; 9], numerical optimization [10; 11; 1], geometric space curves [12; 13; 14], and dynamical decoupling [15].
When these systematic errors are important, typically the control problem is cast in such a way that two, sometimes implicit, assumptions are made regarding the source of the error: (i) that it arises from a weak perturbation, and (ii) that its mathematical structure is exactly known. While the former is a reasonable working condition to assume (if it were not then the fundamental description of the system would need to be adjusted), the latter is arguably less well justified. Indeed, concerted effort is currently invested in identifying the correct physical description of noisy intermediate-scale quantum devices, e.g. determining the most relevant noise sources that they are subject to in order to enhance their efficacy [16]. Ultimately there will always be some level of uncertainty in our knowledge of the precise structure of the noise and therefore it is highly desirable to develop a framework that allows one to coherently manipulate quantum systems even in the presence of an unknown (even possibly unknowable) source of error.
In this work we develop such a framework which accounts for this uncertainty, termed universally robust control (URC). It provides a straightforward cost function to be minimised to ensure generic robustness in quantum control problems. It can also be easily restricted to specific classes of errors, to account for a limited but useful knowledge of the error type.
_Fidelity in the presence of systematic error.-_ Consider the full system Hamiltonian \(H_{\lambda}(t)\!=\!H_{0}(t)+\lambda V\), where \(H_{0}(t)\) is the error-free control Hamiltonian and \(V\) is the error operator, acting with unknown strength \(\lambda\). We assume a pure initial state, \(\sigma\), with no \(\lambda\) dependence.
The time evolution operator of \(H_{\lambda}(t)\) is given by \(U_{\lambda}(t,0)\), which leads to the \(\lambda\)-dependent state \(\rho_{\lambda}=U_{\lambda}(t_{f},0)\sigma U_{\lambda}^{\dagger}(t_{f},0)\) at the final time \(t=t_{f}\). The fidelity between the perturbed and ideal evolution is \(F(\lambda)=\mathrm{Tr}(\rho_{\lambda}\rho_{0})\), which can be expanded for small \(\lambda\) as
\[F(\lambda)\approx F(0)+F^{\prime}(0)\lambda+\frac{1}{2}F^{\prime\prime}(0) \lambda^{2}. \tag{1}\]
By definition \(F(0)=1\), and since \(F(\lambda)\leq 1\) attains its maximum at \(\lambda=0\), it follows that \(F^{\prime}(0)=0\).
The second derivative can be calculated by noting that, for pure states, \(\partial_{\lambda}^{2}\rho_{\lambda}\!=\!2(\partial_{\lambda}\rho_{\lambda})^ {2}+\rho_{\lambda}\left(\partial_{\lambda}^{2}\rho_{\lambda}\right)+\left( \partial_{\lambda}^{2}\rho_{\lambda}\right)\rho_{\lambda}\). Multiplying by \(\rho_{0}\) and evaluating the trace at \(\lambda\!=\!0\) we get
\[F^{\prime\prime}(0)=-2\chi_{S}(\rho_{\lambda}), \tag{2}\]
where \(\chi_{S}(\rho_{\lambda})=\mathrm{Tr}\left\{\rho_{0}\left(\partial_{\lambda} \rho_{\lambda}\right)^{2}\right\}_{\lambda=0}\) is the fidelity susceptibility [17], which quantifies how sensitive the evolution is with respect to small perturbations, i.e. \(F(\lambda)\simeq 1-\chi_{S}(\rho_{\lambda})\lambda^{2}\). It is clear that \(\chi_{S}(\rho_{\lambda})\) is simply the quantum Fisher information (QFI) associated to the family of states \(\{\rho_{\lambda}\}\)[18]. The QFI quantifies how much information about \(\lambda\) is encoded in the evolution of the state and, therefore, minimizing the QFI at \(\lambda=0\) is equivalent to increasing the robustness of a control protocol.
Evaluating explicitly the QFI we find [18]
\[\chi_{S}(\rho_{\lambda})=\frac{t_{f}^{2}}{\hbar^{2}}\left(\Delta\overline{V}_{ 0}\right)^{2}, \tag{3}\]
where
\[\overline{V}_{0}=\frac{1}{t_{f}}\int_{0}^{t_{f}}ds\ U_{0}^{\dagger}(s,0)VU_{0} (s,0), \tag{4}\]
is the time average of \(V\) in the interaction picture with respect to the unperturbed evolution and the variance is taken with respect to the initial state, \(\left(\Delta\overline{V}_{0}\right)^{2}=\mathrm{Tr}[\sigma\overline{V}_{0}^{2}]- \mathrm{Tr}[\sigma\overline{V}_{0}]^{2}\).
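Both quantities are easy to evaluate numerically for a concrete control problem. The sketch below discretizes the evolution for an illustrative single-qubit control field (with \(\hbar=1\)), accumulates the interaction-picture average \(\overline{V}_{0}\) of Eq. (4), and checks Eq. (3) against a finite-difference estimate of \(1-F(\lambda)\); the control field, the perturbation \(V=\sigma_{z}\), and the step count are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hbar, tf, L = 1.0, 1.0, 400
dt = tf / L
H0 = lambda t: np.cos(3 * t) * sx + np.sin(3 * t) * sy   # illustrative control
V = sz                                                    # perturbation operator

# accumulate U0(s,0) and the time average of V in the interaction picture, Eq. (4)
U = np.eye(2, dtype=complex)
Vbar = np.zeros((2, 2), dtype=complex)
for k in range(L):
    Vbar += U.conj().T @ V @ U * dt / tf                  # left-endpoint rule
    U = expm(-1j * H0((k + 0.5) * dt) * dt / hbar) @ U

psi0 = np.array([1.0, 0.0], dtype=complex)                # pure initial state
expV = psi0.conj() @ Vbar @ psi0
varV = (psi0.conj() @ Vbar @ Vbar @ psi0 - expV**2).real
chi_S = tf**2 / hbar**2 * varV                            # Eq. (3)

# cross-check against a finite-difference estimate of 1 - F(lambda)
lam = 1e-3
Ul = np.eye(2, dtype=complex)
for k in range(L):
    Ul = expm(-1j * (H0((k + 0.5) * dt) + lam * V) * dt / hbar) @ Ul
F = abs(np.vdot(U @ psi0, Ul @ psi0))**2                  # Tr(rho_lam rho_0)
print(chi_S, (1 - F) / lam**2)                            # should roughly agree
```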
A similar result can be derived for the case of the evolution of unitaries (instead of states). By defining the corresponding fidelity as \(F_{U}(\lambda)=\frac{1}{d^{2}}\left|\mathrm{Tr}\left(U_{0}^{\dagger}U_{ \lambda}\right)\right|^{2}\), we obtain that \(F_{U}(\lambda)\simeq 1-\chi_{U}(U_{\lambda})\lambda^{2}\)[18]. The susceptibility is
\[\chi_{U}(U_{\lambda})=\frac{t_{f}^{2}}{\hbar^{2}d}\|\overline{V}_{0}\|^{2}, \tag{5}\]
where \(\|\cdot\|\) is the norm associated with the Hilbert-Schmidt inner product \((A|B)=\mathrm{Tr}(A^{\dagger}B)\) and \(d\) is the Hilbert space dimension. Robust control protocols then correspond to finding a \(H_{0}(t)\) such that \(\rho_{0}=\rho_{\mathrm{target}}\) or \(U_{0}(t_{f},0)=U_{\mathrm{target}}\) while concurrently minimizing \(\chi_{S}\) for a known perturbation model \(V\)[19; 20; 21]. We now demonstrate that such robust control can be achieved even _without_ knowledge of \(V\).
_Universally robust control.-_ Our construction is based on a superoperator picture where the operator
\[\mathcal{M}_{0}[V]\equiv\overline{V}_{0}, \tag{6}\]
can be seen as the action of a (linear) superoperator \(\mathcal{M}_{0}\) acting on \(V\) and we assume that \(\mathrm{Tr}V=0\)[22]. To construct it more explicitly, we go to a doubled Hilbert space. If our original Hilbert space \(\mathcal{H}\) is spanned by the orthonormal basis \(\{|i\rangle\}\) where \(i=1,\ldots,d\), we take
\[A=\sum_{ij}A_{ij}|i\rangle\langle j|\rightarrow|A)=\sum_{ij}A_{ij}\,|i\rangle\otimes|j\rangle\,, \tag{7}\]
where now the vector \(|A)\) lives in \(\mathcal{H}\otimes\mathcal{H}\). Thus, from Eq. (6) we can define
\[M_{0}=\frac{1}{t_{f}}\int_{0}^{t_{f}}ds\left[U_{0}(s,0)\otimes U_{0}(s,0)^{*}\right]^{\dagger}, \tag{8}\]
such that \(|\overline{V}_{0})=M_{0}\,|V)\). The fidelity susceptibility of Eq. (5) can be expressed in terms of the superoperator \(M_{0}\) as
\[\|\overline{V}_{0}\|^{2}=(V|\,M_{0}^{\dagger}M_{0}\,|V)\,. \tag{9}\]
By virtue of Eq. (5) we can increase the robustness of a unitary control protocol irrespective of \(V\) by choosing \(H_{0}(t)\) to minimize the operator norm of \(M_{0}\). This also holds for state control, cf. Eq. (3), because \(\Delta\overline{V}_{0}\) is upper bounded by \(\|M_{0}\|\)[18].
The trace of any operator \(V\) is unitarily invariant. For the identity operator \(\mathbb{I}\), \(M_{0}\,|\mathbb{I})=|\mathbb{I})\), so the norm of \(M_{0}\) cannot be arbitrarily reduced. To sidestep this issue, we restrict to the set of traceless perturbation operators by defining the projector in the doubled Hilbert space \(\mathbb{P}_{0}=|\mathbb{I})(\mathbb{I}|\,/\,d\) such that \(\mathbb{P}_{0}\,|A)=\mathrm{Tr}(A)\,|\mathbb{I})\,/\,d\), and redefine the relevant superoperator
\[\tilde{M}_{0}=M_{0}(\mathbb{I}-\mathbb{P}_{0}). \tag{10}\]
For any operator \(V^{\prime}\), this acts as
\[\tilde{M}_{0}\,\big{|}V^{\prime}\big{)}=M_{0}(\mathbb{I}-\mathbb{P}_{0})\,\big{|}V^{\prime}\big{)}=M_{0}\,|V)=\big{|}\overline{V}_{0}\big{)}, \tag{11}\]
where \(V\) is a traceless version of \(V^{\prime}\).
The goal of URC is then to minimize the norm of the modified superoperator \(\tilde{M}_{0}\), which is related to the previous norm as
\[\|\tilde{M}_{0}\|^{2}=\|M_{0}\|^{2}-\mathrm{Tr}(M_{0}^{\dagger}M_{0}\mathbb{P}_{0})=\|M_{0}\|^{2}-1, \tag{12}\]
where the last equality uses \(M_{0}\,|\mathbb{I})=|\mathbb{I})\) and \((\mathbb{I}|\mathbb{I})=d\).
This allows us to find choices of \(U_{0}\) which yield \(\tilde{M}_{0}\simeq 0\), thus achieving \(|\overline{V}_{0})\simeq 0\) for any \(V\).
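In a matrix representation this construction takes a few lines of code. The sketch below builds the discretized \(M_{0}\) of Eq. (8) for an illustrative single-qubit control field, using row-major vectorization so that \([U\otimes U^{*}]^{\dagger}\,|V)\) is the vectorization of \(U^{\dagger}VU\), then forms \(\tilde{M}_{0}\) of Eq. (10) and verifies \(M_{0}\,|\mathbb{I})=|\mathbb{I})\) together with Eq. (12); the Hilbert-Schmidt norm is the Frobenius norm.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

d, tf, L = 2, 1.0, 400
dt = tf / L
H0 = lambda t: np.cos(3 * t) * sx + np.sin(3 * t) * sy   # illustrative control

# discretized M0 of Eq. (8): |Vbar_0) = M0 |V)
U = np.eye(d, dtype=complex)
M0 = np.zeros((d * d, d * d), dtype=complex)
for k in range(L):
    M0 += np.kron(U, U.conj()).conj().T * dt / tf        # [U0 (x) U0^*]^dagger
    U = expm(-1j * H0((k + 0.5) * dt) * dt) @ U

vec = lambda A: A.reshape(-1)          # |A), row-major: A_{ij} at index i*d+j
Id = vec(np.eye(d))
P0 = np.outer(Id, Id.conj()) / d       # P0 = |I)(I| / d
M0_tilde = M0 @ (np.eye(d * d) - P0)   # Eq. (10)

# sanity checks: M0|I) = |I), and Eq. (12) with the Frobenius norm
print(np.allclose(M0 @ Id, Id))
print(np.linalg.norm(M0_tilde, 'fro')**2, np.linalg.norm(M0, 'fro')**2 - 1)
```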
To understand how a single solution for \(U_{0}(t)\) can be made robust to arbitrary perturbations, we note the following connection with unitary designs [23; 24]. Discretizing the integral in Eq. (4) into \(L\gg 1\) intervals, we find \(\overline{V}_{0}\sim\frac{1}{L}\sum\limits_{k=1}^{L}U_{0}^{(k)\dagger}VU_{0}^ {(k)}\), which has the form of an average of the operator \(V\) conjugated over a discrete set of unitaries, \(U_{0}^{(k)}\). If the distribution of such unitaries is uniform according to the Haar measure [25], then it is known that the average
\[\mathbb{E}_{\{U_{0}^{(k)}\}}\big{[}U^{\dagger}VU\big{]}=\frac{1}{d}\mathrm{Tr}(V)\,\mathbb{I} \tag{13}\]
vanishes for all traceless \(V\)[25]. A less stringent requirement is for the distribution to only match the first-order moment of the uniform distribution, i.e. to be a 1-design. In fact, since \(\mathbb{P}_{0}\,|A)=\mathrm{Tr}(A)\,|\mathbb{I})\,/\,d\), we see that the requirement \(\tilde{M}_{0}=0\) immediately implies Eq. (13) for any operator, thus making the path traced by the unitary evolution operator \(U_{0}(t)\) a 1-design.
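The first-moment property (13) can be checked directly by sampling Haar-random unitaries; the sketch below uses the standard QR-based sampler on a complex Ginibre matrix, with the dimension and sample count chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 5000
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V = (A + A.conj().T) / 2                     # a generic Hermitian error operator

def haar_unitary(d):
    # QR decomposition of a complex Ginibre matrix, with phase correction
    Z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

avg = np.zeros((d, d), dtype=complex)
for _ in range(N):
    U = haar_unitary(d)
    avg += U.conj().T @ V @ U / N

# first Haar moment: E[U^dag V U] = Tr(V)/d * Identity, cf. Eq. (13)
print(np.linalg.norm(avg - np.trace(V) / d * np.eye(d)))   # -> 0 as N grows
```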
Leveraging randomization to increase robustness in quantum processes is routinely done in the context of quantum computing, particularly by tools like dynamical decoupling [26; 15], dynamically corrected gates [27; 13; 28] and randomized compiling [29]. Our work shows that, for general quantum systems, it is possible to translate this connection into a requirement on a single object, the superoperator \(\tilde{M}_{0}\), leading to robustness to any perturbation to first order. As we show in the following, this allows us to set up a quantum optimal control problem to find evolutions that reach a predefined target while at the same time remaining robust to arbitrary perturbations.
_Optimal control.-_ We now demonstrate how URC can be naturally leveraged in numerical optimizations. A generic quantum optimal control (QOC) approach considers a series of control parameters, \(\{\phi_{k}\}\), which determine the time dependence of \(H_{0}(t)\) and aims to maximize the fidelity between a target process \(U_{\mathrm{target}}\) and the actual (ideal) evolution operator \(U_{0}(t_{f},0)\) by minimizing a cost functional \(J_{0}=1-F_{U}(U_{\mathrm{target}},U_{0}(t_{f},0))\) with respect to \(\{\phi_{k}\}\). Additionally, robust QOC usually aims at achieving resilience to perturbations characterized by a known operator \(V\). For this task, one can concurrently minimize the fidelity susceptibility given by the control functional \(J_{V}=\frac{1}{d}\|\overline{V}_{0}\|^{2}\) (see for instance [21; 28]). Our proposed approach of universally robust QOC instead aims at achieving robustness to an _unknown_ error operator \(V\). This can be achieved by instead minimizing the functional \(J_{\mathrm{U}}=\frac{1}{d}\|\tilde{M}_{0}\|^{2}\)[30].
We begin with the simple case of a single qubit with restricted controls, governed by the Hamiltonian
\[H_{0}(t)=\Omega\left[\cos\phi(t)\sigma_{x}+\sin\phi(t)\sigma_{y}\right], \tag{14}\]
where \(\sigma_{\alpha}\) are the Pauli operators, and we consider the control field \(\phi(t)\) to be piecewise constant with time steps \(\Delta t\) and values \(\{\phi_{k}\}\), \(k=1,\ldots,N_{p}\)[31]. The model in Eq. (14) is fully controllable [32, 33]. We set the target transformation to be \(U_{\rm target}=\exp(-i\sigma_{z}\pi/2)\) and numerically seek the QOC parameters that minimize one of \(\mathcal{J}_{\rm target}=J_{0}\), \(\mathcal{J}_{\rm robust}=(J_{0}+wJ_{V=\sigma_{z}})/(1+w)\), or \(\mathcal{J}_{\rm URC}=(J_{0}+wJ_{\rm U})/(1+w)\), where \(w\) is a non-negative weight which can be changed to improve the resulting balance between the terms. Note that evaluating these functionals requires only computing the error-free evolution given by \(H_{0}(t)\), and so no numerical simulations of the perturbed dynamics are required at any stage. In Fig. 1(a) we plot the optimized functional for each case against the evolution time \(t_{f}\). The curves display behavior reminiscent of Pareto-fronts [34, 35], indicative of the fact that optimization always succeeds for sufficiently large \(t_{f}\), with the optimization failing when the time becomes too constrained. A minimum control time, \(t_{\rm MCT}\), can be assigned to each process by identifying the minimum value of \(t_{f}\) such that the optimization succeeds (which in this case we take as yielding functional values below \(10^{-7}\)). For target-only and robust control optimizations, we find \(t_{\rm MCT}^{\rm T}=2\pi/\Omega\) and \(t_{\rm MCT}^{\rm R}=4\pi/\Omega\), which are consistent with previous analytical and numerical studies [32, 33]. In contrast, universally robust control demands \(t_{\rm MCT}^{\rm U}=5\pi/\Omega\) (see also [36]).
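As a rough illustration of how such an optimization can be set up, the sketch below evaluates \(\mathcal{J}_{\rm URC}\) for the piecewise-constant controls of Eq. (14) and hands it to a generic quasi-Newton optimizer. The optimizer, the random initialization, and the left-endpoint discretization of \(M_{0}\) are stand-ins for the (unspecified) QOC machinery, so convergence to the functional values reported in Fig. 1 is not guaranteed.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

d, Omega, Np, w = 2, 1.0, 40, 1.0
tf = 3.5 * 2 * np.pi / Omega                        # Omega*tf/(2*pi) = 3.5
dt = tf / Np
U_target = expm(-1j * sz * np.pi / 2)
I2 = np.eye(2)
P0 = np.outer(I2.reshape(-1), I2.reshape(-1)) / d   # projector onto |I)

def cost(phis):
    U = np.eye(2, dtype=complex)
    M0 = np.zeros((4, 4), dtype=complex)
    for phi in phis:
        M0 += np.kron(U, U.conj()).conj().T / Np    # left-endpoint rule for Eq. (8)
        H = Omega * (np.cos(phi) * sx + np.sin(phi) * sy)   # Eq. (14)
        U = expm(-1j * H * dt) @ U
    J0 = 1 - abs(np.trace(U_target.conj().T @ U)) ** 2 / d**2
    JU = np.linalg.norm(M0 @ (np.eye(4) - P0), 'fro') ** 2 / d   # J_U
    return (J0 + w * JU) / (1 + w)                  # balanced functional, w = 1

rng = np.random.default_rng(0)
res = minimize(cost, rng.uniform(0, 2 * np.pi, Np), method='L-BFGS-B')
print(res.fun)
```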
To characterize how these longer control waveforms yield robust control processes, we study how well the evolution under the perturbed Hamiltonian \(H_{\lambda}(t)=H_{0}(t)+\lambda V\) is able to achieve the target transformation. Fig. 1 shows the cases for (b) \(V=\sigma_{z}\) and (c) \(V=\vec{n}\cdot\vec{\sigma}\) with \(\vec{n}\) a randomly chosen unit vector. The gate fidelity is plotted against the uncertainty parameter \(\lambda\) for the three types of optimal controls found. All cases yield high fidelities if \(\lambda=0\), but the target-only optimization results deviate substantially from the ideal value once \(\lambda\neq 0\). In (b), we see that the robust control optimization (blue curve) is insensitive to perturbations in \(V=\sigma_{z}\), as it was designed to be. But (c) reveals that the same control is sensitive to generic perturbations. Remarkably, the URC solution (orange curve) is insensitive to first order with respect to perturbations along _any_ direction. This holds true even accounting for the faster minimal control times required for the other protocols [18].
_Generalized robustness.-_ Building upon the superoperator in Eq. (10) we can generalize this framework to optimize for robustness to any desired subset of operators. This is particularly relevant for systems beyond a single qubit where the nature of the noise or inhomogeneity is partially known instead of being completely arbitrary. Thus, rather than making a control protocol robust to all possible operators \(V\), we can instead focus on achieving robustness to a particular set of perturbations, for instance, those generated by local operators. In this case, we are interested in the action of the superoperator, \(M_{0}\), only on this reduced set. The advantage of imposing these generalized robustness requirements is that the optimization is less constrained, as effectively fewer matrix elements are being minimized. Therefore, it is easier to find good solutions even with restricted control time. For example, the total number of operators for \(N\) qubits is \(4^{N}\), while for the set of local operators it is only \(3N\).
Consider a quantum system with Hilbert space dimension, \(d\), and an orthonormal operator basis \(\left\{\Lambda_{j}\right\}\), \(j=0,1,\ldots d^{2}-1\). We introduce a covering of this basis set, \(\{C_{k}\}\), such that \(\left\{\Lambda_{j}\right\}=\cup_{k=1}^{K}C_{k}\). The projector onto \(C_{k}\) is \(\mathcal{P}_{k}(A)=\sum_{\Lambda_{j}\in C_{k}}\mathrm{Tr}(\Lambda_{j}^{\dagger}A)\Lambda_{j}\). In the superoperator picture, this is equivalent to defining \(\mathbb{P}_{k}=\sum_{\Lambda_{j}\in C_{k}}|\Lambda_{j})(\Lambda_{j}|\). These superoperators are clearly projectors, as \(\mathbb{P}_{k}^{2}=\mathbb{P}_{k}\) and \(\sum_{k=0}^{K}\mathbb{P}_{k}=\mathbb{I}\). By construction, we take \(\Lambda_{0}=\mathbb{I}/\sqrt{d}\) so that \(\mathbb{P}_{0}\) is defined as before. In order to look for controls which are insensitive to any operator within a given subset we seek to minimize the norm of
\[\tilde{M}_{0}=M_{0}\bigg{(}\mathbb{I}-\sum_{k\in\eta}\mathbb{P}_{k}\bigg{)}, \tag{15}\]
Figure 1: Universal robust control for single-qubit gates. (a) Optimized control functionals as a function of the total evolution time \(t_{f}\) for target-only control (gray, circles), target and robustness to a known \(V\) (blue, squares) and target and robustness to an unknown \(V\) (orange, triangles). (b) and (c) Gate fidelity as a function of perturbation strength \(\lambda\) for the cases where \(V=\sigma_{z}\) (b) and \(V=\vec{n}\cdot\vec{\sigma}\) with \(\vec{n}\) a random unit vector (c); results shown correspond to the average fidelity over 20 realizations. Lower panels show zoomed-in data of the infidelity \(1-F\) on a logarithmic scale. We choose a target \(U_{\rm target}=\exp(-i\sigma_{z}\pi/2)\), \(N_{p}=40\) control parameters, a balanced functional \(w=1\), and an operation time \(\Omega t_{f}/(2\pi)=3.5\) for (b) and (c).
where the sum runs over all relevant operator subsets \(\eta\) (typically including \(\Lambda_{0}\)). Note that \(\mathbb{P}_{k}\) corresponds to the operators we do not need to be robust to. To illustrate the procedure of imposing generalized robustness requirements into a QOC problem, consider a model of two qubits with symmetric controls,
\[H_{0}(t)=\Omega_{x}(t)S_{x}+\Omega_{y}(t)S_{y}+\beta S_{z}^{2}, \tag{16}\]
where \(S_{\alpha}=(\sigma_{\alpha}^{(1)}+\sigma_{\alpha}^{(2)})/2\) are collective spin operators and the interaction strength \(\beta>0\) is fixed. The perturbation operator, \(V\), can be either single-body (\(C_{1}\)) or two-body (\(C_{2}\)). We thus have a variety of possible optimization functionals depending on the level of robustness desired. Here we compare three cases: robustness to a single \(V=S_{x}\), robustness to all single-body operators (\(V\in C_{1}\)) and universal robustness (\(V\in C_{1}\cup C_{2}\)). We set the target as a randomly-chosen symmetric two-qubit unitary [18]. For this system we find that choosing an unbalanced optimization functional with \(w=0.1\) yields a good compromise between the fidelity at zero perturbation (\(\lambda=0\)) and the degree of robustness achieved [37]. In Fig. 2 we show the performance of the optimization using the different functionals introduced thus far, in the presence of various perturbations. As expected, the optimal control procedure is able to find fields which are robust to arbitrary single-body perturbations (green curve), but these are not necessarily robust to arbitrary perturbations, unlike the URC solution (orange curve). On the other hand, the URC solution results in evolutions which are more robust to any type of perturbation, including a two-body one of the form \(V=S_{x}^{2}\), when compared to the other methods.
The approach outlined above for designing generalized robustness requirements can be readily carried over to more complex systems. In the Supplementary Material [18] we show additional results that illustrate how this framework can be used to robustly generate entangled states in many-body systems.
_Conclusion.-_ We have introduced a versatile method, universally robust control (URC), to mitigate the effects of unknown sources of error. By recasting the impact of an arbitrary perturbation to the system in terms of a single object, here captured by the superoperator in Eq. (8), we showed that since this superoperator has no explicit dependence on the precise operator form of the error, it can be efficiently minimized to provide the necessary, highly robust, control pulses. We demonstrated the effectiveness of our approach for the realization of single- and two-qubit quantum gates, and have shown that it can be generalized to tackle state control problems or to the case of classical fluctuations [38]. Furthermore, we have demonstrated that the URC formalism can exploit partial information about the source of errors to build arbitrary robustness requirements into the optimal control problem. When combined with powerful numerical optimization techniques, we expect this flexible approach to be able to tackle a broad class of questions in quantum control which are of key importance for the development of quantum technologies. For instance, what is the fundamental trade-off between robustness and experimental constraints (such as bandwidth or evolution time)? How many control resources are required to achieve various levels of robustness in a quantum device? Finally, as our protocol introduces control pulses which dynamically implement 1-designs, this could be generalized to other \(t\)-designs which can be readily exploited for quantum computing protocols such as randomized benchmarking [39].
_Acknowledgments.-_ The authors acknowledge fruitful discussions with Lorenza Viola. PMP acknowledges support by U.S. National Science Foundation (grant number PHY-2210013) and AFOSR (grant number FA9550-181-1-0064). GDC acknowledges support by the UK EPSRC EP/S02994X/1. SC acknowledges support from the Alexander von Humboldt Foundation. AK and SC are supported by the Science Foundation Ireland Starting Investigator Research Grant "SpeedDemon" No. 18/SIRG/5508.
| We study the robustness of the evolution of a quantum system against small uncontrolled variations in the parameters of the Hamiltonian. The fidelity susceptibility, which quantifies the perturbative error to leading order, can be expressed in superoperator form, and this is used to derive control pulses that are robust to any class of systematic unknown errors. The proposed optimal control protocol is equivalent to searching for a sequence of unitaries that mimics the first-order moments of the Haar distribution, i.e., it constitutes a 1-design. The power of the results is highlighted for error-resistant single- and two-qubit gates.
2309.13358 | Towards Quantum Software Requirements Engineering | Quantum software engineering (QSE) is receiving increasing attention, as
evidenced by increasing publications on topics, e.g., quantum software
modeling, testing, and debugging. However, in the literature, quantum software
requirements engineering (QSRE) is still a software engineering area that is
relatively less investigated. To this end, in this paper, we provide an initial
set of thoughts about how requirements engineering for quantum software might
differ from that for classical software after making an effort to map classical
requirements classifications (e.g., functional and extra-functional
requirements) into the context of quantum software. Moreover, we provide
discussions on various aspects of QSRE that deserve attention from the quantum
software engineering community. | Tao Yue, Shaukat Ali, Paolo Arcaini | 2023-09-23T12:34:04 | http://arxiv.org/abs/2309.13358v1 | # Towards Quantum Software Requirements Engineering
###### Abstract
Quantum software engineering (QSE) is receiving increasing attention, as evidenced by increasing publications on topics, e.g., quantum software modeling, testing, and debugging. However, in the literature, quantum software requirements engineering (QSRE) is still a software engineering area that is relatively less investigated. To this end, in this paper, we provide an initial set of thoughts about how requirements engineering for quantum software might differ from that for classical software after making an effort to map classical requirements classifications (e.g., functional and extra-functional requirements) into the context of quantum software. Moreover, we provide discussions on various aspects of QSRE that deserve attention from the quantum software engineering community.
quantum software engineering, requirements engineering, requirements
## I Introduction
Quantum software engineering (QSE) [1, 2], as classical software engineering, is expected to focus on various phases of quantum software development, including requirements engineering, design and modeling, testing, and debugging. Various studies have been conducted in the literature regarding most of these phases. However, as reported in [3, 1], the requirements engineering phase remains relatively untouched. Only a few preliminary works exist on requirements engineering [4, 5].
Requirements engineering, like in the classical context, if not conducted properly, will lead to building incorrect quantum software and cause high costs in fixing it once problems are discovered in later phases of quantum software development. Thus, this paper focuses on quantum software requirements engineering (QSRE). In particular, we highlight the key aspects of QSRE that differentiate it from the classical domain. To illustrate the differences, we also present a motivating example of financial risk management. Moreover, we shed light on how typical requirements engineering activities will be impacted by the quantum context and suggest key follow-up activities.
## II Motivating Example
We will use the motivating example of credit risk analysis with quantum algorithms from Qiskit [6]. Detailed information about the algorithm is published in [7, 8]. The proposed quantum algorithm is more efficient than its equivalent classical implementations, such as using Monte Carlo simulations on classical computers. We calculate two key risk measures, i.e., _Value at Risk_ (VaR) and _Conditional Value at Risk_ (CVaR). Key requirements of the risk estimation, including the calculation of these two risk measures (i.e., functional requirements), are shown in Figure 1. In addition, we present extra-functional requirements specific to quantum computing, e.g., estimating the number of gates and (ancilla) qubits. Moreover, we show hardware constraints such as the limited number of qubits and limited depth of circuits.
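For orientation, a minimal classical Monte Carlo baseline for the two risk measures could look as follows; the independent-default Bernoulli portfolio and all parameter values are illustrative assumptions, not the model of [7, 8].

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, alpha = 100_000, 0.95          # 95% confidence level
exposures = np.array([1.0, 2.0, 3.0])     # loss given default per asset
p_default = np.array([0.15, 0.25, 0.20])  # default probabilities

# sample independent default scenarios and total portfolio losses
defaults = rng.random((n_samples, 3)) < p_default
losses = defaults.astype(float) @ exposures

var = np.quantile(losses, alpha)          # Value at Risk: alpha-quantile of loss
cvar = losses[losses >= var].mean()       # Conditional VaR: mean loss beyond VaR
print(f"VaR({alpha:.0%}) = {var:.3f}, CVaR = {cvar:.3f}")
```

The quantum algorithm of [7, 8] targets the same two quantities while improving on the sampling cost of such a Monte Carlo estimator.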
Figure 2 (a) presents a use case diagram including the actor _Credit Analyst_, responsible for managing risk in finance, as illustrated with use case _Manage risk in finance with quantum_. This use case includes use cases _Determine VaR_ and _Determine CVaR_. Also, for calculating VaR or CVaR, a credit analyst needs to define the confidence level, captured with use case _Define the confidence level_. In Figure 2 (b), we use the use case diagram notation to illustrate the main functionalities of a quantum expert applying the Amplitude Estimation [7, 8] algorithm for calculating VaR and CVaR.
## III Quantum Software Requirements Engineering
### _Stakeholders_
The ISO/IEC/IEEE 15288 standard defines stakeholders as: "_Individual or organization having a right, share, claim, or interest in a system or in its possession of characteristics that meet their needs and expectations_" [9]. Identifying stakeholders and their requirements is a crucial activity in requirements engineering. When building quantum software systems, stakeholders are the same as in the classical context. For example, in our example, stakeholders related to the development of the quantum risk management system include credit analysts (domain experts), borrowers (customers), banks, and software developers, all having different concerns on various aspects, including functionality, ease of use, price, and performance.
### _Requirements classifications_
Requirements are commonly classified into _functional_ and _extra-functional_ (Section III-B1). A further classification specific to QSRE is whether requirements are related to the quantum or the classical part (Section III-B2) of the system.
#### III-B1 Functional requirements and extra-functional requirements
_Functional requirements_ are related to the functionality that a quantum software system is expected to provide. For instance, the functional requirements of our example are indicated with <<Functional Requirement>>, such as _Determine Value at Risk (VaR) with the 95% of confidence level_ (see Figure 1). Identifying functional requirements for quantum software shall be the same as for the classical one.
SEBoK defines _non-functional requirements_ (also commonly named _extra-functional_) as "_Quality attributes or characteristics that are desired in a system, that define how a system is supposed to be_" [10]. These attributes vary from one system to another. For instance, safety requirements (i.e., one type of extra-functional requirements) will only apply to a safety-critical system. All the relevant extra-functional requirements from classical software systems generally apply to quantum software systems. However, there are additional requirements. For instance, Figure 1 shows three extra-functional requirements: _Estimation accuracy shall be a quadratic speed-up over classical methods (e.g., Monte Carlo)_, which is further decomposed into another two extra-functional requirements on estimating the required numbers of gates and (ancilla) qubits. These two requirements relate to the hardware constraints: _Limited number of qubits_ and _Limited depth of quantum circuits_. Identifying and realizing these extra-functional requirements require knowledge of quantum computing. The good news is that such requirements are common across various quantum software applications, implying that they can be reused and that common solutions can be proposed to address them. We would also like to point out that Saraiva et al. [5] have already identified five such common extra-functional requirements. Moreover, such extra-functional requirements might need to be elicited step-wise, as their elicitation depends on identifying other requirements. Ideally, when available in the future, an actionable requirements elicitation process could clearly guide users through all required activities.
#### III-B2 Quantum requirements vs. classical requirements
It is crucial to distinguish requirements that should be addressed with classical computers and those to be addressed with quantum computers. Moreover, there should be high-level requirements that are hybrid. For instance, Figure 1 defines three stereotypes <<cReq>>, <<qReq>>, and <<hReq>> to distinguish requirements that need to be addressed in the classical, quantum, or hybrid manner, respectively. Doing so is essential, as mentioned by Weder et al. [10, 11]; indeed, the first phase of their proposed quantum software lifecycle is about performing quantum-classical splitting. Requirements engineering, especially requirements analysis and optimization, is typically performed in this phase to decide which parts of a targeted problem need to be solved on a quantum computer and which parts go to classical hardware. Consequently, requirements specification and modeling solutions should provide mechanisms to support the problem separation into classical and quantum parts. We explain this idea by applying three stereotypes to use case modeling (see Figure 2).
### _Specific extra-functional concerns_
#### III-C1 Portability
Quantum computers of the near future will be built with different hardware technologies; thus, portability will remain a key requirement to be captured. For example, a quantum software system for our example (i.e., credit risk analysis) may need to be deployed to different quantum computers. Moreover, in the near future, various quantum computing resources and classical computing will be pooled so that more computations can be performed jointly. Thus, converting a problem into a set of requirements, where requirements shall be addressed with different types of quantum computers and classical computers, is needed.
#### III-C2 Performance
Performance requirements over classical implementation are essential to justify the need for quantum computing. For example, our example requires that the estimation accuracy be a quadratic speed-up over classical methods (see Figure 1). Such requirements may consider other requirements, e.g., the estimation accuracy depends on the number of gates and the number of (ancilla) qubits that need to be
Fig. 1: Finance application for credit risk analysis – key requirements, in the SysML requirements diagram notation. Stereotypes <<qReq>>, <<cReq>>, and <<hReq>> are applied to distinguish quantum requirements, classical requirements and the hybrid of both, respectively. Stereotypes <<Functional Requirement>> and <<Extra-functional Requirement>> distinguish functional and extra-functional requirements.
Fig. 2: (a) Application for credit risk analysis – key use cases. (b) Key functionalities of realizing _Determine VaR_ and _Determine CVaR_ in (a).
estimated at the requirements level to check whether or not the expected quadratic speed-up on the estimation accuracy can be achieved. These are two additional requirements. Such requirements are common across quantum software, as, currently and in the near future, the capabilities of quantum computers are limited. Thus, deciding early on whether available resources can achieve the expected performance requirements and, if yes, with which margin is important.
#### III-C3 Reliability
Currently, hardware errors affect the reliability of quantum computations and consequently constrain how quantum computers should be used and how quantum software should be designed. For instance, performing a reset after running several calculations for a period of time might be needed; this means that a quantum algorithm might not be run for a long time [12]. Thus, when identifying requirements of quantum software, it is essential to identify reliability requirements and associated constraints, especially considering the impact of hardware errors on the reliability of quantum software systems. Decisions such as introducing Quantum Error Correction [13] (which requires additional quantum resources) or other fault tolerance mechanisms might be needed early in the quantum software development lifecycle.
#### III-C4 Scalability
Current quantum computers support a limited number of qubits, i.e., resources are scarce and expensive. Therefore, scalability requirements are carefully considered while designing quantum software. For instance, as discussed in [8], in the context of quantum risk analysis (our motivating example), based on the results of the authors' investigation, more qubits are needed to model more realistic scenarios, thereby achieving practically meaningful advantages over Monte Carlo simulations, which represent state of the art in risk management. Moreover, scalability requirements (e.g., on the number of parameters and constraints expected to be handled in the risk analysis) should be carefully defined such that they can be satisfied with a limited depth of the quantum circuit to mitigate the impact of decoherence, with limited use of two-qubit gates (e.g., CNOT gates) to reduce the effect of crosstalk, and so on, which can be ensured with more powerful quantum computers, dedicated error mitigation mechanisms, and even carefully-designed quantum algorithms.
#### III-C5 Maintainability
Like classical software, quantum software will require maintainability. Given that, as expected, quantum hardware will continue to evolve, existing quantum software needs to be updated (in some cases) to deal with the hardware changes. For example, with the decreased hardware error rates provided by the latest technological advancements, error handling mechanisms in quantum software systems must be updated to improve performance and reduce the cost of additional error correction. Thus, quantum software systems shall identify and capture such maintainability requirements.
#### III-C6 Reusability
Like classical software, quantum software should be reusable so that it can be easily reused across different systems. Thus, such requirements shall be captured during requirements engineering. However, some specific requirements related to quantum software shall be explicitly captured. For instance, quantum software is often built as hybrid software. Therefore, having tight coupling between the two parts would reduce the reusability of quantum software. Instead, high cohesion of the quantum software part is expected to enable more reusability.
## IV Discussions and Suggestions
Requirements elicitation identifies the requirements of the software, i.e., of quantum software in our context. Given that, in this phase, we investigate _what_ problem a quantum software system should solve rather than _how_ the software should be implemented to address this problem, requirements elicitation for quantum software shall remain similar to the classical one. For instance, identifying stakeholders and defining system boundaries remain the same. However, one difference might be in checking whether a problem that has already been solved in the classical world needs to be solved with quantum computing at all, especially considering the known limitations of quantum computing. For example, in our running example (see Figure 1), we need to consider requirements specific to the quantum domain, such as the required number of qubits (i.e., a hardware constraint). Regarding stakeholders, there remain similarities between the classical and quantum requirements elicitation. For instance, a possible stakeholder in our example is the credit analyst, which would remain the same as in the classical domain. Existing methods, such as interviews and prototyping for requirements elicitation, are also expected to be largely similar.
Functional and non-functional requirements are typically specified during requirements specification at various formalization degrees, ranging from informal natural language specifications to fully formal specifications. Examples include semi-formal notations such as use cases and entity-relationship diagrams or formal notations such as Hoare logic [14]. Requirements specifications for quantum software will need to change to accommodate concepts specific to quantum software. For example, when using use case diagrams, as shown in our example, it is helpful to distinguish use cases from the classical world, the quantum world, and the mix of the two. Moreover, when specifying requirements with modeling notations (e.g., SysML requirements diagram), they need to be extended to capture novel concepts from quantum software. Finally, formal methods are also relevant to investigate for specifying quantum software requirements, as surveyed in [15]. Nonetheless, such methods are also quite early in their stage of development [15].
Requirements _verification_ of quantum software has received less attention; when considering formal methods, only preliminary tools and methods are available as discussed in [15]. Moreover, the survey discusses the need for new methods for formal verification for complex quantum software. Requirements validation via automated testing is getting popular in the software engineering community, with several new works being published (e.g., [16, 17, 18, 19, 20, 21]). Nonetheless, as discussed in [22], many testing challenges remain unaddressed. Finally, the classical verification and validation methods, e.g., inspection and walk-through, apply to some extent to quantum software requirements.
Based on our investigation, we recommend the following:
(1) Carefully consider separating parts of the problem that should be addressed in the classical world and those on quantum computers; (2) Identify and specify requirements related to various constraints, especially those about quantum hardware. Realizing these requirements depends on available and realistic quantum computing resources and explicitly specifying such requirements support requirements analysis on the feasibility of the realization; (3) Identify existing quantum algorithms that could be incorporated. Selecting which quantum algorithms to use is a decision that might need to be made at the early stage, as the availability and capability of such quantum algorithms have an impact on the quantum part of the realization of certain extra-functional requirements; (4) Based on the identified and specified requirements, requirements analysis might be needed to identify key factors (e.g., selection of quantum algorithms, determining quantum hardware resources, assessing the feasibility of satisfying extra-functional requirements) that have a significant impact on the development of quantum software, and potential trade-offs among these factors. Doing so is expected to effectively support decision-making on selecting quantum hardware resources; (5) Identify requirements whose realization strongly depends on constantly emerging quantum algorithms and advanced quantum computers. Doing so is necessary because as soon as more advanced quantum algorithms or quantum computers are available, such requirements could be realized (if not possible before) or realized better. Also, decisions made regarding the satisfaction of certain requirements (e.g., the required number of gates) and rationales behind these decisions are highly recommended to be recorded.
## V Conclusions and Future Work
Requirements engineering (RE) for quantum software has received less attention than other phases, such as quantum software testing. Thus, we presented some ideas on how RE for quantum software will differ from its classical counterpart. For instance, what will be the key differences for extra-functional requirements? Finally, we discussed how various steps in RE, such as requirements elicitation, specification, verification, and validation, will be impacted, including developing requirements specification/modeling, analysis, verification, and validation methods, with tool support, for supporting quantum software development at the RE phase.
| Quantum software engineering (QSE) is receiving increasing attention, as evidenced by the growing number of publications on topics such as quantum software modeling, testing, and debugging. In the literature, however, quantum software requirements engineering (QSRE) remains a relatively uninvestigated area of software engineering. To this end, this paper presents an initial set of thoughts on how requirements engineering for quantum software might differ from that for classical software, developed through an effort to map classical requirements classifications (e.g., functional and extra-functional requirements) into the context of quantum software. Furthermore, it discusses various aspects of QSRE that deserve attention from the quantum software engineering community.
2309.10651 | Wave breaking for the generalized Fornberg-Whitham equation | This paper aims to show that the Cauchy problem of the Burgers equation with
a weakly dispersive perturbation involving the Bessel potential (generalization
of the Fornberg-Whitham equation) can exhibit wave breaking for initial data
with large slope. We also comment on the dispersive properties of the equation. | Jean-Claude Saut, Shihan Sun, Yuexun Wang, Yi Zhang | 2023-09-19T14:31:24 | http://arxiv.org/abs/2309.10651v1 | # Wave breaking for the generalized Fornberg-Whitham equation
###### Abstract.
This paper aims to show that the Cauchy problem of the Burgers equation with a weakly dispersive perturbation involving the Bessel potential (generalization of the Fornberg-Whitham equation) can exhibit wave breaking for initial data with large slope. We also comment on the dispersive properties of the equation.
## 1. Introduction
This paper is a continuation of a previous work, [31], aiming to understand the possible wave breaking in weak perturbations of the Burgers equation. This kind of equation is a toy model to understand the influence of weakly dispersive perturbations of scalar conservation laws or systems of conservation laws. Actually, in most physically relevant dispersive systems (_eg_ the water wave system, see [22]), the dispersion is weak, and strong dispersive effects occur for instance in a long wave limit after Taylor expanding the dispersion relation. On the other hand, the nonlinearity is often quadratic (coming in particular from the Euler equations). It thus appears that equations with high order nonlinear terms and high dispersion, such as the generalized Korteweg-de Vries equation, are not appropriate to analyze the problem under study.
The possibility of wave breaking for the fractional Korteweg-de Vries equation (fKdV)
\[\partial_{t}u+u\partial_{x}u-(-\partial_{x}^{2})^{\alpha/2}\partial_{x}u=0, \quad-1\leq\alpha<0,\]
and of the related Whitham equation [34, 35] has been proven in many recent papers [19, 27, 31, 37].1 We aim here to show similar results in the case where the dispersive operator has a smooth symbol.2
Footnote 1: The earlier works [8, 25] have shown wave breaking for a wide class of non-local dispersive equations; their results, however, do not apply directly to the Whitham equation. The blow-up results in [6] concern the blow-up of the \(C^{1+\delta}\) norm, but the boundedness of the solution is not proven.
Footnote 2: The Whitham equation also has a smooth symbol.
More precisely, we are concerned with the Cauchy problem of non-local weakly dispersive perturbations of the Burgers equation (which will be referred to as the generalized Fornberg-Whitham equation)
\[\begin{cases}\partial_{t}u+u^{p}\partial_{x}u-\mathcal{K}_{s}\partial_{x}u=0, \quad p=1,2,...\\ u(x,0)=u_{0}(x),\end{cases} \tag{1.1}\]
where \(\mathcal{K}_{s}\), \(s>0\), is the Bessel potential of order \(s\), that is, the Fourier multiplier with symbol \((1+\xi^{2})^{-s/2}\); equivalently, \(\mathcal{K}_{s}g=G_{s}*g\), where \(G_{s}\) is the Bessel kernel whose basic properties are collected in Lemma 1 below. When \(p=1\) and \(s=2\), (1.1) reduces to the Fornberg-Whitham equation
\[\partial_{t}u+u\partial_{x}u-(1-\partial_{x}^{2})^{-1}\partial_{x}u=0. \tag{1.2}\]
**Remark 1**.: _The Fornberg-Whitham equation (1.2) is related to a two-component system depending on a parameter \(\kappa\),_
and the Burgers-Poisson equation (1.5) is recovered when \(\kappa=\frac{1}{2}\) and neglecting the two quadratic terms in the second equation._
_While the Camassa-Holm equation is formally integrable (see [11]), this does not seem to be the case of the Fornberg-Whitham equation, see [20]._
Since (1.1) is a skew-adjoint perturbation of the Burgers equation, one easily checks by standard energy methods that the associated Cauchy problem is locally well-posed in \(H^{s}(\mathbb{R}),s>3/2\), so that the nonlocal dispersive term does not allow one to enlarge the space of resolution for the Cauchy problem of the Burgers equation.3 We will show that it does not prevent the wave breaking phenomenon (shock formation).
Footnote 3: Note that ill-posedness of (1.1) in \(H^{3/2}(\mathbb{R})\) seems to be an open question. See [23] for a proof of this result for the Burgers equation.
We say that the solution of (1.1) exhibits wave breaking (shock formation) if there exists some \(T>0\) such that
\[|u(x,t)|<\infty,\quad\text{for $x\in\mathbb{R}$ and $t\in[0,T)$},\]
but
\[\sup_{x\in\mathbb{R}}|\partial_{x}u(x,t)|\longrightarrow+\infty,\quad\text{ as $t\to T^{-}$}.\]
Results concerning the wave breaking of solutions to the Fornberg-Whitham equation (1.2) were obtained in [9, 17, 18, 32, 36]. We refer to [18] for a review of various issues concerning the Cauchy problem for the Fornberg-Whitham equation.
**Remark 2**.: _A wave breaking for some solutions of a Fornberg-Whitham equation perturbed by a nonlocal commutator type term is proven in [2]._
Our aim in the present paper is to show that the solutions to the generalized Fornberg-Whitham equation (1.1) can exhibit wave breaking for initial data with large slope, thus extending the similar known results for the Fornberg-Whitham equation. We will also comment on the dispersive behavior of (1.1), in particular on the existence of solitary waves using the fact that it reduces to the KdV equation in the long wave limit, and on linear dispersive estimates.
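For readers who want to experiment with (1.1) directly, a minimal pseudo-spectral sketch is given below. It assumes the Bessel-potential symbol \((1+\xi^{2})^{-s/2}\), replaces the line by a large periodic box, and uses an off-the-shelf RK4 stepper, so it only illustrates the slope steepening of a large-slope datum rather than resolving the breaking itself; the grid, the initial datum, and the stopping time are arbitrary choices.

```python
import numpy as np

N, Lx = 1024, 50.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)
s, p = 0.5, 1
symbol = (1 + xi**2) ** (-s / 2)            # Bessel potential K_s

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.fft.ifft(1j * xi * u_hat).real
    dispersive = np.fft.ifft(symbol * 1j * xi * u_hat).real   # K_s u_x
    return -(u**p) * ux + dispersive        # u_t from (1.1)

u = 5.0 * np.exp(-x**2)                     # large-slope initial datum
dt, T = 1e-4, 0.15
for _ in range(int(T / dt)):                # classical RK4 step
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("min slope:", np.fft.ifft(1j * xi * np.fft.fft(u)).real.min())
```

Consistently with the Burgers mechanism, the minimal slope grows in magnitude on a time scale of order \(\big[-\inf_{x}u_{0}^{\prime}(x)\big]^{-1}\), in line with the bounds (2.3) below.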
**Notations.** Let \(\mathcal{F}(g)\) or \(\widehat{g}\) be the Fourier transform of a Schwartz function \(g\) whose formula is given by
\[\mathcal{F}(g)(\xi)=\widehat{g}(\xi):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}g( x)e^{-\mathrm{i}x\xi}\,dx\]
with inverse
\[\mathcal{F}^{-1}(g)(x)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}g(\xi)e^{ \mathrm{i}x\xi}\,d\xi,\]
and by \(m(\partial_{x})\) the Fourier multiplier with symbol \(m\) via the relation
\[\mathcal{F}\big{(}m(\partial_{x})g\big{)}(\xi)=m(\mathrm{i}\xi)\widehat{g}( \xi).\]
Take \(\varphi\in C_{0}^{\infty}(\mathbb{R})\) satisfying \(\varphi(\xi)=1\) for \(|\xi|\leq 1\) and \(\varphi(\xi)=0\) when \(|\xi|>2\), and let
\[\psi(\xi)=\varphi(\xi)-\varphi(2\xi),\quad\psi_{j}(\xi)=\psi(2^{-j}\xi),\quad \varphi_{j}(\xi)=\varphi(2^{-j}\xi),\]
we then may define the Littlewood-Paley projections \(P_{j},P_{\leq j},P_{>j}\) via
\[\widehat{P_{j}g}(\xi)=\psi_{j}(\xi)\widehat{g}(\xi),\quad\widehat{P_{\leq j}g }(\xi)=\varphi_{j}(\xi)\widehat{g}(\xi),\quad P_{>j}=1-P_{\leq j},\]
and also \(P_{\sim j},P_{\lesssim j},P_{\ll j}\) by
\[P_{\sim j}=\sum_{2^{k}\sim 2^{j}}P_{k},\quad P_{\lesssim j}=\sum_{2^{k}\leq 2^{j +C}}P_{k},\quad P_{\ll j}=\sum_{2^{k}\ll 2^{j}}P_{k},\]
and the obvious notation for \(P_{[a,b]}\). We will also denote \(g_{j}=P_{j}g,g_{\lesssim j}=P_{\lesssim j}g\), and so on, for convenience.
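The dyadic decomposition is easy to realize on a discrete Fourier grid; the sketch below uses a merely continuous piecewise-linear stand-in for \(\varphi\in C_{0}^{\infty}\), which suffices to illustrate the telescoping identity \(P_{\leq 0}+\sum_{j\geq 1}P_{j}=\mathrm{Id}\).

```python
import numpy as np

N, Lx = 512, 16 * 2 * np.pi
x = np.linspace(0, Lx, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)

def varphi(z):
    # continuous stand-in for the smooth cutoff: 1 on |z|<=1, 0 on |z|>=2
    return np.clip(2.0 - np.abs(z), 0.0, 1.0)

def P(j, g):        # P_j g, with psi_j(xi) = varphi(2^{-j} xi) - varphi(2^{-j+1} xi)
    psi_j = varphi(xi / 2**j) - varphi(xi / 2**(j - 1))
    return np.fft.ifft(psi_j * np.fft.fft(g)).real

def P_le(j, g):     # P_{<= j} g
    return np.fft.ifft(varphi(xi / 2**j) * np.fft.fft(g)).real

g = np.exp(-((x - Lx / 2) ** 2)) * np.cos(7 * x)
recon = P_le(0, g) + sum(P(j, g) for j in range(1, 8))
print(np.max(np.abs(recon - g)))   # telescoping sum recovers g up to roundoff
```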
The notation \(C\) always denotes a nonnegative universal constant which may be different from line to line but is independent of the parameters involved. Otherwise, we will specify it by the notation \(C(a,b,\dots)\). We write \(g\lesssim h\) (\(g\gtrsim h\)) when \(g\leq Ch\) (\(g\geq Ch\)), and \(g\sim h\) when \(g\lesssim h\lesssim g\). We also write \(\sqrt{1+x^{2}}=\langle x\rangle\) and \(\|g\|_{H^{1,1}}=\|\langle x\rangle g\|_{H^{1}}\) for simplicity.
## 2. Main results
### The case \(p=1\)
In this case, we show that the solution to (1.1) can exhibit wave breaking for \(s\in(2/5,\infty)\).
**Theorem 2.1**.: _Let \(s\in(2/5,1)\). If \(u_{0}\in H^{3}(\mathbb{R})\) satisfies the following slope conditions:_
\[\delta^{2}\big{[}\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{ 2}>C\big{(}\|u_{0}\|_{H^{3}}+C_{1}+C_{1}^{\frac{1}{3}}\|u_{0}^{\prime\prime \prime}\|_{L^{2}}^{\frac{2}{3}}\big{)},\] \[(1-\delta)^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>C(1+C_{0}^{-1}C_{1}), \tag{2.1}\] \[(1-\delta)^{3}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>C\big{(}1+C_{1}^{-\frac{2}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^ {\frac{2}{3}}\big{)},\]
_where \(\delta\in(0,1)\) is a small number, and \(C_{0},C_{1}>0\) satisfy_
\[\|u_{0}\|_{L^{\infty}}\leq C_{0}/2,\quad\|u_{0}^{\prime}\|_{L^{\infty}}\leq C _{1}/2. \tag{2.2}\]
_Then the solution \(u(t,x)\) to (1.1) exhibits wave breaking at \(T>0\) with_
\[(1+\delta)^{-1}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{-1}<T<( 1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{-1}. \tag{2.3}\]
**Theorem 2.2**.: _Let \(s\in[1,\infty)\). If \(u_{0}\in H^{2}(\mathbb{R})\) satisfies the following slope conditions:_
\[\delta^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{ 2}>C\big{(}\|u_{0}\|_{H^{2}}+C_{1}+\|u_{0}^{\prime\prime}\|_{L^{2}}\big{)},\] \[(1-\delta)^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>CC_{0}^{-1}\|u_{0}^{\prime}\|_{L^{2}}, \tag{2.4}\] \[(1-\delta)^{3}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>C\big{(}1+C_{1}^{-1}\|u_{0}^{\prime\prime}\|_{L^{2}}\big{)},\]
_where \(\delta\in(0,1)\) is a small number, and \(C_{0},C_{1}>0\) satisfy (2.2). Then the solution \(u(t,x)\) to (1.1) exhibits wave breaking at \(T>0\) with (2.3)._
**Remark 3**.: _There exists a class of initial data \(u_{0}\) satisfying the conditions (2.1)\({}_{1}\)-(2.1)\({}_{3}\) in Theorem 2.1. Indeed, for any given \(\phi\in H^{3}(\mathbb{R})\) with \(\inf_{x\in\mathbb{R}}\phi^{\prime}(x)<0\), set_
\[u_{0}=\lambda\phi,\qquad C_{0}=2\lambda\|\phi\|_{L^{\infty}},\qquad C_{1}=2\lambda\|\phi^{\prime}\|_{L^{\infty}}.\]
_Choosing \(\lambda>0\) sufficiently large, one can easily check that \(u_{0}\) satisfies (2.1)\({}_{1}\)-(2.1)\({}_{3}\) by comparing the powers of \(\lambda\) on both sides of each inequality. For example, \(u_{0}(x)=\lambda e^{-x^{2}}\) with \(\lambda>0\) sufficiently large satisfies (2.1)\({}_{1}\)-(2.1)\({}_{3}\)._
**Remark 4**.: _The conditions (2.4)\({}_{1}\)-(2.4)\({}_{3}\) lower the regularity requirement on the initial data \(u_{0}\) from \(H^{3}\) in (2.1)\({}_{1}\)-(2.1)\({}_{3}\) to \(H^{2}\), since the dispersive effect of the Bessel potential is much weaker when \(s\) is larger._
**Remark 5**.: _Oh and Pasqualotto [27] obtained precise wave breaking information for the solution to (1.1) when \(p=1\) and \(s\in(0,1]\) under more delicate assumptions on the initial data, by means of modulation theory (see also Yang's work [37] on the Burgers-Hilbert equation)._
### The case \(p>1\)
In this case, we show that the solution to (1.1) can exhibit wave breaking for all \(s\in(0,\infty)\).
**Theorem 2.3**.: _Let \(s\in(0,1)\). Suppose \(\bar{x}_{1}\) and \(\bar{x}_{2}\) are the largest and smallest numbers such that \(\{x:u_{0}^{\prime}(x)<0\}\subset[\bar{x}_{1},\bar{x}_{2}]\). If \(u_{0}\in H^{3}(\mathbb{R})\) satisfies the following slope conditions:_
\[\begin{split}&\delta^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{ \prime}(x)\big{]}^{2}>C\bigg{[}\|u_{0}\|_{H^{3}}+C_{1}+\bigg{(}\frac{A^{p-1}}{ 2B^{p-1}}\bigg{)}^{-\frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime} \|_{L^{2}}\bigg{]},\\ &(1-\delta)^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>C(1+C_{1}C_{0}^{-1}),\\ &(1-\delta)^{3}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x) \big{]}>C\bigg{[}1+\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{7B^{p-1}} {pA^{2p-2}}}C_{1}^{-1}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\bigg{]},\end{split} \tag{2.5}\]
_and local amplitude conditions:_
\[\begin{split}& u_{0}(x)<B-C(1-\delta)^{-2}\big{[}-\inf_{x\in \mathbb{R}}u_{0}^{\prime}(x)\big{]}^{-1}(C_{0}+C_{1}),\\ & u_{0}(x)>A+C(1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{ \prime}(x)\big{]}^{-1}(C_{0}+C_{1})\end{split} \tag{2.6}\]
_for all \(x\in[\bar{x}_{1},\bar{x}_{2}]\), where \(\delta\in(0,1)\) is a small number, and \(C_{0},C_{1}>0\) satisfy (2.2), and \(A,B>0\) satisfy_
\[\begin{split}& A^{2p-2}>8\delta B^{2p-2},\quad 4pA^{2p-2}>7B^{p-1}, \\ & B>A+C(1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime} (x)\big{]}^{-1}(C_{0}+C_{1}).\end{split} \tag{2.7}\]
_Then the solution \(u(t,x)\) to (1.1) exhibits wave breaking at \(T>0\) with_
\[\frac{1}{pB^{p-1}+\delta}\frac{1}{[-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)]}<T< \frac{1}{(A^{p-1}B^{1-p}-\delta)(pA^{p-1}-\delta)}\frac{1}{[-\inf_{x\in\mathbb{ R}}u_{0}^{\prime}(x)]}. \tag{2.8}\]
**Theorem 2.4**.: _Let \(s\in[1,\infty)\). Suppose \(\bar{x}_{1}\) and \(\bar{x}_{2}\) are the largest and smallest numbers such that \(\{x:u_{0}^{\prime}(x)<0\}\subset[\bar{x}_{1},\bar{x}_{2}]\). If \(u_{0}\in H^{2}(\mathbb{R})\) satisfies the slope conditions:_
\[\delta^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{2}>C\bigg{[}\|u_{0}\|_{H^{2}}+\bigg{(}C_{1}+\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{5B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime}\|_{L^{2}}\bigg{)}\bigg{]},\] \[(1-\delta)^{2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}>C\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{pB^{p-1}}{2pA^{2p-2}}}C_{0}^{-1}\|u_{0}^{\prime}\|_{L^{2}},\] \[(1-\delta)^{3}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}>C\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{5B^{p-1}}{2pA^{2p-2}}}C_{1}^{-1}\|u_{0}^{\prime\prime}\|_{L^{2}},\]
_and local amplitude conditions:_
\[u_{0}(x)<B-C(1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{ \prime}(x)\big{]}^{-1}\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{B^{p-1} }{2pA^{2p-2}}}\|u_{0}^{\prime}\|_{L^{2}},\] \[u_{0}(x)>A+C(1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{ \prime}(x)\big{]}^{-1}\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{B^{p-1} }{2pA^{2p-2}}}\|u_{0}^{\prime}\|_{L^{2}}\]
_for all \(x\in[\bar{x}_{1},\bar{x}_{2}]\), where \(\delta\in(0,1)\) is a small number, and \(C_{0},C_{1}>0\) satisfy (2.2), and \(A,B>0\) satisfy_
\[A^{2p-2}>8\delta B^{2p-2},\quad 4pA^{2p-2}>7B^{p-1},\] \[B>A+C(1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}( x)\big{]}^{-1}\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{B^{p-1}}{2pA^{2p-2} }}\|u_{0}^{\prime}\|_{L^{2}}.\]
_Then the solution \(u(t,x)\) to (1.1) exhibits wave breaking at \(T>0\) with (2.8)._
**Remark 6**.: _It should be pointed out that the interval \([\bar{x}_{1},\bar{x}_{2}]\) can be replaced by a larger but finite interval in the local amplitude conditions (2.6). Otherwise, the local amplitude condition would become a global one, which would mean \(u_{0}\notin L^{2}(\mathbb{R})\) and thus contradict \(u_{0}\in H^{3}(\mathbb{R})\). More importantly, in that case \(u_{0}\) would be bounded below by a positive constant on the entire line, which is physically unnatural since \(u_{0}\) is the initial elevation; in classical water wave models (such as the KdV equation), the solution is assumed to tend to zero at infinity._
## 3. Preliminaries
First, we list some basic properties of the Bessel potential.
**Lemma 1**.: _There exists some constant \(C\) only depending on \(s\) such that_
\[G_{s}(x)\leq C\left\{\begin{array}{ll}\frac{1}{|x|^{1-s}}&\quad\text{for}\ |x| \leq 1\ \text{and}\ \ 0<s<1,\\ \log\frac{1}{|x|}+1&\quad\text{for}\ |x|\leq 1\ \text{and}\ \ s=1,\\ 1&\quad\text{for}\ |x|\leq 1\ \text{and}\ \ s>1,\\ |x|^{\frac{s-2}{2}}e^{-|x|}&\quad\text{for}\ |x|>1\ \text{and}\ \ s>0\end{array}\right. \tag{3.1}\]
_and_
\[|G^{\prime}_{s}(|x|)|\leq C\left\{\begin{array}{ll}\frac{1}{|x|^{2-s}}&\quad \text{for}\ |x|\leq 1\ \text{and}\ \ 0<s<2,\\ 1&\quad\text{for}\ |x|\leq 1\ \text{and}\ \ s\geq 2,\\ |x|^{\frac{s-2}{2}}e^{-|x|}&\quad\text{for}\ |x|>1\ \text{and}\ \ s>0.\end{array}\right. \tag{3.2}\]
_In particular, one has_
\[\int_{1}^{\infty}|G^{\prime}_{s}(|x|)|\ \mathrm{d}x\leq C\quad\text{for}\ s>0 \tag{3.3}\]
_and_
\[\int_{\eta}^{\infty}G^{2}_{s}(x)\ \mathrm{d}x\leq C\quad\text{for}\ s\geq 1 \ \text{and}\ \eta\in(0,1]. \tag{3.4}\]
_Here \(C\) in (3.4) does not depend on \(\eta\)._
Proof.: Clearly one can write
\[G_{s}(x)=c_{1}(s)|x|^{\frac{s-1}{2}}K_{\frac{1-s}{2}}(|x|) \tag{3.5}\]
and
\[G^{\prime}_{s}(|x|)=-c_{2}(s)|x|^{\frac{s-1}{2}}K_{\frac{3-s}{2}}(|x|), \tag{3.6}\]
where \(K_{\nu}\) is known as a modified Bessel function of the third kind given by
\[K_{\nu}(r)=\frac{1}{2}\Big{(}\frac{r}{2}\Big{)}^{\nu}\int_{0}^{\infty}\frac{e ^{-t-\frac{r^{2}}{4t}}}{t^{\nu+1}}\ \mathrm{d}t\quad\text{for}\ r>0.\]
\(K_{\nu}(r)\) is analytic in \(r\) except at \(r=0\), is even in \(\nu\), and has the following asymptotic formulae (see [1, 26]):
\[K_{\nu}(r)\leq C(\nu)\left\{\begin{array}{ll}r^{-\nu},&\quad 0<r\leq 1,\ \nu>0\\ \log(1/r)+1,&\quad 0<r\leq 1,\ \nu=0\\ r^{-1/2}e^{-r},&\quad r>1,\ \nu\geq 0.\end{array}\right. \tag{3.7}\]
The estimates (3.1)-(3.2) follow from (3.5)-(3.7) directly. The estimate (3.3) can be easily verified by (3.2). In order to estimate (3.4), we split the integral as follows:
\[\int_{\eta}^{\infty}G^{2}_{s}(x)\ \mathrm{d}x=\int_{\eta}^{1}G^{2}_{s}(x)\ \mathrm{d}x+\int_{1}^{\infty}G^{2}_{s}(x)\ \mathrm{d}x. \tag{3.8}\]
Only the first integral on the RHS of (3.8) requires care, and only when \(s=1\). Indeed, noticing that
\[\log\left(\frac{1}{|y|}\right)\leq\frac{C}{|y|^{\frac{2}{5}}}\quad\text{for} \ 0<|y|<1, \tag{3.9}\]
one may estimate
\[\int_{\eta}^{1}G_{1}^{2}(x)\ \mathrm{d}x\leq C\int_{|y|<1}\left(\frac{1}{|y|^{\frac{ 2}{5}}}+1\right)^{2}\ \mathrm{d}y\leq C.\]
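The three small-\(|x|\) regimes in (3.1) can also be observed numerically. The following minimal Python sketch (our addition; it uses `scipy.special.kv` and sets the constant \(c_{1}(s)\) in (3.5) to 1, which only affects the implicit constant \(C\)) evaluates \(G_{s}\) near the origin:

```python
from scipy.special import kv  # modified Bessel function of the third kind

# Sanity check of the small-|x| regimes in (3.1) through the
# representation (3.5), with c_1(s) set to 1.  Since K_nu is even
# in nu, we may evaluate K_{|1-s|/2} in place of K_{(1-s)/2}.
for s in (0.5, 1.0, 1.5):
    for r in (1e-1, 1e-2, 1e-3, 1e-4):
        G = r ** ((s - 1) / 2) * kv(abs(1 - s) / 2, r)
        # expected: G ~ r^{s-1} (s<1), ~ log(1/r) (s=1), ~ O(1) (s>1)
        print(f"s={s}:  |x|={r:.0e}   G_s ~ {G:10.4g}")
```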
Next, we recall the standard Gagliardo-Nirenberg interpolation inequality.
**Lemma 2**.: _Let \(1\leq q,\ r\leq\infty\), \(j,\ m\in\mathbb{N}\) with \(j/m\leq\theta\leq 1\). If_
\[\frac{1}{p}=j+\theta\bigg{(}\frac{1}{r}-m\bigg{)}+\frac{1-\theta}{q},\]
_then_
\[\|\partial_{x}^{j}u\|_{L^{p}}\leq C\|\partial_{x}^{m}u\|_{L^{r}}^{\theta}\|u\| _{L^{q}}^{1-\theta},\]
_where the constant \(C\) depends only on \(j,m,r,p,q,\theta\)._
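For later use, we record the instance of Lemma 2 that enters (5.29) below: applying it to \(v=\partial_{x}u\) with \(j=1\), \(m=2\), \(p=\infty\), \(r=2\) and \(q=\infty\), the scaling relation reads \(0=1+\theta(\tfrac{1}{2}-2)\), forcing \(\theta=\tfrac{2}{3}\) (and indeed \(j/m=\tfrac{1}{2}\leq\tfrac{2}{3}\leq 1\)), so that

\[\|\partial_{x}^{2}u\|_{L^{\infty}}\leq C\|\partial_{x}^{3}u\|_{L^{2}}^{\frac{2}{3}}\|\partial_{x}u\|_{L^{\infty}}^{\frac{1}{3}}.\]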
## 4. Reformulation in Lagrangian coordinates
It is standard to show that there exists some positive \(T\) such that (1.1) admits a solution \(u\in C([0,T);H^{3}(\mathbb{R}))\), see for example [28]. In what follows, we will assume \(T\) is the maximal existence time of the solution \(u\).
Denote by \(X(t,x)\) the position of the particle \(x\) at time \(t\)
\[\frac{\mathrm{d}X}{\mathrm{d}t}(t,x)=u^{p}(t,X(t,x))\quad\text{ and }\quad X(0,x)=x.\]
Let
\[v_{n}(t,x)=(\partial_{x}^{n-1}u)(t,X(t,x))\quad\text{for }n=1,2.\]
Then it follows from (1.1) that
\[\frac{\mathrm{d}v_{1}}{\mathrm{d}t}=-K_{1}^{s}(t,x), \tag{4.1}\]
where
\[K_{1}^{s}(t,x)=\int_{\mathbb{R}}G_{s}(y)\partial_{x}u(t,X(t,x)-y)\ \mathrm{d}y \tag{4.2}\]
and
\[\frac{\mathrm{d}v_{2}}{\mathrm{d}t}=-pv_{1}^{p-1}v_{2}^{2}-K_{2}^{s}(t,x), \tag{4.3}\]
in which
\[K_{2}^{s}(t,x)=\int_{\mathbb{R}}G_{s}(y)\partial_{x}^{2}u(t,X(t,x)-y)\ \mathrm{d}y. \tag{4.4}\]
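Both (4.1) and (4.3) follow from the chain rule along the characteristics: writing (1.1) as \(u_{t}+u^{p}u_{x}+G_{s}\star u_{x}=0\) (the sign convention consistent with (4.1)-(4.2), (9.1) and Definition 1 below), one has
\[\frac{\mathrm{d}v_{1}}{\mathrm{d}t}=\big{(}u_{t}+u^{p}u_{x}\big{)}(t,X(t,x))=-\big{(}G_{s}\star\partial_{x}u\big{)}(t,X(t,x))=-K_{1}^{s}(t,x),\]
while differentiating (1.1) in \(x\) before evaluating along \(X\) gives
\[\frac{\mathrm{d}v_{2}}{\mathrm{d}t}=-pu^{p-1}(\partial_{x}u)^{2}(t,X(t,x))-\big{(}G_{s}\star\partial_{x}^{2}u\big{)}(t,X(t,x))=-pv_{1}^{p-1}v_{2}^{2}-K_{2}^{s}(t,x).\]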
Set
\[m(t)=\inf_{x\in\mathbb{R}}v_{2}(t,x)=\inf_{x\in\mathbb{R}}(\partial_{x}u)(t,x)=:m(0)q(t)^{-1}. \tag{4.5}\]
To prove Theorems 2.1-2.4, it suffices to show that \(q(t)\to 0\) as \(t\to T^{-}\); the key ingredient is to prove the following:
\[|K_{2}^{s}(t,x)|<\delta^{2}m^{2}(t)\quad\text{for }(t,x)\in[0,T)\times \mathbb{R}. \tag{4.6}\]
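Before turning to the proofs, the mechanism \(q(t)\to 0\) can be observed numerically. The following Python sketch (our addition) approximates (1.1) by a Fourier pseudospectral method on a periodic box and monitors \(\inf_{x}\partial_{x}u\); the box size, the initial datum and the parameters \(s,p\) are illustrative choices, not tied to the hypotheses of the theorems:

```python
import numpy as np

# Minimal pseudospectral sketch of u_t + u^p u_x + G_s * u_x = 0 on a
# large periodic box (a convenient stand-in for the real line).
L_box, N = 100.0, 4096
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L_box / N)
s, p = 0.5, 1
bessel = (1.0 + xi ** 2) ** (-s / 2)   # Fourier symbol of (I - d_x^2)^{-s/2}

u = 3.0 * np.exp(-x ** 2)              # smooth bump with a large negative slope

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.fft.ifft(1j * xi * u_hat).real
    Kux = np.fft.ifft(bessel * 1j * xi * u_hat).real   # G_s * u_x
    return -(u ** p) * ux - Kux

dt = 2.0e-4                            # classical RK4, stopped before breaking
for n in range(1, 1751):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    if n % 250 == 0:
        m = np.fft.ifft(1j * xi * np.fft.fft(u)).real.min()
        print(f"t = {n * dt:.3f}   inf_x u_x = {m:8.2f}")
```

The printed values of \(\inf_{x}u_{x}\) decrease rapidly as \(t\) approaches the breaking time, consistent with \(m(t)=m(0)q(t)^{-1}\) and \(q(t)\to 0\).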
## 5. Proof of Theorem 2.1
Note that \(p=1\). In what follows, we always assume \(\delta\in(0,1)\) is a sufficiently small number.
We first check that (4.6) holds at \(t=0\) by the assumption (2.1)\({}_{1}\). To this end, one writes \(K_{2}^{s}(0,x)\) as follows:
\[K_{2}^{s}(0,x)=\underbrace{\int_{|y|<1}G_{s}(y)u_{0}^{\prime\prime}(x-y)\ \mathrm{d}y}_{I_{1}}+ \underbrace{\int_{|y|\geq 1}G_{s}(y)u_{0}^{\prime\prime}(x-y)\ \mathrm{d}y}_{I_{2}}. \tag{5.1}\]
Applying (3.1)\({}_{1}\) to \(I_{1}\) yields
\[|I_{1}|\leq C\|u_{0}^{\prime\prime}\|_{L^{\infty}}\bigg{|}\int_{|y|<1}\frac{1 }{|y|^{1-s}}\ \mathrm{d}y\bigg{|}\leq C\|u_{0}\|_{H^{3}}, \tag{5.2}\]
where the Sobolev embedding \(H^{1}(\mathbb{R})\hookrightarrow L^{\infty}(\mathbb{R})\) has been used. To handle \(I_{2}\), one may integrate by parts to deduce
\[\begin{split}|I_{2}|&\leq|G_{s}(1)[u_{0}^{\prime}(x-1)-u_{0}^{\prime}(x+1)]|+\bigg{|}\int_{|y|\geq 1}G_{s}^{\prime}(y)u_{0}^{ \prime}(x-y)\ \mathrm{d}y\bigg{|}\\ &\leq C\|u_{0}^{\prime}\|_{L^{\infty}}\leq CC_{1},\end{split} \tag{5.3}\]
where one has used (3.3).
It follows from (2.1)\({}_{1}\) and (5.1)-(5.3) that
\[|K_{2}^{s}(0,x)|\leq C(\|u_{0}\|_{H^{3}}+C_{1})<\delta^{2}m^{2}(0)\quad\text{ for }x\in\mathbb{R}. \tag{5.4}\]
Next, we will show (4.6) for \(t\neq 0\) by the assumptions (2.1)\({}_{1}\)-(2.1)\({}_{3}\) together with an argument of contradiction. Suppose that (4.6) is not true; then there exist some \(T_{1}\in(0,T)\) and \(x_{0}\in\mathbb{R}\) such that
\[\boxed{|K_{2}^{s}(T_{1},x_{0})|=\delta^{2}m^{2}(T_{1})}. \tag{5.5}\]
Thus, without loss of generality, we may assume by continuity that
\[|K_{2}^{s}(t,x)|\leq\delta^{2}m^{2}(t)\quad\text{for }(t,x)\in(0,T_{1}]\times \mathbb{R}.\]
For \((t,x)\in(0,T_{1}]\times\mathbb{R}\), set
\[\Sigma_{\delta}(t)=\{x\in\mathbb{R}:v_{2}(t,x)\leq(1-\delta)m(t)\} \tag{5.6}\]
and
\[v_{2}(t,x)\bigg{(}=\frac{v_{2}(0)}{1+v_{2}(0)\int_{0}^{t}[1+v_{2}^{-2}K_{2}^{s }(\tau,x)]\ \mathrm{d}\tau}\bigg{)}=:m(0)r^{-1}(t,x), \tag{5.7}\]
where the first equality in (5.7) comes from solving (4.3).
Then, one has the following lemmas, whose proofs are quite similar to those in [31, 25], so we omit them.
**Lemma 3**.: _For fixed \(\delta\), the set \(\Sigma_{\delta}(t)\) is decreasing in \(t\), namely \(\Sigma_{\delta}(t_{2})\subset\Sigma_{\delta}(t_{1})\) whenever \(0\leq t_{1}\leq t_{2}\leq T_{1}\)._
**Lemma 4**.: _We have_
\[(1+\delta)m(0)\leq\frac{\mathrm{d}}{\mathrm{d}t}r(t,x)\leq(1-\delta)m(0)\quad \text{for }x\in\Sigma_{\delta}(T_{1}), \tag{5.8}\]
\[q(t)\leq r(t,x)\leq\frac{1}{1-\delta}q(t)\quad\text{for }x\in\Sigma_{\delta}(T_{1}) \tag{5.9}\]
_and_
\[0<q(t)\leq 1. \tag{5.10}\]
**Lemma 5**.: _We have_
\[\int_{0}^{t}q(\tau)^{-\gamma}\ \mathrm{d}\tau\leq-(1-\delta)^{-(\gamma+1)}(1- \gamma)^{-1}m^{-1}(0)\big{[}(1-\delta)^{\gamma-1}-q(t)^{1-\gamma}\big{]}, \tag{5.11}\]
_where \(\gamma\in(0,1)\cup(1,\infty)\), and_
\[\int_{0}^{t}q(\tau)^{-1}\ \mathrm{d}\tau\leq-(1-\delta)^{-2}m^{-1}(0)[-\log(1 -\delta)-\log q(t)]. \tag{5.12}\]
We first claim that
\[\|v_{1}(t)\|_{L^{\infty}}=\|u(t)\|_{L^{\infty}}<C_{0}\quad\text{for all }t\in[0,T_{1}] \tag{5.13}\]
and
\[\|v_{2}(t)\|_{L^{\infty}}=\|\partial_{x}u(t)\|_{L^{\infty}}<C_{1}q(t)^{-1} \quad\text{for all }t\in[0,T_{1}], \tag{5.14}\]
where \(C_{0}\) and \(C_{1}\) satisfy (2.2).
First, when \(t=0\), observe that
\[\|v_{1}(0)\|_{L^{\infty}}=\|u_{0}\|_{L^{\infty}}<C_{0}\]
and
\[\|v_{2}(0)\|_{L^{\infty}}=\|u_{0}^{\prime}\|_{L^{\infty}}<C_{1}=C_{1}q(0)^{-1}.\]
Then, it remains to show (5.13) and (5.14) when \(t\neq 0\), which will be achieved by the contradiction argument again. Assume that there exists \(T_{2}\in(0,T_{1}]\) such that (5.13) and (5.14) hold for all \(t\in[0,T_{2})\), but either of them fails at \(t=T_{2}\), that is
\[\boxed{\text{either }\|u(T_{2})\|_{L^{\infty}}=C_{0}\ \text{or}\ \|\partial_{x}u(T_{2})\|_{L^{\infty}}=C_{1}q^{-1}(T_{2})}. \tag{5.15}\]
Hence, by continuity, one has
\[\|v_{1}(t)\|_{L^{\infty}}=\|u(t)\|_{L^{\infty}}\leq C_{0}\quad\text{for }t\in[0,T_{2}] \tag{5.16}\]
and
\[\|v_{2}(t)\|_{L^{\infty}}=\|\partial_{x}u(t)\|_{L^{\infty}}\leq C_{1}q(t)^{-1 }\quad\text{for }t\in[0,T_{2}]. \tag{5.17}\]
**Estimates on \(K_{1}^{s}(t,x)\).** We split the integral (4.2) into two parts as follows:
\[K_{1}^{s}(t,x)=\underbrace{\int_{|y|\leq\eta}G_{s}(y)\partial_{x}u(t,X(t,x)-y )\ \mathrm{d}y}_{I_{3}}+\underbrace{\int_{|y|>\eta}G_{s}(y)\partial_{x}u(t,X(t,x)- y)\ \mathrm{d}y}_{I_{4}}, \tag{5.18}\]
where \(\eta=\eta(t)\in(0,1]\) will be determined later. By (3.1)\({}_{1}\) and (5.17), one has
\[|I_{3}|\leq C\|v_{2}\|_{L^{\infty}}\int_{|y|\leq\eta}\frac{1}{|y|^{1-s}}\ \mathrm{d}y \leq C\eta^{s}\|v_{2}\|_{L^{\infty}}\leq CC_{1}\eta^{s}q(t)^{-1}. \tag{5.19}\]
By (3.1)\({}_{1}\), (3.2)\({}_{1}\), (3.3) and (5.16), one integrates by parts to find that
\[\begin{split}|I_{4}|&\leq|G_{s}(\eta)[u(t,X(t,x)- \eta)-u(t,X(t,x)+\eta)]|\\ &\quad+|\int_{\eta<|y|\leq 1}G_{s}^{\prime}(y)u(t,X(t,x)-y)\ \mathrm{d}y|\\ &\quad+|\int_{|y|>1}G_{s}^{\prime}(y)u(t,X(t,x)-y)\ \mathrm{d}y|\\ &\leq C[\eta^{s-1}\|v_{1}\|_{L^{\infty}}+(\eta^{s-1}-1)\|v_{1}\|_ {L^{\infty}}+\|v_{1}\|_{L^{\infty}}]\\ &\leq C\eta^{s-1}\|v_{1}\|_{L^{\infty}}\leq CC_{0}\eta^{s-1}. \end{split} \tag{5.20}\]
By choosing \(\eta=q(t)\), it follows from (5.19) and (5.20) that
\[|K_{1}^{s}(t,x)|\leq C(C_{0}+C_{1})q(t)^{s-1}\quad\text{for }(t,x)\in(0,T_{2}] \times\mathbb{R}. \tag{5.21}\]
**Estimates on \(v_{1}(t,x)\).** By (5.16) and (5.21), one uses (4.1) and (5.11) to deduce that
\[\begin{split}|v_{1}(t,x)|&\leq\|u_{0}\|_{L^{\infty }}+C(C_{0}+C_{1})\int_{0}^{t}q(\tau)^{s-1}\ \mathrm{d}\tau\\ &\leq\frac{1}{2}C_{0}-C(C_{0}+C_{1})(1-\delta)^{-2}m^{-1}(0)\\ &<C_{0}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R},\end{split} \tag{5.22}\]
where (2.1)\({}_{2}\) has been used in the last inequality.
**Estimates on \(K_{2}^{s}(t,x)\).** Similar to (5.18), we also write the integral (4.4) as follows:
\[K_{2}^{s}(t,x)=\underbrace{\int_{|y|\leq\eta}G_{s}(y)\partial_{x}^{2}u(t,X(t, x)-y)\ \mathrm{d}y}_{I_{5}}+\underbrace{\int_{|y|>\eta}G_{s}(y)\partial_{x}^{2}u(t,X(t, x)-y)\ \mathrm{d}y}_{I_{6}}. \tag{5.23}\]
In a similar fashion to (5.19) and (5.20), one can estimate
\[|I_{5}|\leq C\|\partial_{x}^{2}u\|_{L^{\infty}}\int_{|y|\leq\eta}\frac{1}{|y| ^{1-s}}\ \mathrm{d}y\leq C\eta^{s}\|\partial_{x}^{2}u\|_{L^{\infty}} \tag{5.24}\]
and
\[\begin{split}|I_{6}|&\leq|G_{s}(\eta)[\partial_{x}u(t,X(t, x)-\eta)-\partial_{x}u(t,X(t,x)+\eta)]|\\ &\quad+\bigg{|}\int_{\eta<|y|\leq 1}G^{\prime}_{s}(y)\partial_{x}u(t,X (t,x)-y)\ \mathrm{d}y\bigg{|}\\ &\quad+\bigg{|}\int_{|y|>1}G^{\prime}_{s}(y)\partial_{x}u(t,X(t, x)-y)\ \mathrm{d}y\bigg{|}\\ &\leq C\big{[}\eta^{s-1}\|v_{2}\|_{L^{\infty}}+\big{(}\eta^{s-1}- 1\big{)}\|v_{2}\|_{L^{\infty}}+\|v_{2}\|_{L^{\infty}}\big{]}\\ &\leq CC_{1}\eta^{s-1}q(t)^{-1}.\end{split} \tag{5.25}\]
In order to get the blow-up rate of \(I_{5}\), it suffices to estimate \(\|\partial_{x}^{2}u\|_{L^{\infty}}\). Since \(\|\partial_{x}^{2}u\|_{L^{\infty}}\) is not controlled by (5.16) or (5.17), our idea is to use \(\|\partial_{x}^{3}u\|_{L^{2}}\) to control \(\|\partial_{x}^{2}u\|_{L^{\infty}}\). To this end, we turn to estimate \(\|\partial_{x}^{3}u\|_{L^{2}}\) by energy estimates. Applying \(\partial_{x}^{3}\) to (1.1) and multiplying by \(\partial_{x}^{3}u\), one obtains
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{ \mathbb{R}}(\partial_{x}^{3}u)^{2}\mathrm{d}x&=\int_{\mathbb{R}} \partial_{x}^{3}u\int_{\mathbb{R}}G_{s}(x-y)\partial_{y}^{4}u(y)\,\mathrm{d}y \mathrm{d}x\\ &\quad-\int_{\mathbb{R}}\big{[}4\partial_{x}u(\partial_{x}^{3}u)^ {2}+3(\partial_{x}^{2}u)^{2}\partial_{x}^{3}u+u\partial_{x}^{4}u\partial_{x}^ {3}u\big{]}\ \mathrm{d}x\\ &=-\frac{7}{2}\int_{\mathbb{R}}\partial_{x}u(\partial_{x}^{3}u)^ {2}\ \mathrm{d}x,\end{split} \tag{5.26}\]
where one has used the fact that \(G_{s}(\cdot)\) is an even kernel, and the following equalities:
\[\int_{\mathbb{R}}u\partial_{x}^{4}u\partial_{x}^{3}u\ \mathrm{d}x=-\frac{1}{2}\int_{ \mathbb{R}}\partial_{x}u(\partial_{x}^{3}u)^{2}\ \mathrm{d}x\quad\text{and}\quad\int_{\mathbb{R}}( \partial_{x}^{2}u)^{2}\partial_{x}^{3}u\ \mathrm{d}x=0.\]
Consequently, it follows from (5.26) and (4.5) that
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}}(\partial_{x}^{3}u)^{2}\ \mathrm{d}x \leq-7m(0)q(t)^{-1}\|\partial_{x}^{3}u\|_{L^{2}}^{2}, \tag{5.27}\]
which, together with (5.12), yields
\[\begin{split}\|\partial_{x}^{3}u\|_{L^{2}}&\leq\|u _{0}^{\prime\prime\prime}\|_{L^{2}}(1-\delta)^{-\frac{7}{2(1-\delta)^{2}}}q(t) ^{-\frac{7}{2(1-\delta)^{2}}}\\ &\leq C\|u_{0}^{\prime\prime\prime}\|_{L^{2}}q(t)^{-\frac{7}{2(1- \delta)^{2}}}\quad\text{for $t\in(0,T_{2}]$}.\end{split} \tag{5.28}\]
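In more detail, (5.27) and Grönwall's inequality give
\[\|\partial_{x}^{3}u(t)\|_{L^{2}}^{2}\leq\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^{2}\exp\Big{(}-7m(0)\int_{0}^{t}q(\tau)^{-1}\ \mathrm{d}\tau\Big{)},\]
and since \(m(0)<0\), inserting (5.12) yields \(-7m(0)\int_{0}^{t}q(\tau)^{-1}\,\mathrm{d}\tau\leq 7(1-\delta)^{-2}[-\log(1-\delta)-\log q(t)]\); exponentiating and taking square roots gives precisely (5.28).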
The next key observation is that one can utilize the Gagliardo-Nirenberg interpolation inequality to deduce that
\[\begin{split}\|\partial_{x}^{2}u\|_{L^{\infty}}&\leq C \|\partial_{x}u\|_{L^{\infty}}^{\frac{1}{3}}\|\partial_{x}^{3}u\|_{L^{2}}^{ \frac{2}{3}}\\ &\leq CC_{1}^{\frac{1}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^ {\frac{2}{3}}q(t)^{-\frac{1}{3}-\frac{7}{3(1-\delta)^{2}}}\quad\text{for $t\in(0,T_{2}]$},\end{split} \tag{5.29}\]
where one has used (5.17) and (5.28).
It follows from (5.24) and (5.29) that
\[|I_{5}|\leq CC_{1}^{\frac{1}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^{\frac{ 2}{3}}\eta^{s}q(t)^{-\frac{1}{3}-\frac{7}{3(1-\delta)^{2}}}. \tag{5.30}\]
Inserting (5.25) and (5.30) into (5.23) and choosing \(\eta=q(t)^{-\frac{2}{3}+\frac{7}{3(1-\delta)^{2}}}\) yield
\[\begin{split}|K_{2}(t,x)|&\leq C\Big{(}C_{1}+C_{1}^{ \frac{1}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^{\frac{2}{3}}\Big{)}q(t)^{- \frac{1+2s}{3}-\frac{7(1-s)}{3(1-\delta)^{2}}}\\ &\leq C\Big{(}C_{1}+C_{1}^{\frac{1}{3}}\|u_{0}^{\prime\prime \prime}\|_{L^{2}}^{\frac{2}{3}}\Big{)}q(t)^{-2}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R},\end{split} \tag{5.31}\]
where one has used
\[-\frac{1+2s}{3}-\frac{7(1-s)}{3(1-\delta)^{2}}\geq-2,\]
which follows from \(s\in(2/5,1)\) and (5.10); indeed, at \(\delta=0\) the left-hand side equals \((5s-8)/3\), which is \(\geq-2\) precisely when \(s\geq 2/5\), and since the inequality is strict for \(s>2/5\), it persists for \(\delta\) sufficiently small.
**Estimates on \(v_{2}(t,x)\).** By (4.3), one notices that
\[\frac{\mathrm{d}v_{2}}{\mathrm{d}t}\leq|K_{2}(t,x)|,\]
which, together with (5.31) and (5.11), implies
\[\begin{split} v_{2}(t,x)&\leq\|u_{0}^{\prime}\|_{L^ {\infty}}+C\Big{(}C_{1}+C_{1}^{\frac{1}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^ {2}}^{\frac{2}{3}}\Big{)}\int_{0}^{t}q(\tau)^{-2}\ \mathrm{d}\tau\\ &\leq\frac{1}{2}C_{1}q(t)^{-1}-C\Big{(}C_{1}+C_{1}^{\frac{1}{3}}\| u_{0}^{\prime\prime\prime}\|_{L^{2}}^{\frac{2}{3}}\Big{)}(1-\delta)^{-3}m^{-1}(0)q(t)^{ -1}\\ &<C_{1}q(t)^{-1}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R}, \end{split} \tag{5.32}\]
where one has used (2.1)\({}_{3}\) in the last inequality.
On the other hand, one may assume that \(\|u_{0}^{\prime}\|_{L^{\infty}}=-m(0)\) without loss of generality, which together with (4.5) and (2.2) implies
\[v_{2}(t,x)\geq m(0)q(t)^{-1}\geq-\frac{C_{1}}{2}q(t)^{-1}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R}. \tag{5.33}\]
**The contradiction argument** (5.15). Collecting (5.22), (5.32) and (5.33) yields a contradiction to (5.15). Thus, (5.13) and (5.14) follow.
**The contradiction argument** (5.5). It follows from (5.31) and (2.1)\({}_{1}\) that
\[\begin{split}|K_{2}(t,x)|&\leq C\Big{(}C_{1}+C_{1} ^{\frac{1}{3}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}^{\frac{2}{3}}\Big{)}m^{-2} (0)m^{2}(t)\\ &<\delta^{2}m^{2}(t)\quad\text{for }(t,x)\in(0,T_{1}]\times \mathbb{R},\end{split}\]
which contradicts (5.5). Hence we have shown (4.6).
Now we are ready to finish the proof of Theorem 2.1.
Proof of Theorem 2.1.: For \(t\in[0,T)\) and \(x\in\Sigma_{\delta}(t)\), it follows from Lemma 3 (by setting \(t_{1}=0\) and \(t_{2}=t\)) that
\[m(0)\leq v_{2}(0,x)\leq(1-\delta)m(0).\]
This, together with (5.7) and (5.8), yields
\[r(t)\leq m(0)\big{[}v_{2}^{-1}(0,x)+(1-\delta)t\big{]}\leq(1-\delta)^{-1}+m(0)( 1-\delta)t\]
and
\[r(t)\geq m(0)\big{[}v_{2}^{-1}(0,x)+(1+\delta)t\big{]}\geq 1+m(0)(1+\delta)t.\]
Hence
\[(1-\delta)+m(0)(1-\delta^{2})t\leq q(t)\leq(1-\delta)^{-1}+m(0)(1-\delta)t,\]
which is
\[(1-\delta)+\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)(1-\delta^{2})t\leq q(t)\leq( 1-\delta)^{-1}+\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)(1-\delta)t. \tag{5.34}\]
By (5.34), the lower bound forces \(q(t)>0\) for \(t<(1+\delta)^{-1}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{-1}\), while the upper bound forces \(q(t)\) to vanish no later than \((1-\delta)^{-2}\big{[}-\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\big{]}^{-1}\). On the other hand, (5.13) implies that \(v_{1}(t,x)\) is bounded for all \(t\in[0,T^{\prime}]\) with any \(T^{\prime}<T\). Hence, \(u\) exhibits wave breaking at \(T\) satisfying (2.3).
## 6. Proof of Theorem 2.2
Note that \(p=1\). We only handle the case \(s=1\), since the case \(s\in(1,\infty)\) can be dealt with analogously (in fact it is much easier). Compared with the proof in the case \(s\in(2/5,1)\), our aim in this section is to lower the regularity of \(u\) from \(H^{3}\) to \(H^{2}\).
First we check that (4.6) holds at \(t=0\). \(I_{2}\) can be estimated exactly as in (5.3). It remains to consider \(I_{1}\). By Hölder's inequality, one uses (3.1)\({}_{2}\) and (3.9) to estimate
\[\begin{split}|I_{1}|&\leq C\|u_{0}^{\prime\prime} \|_{L^{2}}\bigg{(}\int_{|y|<1}\Big{(}\log\Big{(}\frac{1}{|y|}\Big{)}+1\Big{)} ^{2}\ \mathrm{d}y\bigg{)}^{\frac{1}{2}}\\ &\leq C\|u_{0}^{\prime\prime}\|_{L^{2}}\bigg{(}\int_{|y|<1}\left( \frac{1}{|y|^{\frac{2}{5}}}+1\right)^{2}\ \mathrm{d}y\bigg{)}^{\frac{1}{2}}\leq C\|u_{0}\|_{H^{2}}.\end{split} \tag{6.1}\]
It follows from (2.4)\({}_{1}\), (6.1) and (5.3) that
\[|K_{2}^{s}(0,x)|\leq C(\|u_{0}\|_{H^{2}}+C_{1})<\delta^{2}m^{2}(0)\quad\text{ for }x\in\mathbb{R}.\]
Now, we shall prove (4.6) for \(t\neq 0\) by the assumptions (2.4)\({}_{1}\)-(2.4)\({}_{3}\). The argument of contradiction, lemmas and claims are exactly the same as in the proof of Theorem 2.1.
**Estimates on \(K_{1}^{s}(t,x)\).** One may use Hölder's inequality to estimate
\[|I_{4}|\leq\|\partial_{x}u\|_{L^{2}}\bigg{(}\int_{|y|>\eta}G_{s}^{2}(y)\ \mathrm{d}y \bigg{)}^{\frac{1}{2}}\leq C\|\partial_{x}u\|_{L^{2}}, \tag{6.2}\]
where one has used (3.4), and
\[\begin{split}|I_{3}|&\leq C\|\partial_{x}u\|_{L^{2} }\bigg{(}\int_{|y|\leq\eta}\Big{(}\log\Big{(}\frac{1}{|y|}\Big{)}+1\Big{)}^{2} \ \mathrm{d}y\bigg{)}^{\frac{1}{2}}\\ &\leq C\|\partial_{x}u\|_{L^{2}}\bigg{(}\int_{|y|\leq\eta}\left( \frac{1}{|y|^{\frac{2}{5}}}+1\right)^{2}\ \mathrm{d}y\bigg{)}^{\frac{1}{2}}\leq C\|\partial_{x}u\|_{L^{2}}\eta^{\frac{1} {10}}\end{split} \tag{6.3}\]
where one has used (3.1)\({}_{2}\) and (3.9).
**Estimates on \(K_{2}^{s}(t,x)\).** For \(I_{6}\), similar to (5.25), one uses (3.1)\({}_{2}\), (3.2)\({}_{1}\) and (3.3) to get
\[\begin{split}|I_{6}|&\leq|G_{s}(\eta)[\partial_{x}u (t,X(t,x)-\eta)-\partial_{x}u(t,X(t,x)+\eta)]|\\ &\quad+\bigg{|}\int_{\eta<|y|\leq 1}G_{s}^{\prime}(y)\partial_{x}u (t,X(t,x)-y)\ \mathrm{d}y\bigg{|}\\ &\quad+\bigg{|}\int_{|y|>1}G_{s}^{\prime}(y)\partial_{x}u(t,X(t, x)-y)\ \mathrm{d}y\bigg{|}\\ &\leq C\bigg{[}\bigg{(}\log\bigg{(}\frac{1}{\eta}\bigg{)}+1\bigg{)} \|v_{2}\|_{L^{\infty}}+|\log\eta|\|v_{2}\|_{L^{\infty}}+\|v_{2}\|_{L^{\infty} }\bigg{]}\\ &\leq CC_{1}\eta^{-\frac{1}{6}}q(t)^{-1},\end{split} \tag{6.9}\]
where one has used
\[\log\bigg{(}\frac{1}{\eta}\bigg{)}\leq\frac{C}{\eta^{\frac{1}{6}}}\quad\text{ for }0<\eta<1.\]
For \(I_{5}\), one instead uses Hölder's inequality to estimate
\[|I_{5}|\leq C\|\partial_{x}^{2}u\|_{L^{2}}\eta^{\frac{1}{10}}.\]
It remains to control \(\|\partial_{x}^{2}u\|_{L^{2}}\). Indeed, applying \(\partial_{x}^{2}\) to (1.1) and multiplying it by \(\partial_{x}^{2}u\), one deduces that
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}}(\partial_{x}^{2}u)^{2}\ \mathrm{d}x=-5\int_{\mathbb{R}}\partial_{x}u(\partial_{x}^{2}u)^{2}\ \mathrm{d}x\leq-5m(0)q^{-1}(t)\|\partial_{x}^{2}u\|_{L^{2}}^{2},\]
which, along with (5.12), yields
\[\begin{split}\|\partial_{x}^{2}u\|_{L^{2}}&\leq\|u _{0}^{\prime\prime}\|_{L^{2}}(1-\delta)^{-\frac{5}{2(1-\delta)^{2}}}q(t)^{- \frac{5}{2(1-\delta)^{2}}}\\ &\leq C\|u_{0}^{\prime\prime}\|_{L^{2}}q(t)^{-\frac{5}{2(1-\delta )^{2}}}\quad\text{for }t\in(0,T_{2}].\end{split}\]
Hence, one obtains
\[|I_{5}|\leq C\|u_{0}^{\prime\prime}\|_{L^{2}}\eta^{\frac{1}{10}}q(t)^{-\frac{ 5}{2(1-\delta)^{2}}}. \tag{6.10}\]
Taking \(\eta=q(t)^{-\frac{15}{4}+\frac{75}{8(1-\delta)^{2}}}\), it follows from (5.23), (6.9) and (6.10) that
\[\begin{split}|K_{2}^{s}(t,x)|&\leq C\big{(}C_{1}+\| u_{0}^{\prime\prime}\|_{L^{2}}\big{)}q(t)^{-\frac{3}{8}-\frac{25}{16(1-\delta)^{2}}} \\ &\leq C\big{(}C_{1}+\|u_{0}^{\prime\prime}\|_{L^{2}}\big{)}q(t)^{- 2}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R},\end{split} \tag{6.11}\]
where one has used
\[-\frac{3}{8}-\frac{25}{16(1-\delta)^{2}}\geq-2.\]
**Estimates on \(v_{2}(t,x)\).** It follows from (6.11) and (5.11) that
\[\begin{split} v_{2}(t,x)&\leq\|u_{0}^{\prime}\|_{L^{ \infty}}+C\big{(}C_{1}+\|u_{0}^{\prime\prime}\|_{L^{2}}\big{)}\int_{0}^{t}q( \tau)^{-2}\ \mathrm{d}\tau\\ &\leq\frac{1}{2}C_{1}q^{-1}(t)-C(1-\delta)^{-3}m^{-1}(0)\big{(}C _{1}+\|u_{0}^{\prime\prime}\|_{L^{2}}\big{)}q(t)^{-1}\\ &<C_{1}q(t)^{-1}\quad\text{for }(t,x)\in(0,T_{2}]\times\mathbb{R}, \end{split} \tag{6.12}\]
where (2.4)\({}_{3}\) has been used. Moreover, similar to (5.33), it holds that
\[v_{2}(t,x)\geq-\frac{C_{1}}{2}q(t)^{-1}\quad\text{for }(t,x)\in(0,T_{2}] \times\mathbb{R}. \tag{6.13}\]
**The contradiction argument** (5.15). By (6.8), (6.12) and (6.13), we get a contradiction to (5.15). Hence, (5.13) and (5.14) follow.
**The contradiction argument** (5.5). One uses (6.11) and (2.4)\({}_{1}\) to find that
\[\begin{split}|K_{2}(t,x)|&\leq C\big{(}C_{1}+\|u_{ 0}^{\prime\prime}\|_{L^{2}}\big{)}m^{-2}(0)m^{2}(t)\\ &<\delta^{2}m^{2}(t)\quad\text{for }(t,x)\in(0,T_{1}]\times \mathbb{R},\end{split}\]
which contradicts (5.5). This completes the proof of (4.6).
The remaining proof is similar to that of the case \(2/5<s<1\), so we omit it.
## 7. Proof of Theorem 2.3
Note that \(p>1\). We will only point out the main differences between the proofs of Theorems 2.1 and 2.3.
First, one can check (5.4) by following exactly the same way as in (5.1)-(5.3) and using (2.5)\({}_{1}\). Then, it remains to show (4.6) by an argument of contradiction.
Since the nonlinear term \(v_{1}^{p-1}v_{2}^{2}\) does not have a fixed sign generally, in order to use \(v_{1}^{p-1}v_{2}^{2}\) to control \(K_{2}(t,x)\) in (4.3), the key idea is to make the following a priori assumption:
\[\boxed{A\leq v_{1}(t;x)\leq B\quad\text{ for }(t,x)\in[0,T_{1}]\times[\bar{x}_{1},\bar{x}_{2}]}, \tag{7.1}\]
in which \(A\) and \(B\) are given in Theorem 2.3 satisfying (2.7).
For \((t,x)\in[0,T_{1}]\times[\bar{x}_{1},\bar{x}_{2}]\), define
\[\Sigma_{\delta}(t)=\{x\in[\bar{x}_{1},\bar{x}_{2}]:v_{2}(t;x)\leq(A^{p-1}B^{1- p}-\delta)m(t)\}\]
and
\[v_{2}(t,x)\bigg{(}=\frac{v_{2}(0)}{1+v_{2}(0)\int_{0}^{t}\big{[}pv_{1}^{p-1}( \tau)+\big{(}v_{2}^{-2}K_{2}^{s}\big{)}(\tau)\big{]}\ \mathrm{d}\tau}\bigg{)}=:m(0)r^{-1}(t,x).\]
Then, one can show the following lemmas.
**Lemma 6**.: _For fixed \(\delta\), the set \(\Sigma_{\delta}(t)\) is decreasing in \(t\), namely \(\Sigma_{\delta}(t_{2})\subset\Sigma_{\delta}(t_{1})\) whenever \(0\leq t_{1}\leq t_{2}\leq T_{1}\)._
**Lemma 7**.: _It holds that_
\[(pB^{p-1}+\delta)m(0)\leq\frac{\mathrm{d}}{\mathrm{d}t}r(t,x)\leq(pA^{p-1}- \delta)m(0)<0\quad\text{for }x\in\Sigma_{\delta}(T_{1}),\]
\[q(t)\leq r(t,x)\leq\frac{1}{A^{p-1}B^{1-p}-\delta}q(t)\quad\text{for }x\in \Sigma_{\delta}(T_{1})\]
_and_
\[0<q(t)\leq 1.\]
**Lemma 8**.: _It holds that_
\[\begin{split}\int_{0}^{t}q^{-\gamma}(\tau)\,\mathrm{d}\tau& \leq(1-\gamma)^{-1}m^{-1}(0)(pA^{p-1}-\delta)^{-1}(A^{p-1}B^{1-p}- \delta)^{-\gamma}\\ &\quad\times\big{[}q^{1-\gamma}(t)-(A^{p-1}B^{1-p}-\delta)^{ \gamma-1}\big{]},\end{split} \tag{7.2}\]
_where \(\gamma\in(0,1)\cup(1,\infty)\), and_
\[\begin{split}\int_{0}^{t}q^{-1}(\tau)\,\mathrm{d}\tau& \leq m^{-1}(0)(pA^{p-1}-\delta)^{-1}(A^{p-1}B^{1-p}-\delta)^{-1}\\ &\quad\times\big{[}\log(A^{p-1}B^{1-p}-\delta)+\log q(t)\big{]}. \end{split} \tag{7.3}\]
**Estimates on \(K_{1}^{s}(t,x)\).** The estimates are exactly the same as in the proof of Theorem 2.1.
**Estimates on \(v_{1}(t,x)\).** Note that
\[(pA^{p-1}-\delta)^{-1}(A^{p-1}B^{1-p}-\delta)^{-1}<(1-\delta)^{-2}.\]
Then it follows from (4.1) and (7.2) that
\[\begin{split}|v_{1}(t;x)|&\leq\|u_{0}\|_{L^{\infty }}+\int_{0}^{t}|K_{1}^{s}(\tau,x)|\ \mathrm{d}\tau\\ &\leq\frac{1}{2}C_{0}-C(C_{0}+C_{1})\big{(}pA^{p-1}-\delta\big{)} ^{-1}\big{(}A^{p-1}B^{1-p}-\delta\big{)}^{s-1}\\ &\quad\times m^{-1}(0)\big{[}(A^{p-1}B^{1-p}-\delta)^{-s}-q^{s}(t )\big{]}\\ &\leq\frac{1}{2}C_{0}-C(C_{0}+C_{1})(1-\delta)^{-2}m^{-1}(0)\\ &<C_{0}\quad\text{for }\ (t,x)\in(0,T_{2}]\times\mathbb{R},\end{split} \tag{7.4}\]
where (2.5)\({}_{2}\) has been used in the last inequality.
**Estimates on \(K_{2}^{s}(t,x)\).** By (7.3), one solves (5.27) to find that
\[\begin{split}\|\partial_{x}^{3}u\|_{L^{2}}&\leq\|u _{0}^{\prime\prime\prime}\|_{L^{2}}\big{(}A^{p-1}B^{1-p}-\delta\big{)}^{-\frac {7}{2(pA^{p-1}-\delta)(A^{p-1}B^{1-p}-\delta)}}\\ &\quad\times q(t)^{-\frac{7}{2(pA^{p-1}-\delta)(A^{p-1}B^{1-p}- \delta)}}\\ &\leq\left(\frac{A^{p-1}}{2B^{p-1}}\right)^{-\frac{7B^{p-1}}{2pA^ {2p-2}}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}q(t)^{-\frac{7B^{p-1}}{2pA^{2p-2 }}}\quad\text{for }\ t\in(0,T_{2}],\end{split}\]
which, along with (5.24), yields
\[|I_{5}|\leq C\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{-\frac{7B^{p-1}}{2pA^{2p-2 }}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\eta^{s}q(t)^{-\frac{7B^{p-1}}{2pA^{2p -2}}}.\]
This together with (5.23), (5.25) by choosing \(\eta=q(t)^{-1+\frac{7B^{p-1}}{2pA^{2p-2}}}\) gives
\[|K_{2}^{s}(t,x)| \leq C\bigg{[}C_{1}+\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{- \frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\bigg{]}q(t)^ {\big{(}1-\frac{7B^{p-1}}{2pA^{2p-2}}\big{)}(1-s)-1}\] \[\leq C\bigg{[}C_{1}+\bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{- \frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\bigg{]}q(t) ^{-2}\ \ \ \text{for}\ \ (t,x)\in(0,T_{2}]\times\mathbb{R},\]
where one has used
\[\bigg{(}1-\frac{7B^{p-1}}{2pA^{2p-2}}\bigg{)}(1-s)-1\geq-2,\]
which follows from (2.7), \(s\in(0,1)\) and \(0<q(t)\leq 1\); indeed, the second condition in (2.7) gives \(\frac{7B^{p-1}}{2pA^{2p-2}}<2\), so that \(\big{(}1-\frac{7B^{p-1}}{2pA^{2p-2}}\big{)}(1-s)>-1\).
**Estimates on \(v_{2}(t,x)\).** Note that
\[(pA^{p-1}-\delta)^{-1}(A^{p-1}B^{1-p}-\delta)^{-2}<(1-\delta)^{-3}.\]
Then it follows from (4.3) and (7.3) that
\[v_{2}(t,x) \leq C\bigg{[}C_{1}+\ \bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{- \frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\bigg{]} \int_{0}^{t}q(\tau)^{-2}\ \mathrm{d}\tau\] \[\leq\frac{1}{2}C_{1}q(t)^{-1}-C\bigg{[}C_{1}+\ \bigg{(}\frac{A^{p-1}}{2B^{p-1}}\bigg{)}^{- \frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime}\|_{L^{2}}\bigg{]}( pA^{p-1}-\delta)^{-1}\] \[\quad\times(A^{p-1}B^{1-p}-\delta)^{-2}m^{-1}(0)\big{[}q(t)^{-1}- (A^{p-1}B^{1-p}-\delta)\big{]}\] \[\leq\frac{1}{2}C_{1}q(t)^{-1}-C\bigg{[}C_{1}+\bigg{(}\frac{A^{p-1 }}{2B^{p-1}}\bigg{)}^{-\frac{7B^{p-1}}{2pA^{2p-2}}}\|u_{0}^{\prime\prime\prime }\|_{L^{2}}\bigg{]}\] \[\quad\times(1-\delta)^{-3}m^{-1}(0)q^{-1}(t)\] \[<C_{1}q(t)^{-1}\ \ \ \text{for}\ \ (t,x)\in(0,T_{2}]\times \mathbb{R},\]
where (2.5)\({}_{3}\) has been used in the last inequality.
**The contradiction arguments** (5.15) **and** (5.5). One can get contradictions to (5.15) and (5.5) by collecting the estimates above.
**The a priori assumption** (7.1). Similar to (7.4), one may estimate
\[v_{1}(t,x) \leq u_{0}(x)-Cm^{-1}(0)(1-\delta)^{-2}(C_{0}+C_{1})<B, \tag{7.5}\] \[v_{1}(t,x) \geq u_{0}(x)+Cm^{-1}(0)(1-\delta)^{-2}(C_{0}+C_{1})>A\]
for \(t\in[0,T_{1}]\) and \(x\in[\bar{x}_{1},\bar{x}_{2}]\). Here one has used (2.6)\({}_{1}\) in (7.5)\({}_{1}\) and (2.6)\({}_{2}\) in (7.5)\({}_{2}\), respectively.
## 8. Proof of Theorem 2.4
One can follow the arguments in showing Theorem 2.2 and Theorem 2.3 to prove Theorem 2.4.
## 9. Dispersive properties and weak entropy solutions
### Linear estimates
We have seen that (1.1) shares with the Burgers equation a typical property of conservation laws, namely the possibility of shock formation. We briefly comment here on dispersive properties. The first concerns \(L^{1}\)-\(L^{\infty}\) estimates for the linear equation
\[u_{t}+(I-\partial_{x}^{2})^{-s/2}u_{x}=0,\quad u(x,0)=\phi(x). \tag{9.1}\]
The case \(s=2\) (linearized Fornberg-Whitham equation) corresponds to the linear Benjamin-Bona-Mahony (BBM) equation
\[u_{t}+(I-\partial_{x}^{2})^{-1}u_{x}=0, \tag{9.2}\]
for which J. Albert in [3] proved the following decay estimate for the solution \(u\) of (9.2) with initial data \(\phi\in L^{1}(\mathbb{R})\cap H^{4}(\mathbb{R})\):
\[\|u(\cdot,t)\|_{L^{\infty}}\lesssim(\|\phi\|_{L^{1}}+\|\phi\|_{H^{4}})(1+t)^{ -1/3},\quad\forall t\geq 1.\]
In [4], Albert proved a similar decay estimate in a different functional setting, that is
\[\|u(\cdot,t)\|_{L^{\infty}}\lesssim\|(1+|x|)\phi\|_{L^{2}}(1+t)^{-1/3},\quad \forall t\geq 1.\]
Similar linear estimates hold for (9.1) as well.
Let
\[\Phi(\xi)=\Phi(\xi;x,t):=t^{-1}x\xi+(1+|\xi|^{2})^{-s/2}\xi.\]
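Note that, for \(|\xi|\) small,
\[\partial_{\xi}\Phi(\xi)=t^{-1}x+(1+\xi^{2})^{-s/2}-s\xi^{2}(1+\xi^{2})^{-s/2-1}=t^{-1}x+1-\frac{3s}{2}\xi^{2}+\mathcal{O}(\xi^{4}),\]
so a pair of stationary points \(\pm\xi_{0}\) exists precisely when \(t^{-1}x+1>0\), i.e. \(x>-t\); this is the root structure and the source of the lower bound \(|\partial_{\xi}\Phi|\gtrsim|\xi_{0}^{2}-\xi^{2}|\) used repeatedly below.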
One may write
\[e^{t(1-\partial_{x}^{2})^{-s/2}\partial_{x}}g(t,x) =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{\mathrm{i}t\Phi( \xi)}\widehat{g}(t,\xi)\varphi(2^{10}\xi)\,\mathrm{d}\xi\] \[\quad+\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{\mathrm{i}t \Phi(\xi)}\widehat{g}(t,\xi)\big{(}1-\varphi(2^{10}\xi)\big{)}\,\mathrm{d}\xi\] \[=:I_{L}(t,x,g)+I_{H}(t,x,g).\]
Then, we have the following decay estimates.
**Proposition 1**.: _Let \(N_{0}=[s]+1\), \(t\geq 1,\ x\in\mathbb{R}\) and \(g\) be a real function. Assume_
\[\|\widehat{g}\|_{L^{\infty}}+t^{-1/6}(\|g\|_{H^{1,1}}+\|g\|_{H^{N_{0}}})\leq 1.\]
_Then_
\[|I_{L}(t,x,g)|\lesssim t^{-1/3}\langle(x+t)/t^{1/3}\rangle^{-1/4} \tag{9.3}\]
_and_
\[|I_{H}(t,x,g)|\lesssim t^{-1/3}. \tag{9.4}\]
_Consequently, for the solution \(u\) to (1.1), it holds that_
\[\|u(\cdot,t)\|_{L^{\infty}}\lesssim t^{-1/3}[\|u\|_{L^{1}}+t^{-1/6}(\|u\|_{H^{ 1,1}}+\|u\|_{H^{N_{0}}})],\quad\forall t\geq 1.\]
Proof of (9.3).: The proof is close to [15] (see also [33]). It is easy to see that the equation \(\partial_{\xi}\Phi(\xi)=0\) has either no root or two roots with opposite signs on \([-2^{-9},2^{-9}]\) (the latter corresponding to \(x>-t\)). It suffices to consider the latter case, since the former is much easier and follows from similar calculations. Let \(\xi_{0}\) be the positive root of \(\partial_{\xi}\Phi(\xi_{0})=0\) on \([-2^{-9},2^{-9}]\), and set
\[I_{L}^{+}:=\int_{0}^{\infty}e^{\mathrm{i}t\Phi(\xi)}\widehat{g}(t,\xi)\varphi(2 ^{10}\xi)\,\mathrm{d}\xi.\]
To verify (9.3), up to taking complex conjugates, it suffices to show
\[|I_{L}^{+}|\lesssim t^{-1/3}\max(t^{1/3}\xi_{0},1)^{-1/2},\]
which will be divided into two cases depending on the size of \(\xi_{0}\).
**Case 1: \(\xi_{0}\leq t^{-1/3}\)**. In this case, one needs only to show that \(I_{L}^{+}\) is bounded by \(t^{-1/3}\). For this, we decompose \(I_{L}^{+}\) as follows:
\[I_{L}^{+} =\int_{0}^{\infty}e^{\mathrm{i}t\Phi(\xi)}\widehat{g}(t,\xi) \varphi(2^{-10}t^{1/3}\xi)\varphi(2^{10}\xi)\,\mathrm{d}\xi\] \[\quad+\int_{0}^{\infty}e^{\mathrm{i}t\Phi(\xi)}\widehat{g}(t,\xi )\big{(}1-\varphi(2^{-10}t^{1/3}\xi)\big{)}\varphi(2^{10}\xi)\,\mathrm{d}\xi=: A_{1}+A_{2}.\]
Clearly \(A_{1}\) can be controlled by the desired bound \(t^{-1/3}\). To estimate \(A_{2}\), one uses integration by parts to find that
\[|A_{2}| \lesssim t^{-1}\int_{0}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\big{(}1-\varphi(2^{-10}t^{1/3}\xi)\big{)}\varphi(2^{ 10}\xi)\big{]}\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{0}^{\infty}\big{|}(\partial_{\xi}\Phi)^{-1} \big{(}1-\varphi(2^{-10}t^{1/3}\xi)\big{)}\varphi(2^{10}\xi)\partial_{\xi} \widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=: A_{21}+A_{22}\]
with
\[A_{21} \lesssim t^{-1}\int_{0}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\big{(}1-\varphi(2^{-10}t^{1/3}\xi)\big{)}\big{]} \varphi(2^{10}\xi)\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{0}^{\infty}\big{|}(\partial_{\xi}\Phi)^{-1} \big{(}1-\varphi(2^{-10}t^{1/3}\xi)\big{)}\partial_{\xi}\varphi(2^{10}\xi)\widehat{g }(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=: A_{21}^{1}+A_{21}^{2}.\]
The key point in estimating \(A_{21}\) and \(A_{22}\) is to bound \(|\partial_{\xi}\Phi|\) from below. In fact, by Taylor's formula, one has
\[|\partial_{\xi}\Phi|=\left|\frac{3s}{2}(\xi_{0}^{2}-\xi^{2})+\mathcal{O}(\xi _{0}^{4})+\mathcal{O}(\xi^{4})\right|\gtrsim|\xi_{0}^{2}-\xi^{2}|\gtrsim\xi^{2}\]
since \(\xi\gg\xi_{0}\) on the support of \(A_{2}\). Since \(\varphi(2^{10}\xi)\) is bounded, \(A_{21}^{1}\) and \(A_{22}\) can be estimated as in [15] to obtain the bound \(t^{-1/3}\). For the new term
\(A_{21}^{2}\), one has
\[A_{21}^{2} \lesssim t^{-1}\int_{0}^{\infty}|(\partial_{\xi}\Phi)^{-1}\big{(}1- \varphi(2^{-10}t^{1/3}\xi)\big{)}\partial_{\xi}\varphi(2^{10}\xi)\widehat{g}(t, \xi)\big{|}\,\mathrm{d}\xi\] \[\lesssim t^{-1}\|\widehat{g}\|_{L^{\infty}}\int_{0}^{\infty}\xi^ {-2}|1-\varphi(2^{-10}t^{1/3}\xi)|\,\mathrm{d}\xi\lesssim t^{-2/3}.\]
Hence \(A_{2}\) is bounded by the desired bound \(t^{-1/3}\).
**Case 2: \(\xi_{0}\geq t^{-1/3}\)**. In this case, one shall obtain a bound \(t^{-1/2}\xi_{0}^{-1/2}\) for \(I_{L}^{+}\). To this end, one instead decomposes
\[I_{L}^{+} =\int_{0}^{\infty}e^{it\Phi(\xi)}\widehat{g}(t,\xi)\big{(}1- \psi(\xi/\xi_{0})\big{)}\varphi(2^{10}\xi)\,\mathrm{d}\xi\] \[\quad+\int_{0}^{\infty}e^{it\Phi(\xi)}\widehat{g}(t,\xi)\psi(\xi /\xi_{0})\varphi(2^{10}\xi)\,\mathrm{d}\xi=:A_{3}+A_{4}.\]
Integration by parts leads to
\[A_{3} \lesssim t^{-1}\int_{0}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\big{(}1-\psi(\xi/\xi_{0})\big{)}\varphi(2^{10}\xi) \big{]}\widehat{g}(t,\xi)\big{|}\,\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{0}^{\infty}\big{|}(\partial_{\xi}\Phi)^{-1} \big{(}1-\psi(\xi/\xi_{0})\big{)}\varphi(2^{10}\xi)\partial_{\xi}\widehat{g}( t,\xi)\big{|}\,\,\mathrm{d}\xi\] \[=:A_{31}+A_{32}\]
with
\[A_{31} \lesssim t^{-1}\int_{0}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\big{(}1-\psi(\xi/\xi_{0})\big{)}\big{]}\varphi(2^{10 }\xi)\widehat{g}(t,\xi)\big{|}\,\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{0}^{\infty}\big{|}(\partial_{\xi}\Phi)^{-1} \big{(}1-\psi(\xi/\xi_{0})\big{)}\partial_{\xi}\varphi(2^{10}\xi)\widehat{g}( t,\xi)\big{|}\,\,\mathrm{d}\xi\] \[=:A_{31}^{1}+A_{31}^{2}.\]
Note that \(|\partial_{\xi}\Phi|\gtrsim|\xi_{0}^{2}-\xi^{2}|\gtrsim\max(\xi,\xi_{0})^{2}\) on the support of \(A_{3}\). Again, since \(\varphi(2^{10}\xi)\) is bounded, one can follow [15] to show that \(A_{31}^{1}\) and \(A_{32}\) have the bound \(t^{-1/2}\xi_{0}^{-1/2}\). It remains to consider \(A_{31}^{2}\). Indeed, it holds that
\[A_{31}^{2}\lesssim t^{-1}\|\widehat{g}\|_{L^{\infty}}\int_{0}^{\infty}\max( \xi,\xi_{0})^{-2}|1-\psi(\xi/\xi_{0})|\,\mathrm{d}\xi\lesssim t^{-1}\xi_{0}^{ -3},\]
which is better than the desired bound \(t^{-1/2}\xi_{0}^{-1/2}\) due to \(\xi_{0}\geq t^{-1/3}\). Thus, we have shown that \(A_{3}\) satisfies the desired bound \(t^{-1/2}\xi_{0}^{-1/2}\).
We next deal with \(A_{4}\). Let \(l_{0}\) be the smallest integer satisfying \(2^{l_{0}}\geq(t\xi_{0})^{-1/2}\). Then, it holds that
\[A_{4} =\int_{-\infty}^{\infty}e^{\mathrm{i}t\Phi(\xi)}\widehat{g}(t,\xi) \psi(\xi/\xi_{0})\varphi\big{(}2^{-l_{0}}(\xi-\xi_{0})\big{)}\varphi(2^{10}\xi) \,\mathrm{d}\xi\] \[\quad+\sum_{l\geq l_{0}+1}\int_{-\infty}^{\infty}e^{\mathrm{i}t \Phi(\xi)}\widehat{g}(t,\xi)\psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0}) \big{)}\varphi(2^{10}\xi)\,\mathrm{d}\xi\] \[=:A_{4l_{0}}+\sum_{l\geq l_{0}+1}A_{4l}.\]
First, one has
\[|A_{4l_{0}}|\lesssim 2^{l_{0}}\|\widehat{g}\|_{L^{\infty}}\lesssim t^{-1/2} \xi_{0}^{-1/2}.\]
It remains to estimate \(A_{4l}\) for \(l\geq l_{0}+1\). For this, one integrates by parts to deduce that
\[|A_{4l}| \lesssim t^{-1}\int_{-\infty}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0})\big{)} \varphi(2^{10}\xi)\big{]}\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{-\infty}^{\infty}\big{|}(\partial_{\xi}\Phi)^{-1} \psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0})\big{)}\varphi(2^{10}\xi) \partial_{\xi}\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=:A_{4l,1}+A_{4l,2}\]
with
\[A_{4l,1} \lesssim t^{-1}\int_{-\infty}^{\infty}\big{|}\partial_{\xi}\big{[}( \partial_{\xi}\Phi)^{-1}\psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0})\big{)} \big{]}\varphi(2^{10}\xi)\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{-\infty}^{\infty}\big{|}(\partial_{\xi}\Phi)^{ -1}\psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0})\big{)}\partial_{\xi} \varphi(2^{10}\xi)\widehat{g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=:A_{4l,1}^{1}+A_{4l,1}^{2}.\]
Observe that \(|\partial_{\xi}\Phi|\gtrsim 2^{l}\xi_{0}\) on the support of \(A_{4l}\). For the same reason as before, it suffices to focus on the new term \(A_{4l,1}^{2}\), which can be estimated as follows:
\[A_{4l,1}^{2}\lesssim t^{-1}2^{-l}\xi_{0}^{-1}\|\widehat{g}\|_{L^{\infty}}\int _{-\infty}^{\infty}\psi(\xi/\xi_{0})\psi\big{(}2^{-l}(\xi-\xi_{0})\big{)}\, \mathrm{d}\xi\lesssim t^{-1}2^{-l},\]
which yields the desired bound \(t^{-1/2}\xi_{0}^{-1/2}\) by summation over \(l\geq l_{0}+1\) and using \(2^{l_{0}}\geq(t\xi_{0})^{-1/2}\). Hence, we have also shown that \(A_{4}\) satisfies the desired bound \(t^{-1/2}\xi_{0}^{-1/2}\).
Proof of (9.4).: The proof is similar to [7] (see also [30]). Observe
\[I_{H}:=\sum_{k\in\mathbb{Z}}\underbrace{\int_{-\infty}^{\infty}e^{\mathrm{i}t \Phi(\xi)}\widehat{P_{k}g}(t,\xi)\big{(}1-\varphi(2^{10}\xi)\big{)}\,\mathrm{ d}\xi}_{I_{H,k}}\approx\sum_{k\in\mathbb{N}}I_{H,k},\]
where one has used the support properties of the integrand. To show (9.4), we first prove
\[|I_{H,k}|\lesssim t^{-1/2}2^{(1+s)k/2}\|\widehat{P_{k}g}\|_{L^{\infty}}+t^{-3/4} 2^{(3s-1)k/4}(\|\widehat{P_{k}g}\|_{L^{2}}+2^{k}\|\partial\widehat{P_{k}g}\|_{L ^{2}}), \tag{9.5}\]
and
\[|I_{H,k}|\lesssim t^{-1/2}2^{(1+s)k/2}\|P_{k}g\|_{L^{1}}. \tag{9.6}\]
We only show (9.5) since (9.6) is much easier. When \(t\lesssim 2^{(s-1)k}\), it is easy to see that
\[|I_{H,k}|\lesssim 2^{k}\|\widehat{P_{k}g}\|_{L^{\infty}}\lesssim t^{-1/2}2^{(1 +s)k/2}\|\widehat{P_{k}g}\|_{L^{\infty}}.\]
It remains to consider \(t\gtrsim 2^{(s-1)k}\). Direct calculations yield
\[\left|\frac{\mathrm{d}}{\mathrm{d}\xi}\big{(}(1+|\xi|^{2})^{-s/2}\xi\big{)} \right|\geq c_{0}|\xi|^{-s},\quad\text{as }|\xi|\geq 1/100\]
for some constant \(c_{0}>0\) independent of \(\xi\). Let
\[\mathcal{I}:=\{k\in\mathbb{N}:\frac{c_{0}}{4}|tx^{-1}|\leq 2^{sk}\leq 4c_{0}|tx^{- 1}|\}.\]
**Case 1:**\(k\in\mathbb{N}\setminus\mathcal{I}\). Integration by parts yields
\[|I_{H,k}| \lesssim t^{-1}\int_{-\infty}^{\infty}\big{|}\partial_{\xi}\big{[} (\partial_{\xi}\Phi)^{-1}\big{(}1-\varphi(2^{10}\xi)\big{)}\psi_{k}(\xi)\big{]} \widehat{P_{k}g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{-\infty}^{\infty}\big{|}(\partial_{\xi}\Phi)^{ -1}\big{(}1-\varphi(2^{10}\xi)\big{)}\psi_{k}(\xi)\partial_{\xi}\widehat{P_{k }g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=:B_{1}+B_{2}.\]
Noticing that \(|\partial_{\xi}\Phi(\xi)|\gtrsim\big{|}|t^{-1}x|-c_{0}|\xi|^{-s}\big{|} \gtrsim 2^{-sk}\) on the support of \(I_{H,k}\), one may estimate
\[B_{1}\lesssim t^{-1}2^{(s-1/2)k}\|\widehat{P_{k}g}\|_{L^{2}},\]
and
\[B_{2}\lesssim t^{-1}2^{(s+1/2)k}\|\partial\widehat{P_{k}g}\|_{L^{2}}.\]
Recalling \(t\gtrsim 2^{(s-1)k}\), we obtain
\[|I_{H,k}|\lesssim t^{-3/4}2^{(3s-1)k/4}(\|\widehat{P_{k}g}\|_{L^{2}}+2^{k}\| \partial\widehat{P_{k}g}\|_{L^{2}}).\]
**Case 2:**\(k\in\mathcal{I}\). Notice that the equation \(\partial_{\xi}\Phi(\xi)=0\) has either no root or two roots with opposite signs on \(\mathbb{R}\setminus[-2^{-9},2^{-9}]\) (the latter corresponding to \(x>0\)). We only consider the latter case since the other case is much easier. Denote by \(\xi_{0}\) the positive root of \(\partial_{\xi}\Phi(\xi)=0\) on \(\mathbb{R}\setminus[-2^{-9},2^{-9}]\). Let \(l_{0}\) be the smallest integer satisfying
\(2^{l_{0}}\geq t^{-1/2}2^{(1+s)k/2}\). Then, one has
\[I_{H,k} =\int_{-\infty}^{\infty}e^{\mathrm{i}t\Phi(\xi)}\widehat{P_{k}g}(t,\xi)\big{(}1-\varphi(2^{10}\xi)\big{)}\varphi_{l_{0}}\big{(}\xi-\xi_{0}\big{)} \,\mathrm{d}\xi\] \[\quad+\sum_{l\geq l_{0}+1}\int_{-\infty}^{\infty}e^{\mathrm{i}t \Phi(\xi)}\widehat{P_{k}g}(t,\xi)\big{(}1-\varphi(2^{10}\xi)\big{)}\psi_{l}( \xi-\xi_{0})\,\mathrm{d}\xi\] \[=:J_{l_{0}}+\sum_{l\geq l_{0}+1}J_{l}.\]
First, one can easily check that
\[|J_{l_{0}}|\leq t^{-1/2}2^{(1+s)k/2}\|\widehat{P_{k}g}\|_{L^{\infty}}.\]
It remains to bound \(J_{l}\) for \(l\geq l_{0}+1\). Integration by parts yields
\[|J_{l}| \lesssim t^{-1}\int_{-\infty}^{\infty}\big{|}\partial_{\xi}\big{[} (\partial_{\xi}\Phi)^{-1}\big{(}1-\varphi(2^{10}\xi)\big{)}\psi_{l}(\xi-\xi_{ 0})\psi_{k}(\xi)\big{]}\widehat{P_{k}g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[\quad+t^{-1}\int_{-\infty}^{\infty}\big{|}(\partial_{\xi}\Phi)^{ -1}\big{(}1-\varphi(2^{10}\xi)\big{)}\psi_{l}(\xi-\xi_{0})\psi_{k}(\xi) \partial_{\xi}\widehat{P_{k}g}(t,\xi)\big{|}\,\mathrm{d}\xi\] \[=:B_{3}+B_{4}.\]
Since \(|\partial_{\xi}\Phi(\xi)|\gtrsim\big{|}|\xi_{0}|^{-s}-|\xi|^{-s}\big{|} \gtrsim 2^{l-(1+s)k}\) on the support of \(J_{l}\), it follows that
\[B_{3}\lesssim t^{-1}2^{-l+(1+s)k}\|\widehat{P_{k}g}\|_{L^{\infty}},\]
and
\[B_{4}\lesssim t^{-1}2^{-\frac{l}{2}+(1+s)k}\|\partial\widehat{P_{k}g}\|_{L^{2 }}.\]
Therefore
\[\sum_{l\geq l_{0}+1}J_{l}\lesssim t^{-1/2}2^{(1+s)k/2}\|\widehat{P_{k}g}\|_{L^ {\infty}}+t^{-3/4}2^{(3s-1)k/4}2^{k}\|\partial\widehat{P_{k}g}\|_{L^{2}}.\]
Finally, we are in a position to show (9.4). Let \(\theta=(N_{0}-s)/2>0\). If \(2^{k}\leq t^{1/(100N_{0})}\), then we use (9.5) to deduce that
\[2^{\theta k}|I_{H,k}|\lesssim t^{-1/2}2^{(1+s)k/2}2^{\theta k}+t^{-3/4}2^{(3s- 1)k/4}2^{(1+\theta)k}t^{1/6}\lesssim t^{-1/3}. \tag{9.7}\]
If \(2^{k}\geq t^{1/(100N_{0})}\), then it follows from (9.6) that
\[2^{\theta k}|I_{H,k}| \lesssim t^{-1/2}2^{(1+s)k/2}2^{\theta k}2^{-k/2}2^{-N_{0}k/2} \tag{9.8}\] \[\quad\times\|P_{k}g\|_{H^{N_{0}}}^{1/2}(\|\widehat{P_{k}g}\|_{L^ {2}}+2^{k}\|\partial\widehat{P_{k}g}\|_{L^{2}})^{1/2}\] \[\lesssim t^{-1/2}t^{1/12}t^{1/12}\lesssim t^{-1/3},\]
where one has used (see for instance [7])
\[\|P_{k}g\|_{L^{1}}\lesssim 2^{-k/2}\|P_{k}g\|_{L^{2}}^{1/2}(\|\widehat{P_{k}g}\|_{L ^{2}}+2^{k}\|\partial\widehat{P_{k}g}\|_{L^{2}})^{1/2}.\]
Thanks to the factor \(2^{-\theta k}\) gained from (9.7) and (9.8), summation over \(k\) yields (9.4).
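As a crude illustration of the \(t^{-1/3}\) rate, one can evaluate the linear flow (9.1) by FFT. In the following Python sketch (our addition) the periodic box and the Gaussian datum are arbitrary choices, and the box is taken large enough that the wave does not wrap around; the printed products \(t^{1/3}\max_{x}|u|\) should remain of order one:

```python
import numpy as np

# Numerical illustration of the t^{-1/3} decay for the linear
# equation (9.1), solved exactly in Fourier space on a periodic box.
L_box, N, s = 4000.0, 2 ** 17, 0.5
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L_box / N)
phi_hat = np.fft.fft(np.exp(-x ** 2))   # Gaussian initial datum
for t in (1.0, 8.0, 64.0, 512.0):
    u = np.fft.ifft(np.exp(-1j * t * xi * (1 + xi ** 2) ** (-s / 2)) * phi_hat)
    print(f"t = {t:5.0f}:   t^(1/3) * max|u| = {t ** (1/3) * np.abs(u).max():.3f}")
```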
**Remark 7**.: _The linear estimates in [3, 4] are used to prove the global existence and decay of small solutions to the generalized BBM equation:_
\[u_{t}+u_{x}+u^{p}u_{x}-u_{xxt}=0,\]
_when \(p\geq 4\)._
_On the other hand, Kwak and Muñoz [21] proved decay properties for small solutions of the generalized BBM equations, for any \(p\geq 1\), in the region:_
\[I(t)=(-\infty,-at)\cup((1+b)t,\infty),\quad t>0,\]
_for any \(b>0,\ a>1/8\)._
_It would be interesting to extend this result to the generalized Fornberg-Whitham equations._
### Solitary wave solutions
In order to study the long wave limit, we rescale (1.1) with \(p=1\) as
\[u_{t}+\varepsilon uu_{x}+\mathcal{K}_{s}^{\varepsilon}u_{x}=0, \tag{9.9}\]
where \(\varepsilon\ll 1\) and \(\mathcal{K}_{s}^{\varepsilon}=(1-\varepsilon\partial_{x}^{2})^{-s/2}\). Observing that
\[(1+\varepsilon\xi^{2})^{-s/2}=1-\varepsilon s\frac{\xi^{2}}{2}+O(\varepsilon ^{2}),\]
one gets from (9.9) formally that
\[u_{t}+u_{x}+\varepsilon uu_{x}+\frac{\varepsilon s}{2}u_{xxx}=O(\varepsilon^ {2}), \tag{9.10}\]
suggesting that one obtains the KdV equation in the long wave limit. A similar fact has been used in [10] for a class of nonlocal equations including the Whitham equation to prove the existence of solitary wave solutions of those equations. In fact, one can check that the symbol of the linear part and the nonlinearity of (1.1) satisfy the **Assumptions (A1)-(A3)** of [10], so that the existence result [10, Theorem 1.2] applies in our case, yielding the existence of solitary wave solutions \(u(x-\nu t)\) of the generalized Fornberg-Whitham equation for any \(s>0\).
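The Taylor expansion behind (9.10) is elementary and can be checked symbolically; a minimal Python/SymPy sketch (our addition):

```python
import sympy as sp

# Symbolic check of the long wave expansion behind (9.10):
# (1 + eps*xi^2)^(-s/2) = 1 - (s/2)*eps*xi^2 + O(eps^2).
eps, xi, s = sp.symbols('epsilon xi s', positive=True)
print(sp.series((1 + eps * xi ** 2) ** (-s / 2), eps, 0, 2))
# -> 1 - epsilon*s*xi**2/2 + O(epsilon**2)
```

Multiplying the \(\mathcal{O}(\varepsilon)\) term of the symbol by \(\mathrm{i}\xi\) (the symbol of \(\partial_{x}\)) and passing back to physical space produces the \(\frac{\varepsilon s}{2}u_{xxx}\) term in (9.10).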
### Global weak entropy solutions
We first give the definition of weak entropy solution of (1.1) for \(p=1\):
**Definition 1** ([18]).: _Let \(u_{0}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\). A function \(u\in C([0,\infty),L^{1}(\mathbb{R}))\) that is bounded on \(\mathbb{R}\times[0,T]\) for every \(T>0\) is called a weak entropy solution of (1.1), if_
\[\int_{0}^{\infty}\int_{-\infty}^{\infty} \big{[}|u(x,t)-\lambda|\partial_{t}\phi(x,t)+\frac{1}{2}\mathrm{sgn}(u (x,t)-\lambda)(u^{2}(x,t)-\lambda^{2})\partial_{x}\phi(x,t)\] \[-\mathrm{sgn}(u(x,t)-\lambda)\mathcal{K}_{s}^{\prime}\star u( \cdot,t)(x)\phi(x,t)\big{]}\,\mathrm{d}x\mathrm{d}t\geq 0\]
_holds for arbitrary \(\lambda\in\mathbb{R}\) and nonnegative test functions \(\phi\in\mathcal{D}(\mathbb{R}\times(0,\infty))\)._
When \(s=2\), the weak entropy solution of the Fornberg-Whitham equation (1.4) was obtained in [18]:
**Theorem 9.1**.: _Let \(u_{0}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\). Then the Cauchy problem for (1.4) with the initial data \(u(x,0)=u_{0}(x)\) has a unique entropy solution \(u\). For any \(t>0,x,y\in\mathbb{R},x<y\), \(u\) satisfies the Oleinik type inequality_
\[u(y,t)-u(x,t)\leq\bigg{(}\frac{1}{t}+2+2t\big{(}1+2e^{t}\|u_{0}\|_{L^{1}}\big{)}\bigg{)}(y-x).\]
_Moreover, the following \(L^{1}\) stability holds: if \(v\) is the weak entropy solution corresponding to the initial data \(v_{0}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), then_
\[\|u(t)-v(t)\|_{L^{1}}\leq e^{t}\|u_{0}-v_{0}\|_{L^{1}},\quad\forall t>0.\]
It would be interesting to extend this result to the generalized Fornberg-Whitham equation (1.1) for all \(s>0\).
One can refer to [12, 16, 24] for other related works.
## 10. Final comments
We have shown that the Burgers equation with a dispersive perturbation based on Bessel potentials has rather rich dynamics. It has a hyperbolic character (the possibility of shock formation, the existence of global weak solutions), and it displays dispersive properties: linear dispersive estimates and, in the long wave limit, a link with the KdV equation leading to the existence of solitary waves.
It has been shown in [29, 30] that the modified (cubic) fKdV equation with \(-1<\alpha<0\) also has dispersive properties, in the sense that it possesses global small smooth solutions. It would be interesting to check whether this property still holds with a "Bessel type" dispersion, and also for quadratic nonlinearities.
In this paper, we show that the solution to the Cauchy problem for the Burgers equation with a weakly dispersive perturbation involving the Bessel potential (a generalization of the Fornberg-Whitham equation) can exhibit wave breaking for suitable initial data. We also discuss the dispersive properties of this equation. |
2302.14745 | Epitaxial growth and characterization of (001) [NiFe/M]$_{20}$ (M = Cu,
CuPt and Pt) superlattices | We present optimization of [(15 Å) Ni$_{80}$Fe$_{20}$/(5
Å) M]$_{20}$ single crystal multilayers on (001) MgO, with M
being Cu, Cu$_{50}$Pt$_{50}$ and Pt. These superlattices were characterized by
high-resolution X-ray reflectivity (XRR) and diffraction (XRD) as well as polar
mapping of important crystal planes. It is shown that a cube-on-cube epitaxial
relationship can be obtained when depositing at a substrate temperature of
100 $^\circ$C regardless of the lattice mismatch (5% and 14% for Cu and Pt,
respectively). At lower substrate temperatures poly-crystalline multilayers
were obtained while at higher substrate temperatures {111} planes appear at
$\sim$10$^\circ$ off normal to the film plane. It is also shown that as the
epitaxial strain increases, the easy magnetization axis rotates towards the
direction that previously was assumed to be harder, i.e. from [110] to [100],
and eventually further increase in the strain makes the magnetic hysteresis
loops isotropic in the film plane. Higher epitaxial strain is also accompanied
by increased coercivity values. Thus, the effect of epitaxial strain on the
magnetocrystalline anisotropy is much larger than what was observed previously
in similar, but polycrystalline samples with uniaxial anisotropy (Kateb et al.
2021). | Movaffaq Kateb, Jon Tomas Gudmundsson, Snorri Ingvarsson | 2023-02-28T16:45:13 | http://arxiv.org/abs/2302.14745v1 | Epitaxial growth and characterization of (001) [NiFe/M]\({}_{20}\) (M = Cu, CuPt and Pt) superlattices
###### Abstract
We present optimization of [(15 Å) Ni\({}_{80}\)Fe\({}_{20}\)/(5 Å) M]\({}_{20}\) single crystal multilayers on (001) MgO, with M being Cu, Cu\({}_{50}\)Pt\({}_{50}\) and Pt. These superlattices were characterized by high resolution X-ray reflectivity (XRR) and diffraction (XRD) as well as polar mapping of important crystal planes. It is shown that a cube-on-cube epitaxial relationship can be obtained when depositing at a substrate temperature of 100 \({}^{\circ}\)C regardless of the lattice mismatch (5% and 14% for Cu and Pt, respectively). At lower substrate temperatures poly-crystalline multilayers were obtained, while at higher substrate temperatures {111} planes appear at \(\sim\)10\({}^{\circ}\) off normal to the film plane. It is also shown that as the epitaxial strain increases, the easy magnetization axis rotates towards the direction that was previously assumed to be harder, i.e. from [110] to [100], and eventually a further increase in the strain makes the magnetic hysteresis loops isotropic in the film plane. Higher epitaxial strain is also accompanied by increased coercivity values. Thus, the effect of epitaxial strain on the magnetocrystalline anisotropy is much larger than what was observed previously in similar, but polycrystalline, samples with uniaxial anisotropy (Kateb _et al._ 2021).
Keywords: NiFe; Superlattice; Magnetic Anisotropy; Microstructure; Substrate Temperature

PACS numbers: 75.30.Gw, 75.50.Bb, 73.50.Jt, 81.15.Cd
## I Introduction
Since the discovery of the giant magneto-resistance (GMR) effect by Fert [1] and Grünberg [2] in the late 1980s, magnetic multilayers have been widely studied. In many cases they present unique features that cannot be achieved in the bulk state, namely inter-layer exchange coupling [3], magnetic damping due to the interface [4; 5] rather than alloying [6], and perpendicular magnetic anisotropy [7].
The GMR discovery, without a doubt, was an outcome of the advances in preparation methods such as molecular beam epitaxy (MBE), which enabled deposition of multilayer films with nanoscale thicknesses [8]. Thus, a great deal of effort has been devoted to enhancing the preparation methods over the years using both simulations [9; 10; 11; 12] and experiments (cf. Ref. [13] and references therein). Permalloy (Py) multilayers with non-magnetic (NM) Pt [13; 14; 15] or Cu [16; 17; 18; 19; 12] as spacers have been studied extensively in recent years. Various deposition methods have been utilized for preparing magnetic multilayers such as MBE [16], pulsed laser deposition (PLD) [20], ion beam deposition [21; 12], dc magnetron sputtering (dcMS) [14; 17; 3; 18], and more recently, high power impulse magnetron sputtering (HiPIMS) [13].
Permalloy (Py) is a unique material with regard to studying magnetic anisotropy, which has been shown to strongly depend on the preparation method [22]. For instance, uniaxial anisotropy can be induced in polycrystalline Py by several means [23]. However, it has been thought that the cubic symmetry of single crystal Py encourages magneto-crystalline anisotropy, while uniaxial anisotropy cannot be achieved. We have recently shown that using HiPIMS deposition one can decrease the Ni\({}_{3}\)Fe (L1\({}_{2}\)) order, but maintain the single crystal form, to achieve uniaxial anisotropy. We attributed this to the high instantaneous deposition rate during the HiPIMS pulse [24], which limits ordering compared to dcMS, which presents cubic (biaxial) anisotropy. Regarding Py multilayers, there has been a lot of focus on magneto-dynamic properties recently, while the effects of interface strain on magnetic anisotropy have not received much attention. Rook _et al._[16] prepared polycrystalline Py/Cu multilayers by MBE and reported a weak anisotropy in them, i.e. hysteresis loops along both the hard and easy axes with complete saturation at higher fields. They compared the coercivity values (\(H_{\rm c}\)) and saturation fields of their samples to the \(H_{\rm c}\) and anisotropy field (\(H_{\rm k}\)) of sputter deposited multilayers showing uniaxial anisotropy and concluded that the latter gives more than twice harder properties. They also reported an increase in \(H_{\rm c}\) with Py thickness and attributed this to the interface strain that relaxes with increased thickness. Correa _et al._[14] prepared nanocrystalline Py/Pt multilayers on rigid and flexible substrates and in both cases obtained weak anisotropy but two orders of magnitude larger \(H_{\rm c}\). Unfortunately, they did not mention any change in magnetic anisotropy
upon straining the flexible substrate.
Recently we showed that by utilizing increased power in the dcMS process, and in particular by using HiPIMS deposition, the interface sharpness in polycrystalline [Py/Pt]\({}_{20}\) multilayers can be improved, due to increased ionization of the sputtered species [13]. Briefly, in dcMS deposition the film forming material is composed mostly of neutral atoms [25], while in HiPIMS deposition a significant fraction of the film forming material consists of ions [26; 27]. In fact we have shown that higher ionization of the film-forming material leads to smoother film surfaces and sharper interfaces using molecular dynamics simulations [28; 29]. We also showed that by changing the non-magnetic spacer material one can increase the interface strain, which is accompanied by higher \(H_{\mathrm{c}}\), \(H_{\mathrm{k}}\) and limited deterioration of uniaxial anisotropy [13].
Another aspect of preparation is that deposition chambers for multilayers mostly benefit from an oblique deposition geometry, which encourages uniaxial anisotropy in Py. The origin of uniaxial anisotropy induced by oblique deposition has been proposed to be self-shadowing, but this has not been systematically verified. We demonstrated uniaxial anisotropy, even in atomically smooth films with normal texture, which indicates a lack of self-shadowing [30; 31]. We also showed that oblique deposition is more decisive in the definition of the anisotropy direction than the application of an _in-situ_ magnetic field for inducing uniaxial magnetic anisotropy. Also, for polycrystalline Py films oblique deposition by HiPIMS presents a lower coercivity and anisotropy field than when dcMS deposition is applied [32; 33; 34]. While none of the above mentioned results verify self-shadowing, they are consistent with our interpretation of the order, i.e. oblique deposition induces more disorder than an _in-situ_ magnetic field and HiPIMS produces more disorder than dcMS. Note that the level of order in polycrystals cannot be easily observed by X-ray diffraction. In this regard we proposed a method for mapping the resistivity tensor that is very sensitive to the level of order in Py [23; 35]. We reported much higher coercivity and deterioration of uniaxial anisotropy in (111) Py/Pt multilayers obtained by HiPIMS deposition of the Py layers [13]. We attributed the latter effect to the interface sharpness and higher epitaxial strain when HiPIMS is utilized for Py deposition.
Here, we study the properties of Py superlattices deposited by dcMS with Pt, Cu and CuPt as non-magnetic spacers. Pt and Cu were chosen as spacers because they have lattice parameters of 3.9 and 3.5 A, respectively, and therefore provide varying strain to the Py film, which has a lattice constant of 3.54 A. In this regard, calibration of the substrate temperature during deposition with respect to the desired thickness is of prime importance [36]. It is worth mentioning that dcMS deposition is expected to give more ordered single crystal (001) Py layers in which crystalline anisotropy is dominant [22]. This enables an understanding of the extent to which interface strain affects the magnetocrystalline anisotropy of Py, which we will show is much larger than the changes in uniaxial anisotropy in our latest study [13]. Section II discusses the deposition method and process parameters for the fabrication of the superlattices and the characterization methods applied. In Section III the effects of substrate temperature on the properties of the Py/Cu system are studied, followed by exploring the influence of varying the lattice parameter of the non-magnetic layer on the structural and magnetic properties of the superlattice. The findings are summarized in Section IV.
## II Experimental apparatus and methods
The substrates were one side polished single crystal (001) MgO (Crystal GmbH) with surface roughness \(<\)5 A and of dimensions 10 mm\(\times\)10 mm\(\times\)0.5 mm. The MgO substrates were used as received without any cleaning but were baked for an hour at 600 \({}^{\circ}\)C in vacuum for dehydration, cooled down for about an hour, and then maintained at the desired temperature \(\pm\)0.4 \({}^{\circ}\)C during the deposition. The superlattices were deposited in a custom built UHV magnetron sputter chamber with a base pressure below \(5\times 10^{-7}\) Pa. The chamber is designed to support 5 magnetron assemblies and targets, which are all located 22 cm away from substrate holder with a 35\({}^{\circ}\) angle with respect to substrate normal. The shutters were controlled by a LabVIEW program (National Instruments). The deposition was made with argon of 99.999 % purity as the working gas using a Ni\({}_{80}\)Fe\({}_{20}\) at.% and Cu targets both of 75 mm diameter and a Pt target of 50 mm in diameter.
The Py depositions were performed at 150 W dc power (MDX 500 power supply from Advanced Energy) at an argon working gas pressure of 0.25 Pa, which gives a deposition rate of 1.5 A/s. Both pure Cu and Pt buffer layers were deposited at a dc power of 20 W. For the deposition of the CuPt alloy we calibrated Cu\({}_{50}\)Pt\({}_{50}\) at.% at dc powers of 10 and 13 W for Cu and Pt, respectively. This selection of powers provides a similar deposition rate of 0.45 A/s in all cases. In order to ensure that the film thickness is as uniform as possible, we rotate the sample at \(\sim\)12.8 rpm. These deposition processes were repeated to fabricate superlattices consisting of 20 repetitions of 15 A Py and 5 A Pt, Cu or Cu\({}_{50}\)Pt\({}_{50}\) at.% (CuPt).
X-ray diffraction measurements (XRD) were carried out using an X'pert PRO PANalytical diffractometer (Cu K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) lines, wavelength 0.15406 and 0.15444 nm, respectively) mounted with a hybrid monochromator/mirror on the incident side and a 0.27\({}^{\circ}\) collimator on the diffracted side. We would like to remark that the K\({}_{\alpha 2}\) separation at 2\(\theta\) = 55\({}^{\circ}\) is only 0.2\({}^{\circ}\) and much less at the smaller angles, i.e. where our multilayer peaks are located. This is an order of magnitude smaller than the full width half maximum (FWHM) of our multilayer and satellite peaks. A line focus was used with a beam width of approximately 1 mm. The film thickness, mass density, and surface roughness were determined by low-angle
X-ray reflectivity (XRR) measurements with an angular resolution of 0.005\({}^{\circ}\), obtained by fitting the XRR data using the commercial X'pert reflectivity program, that is based on the Parrat formalism [37] for reflectivity.
The magnetic hysteresis was recorded using a homemade high sensitivity magneto-optic Kerr effect (MOKE) looper. We use a linearly polarized He-Ne laser of wavelength 632.8 nm as a light source, with Glan-Thompson polarizers to further polarize and to analyze the light after Kerr rotation upon reflection off the sample surface. The Glan-Thompson polarizers linearly polarize the light with a high extinction ratio. They are cross polarized near extinction, i.e. their polarization states are near perpendicular and any change in polarization caused by the Kerr rotation at a sample's surface is detected as a change in power of light passing through the analyzer. The coercivity was read directly from the easy axis loops. The anisotropy field is obtained by extrapolating the linear low field trace along the hard axis direction to the saturation magnetization level, a method commonly used when dealing with effective easy axis anisotropy.
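The hard-axis extrapolation described above reduces to a linear fit of the low-field trace and its intersection with the saturation level. A minimal numpy sketch of this estimate follows; the function name, the `fit_fraction` parameter, and the normalization of the Kerr signal to saturation are our own illustrative assumptions, not part of the original analysis.

```python
import numpy as np

def anisotropy_field(H, M, fit_fraction=0.3):
    """Estimate H_k by extrapolating the linear low-field part of a
    hard-axis loop to the saturation level (M normalized so M_s = 1).
    `fit_fraction` defines the low-field window (an assumption)."""
    H, M = np.asarray(H, float), np.asarray(M, float)
    low = np.abs(H) < fit_fraction * np.max(np.abs(H))
    chi = np.polyfit(H[low], M[low], 1)[0]  # low-field slope M ~ chi*H
    return 1.0 / chi                        # slope reaches M = 1 at H_k

# Toy hard-axis loop: linear up to H_k = 5 Oe, saturated beyond.
H = np.linspace(-10, 10, 201)
M = np.clip(H / 5.0, -1.0, 1.0)
print(anisotropy_field(H, M))  # ~5 Oe
```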
## III Results and discussion
### Effect of substrate temperature on structural and magnetic properties
Figure 1 shows the XRR results from Py/Cu superlattices deposited at different substrate temperatures. The \(\Lambda\) and \(\delta\) indicated in the figure are inversely proportional to the superlattice period and the total thickness, respectively. It can be clearly seen that the fringes decay faster for Py/Cu superlattices deposited at substrate temperatures of 21 \({}^{\circ}\)C and 200 \({}^{\circ}\)C than when deposited at 100 \({}^{\circ}\)C. This indicates the lower surface roughness obtained in the Py/Cu superlattice deposited at 100 \({}^{\circ}\)C. When deposited at room temperature, the large lattice mismatch between MgO and Py/Cu does not allow depositing a high-quality superlattice. For a substrate temperature of 200 \({}^{\circ}\)C, however, it is difficult to grow a continuous Cu layer with such a low thickness (5 A). This is due to the dewetting phenomenon, which causes the minimum Cu thickness that is required to maintain its continuity to be 12 A. Earlier, it has been shown that for substrate temperatures up to 100 \({}^{\circ}\)C Py/(1 A) Cu shows limited intermixing upon annealing [17]. The optimum substrate temperature for deposition obtained here is very close to 156 \({}^{\circ}\)C, which has earlier been reported for the deposition of (001) Fe/MgO [38] and (001) Fe\({}_{84}\)Cu\({}_{16}\)/MgO [39] superlattices. We would like to remark that in our previous study we deposited a 5 nm Ta underlayer to reduce the substrate surface roughness [13]. However, growing Ta on MgO is non-trivial due to the large lattice mismatch (22%). Besides, a Ta underlayer encourages a polycrystalline \(\langle 111\rangle\) texture normal to the substrate surface that does not serve our purpose here.
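The inverse relation between fringe spacing and repeat distance quoted above follows from the Bragg-like condition for XRR fringes, \(2d\sin\theta_{m}=m\lambda\) (neglecting refraction). A short numpy sketch under this assumption; the function name and example fringe positions are hypothetical:

```python
import numpy as np

CU_KALPHA1 = 1.5406  # X-ray wavelength in angstrom

def period_from_fringes(two_theta_deg, wavelength=CU_KALPHA1):
    """Repeat distance d from consecutive fringe positions, using
    2*d*sin(theta_m) = m*lambda, i.e. d = lambda / (2 * mean spacing
    of sin(theta) between consecutive orders)."""
    theta = np.deg2rad(np.asarray(two_theta_deg, float) / 2.0)
    return wavelength / (2.0 * np.diff(np.sin(theta)).mean())

# Hypothetical superlattice fringe positions in 2-theta (degrees):
print(period_from_fringes([4.6, 9.2, 13.8]))  # ~19 angstrom
```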
Figure 2 shows the result of a symmetric (\(\theta-2\theta\)) XRD scan normal to the film for Py/Cu superlattices deposited at different substrate temperatures. It can be seen that no Cu or Py peaks were detected in the superlattice deposited at room temperature. Thus, epitaxial growth of Py and Cu was suppressed by the low substrate temperature. Furthermore, we studied room temperature deposited Py/Cu using grazing incidence XRD, which indicated a polycrystalline structure (not shown here). For substrate temperatures of 100 - 200 \({}^{\circ}\)C there are clear (002) Py/Cu peaks indicating an epitaxial relationship in the (001) Py \(\parallel\) (001) Cu \(\parallel\) (001) MgO stack. However, there is no sign of satellite peaks due to the \(\Lambda\) (Py/Cu) period. We explain this further when comparing the Py/Cu, Py/Pt, and Py/CuPt superlattices in Section III.2.
Figure 3 shows the pole figures from the \(\{200\}\) and \(\{111\}\) planes for Py/Cu superlattices deposited at different substrate temperatures. For the Py/Cu superlattice deposited at 21 \({}^{\circ}\)C, there is only a peak in the middle of the \(\{111\}\) pole figure that indicates a weak \(\langle 111\rangle\) contribution normal to the film plane. For a superlattice deposited with substrate temperature of 100 \({}^{\circ}\)C the \(\{200\}\) pole figure indicates an intense spot at \(\psi=0\) that is corresponding to (002) Py/Cu planes parallel to the substrate. There is also a weaker four-fold spot at \(\psi=90^{\circ}\) and \(\phi=0,90,180\) and 270\({}^{\circ}\) from the \(\{200\}\) planes parallel to the substrate edges. In the \(\{111\}\) pole figure only four-fold points appear at \(\psi=54.7^{\circ}\) and with 45\({}^{\circ}\) shifts in \(\phi\) with respect to substrate edges. These are the characteristics of the _so-called_ cube on cube epitaxy achieved at 100 \({}^{\circ}\)C. For deposition with substrate temperature of 200 \({}^{\circ}\)C, however, there is a weak \(\{111\}\) ring at \(\psi=7.5^{\circ}\). Note that these \(\{111\}\) planes were not detected by normal XRD because the \(\{111\}\) Py/Cu peak appears at
Figure 1: Comparison of the XRR pattern from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at different substrate temperatures. The \(\Lambda\) and \(\delta\) are inversely proportional to the Py/Cu period and total thickness, respectively.
which is masked by the strong (002) MgO peak normal to the film plane.
Figure 4 compares the MOKE response of Py/Cu superlattices deposited at different substrate temperatures. For a superlattice deposited at room temperature, uniaxial anisotropy along the [100] direction is evident. This is expected since the oblique deposition in a co-deposition chamber tends to induce uniaxial anisotropy in Py films [30; 31; 32; 22]. However, the oblique deposition cannot overcome magnetocrystalline anisotropy due to symmetry in ordered single crystal Py [22]. Thus, the low substrate temperature must account for the limited order in the Py layer and the presence of uniaxial anisotropy.
For deposition at higher substrate temperatures, however, biaxial anisotropy was obtained with the easy axes along the [110] directions in plane. It is worth mentioning that the bulk crystal symmetry gives the easy axis along the [111] direction, which is forced into the film plane along the [110] direction due to shape anisotropy [22]. In the Py/Cu superlattice grown at 100 \({}^{\circ}\)C (Figure 4 (b)), \(\langle 1\bar{1}0\rangle\) is clearly an easy direction, with a very low \(H_{\rm c}\) of 0.7 Oe and double-hysteresis loops along the \(\langle 110\rangle\) direction that saturate at 1.2 Oe. For the Py/Cu superlattice deposited at 200 \({}^{\circ}\)C (Figure 4 (c)), it seems the double-hysteresis loops overlap and the other easy axis gives a step that in total presents increased coercivity. With increasing substrate temperature not only do the coercivities vary but also the shapes of the hysteresis curves are different. When the substrate temperature during deposition is 21 \({}^{\circ}\)C the magnetization, shown in Figure 4 (a), is much like we obtain for polycrystalline single layer films. When the substrate temperature is higher, as shown in Figures 4 (b) and (c), however, the anisotropy has rotated by 45 degrees and the hysteresis loops have changed. The intermediate steps in the hysteresis curves are caused by antiferromagnetic alignment of the magnetic layers, which minimizes the exchange and dipolar magnetic interactions. In some cases this results in perfectly zero magnetic remanence, while in other cases the cancellation is not perfect. The non-magnetic Cu spacer layer is only 5 A in our case, just at the onset of the first antiferromagnetic exchange coupling peak observed by Parkin [40]. Double hysteresis curves have been observed in the Py/Cu system [18]. Note that Ni is miscible in Cu and during annealing a mixing of Ni
Figure 3: Comparison of the {200} and {111} pole figures from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at substrate temperatures of 21, 100, and 200\({}^{\circ}\)C. The background is removed for better illustration.
Figure 2: Comparison of the XRD pattern from [Py/Cu]\({}_{20}\) superlattices deposited on (001) MgO at different substrate temperatures. The intense peak belongs to the (002) planes of MgO and the other peak is due to the (002) planes of the Py/Cu multilayer.
and Cu is possible. Such intermixing causes a decrease of magnetic homogeneity and a reduction in the GMR [17; 18].
### Effect of strain on structural and magnetic properties
In order to explore the influence of strain on the magnetic properties we deposited NM layers of Pt and Cu\({}_{50}\)Pt\({}_{50}\) at.% alloy in addition to the Cu discussed in Section III.1. Pt has a lattice constant of 3.9 A, which is larger than that of Py, which has a lattice constant of 3.54 A. Therefore, by going from Cu to Cu\({}_{50}\)Pt\({}_{50}\) and then to Pt the strain is gradually increased. Figure 5 shows XRR results from different superlattices deposited on (001) MgO at 100 \({}^{\circ}\)C. Note that the \(\Lambda\) peak is suppressed in the Py/Cu superlattice. One may think this arises from a diffused Py/Cu interface that leads to a smooth density variation at the interface. This is not the case here; the \(\Lambda\) peak's intensity decreases due to the similar density of Py and Cu. The latter has been shown to reduce the resolution of the XRR measurement in Si/SiO\({}_{2}\) by a few orders of magnitude [41].
The layer thicknesses and mass densities, as well as the surface and interface roughness, obtained by fitting the XRR results for deposition at a substrate temperature of 100\({}^{\circ}\)C are summarized in Table 1. The period \(\Lambda\) is in all cases about 19 A with \(t_{\rm Py}\sim 16\) A and \(t_{\rm NM}\sim 3\) A. The film mass density of the Py layers is the highest (8.74 g/cm\({}^{3}\)) in the Py/Cu stack but is lowest (7.45 g/cm\({}^{3}\)) in the Py/CuPt stack.
Figure 6 shows the XRD results from Py/NM superlattices deposited on (001) MgO at a substrate temperature of 100 \({}^{\circ}\)C. Since the Py/Pt and Py/CuPt superlattices exhibit multiple (002) peaks, the pole figures were obtained for the main peak (indicated
\begin{table}
\begin{tabular}{c|c c c|c c|c c} \hline \hline Sample & \multicolumn{3}{c|}{\(t\) (Å)} & \multicolumn{2}{c|}{Ra (Å)} & \multicolumn{2}{c}{\(\rho\) (g/cm\({}^{3}\))} \\ & Py & NM & \(\Lambda\) & Py & NM & Py & NM \\ \hline Py/Cu & 15.8 & 3.46 & 19.3 & 7.62 & 6.25 & 8.74 & 9.8 \\ \hline Py/Pt & 15.9 & 2.97 & 18.9 & 5.92 & 3.24 & 8.55 & 27.2 \\ \hline Py/CuPt & 15.9 & 3.43 & 19.3 & 2.25 & 4.94 & 7.45 & 26.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Py and NM layer thicknesses (\(t\)), roughness (Ra) and density (\(\rho\)) extracted by fitting the XRR results of different superlattices deposited on (001) MgO at 100\({}^{\circ}\)C substrate temperature.
Figure 5: XRR measurements from the various superlattices, [Py/Cu]\({}_{20}\), [Py/Pt]\({}_{20}\), and [Py/CuPt]\({}_{20}\), deposited on (001) MgO at substrate temperature of 100 \({}^{\circ}\)C.
Figure 6: Comparison of the XRD results from the various superlattices, [Py/Cu]\({}_{20}\), [Py/Pt]\({}_{20}\) and [Py/CuPt]\({}_{20}\), deposited at substrate temperature of 100 \({}^{\circ}\)C. All the peaks are due to the (002) plane and the vertical dashed line indicate the (002) peak position for the bulk state.
by 0 in figure 6). It can be seen that all the pole figures are very similar. All these pole figures indicate a cube on cube epitaxial relationship.
Figure 8 depicts the MOKE response from Py/Pt and Py/CuPt superlattices prepared at 100 \({}^{\circ}\)C. For the Py/Pt superlattice we did not detect any clear easy direction in the film plane; the film appeared almost isotropic in the film plane with an \(H_{\text{c}}\) of 60 - 75 Oe. The hysteresis loops in the [100] and [110] directions are displayed in figure 8(a). Aside from an \(H_{\text{c}}\) of 3 Oe, the Py/CuPt superlattice presents biaxial anisotropy similar to Py/Cu, cf. figure 4(b) and (c). However, a Py/Cu superlattice exhibits an easy axis along the [110] directions, while an easy axis appears along the [100] orientations for a Py/CuPt superlattice. Note that the [100] directions are harder than both the [110] and [111] directions. However, forcing easy axes along the [100] direction in single crystal Py on (001) MgO has been reported previously [42].
For the polycrystalline but highly (111) textured Py/M multilayers we observed limited change in coercivity and in the opening of the hard-axis loop with interface strain due to the choice of M [13]. Here, for Py/Pt the coercivity increases by an order of magnitude and the cubic anisotropy is almost destroyed.
## IV Summary
In summary it is shown that Py superlattices can be successfully deposited on (001) MgO within a narrow substrate temperature window around 100 \({}^{\circ}\)C. For the small lattice mismatch of 5%, the easy axes of the superlattice are detected along the [110] directions, similar to single crystal
Py. It is also shown that a moderate lattice mismatch (7%) rotates the easy axes towards the [100] orientation and increases the coercivity. The higher lattice mismatch of 14% presents nearly isotropic behaviour and a very high coercivity, simultaneously. Thus, the results indicate that the changes in magnetocrystalline anisotropy due to epitaxial strain are much larger than the changes we observed earlier in the case of uniaxial anisotropy.
###### Acknowledgements.
The authors would like to acknowledge helpful comments and experimental help from Dr. Fridrik Magnus and Einar B. Thorsteinsson. This work was partially supported by the Icelandic Research Fund Grant Nos. 228951, 196141, 130029 and 120002023.
| We present the optimization of [(15 Å) Ni$_{80}$Fe$_{20}$/(5 Å) M]$_{20}$ single crystal multilayers on (001) MgO, with M being Cu, Cu$_{50}$Pt$_{50}$, and Pt. These superlattices were characterized by high-resolution X-ray reflectivity (XRR) and diffraction (XRD), as well as polar mapping of important crystal planes. It is shown that a cube on cube epitaxial relationship can be obtained when depositing at a substrate temperature of 100 $^\circ$C regardless of the lattice mismatch (5% and 14% for Cu and Pt, respectively). At lower substrate temperatures polycrystalline multilayers were obtained, while at higher substrate temperatures {111} planes appear at $\sim$10$^\circ$ off normal to the film plane. |
2309.16220 | Unmasking the Chameleons: A Benchmark for Out-of-Distribution Detection
in Medical Tabular Data | Despite their success, Machine Learning (ML) models do not generalize
effectively to data not originating from the training distribution. To reliably
employ ML models in real-world healthcare systems and avoid inaccurate
predictions on out-of-distribution (OOD) data, it is crucial to detect OOD
samples. Numerous OOD detection approaches have been suggested in other fields
- especially in computer vision - but it remains unclear whether the challenge
is resolved when dealing with medical tabular data. To answer this pressing
need, we propose an extensive reproducible benchmark to compare different
methods across a suite of tests including both near and far OODs. Our benchmark
leverages the latest versions of eICU and MIMIC-IV, two public datasets
encompassing tens of thousands of ICU patients in several hospitals. We
consider a wide array of density-based methods and SOTA post-hoc detectors
across diverse predictive architectures, including MLP, ResNet, and
Transformer. Our findings show that i) the problem appears to be solved for
far-OODs, but remains open for near-OODs; ii) post-hoc methods alone perform
poorly, but improve substantially when coupled with distance-based mechanisms;
iii) the transformer architecture is far less overconfident compared to MLP and
ResNet. | Mohammad Azizmalayeri, Ameen Abu-Hanna, Giovanni Ciná | 2023-09-28T07:52:01 | http://arxiv.org/abs/2309.16220v1 | # Unmasking the Chameleons: A Benchmark for
###### Abstract
Despite their success, Machine Learning (ML) models do not generalize effectively to data not originating from the training distribution. To reliably employ ML models in real-world healthcare systems and avoid inaccurate predictions on out-of-distribution (OOD) data, it is crucial to detect OOD samples. Numerous OOD detection approaches have been suggested in other fields - especially in computer vision - but it remains unclear whether the challenge is resolved when dealing with medical tabular data. To answer this pressing need, we propose an extensive reproducible benchmark to compare different methods across a suite of tests including both near and far OODs. Our benchmark leverages the latest versions of eICU and MIMIC-IV, two public datasets encompassing tens of thousands of ICU patients in several hospitals. We consider a wide array of density-based methods and SOTA post-hoc detectors across diverse predictive architectures, including MLP, ResNet, and Transformer. Our findings show that i) the problem appears to be solved for far-OODs, but remains open for near-OODs; ii) post-hoc methods alone perform poorly, but improve substantially when coupled with distance-based mechanisms; iii) the transformer architecture is far less overconfident compared to MLP and ResNet.
Out-of-Distribution Detection, Medical Tabular Data
## 1 Introduction
The utilization of ML models in health-related applications is rapidly increasing (Greener et al., 2022; Varoquaux and Cheplygina, 2022). However, a significant limitation lies in their performance evaluation, which is primarily based on optimizing the algorithms for data from the training distribution. This means that they may fail under distribution shift: a model trained on the data from a hospital may not generalize to other hospitals (Rios and Abu-Hanna, 2021; de Hond et al., 2023). Since such ML models are meant to be deployed in real-world healthcare scenarios, ensuring their reliability becomes of utmost importance. One way to prevent models from providing unreliable suggestions is to detect OOD samples in real time, prior to generating predictions. This is known as OOD detection, where a model trained on an in-distribution (ID) set is employed for distinguishing OOD samples from ID data.
In this paper, we investigate this problem for tabular medical data. The problem has already been investigated mainly in the field of computer vision (Yang et al., 2021; Zimmerer et al., 2022), but the results may not extend to tabular medical data. For example, while computer vision has focused on
Figure 1: The practical dilemma of OOD data: there are no guarantees on how a model will perform on OOD data, hence real-time OOD detection becomes imperative.
improving post-hoc OOD detectors (Yang et al., 2022), it is demonstrated that these kinds of methods perform even worse than a random classifier in medical tabular data due to the phenomenon of overconfidence (Ulmer et al., 2020; Ulmer and Cina, 2021; Zadorozhny et al., 2022).
Existing OOD detection methods can be categorized into three main groups: i) post-hoc methods, which are detectors that can be applied on top of any trained classifier, such as Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017), ii) density-based methods, which are trained to estimate the marginal distribution of the training set in order to detect samples that fall out of the training distribution, such as auto-encoders (AEs) (Kingma and Welling, 2014), and finally iii) methods that require retraining of the prediction model which are mainly designed specifically to be applied on images, such as OpenGAN (Kong and Ramanan, 2021).
The focus of this work is medical tabular data, thus we only consider the first two OOD categories since they can be applied to any medical tabular dataset without limitations. Furthermore, we examine three important architectures for the classifier on which post-hoc methods are implemented, namely MLP, ResNet, and FT-Transformer (Gorishniy et al., 2021)--to assess their impact on OOD detection.
A crucial aspect of comparing these methods involves evaluating their performance across diverse scenarios. Therefore, we incorporate the latest publicly available datasets including eICU and MIMIC-IV (Pollard et al., 2018; Johnson et al., 2023) in our experiments. We also employ different approaches for classifying the data into ID and OOD sets. This lets us consider OOD sets near to and far from the IDs in the experiments, which are referred to as near-OOD and far-OOD (Fort et al., 2021). For example, to consider near-OOD instances, the ID and OOD sets can be constructed by dividing a dataset based on a distinguishing feature such as gender (male vs. female), and for a far-OOD experiment, we can use a dataset from a different data generating process, such as a distinct care protocol (de Hond et al., 2023), or generate artificial OOD samples.
We use this setup to conduct broad experiments that can illustrate the differences between various types of OOD detection methods in order to provide the community with an extensive reproducible benchmark in the medical tabular data domain1. Our results are provided in section 5, and the main findings are as follows: i) the OOD detection problem appears to be almost resolved on far-OOD samples, but it is still an open issue for near-OODs, ii) the distance-based post-hoc OOD detectors are better than other post-hoc methods, iii) unlike the claims in previous work, post-hoc methods can be competitive with the density-based ones, iv) the transformer architecture mitigates the problem of over-confidence that other models suffer from.
Footnote 1: The codes are available at [https://github.com/maximalsayeri/TabMedOOD](https://github.com/maximalsayeri/TabMedOOD).
## 2 Related Work
OOD detection has received a lot of attention, with several kinds of approaches being proposed for this purpose. This has resulted in interest in comparing these methods fairly within the same setting. Some benchmarks compare methods using standard image datasets (Han et al., 2022; Yang et al., 2022; Zhang et al., 2023). However, it is still necessary to have benchmarks that are specific to other domains and data modalities. For example, the MOOD challenge (Zimmerer et al., 2022) investigated OOD detection within the context of medical imaging, with various new methods showing outstanding results (Marinont and Tarroni, 2021; Tan et al., 2022).
In the realm of medical tabular datasets, notable endeavors have been undertaken. A pipeline is presented in Nicora et al. (2022) to train a model on ICU data and detect test samples for which the model might exhibit poor performance. BEDS-Bench (Avati et al., 2021) has provided a benchmark on the generalization ability of ML models over electronic health records, highlighting performance drops under distribution shift. Moreover, it has been found that access to OOD data does not improve the test performance on ICU data (Spathis and Hyland, 2022). The work by Ulmer et al. (2020), proposed one of the first comparisons between basic OOD detectors in the space of medical tabular data, pointing to the fact that the problem of OOD detection was wide open. Also, some guidelines have been provided on how to evaluate an OOD detector in practice within the context of medical data (Zadorozhny et al., 2022) such as methods for selecting the OOD set during evaluation. Compared to this related work, we benchmark the most recent SOTA OOD detection approaches and SOTA architectures, leading to novel insight e.g. on the over-confidence of transformers and the combination of post-hoc and distance-based methods.
## 3 Problem Definition and Evaluation Protocol
In the following, we describe the problem definition, the metrics used to measure performance, and how we select the ID and OOD sets.
### OOD Detection
In training any ML model, we require a training set \(\mathcal{D}_{in}=\{(x_{i},y_{i})\}_{i=1}^{n}\), where each instance \(x_{i}\) has a label \(y_{i}\). Our goal is to have a model \(f:\mathcal{X}\rightarrow\mathbb{R}\) and a binary classifier \(G_{\lambda}:\mathcal{X}\rightarrow\{0,1\}\) such that for a test input \(x\sim\mathcal{X}\):
\[G_{\lambda}(x;f)=\begin{cases}\text{OOD}&f(x)\geq\lambda\\ \text{ID}&f(x)<\lambda\end{cases}\quad. \tag{1}\]
The score according to \(f\) is sometimes called the 'novelty score'. Samples whose novelty score is higher than the threshold \(\lambda\) are classified as OOD. Hence, the final goal would be to train model \(f\) such that it assigns lower scores to \(x\sim\mathcal{D}_{in}\) and higher scores to \(x\sim\mathcal{D}_{out}\), where \(\mathcal{D}_{out}\) is the dataset that includes the OOD samples.
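Operationally, Eq. (1) is a single threshold on the novelty scores; a minimal sketch (the score values below are hypothetical):

```python
import numpy as np

def ood_classifier(scores, lam):
    """Eq. (1): flag a sample as OOD when its novelty score f(x)
    reaches the threshold lambda, otherwise label it ID."""
    return np.where(np.asarray(scores) >= lam, "OOD", "ID")

print(ood_classifier([0.1, 0.4, 0.9, 1.3], lam=0.8))  # ID ID OOD OOD
```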
### Metrics
To assess the separability of the ID and OOD scores given by \(f\), we measure the area under the receiver operating characteristic (AUROC) as a well-known threshold-independent classification criterion, and FPR@95, which measures the FPR at a \(\lambda\) corresponding to the TPR being equal to 95%.
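Both metrics follow directly from the ID/OOD labels and the novelty scores. A sketch using scikit-learn for the AUROC (treating OOD as the positive class); the helper function below is our own illustration, not the benchmark's exact code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_fpr95(scores_id, scores_ood):
    """AUROC over pooled scores, plus FPR@95: the fraction of ID
    samples above the threshold at which the TPR on OODs equals 95%."""
    y = np.r_[np.zeros(len(scores_id)), np.ones(len(scores_ood))]
    auroc = roc_auc_score(y, np.r_[scores_id, scores_ood])
    lam = np.quantile(scores_ood, 0.05)   # 95% of OOD scores >= lam
    fpr95 = np.mean(np.asarray(scores_id) >= lam)
    return auroc, fpr95

rng = np.random.default_rng(0)  # toy scores for illustration only
print(auroc_and_fpr95(rng.normal(0, 1, 1000), rng.normal(2, 1, 1000)))
```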
### Near, Far, and Synthesized OOD
The distance between \(\mathcal{D}_{in}\) and \(\mathcal{D}_{out}\) plays an important role in the performance of OOD detection methods. It is relatively simpler to detect samples far from the training distribution than samples close to it; we refer to the former samples as far-OOD and to the latter as near-OOD. To consider this in the comparisons, we define different sets of ID and OOD as follows.
**Near and far OODs:** Assuming that we have a dataset \(\mathcal{D}\), we can define the ID and OOD sets in two ways: First, we can separate \(\mathcal{D}\) based on a specific feature such as gender (male vs. female) or age (e.g. elderly vs. young). This reflects what may happen when employing an ML model in practice. As an example, if one develops a model on a population of mostly young people, it might not be fully reliable when applied to the elderly. The second way would be to use \(\mathcal{D}\) as ID and a totally different dataset as OOD. Since identifying OODs in the second scenario appears to be easier, we will refer to the first scenario as near-OOD and the second as far-OOD following the convention in computer vision (Winkens et al., 2020; Fort et al., 2021).
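The feature-based construction of near-OOD sets is a plain partition of the dataset; a pandas sketch, where the column name and condition are hypothetical placeholders:

```python
import pandas as pd

def split_by_feature(df: pd.DataFrame, column: str, id_condition):
    """Partition a dataset into (ID, OOD) by a time-independent
    variable, e.g. gender or age."""
    mask = df[column].map(id_condition)
    return df[mask], df[~mask]

# Hypothetical usage mirroring an age-based split (older than 70 as ID):
# id_set, ood_set = split_by_feature(patients, "age", lambda a: a > 70)
```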
**Synthesized-OOD:** Following the data corruption suggested in Ulmer et al. (2020), we can simulate the OOD samples by scaling a single feature from ID set by a factor of 10, 100, or 1000. For each factor, we will repeat the experiments 100 times with different features, and average the results to isolate the influence of scaling in the results, minimizing the impact of the chosen feature. By increasing the scaling factor, it looks like we are gradually transforming near-OOD samples into far-OOD ones.
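The corruption itself is a one-line scaling of a randomly chosen column, repeated and averaged; a numpy sketch where `evaluate` stands for any detector-evaluation callable returning a metric such as AUROC (all names here are our own):

```python
import numpy as np

def synthesize_ood(X_id, factor, rng):
    """Scale one randomly chosen feature of the ID set by `factor`
    to create a synthetic OOD set."""
    X_ood = X_id.copy()
    X_ood[:, rng.integers(X_id.shape[1])] *= factor
    return X_ood

def averaged_metric(X_id, factor, evaluate, n_repeats=100, seed=0):
    """Repeat with different features and average the metric,
    isolating the effect of the scaling from the choice of feature."""
    rng = np.random.default_rng(seed)
    return np.mean([evaluate(X_id, synthesize_ood(X_id, factor, rng))
                    for _ in range(n_repeats)])
```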
## 4 Experiment Setup
### Supported OOD Detectors
Two main OOD detection method categories, described in Table 1, are included in our experiments. The first category is density estimators, which learn the marginal distribution of the ID data and label OOD samples by comparison to such distribution. We include 7 density-based models covering different types of density estimators. These methods are used in prior works and have reached outstanding results (Ulmer et al., 2020; Zadorozhny et al., 2022).
The second group is the post-hoc detectors, which can be integrated into any pre-trained classifier without requiring any additional fine-tuning or training. They mostly generate a novelty score for the input based on the classifier output or intermediate representations. We evaluate 17 different post-hoc detectors, including both commonly used detectors and top-performing ones in their respective fields.
### Datasets
We use eICU (Pollard et al., 2018) and MIMIC-IV (Johnson et al., 2023) in our experiments as two public and popular datasets encompassing tens of thousands of ICU patients in several hospitals. Following the descriptions in section 3.3, each of these datasets is considered as far-OOD set for the other one. For the near-OOD case, we divide the datasets based on the time-independent variables, as this choice better emulates the potential distribution shifts in ID data
that can be encountered in practice than the time-dependent ones. Among the time-independent variables available for each dataset, we have selected age (older than 70 as ID), gender (females as ID), and ethnicity (Caucasian or African American as ID) in eICU, and Age (older than 70 as ID), gender (females as ID), admission type (surgical in the same day of admission as ID), and first care unit (CVICU as ID) in MIMIC-IV dataset. In each case, the remaining part of the dataset would be OOD. The results for the age (older than 70), ethnicity (Caucasian), and first care unit (CVICU) are reported in the main text, and others in Appendix C.
### Pre-processing
We pre-processed the eICU data using the pipeline provided in Sheikhalishahi et al. (2020). Subsequently, the data is filtered to keep only patients with a length of stay of at least 48 hours and an age greater than 18 years old. Additionally, data with unknown discharge status were removed. Furthermore, patients with NaN values in the features used in our models are also removed. This pre-processing resulted in a total of 54826 unique patients for this dataset.
For the MIMIC-IV, we used the pipeline provided in Gupta et al. (2022) with slight modifications e.g., we added a mapping from the feature IDs to feature names to have each feature name in the final pre-processed data. The data is then filtered similarly to the eICU dataset. This resulted in 18180 unique patients for this dataset.
These datasets contain different types of clinical variables, but they are not recorded for all the patients. To avoid NaN values as much as possible, we are interested in using only the more important variables that are recorded for more patients. Based on these criteria and considering the important clinical variables suggested in Ulmer et al. (2020), we have selected a combination of time-dependent and time-independent variables for each dataset, which are reported in Appendix A.
It should be noted that when datasets are evaluated against each other, only the variables found in both datasets are taken into account. Moreover, for the time-dependent variables, we aggregated the time series by means of 6 different statistics including mean, standard deviation, minimum, maximum, skewness, and number of observations calculated over windows consisting of the full time-series and its first and last 10%, 25%, and 50%.
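This aggregation turns each time series into a fixed-length vector of 7 windows x 6 statistics = 42 features per variable. A sketch assuming scipy is available; the function name is ours:

```python
import numpy as np
from scipy.stats import skew

def aggregate_series(x):
    """Six statistics (mean, std, min, max, skewness, count) over the
    full series and its first/last 10%, 25% and 50%."""
    x = np.asarray(x, float)
    windows = [x]
    for frac in (0.10, 0.25, 0.50):
        k = max(1, int(frac * len(x)))
        windows += [x[:k], x[-k:]]
    return np.array([s for w in windows
                     for s in (w.mean(), w.std(), w.min(), w.max(),
                               skew(w), len(w))])

print(aggregate_series(np.arange(48.0)).shape)  # (42,)
```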
\begin{table}
\begin{tabular}{l l} \hline \hline
**Method** & **Short Description** \\ \hline AE, VAE (Kingma and Welling, 2014) & Encodes input in a latent representation and reconstructs it. \\ HI-VAE (Nazabal et al., 2020) & Modifies VAE to consider heterogeneous data. \\ Flow (Papamakarios et al., 2017) & Estimates ID data by transformations into a normal distribution. \\ PPCA (Tipping and Bishop, 1999) & Reduces data dimensionality based on singular value decomposition. \\ LOF (De Vries et al., 2010) & Compares local density of input to the density of its closest neighbors. \\ DUE (van Amersfoort et al., 2021) & Uses a deep neural Gaussian process for modeling the ID data. \\ \hline MDS (Lee et al., 2018) & Uses Mahalanobis distances to the class-conditional normal distributions. \\ RMDS (Ren et al., 2021) & Modifies MDS for detecting near-OODs. \\ KNN (Sun et al., 2022) & Measures distance of input to the \(k_{\text{th}}\) nearest neighbor in the ID data. \\ VIM (Wang et al., 2022) & Uses logits and features norm simultaneously. \\ SHE (Zhang et al., 2023b) & Measures the distance of input to the class-conditional representations. \\ KLM (Hendrycks et al., 2022) & Uses KL distance of softmax output from its range over the ID data. \\ OpenMax (Bendale and Boult, 2016) & Fits a Weibull distribution on the logits instead of softmax. \\ MSP (Hendrycks and Gimpel, 2017) & Uses maximum softmax probability as a simple but effective baseline. \\ MLS (Hendrycks et al., 2022) & Uses the maximum logit score instead of MSP. \\ TempScale (Guo et al., 2017) & Calibrates the temperature parameter in the softmax. \\ ODIN (Liang et al., 2018) & Perturbs the input adversarially before using TempScaling. \\ EBO (Liu et al., 2020) & Uses an energy function instead of softmax. \\ GRAM (Sastry and Oore, 2020) & Measures deviation of the Gram matrix from its range over the ID data. \\ GradNorm (Huang et al., 2021) & Uses norm of the backpropagated gradients. \\ ReAct (Sun et al., 2021) & Rectifies the model activations at an upper limit. \\ DICE (Sun and Li, 2022) & Suggests sparsification of the last linear layer. \\ ASH (Djurisic et al., 2023) & Extends the DICE idea to the intermediate feature layers. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Density-based models and post-hoc detectors evaluated in this work, with a short description.
### Task and Prediction Models
To perform post-hoc OOD detection, we would need a prediction model performing a supervised classification task. The main task that is used in prior work is mortality prediction (Sheikhalishahi et al., 2020; Ulmer et al., 2020; Meijerink et al., 2020; Zadorozhny et al., 2022). In mortality prediction, we only use the first 48 hours of data from the intensive care unit collected from patients to predict the in-hospital mortality. It is noteworthy that the mortality rate in the pre-processed data is 12.57% in the MIMIC-IV and 6.77% in the eICU dataset.
To perform this task, we consider three widely used architectures: MLP, ResNet, and FT-Transformer (Gorishniy et al., 2021). MLP passes data through the fully-connected layers and non-linear activation functions with dropout to improve the generalization. ResNet adds batchnorm and residual connections to MLP. We also consider FT-Transformer, constituted of transformer blocks that utilize the attention mechanism (Vaswani et al., 2017).
## 5 Results
In this section, we describe the results for each of the OOD settings based on the AUROC criterion. Results for FPR@95 are presented in Appendix D. Moreover, each experiment is repeated 5 times, and the results are averaged to reduce the impact of randomness in selecting train/test data and training itself. Additionally, we have measured the mortality prediction performance of the prediction models in Appendix B, indicating that they are trained well.
### Far-OOD
Results for the far-OOD setting are displayed in Table 2. According to this table, there are methods that can effectively detect OOD data on each dataset.
Among the density-based methods, Flow attains the best result on eICU, while others exhibit superior performance on MIMIC-IV. Additionally, DUE can detect OODs on MIMIC-IV, but it falls short of being competitive on the eICU dataset. Except for these two approaches and HI-VAE, other density-based methods including AE, VAE, PPCA, and LOF demonstrate strong performance on both datasets.
Within the post-hoc methodologies, MDS exhibits better results compared to others regardless of the choice of the prediction model. Moreover, MDS applied on ResNet is competitive with the density-based approaches, even marginally outperforming them based on the average results across both datasets. After MDS, there is not a single winner. However, approaches like KNN, VIM, and SHE which somehow compute the distance to the training set as the novelty score, outperform the rest.
Regarding the prediction model, the top-performing methods on ResNet demonstrate superior results compared to both MLP and FT-Transformer. However, a problem with ResNet and MLP is that they have over-confidence issues with some approaches on MIMIC-IV. This means that they have more confidence in the OODs, resulting in a performance even worse than a random detector. This is mainly being observed with the detectors that do not rely on distance-based novelty scores such as MSP and GradNorm. Note that the eICU dataset contains data from a more extensive array of hospitals, which can increase diversity in the data and reduce the over-confidence on this dataset.
FT-Transformer seems to solve the over-confidence problem observed in MLP and ResNet, as all detectors perform better than random on both datasets. The attention mechanism in the transformer blocks enables this model to consider relationships between input elements, leading to a better understanding of ID data.
### Near-OOD
Results for the near-OOD setting are presented in Table 2. In the eICU dataset, the diversity among the ID data and the proximity of OODs to ID have collectively yielded an almost random performance for all the approaches. Still, it indicates that methods like MDS and Flow are marginally better than others.
In MIMIC-IV, which contains less diversity, the age variable still results in an almost random performance across all approaches, but FCU reflects some differences between detectors. Among the density-based approaches, AE and VAE are the best choices, followed by PPCA, LOF, and HI-VAE. The post-hoc methods mostly demonstrate similar performance within the same architecture category. Moreover, they can be competitive with density-based methods when applied on the FT-Transformer.
### Synthesized-OOD
Results for the synthesized-OOD setting are displayed in Table 3. It is expected that increasing the
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{eICU} & \multicolumn{3}{c}{MIMIC-IV} \\ & & Far-OOD & Near-OOD(Eth) & Near-OOD(Age) & Far-OOD & Near-OOD(FCU) & Near-OOD(Age) \\ \hline \hline & AE & 96.5\(\pm\)0.2 & 55.1\(\pm\)0.8 & 50.0\(\pm\)0.4 & 99.8\(\pm\)0.0 & 79.4\(\pm\)0.6 & 56.8\(\pm\)0.4 \\ & VAE & 95.8\(\pm\)0.2 & 55.4\(\pm\)0.7 & 50.1\(\pm\)0.4 & **99.8\(\pm\)0.0** & **79.7\(\pm\)0.6** & **57.0\(\pm\)0.4** \\ & HI-VAE & 56.7\(\pm\)1.6 & 44.3\(\pm\)1.2 & 44.4\(\pm\)2.1 & 68.8\(\pm\)8.8 & 79.1\(\pm\)13.4 & 56.8\(\pm\)10.0 \\ Density & Flow & **100.0\(\pm\)0.0** & **61.1\(\pm\)0.9** & 49.7\(\pm\)0.4 & 87.4\(\pm\)7.5 & 31.8\(\pm\)3.9 & 51.6\(\pm\)0.5 \\ Based & PPCA & 96.7\(\pm\)0.2 & 59.1\(\pm\)0.6 & **51.3\(\pm\)0.5** & 99.8\(\pm\)0.0 & 75.1\(\pm\)0.6 & 56.8\(\pm\)0.7 \\ & LOF & 96.5\(\pm\)0.1 & 56.0\(\pm\)0.8 & 49.5\(\pm\)0.5 & 99.2\(\pm\)0.2 & 73.5\(\pm\)0.5 & 55.3\(\pm\)0.8 \\ & DUE & 73.4\(\pm\)0.5 & 53.7\(\pm\)0.9 & 49.5\(\pm\)0.4 & 98.2\(\pm\)0.2 & 59.8\(\pm\)1.7 & 51.6\(\pm\)0.6 \\ \hline \hline & MDS & **84.3\(\pm\)1.4** & **56.8\(\pm\)0.9** & **51.7\(\pm\)0.7** & **98.9\(\pm\)0.6** & 68.0\(\pm\)3.9 & **53.8\(\pm\)0.9** \\ & RMDS & 59.2\(\pm\)2.1 & 50.0\(\pm\)1.7 & 49.1\(\pm\)0.5 & 79.9\(\pm\)12.1 & 60.9\(\pm\)1.1 & 50.5\(\pm\)0.4 \\ & KNN & 79.4\(\pm\)1.2 & 53.4\(\pm\)2.4 & 48.5\(\pm\)0.8 & 65.4\(\pm\)8.3 & **73.3\(\pm\)0.9** & 54.0\(\pm\)0.5 \\ & VIM & 69.9\(\pm\)1.4 & 51.2\(\pm\)1.7 & 46.7\(\pm\)0.8 & 97.8\(\pm\)2.4 & 71.1\(\pm\)1.2 & 54.0\(\pm\)0.9 \\ & SHE & 61.5\(\pm\)0.8 & 50.8\(\pm\)1.9 & 50.1\(\pm\)0.1 & 93.9\(\pm\)5.7 & 56.7\(\pm\)0.7 & 49.6\(\pm\)0.2 \\ & KLM & 68.8\(\pm\)1.2 & 52.7\(\pm\)1.8 & 51.7\(\pm\)0.2 & 79.5\(\pm\)5.8 & 46.1\(\pm\)2.0 & 49.3\(\pm\)0.6 \\ & OpenMax & 52.1\(\pm\)1.3 & 47.4\(\pm\)1.5 & 46.3\(\pm\)1.3 & 62.7\(\pm\)1.03 & 66.9\(\pm\)0.9 & 52.6\(\pm\)0.5 \\ \hline \multirow{2}{*}{MLP} & MSP & 51.2\(\pm\)1.2 & 47.1\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.4\(\pm\)7.7 & 66.0\(\pm\)0.6 & 52.4\(\pm\)0.4 \\ & MLS & 51.3\(\pm\)1.4 & 46.8\(\pm\)1.7 & 46.2\(\pm\)1.3 & 14.2\(\pm\)7.6 & 65.9\(\pm\)0.8 & 52.3\(\pm\)0.4 \\ & TempScale & 51.2\(\pm\)1.2 & 46.8\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.3\(\pm\)7.7 & 66.0\(\pm\)0.8 & 52.4\(\pm\)0.3 \\ & ODIN & 51.3\(\pm\)1.2 & 46.8\(\pm\)1.8 & 46.3\(\pm\)1.3 & 13.5\(\pm\)7.7 & 66.0\(\pm\)0.8 & 52.5\(\pm\)0.3 \\ & EBO & 51.4\(\pm\)1.4 & 46.8\(\pm\)1.7 & 46.2\(\pm\)1.3 & 14.4\(\pm\)7.7 & 65.9\(\pm\)0.8 & 52.5\(\pm\)0.3 \\ & GRAM & 46.7\(\pm\)1.4 & 46.6\(\pm\)1.5 & 48.4\(\pm\)0.6 & 16.7\(\pm\)11.7 & 53.6\(\pm\)1.4 & 49.9\(\pm\)0.1 \\ & GradNorm & 50.2\(\pm\)1.3 & 46.8\(\pm\)1.8 & 46.4\(\pm\)1.3 & 12.7\(\pm\)7.9 & 65.8\(\pm\)0.8 & 52.5\(\pm\)0.4 \\ & ReAct & 52.5\(\pm\)1.2 & 46.7\(\pm\)1.5 & 46.6\(\pm\)1.4 & 74.3\(\pm\)1.6 & 65.1\(\pm\)0.9 & 52.5\(\pm\)0.4 \\ & DIGE & 50.6\(\pm\)1.3 & 46.9\(\pm\)1.8 & 46.6\(\pm\)1.3 & 14.7\(\pm\)8.4 & 65.8\(\pm\)0.9 & 52.7\(\pm\)0.8 \\ & ASH & 50.6\(\pm\)1.3 & 46.8\(\pm\)1.5 & 46.6\(\pm\)1.7 & 14.0\(\pm\)7.3 & 65.7\(\pm\)0.9 & 52.1\(\pm\)0.4 \\ \hline \hline & MDS & **96.9\(\pm\)0.3** & **58.4\(\pm\)0.6** & 51.6\(\pm\)0.7 & **99.7\(\pm\)0.1** & 74.0\(\pm\)2.4 & 55.4\(\pm\)0.3 \\ & RMDS & 45.8\(\pm\)2.9 & 50.1\(\pm\)2.5 & 49.9\(\pm\)0.3 & 79.5\(\pm\)13.1 & 62.6\(\pm\)1.4 & 50.8\(\pm\)0.4 \\ & KNN & 91.2\(\pm\)0.9 & 59.2\(\pm\)1.6 & 50.1\(\pm\)0.1 & 93.0\(\pm\)1.9 & 57.7\(\pm\)2.3 & 54.3\(\pm\)0.5 \\ & VIM & 93.5\(\pm\)1.0 & 56.9\(\pm\)1.5 & 48.2\(\pm\)1.1 & 99.5\(\pm\)0.2 & **75.0\(\pm\)1.8** & **56.2\(\pm\)0.4** \\ & SHE & 
65.7\(\pm\)1.6 & 50.5\(\pm\)1.3 & 51.7\(\pm\)0.8 & 99.7\(\pm\)0.1 & 73.0\(\pm\)1.6 & 52.8\(\pm\)0.6 \\ & KLM & 55.5\(\pm\)2.6 & 47.5\(\pm\)1.5 & 51.5\(\pm\)0.4 & 78.1\(\pm\)6.7 & 55.2\(\pm\)1.4 & 47.7\(\pm\)0.2 \\ & OpenMax & 64.8\(\pm\)5.4 & 50.7\(\pm\)1.3 & 47.0\(\pm\)1.1 & 65.2\(\pm\)2.4 & 67.2\(\pm\)1.4 & 54.3\(\pm\)0.4 \\ & MSP & 65.4\(\pm\)5.0 & 51.3\(\pm\)2.4 & 47.1\(\pm\)0.8 & 19.5\(\pm\)10.3 & 65.0\(\pm\)4.3 & 53.9\(\pm\)0.7 \\ \hline \multirow{2}{*}{ResNet} & MLS & 64.4\(\pm\)6.0 & 51.9\(\pm\)2.2 & 46.8\(\pm\)1.4 & 38.0\(\pm\)26.5 & 66.5\(\pm\
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{eICU} & \multicolumn{3}{c}{MIMIC-IV} \\ & & \(\mathcal{F}\)=10 & \(\mathcal{F}\)=100 & \(\mathcal{F}\)=1000 & \(\mathcal{F}\)=10 & \(\mathcal{F}\)=100 & \(\mathcal{F}\)=1000 \\ \hline \hline & AE & 80.5\(\pm\)1.3 & 88.4\(\pm\)0.9 & 90.0\(\pm\)0.7 & 76.4\(\pm\)1.6 & 83.9\(\pm\)2.1 & 86.6\(\pm\)2.1 \\ & VAE & 80.0\(\pm\)1.3 & 88.3\(\pm\)0.9 & 89.9\(\pm\)0.7 & 76.4\(\pm\)1.6 & 83.8\(\pm\)2.1 & 86.6\(\pm\)2.1 \\ & HI-VAE & 50.0\(\pm\)0.1 & 50.0\(\pm\)0.1 & 50.1\(\pm\)0.2 & 50.5\(\pm\)0.8 & 53.1\(\pm\)1.4 & 52.0\(\pm\)2.0 \\ Density & Flow & 70.2\(\pm\)2.5 & 82.1\(\pm\)2.5 & 87.7\(\pm\)1.4 & 53.8\(\pm\)1.8 & 65.0\(\pm\)3.0 & 75.7\(\pm\)2.5 \\ Based & PPCA & 80.7\(\pm\)1.3 & 88.3\(\pm\)0.9 & 89.7\(\pm\)0.8 & 76.9\(\pm\)1.5 & 84.0\(\pm\)2.1 & 86.6\(\pm\)2.0 \\ & LOF & **84.4\(\pm\)1.3** & **89.4\(\pm\)0.8** & **90.5\(\pm\)0.7** & **78.4\(\pm\)1.5** & **84.7\(\pm\)2.1** & **86.9\(\pm\)1.9** \\ & DUE & 63.9\(\pm\)1.6 & 80.5\(\pm\)1.3 & 88.7\(\pm\)0.8 & 60.3\(\pm\)2.2 & 76.0\(\pm\)1.9 & 83.0\(\pm\)2.0 \\ \hline \hline & MDS & **68.5\(\pm\)2.4** & **82.8\(\pm\)2.0** & 89.2\(\pm\)2.0 & **69.3\(\pm\)2.4** & **80.8\(\pm\)1.6** & **84.7\(\pm\)1.8** \\ & RMDS & 60.8\(\pm\)1.0 & 75.2\(\pm\)2.0 & 85.8\(\pm\)2.0 & 52.7\(\pm\)4.1 & 64.5\(\pm\)6.9 & 76.4\(\pm\)2.8 \\ & KNN & 60.3\(\pm\)1.5 & 68.0\(\pm\)2.0 & 70.9\(\pm\)2.3 & 59.1\(\pm\)4.9 & 67.2\(\pm\)8.9 & 70.7\(\pm\)10.0 \\ & VIM & 48.5\(\pm\)2.0 & 47.2\(\pm\)3.1 & 46.0\(\pm\)4.9 & 56.8\(\pm\)7.9 & 63.5\(\pm\)13.7 & 66.6\(\pm\)15.1 \\ & SHE & 62.5\(\pm\)2.8 & 77.8\(\pm\)1.6 & **89.8\(\pm\)0.1** & 62.3\(\pm\)3.0 & 77.8\(\pm\)2.2 & 83.6\(\pm\)1.5 \\ & KLM & 60.2\(\pm\)1.4 & 69.6\(\pm\)1.5 & 78.1\(\pm\)1.4 & 55.4\(\pm\)2.0 & 66.9\(\pm\)1.6 & 74.0\(\pm\)1.5 \\ & OpenMax & 52.6\(\pm\)2.0 & 65.2\(\pm\)2.5 & 77.4\(\pm\)2.5 & 48.4\(\pm\)5.3 & 54.0\(\pm\)5.1 & 69.4\(\pm\)5.6 \\ \hline \multirow{2}{*}{MLP} & MSP & 40.5\(\pm\)2.2 & 27.1\(\pm\)3.0 & 14.2\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.7\(\pm\)5.5 & 21.8\(\pm\)1.3 \\ & MLS & 40.4\(\pm\)2.4 & 27.0\(\pm\)3.4 & 13.8\(\pm\)2.8 & 45.6\(\pm\)4.1 & 32.1\(\pm\)4.0 & 24.9\(\pm\)3.7 \\ & TempScale & 40.4\(\pm\)2.2 & 27.0\(\pm\)3.1 & 14.0\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.6\(\pm\)5.5 & 21.7\(\pm\)1.3 \\ & ODIN & 40.5\(\pm\)2.2 & 27.0\(\pm\)3.1 & 14.0\(\pm\)2.4 & 45.7\(\pm\)4.8 & 31.7\(\pm\)5.5 & 21.8\(\pm\)1.3 \\ & EBO & 40.4\(\pm\)2.4 & 27.0\(\pm\)3.4 & 13.8\(\pm\)2.8 & 45.5\(\pm\)3.9 & 32.0\(\pm\)4.0 & 24.9\(\pm\)3.8 \\ & GRAM & 38.7\(\pm\)2.0 & 25.2\(\pm\)2.6 & 12.7\(\pm\)2.1 & 47.0\(\pm\)0.7 & 32.8\(\pm\)3.0 & 20.4\(\pm\)2.2 \\ & GradNorm & 40.2\(\pm\)2.1 & 26.4\(\pm\)2.9 & 13.2\(\pm\)2.4 & 42.6\(\pm\)2.4 & 26.1\(\pm\)1.1 & 18.8\(\pm\)1.4 \\ & ReAct & 45.0\(\pm\)1.6 & 37.3\(\pm\)2.7 & 31.3\(\pm\)2.5 & 49.0\(\pm\)5.7 & 46.6\(\pm\)12.2 & 46.7\(\pm\)1.4 \\ & DICE & 40.9\(\pm\)2.7 & 28.9\(\pm\)3.5 & 16.7\(\pm\)2.9 & 44.8\(\pm\)3.6 & 31.4\(\pm\)2.7 & 22.0\(\pm\)1.1 \\ & ASH & 40.4\(\pm\)2.4 & 27.3\(\pm\)3.1 & 14.3\(\pm\)2.6 & 45.3\(\pm\)4.1 & 32.1\(\pm\)4.3 & 25.1\(\pm\)3.7 \\ \hline \hline & MDS & **74.4\(\pm\)2.0** & **88.9\(\pm\)1.7** & 91.3\(\pm\)1.4 & **72.4\(\pm\)1.1** & **81.8\(\pm\)0.8** & **85.4\(\pm\)1.0** \\ & RMDS & 51.7\(\pm\)0.6 & 59.8\(\pm\)2.5 & 78.7\(\pm\)2.4 & 46.2\(\pm\)1.5 & 57.6\(\pm\)2.7 & 73.9\(\pm\)1.2 \\ & KNN & 69.5\(\pm\)2.1 & 84.8\(\pm\)2.2 & 87.9\(\pm\)2.2 & 67.5\(\pm\)1.5 & 79.1\(\pm\)1.2 & 83.7\(\pm\)0.8 \\ & VIM & 72.2\(\pm\)2.5 & 87.8\(\pm\)1.8 & 91.5\(\pm\)1.4 & 68.8\(\pm\)1.3 & 
80.2\(\pm\)1.2 & 84.5\(\pm\)1.1 \\ & SHE & 68.2\(\pm\)0.7 & 86.6\(\pm\)0.5 & **91.5\(\pm\)0.2** & 67.2\(\pm\)1.4 & 80.0\(\pm\)0.9 & 84.5\(\pm\)0.7 \\ & KLM & 56.6\(\pm\)0.8 & 69.4\(\pm\)1.7 & 80.5\(\pm\)1.9 & 53.2\(\pm\)0.9 & 65.4\(\pm\)0.5 & 75.3\(\pm\)0.8 \\ & OpenMax & 57.5\(\pm\)0.7 & 66.4\(\pm\)4.3 & 77.4\(\pm\)2.9 & 55.2\(\pm\)1.0 & 61.6\(\pm\)1.4 & 61.7\(\pm\)11.4 \\ & MSP & 49.4\(\pm\)2.2 & 35.1\(\pm\)2.1 & 16.8\(\pm\)1.0 & 52.2\(\pm\)1.1 & 38.0\(\pm\)0.7 & 23.1\(\pm\)0.9 \\ ResNet & MLS & 49.0\(\pm\)1.7 & 34.7\(\pm\)2.0 & 18.5\(\pm\)2.
scaling factor (\(\mathcal{F}\)) in this setting facilitates the detection of OODs. However, over-confidence hampers OOD detection, causing certain methods to exhibit a totally opposite behavior with MLP and ResNet architectures, on both datasets. In this case, even the diversity in eICU does not prevent over-confidence. Similar to the far-OOD scenario, FT-Transformer seems to solve this issue, as increasing \(\mathcal{F}\) results in an improved performance for this architecture.
To compare different approaches, density-based methods like AE, VAE, PPCA, and LOF demonstrate better performance than post-hoc ones when \(\mathcal{F}=10\). However, this gap in performance is reduced with increasing \(\mathcal{F}\) to the extent that methods like MDS, VIM, and SHE applied on ResNet are outperforming density-based ones on the eICU dataset with \(\mathcal{F}=1000\).
### Over-confidence
The results above showed over-confidence for the MLP and ResNet architectures, whereas the FT-Transformer appeared as a solution to this problem. For a more visual exploration of this issue, we employ a classification toy example. In this example, each of the predictive architectures is trained on a multi-class classification task with 2D samples. Next, the entropy of the model's softmax output is plotted for a wide array of inputs. Plots are shown in Fig. 2, with a lighter color indicating more confidence (i.e. low entropy in the softmax output). As depicted in this figure, the confidence of MLP and ResNet increases as one moves further away from the ID data. This observation validates the presence of over-confidence in both the MLP and ResNet models. Conversely, in the FT-Transformer, confidence increases along certain directions and decreases in others. This suggests that the transformer can mitigate the over-confidence in some directions, but does not solve it entirely.
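The toy visualization amounts to evaluating softmax entropy over a grid of 2D inputs; a sketch where `model` is assumed to be any callable mapping a batch of points to logits (names and the toy model are ours):

```python
import numpy as np

def softmax_entropy(logits):
    """Entropy of the softmax output; low entropy = high confidence."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stability shift
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def confidence_map(model, lim=5.0, res=200):
    """Softmax entropy on a res x res grid covering [-lim, lim]^2."""
    xs = np.linspace(-lim, lim, res)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    return softmax_entropy(model(grid)).reshape(res, res)

# Toy linear 3-class "model": its confidence grows away from the
# origin, reproducing the over-confidence pattern of MLP-like models.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(confidence_map(lambda X: X @ W.T).shape)  # (200, 200)
```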
## 6 Discussion
According to our results, when OODs are far from the ID data, there exist methods that can effectively perform OOD detection. However, OODs near to the ID data, such as those obtained by multiplication with \(\mathcal{F}=10\) or those in a near-OOD setting, are still challenging to spot. While the poor results in the near-OOD scenarios might be due to a large overlap with the ID data, when multiplying features with a scaling factor one can ensure that there is a clear difference between such OOD and ID data. This points to the fact that there is still ample room for improvement in detecting near OODs.
Delving into OOD detectors, density-based methods, particularly AE, VAE, PPCA, and Flow, exhibit consistently good performance across different settings and datasets. The post-hoc methods perform poorly in some cases, but improve substantially and become competitive with the density-based ones when used in conjunction with distance-based mechanisms. For example, MDS applied on ResNet can even marginally outperform density-based methods in some cases, such as the experiment with a scaling factor of \(\mathcal{F}=1000\) on the eICU dataset. This nuances previous claims that post-hoc methods generally exhibit poor performance (Ulmer et al., 2020; Zadorozhny et al., 2022).
To compare different prediction models, ResNet combined with distance-based detectors demonstrates better results than MLP and FT-Transformer.
Figure 2: Depiction of confidence scores for different architectures. High confidence (low entropy) is represented in light orange. MLP and ResNet exhibit regions of confidence extending far away from the ID data. FT-Transformer mitigates this over-confidence but does not solve it.
On the other hand, MLP and ResNet suffer from over-confidence, causing certain detectors to perform even worse than a random classifier. This aligns with what was highlighted in prior studies (Hein et al., 2019; Ulmer and Cina, 2021). Our numerical results suggest that FT-Transformer could be a solution to this problem; however, our simple example with toy data showcases that transformers do not completely eliminate over-confidence.
This benchmark is built on two intensive care datasets, which are highly granular. Hence, caution should be exercised when transporting these findings to alternative healthcare tabular datasets with different characteristics. To facilitate the extension of this benchmark, we provided a modular implementation allowing for the addition of new datasets and methods to the experiments.
| Despite the success of machine learning (ML) models, they do not generalize effectively to data that does not originate from the training distribution. In order to reliably employ ML models in real-world healthcare systems and to avoid inaccurate predictions on out-of-distribution (OOD) data, it is crucial to detect OOD samples. While numerous OOD detection approaches have been proposed in other fields, in particular computer vision, it is not clear whether they resolve the challenges posed by medical data. To address this important issue, we propose a comprehensive, reproducible benchmark for comparing different methods, comprising multiple tests with both near- and far-OOD data. Our benchmark covers two public datasets, eICU and MIMIC-IV, spanning multiple hospitals and tens of thousands of ICU patients. We evaluate density-based methods and SOTA post-hoc detectors. |
2309.10889 | Non-Orthogonal Time-Frequency Space Modulation | This paper proposes a Time-Frequency Space Transformation (TFST) to derive
non-orthogonal bases for modulation techniques over the delay-doppler plane. A
family of Overloaded Delay-Doppler Modulation (ODDM) techniques is proposed
based on the TFST, which enhances flexibility and efficiency by expressing
modulated signals as a linear combination of basis signals. A Non-Orthogonal
Time-Frequency Space (NOTFS) digital modulation is derived for the proposed
ODDM techniques, and simulations show that they offer high-mobility
communication systems with improved spectral efficiency and low latency,
particularly in challenging scenarios such as high overloading factors and
Additive White Gaussian Noise (AWGN) channels. A modified sphere decoding
algorithm is also presented to efficiently decode the received signal. The
proposed modulation and decoding techniques contribute to the advancement of
non-orthogonal approaches in the next-generation of mobile communication
systems, delivering superior spectral efficiency and low latency, and offering
a promising solution towards the development of efficient high-mobility
communication systems. | Mahdi Shamsi, Farokh Marvasti | 2023-09-19T19:29:59 | http://arxiv.org/abs/2309.10889v3 | # Non-Orthogonal Time-Frequency Space Modulation
###### Abstract
This paper proposes a Time-Frequency Space Transformation (TFST) to derive non-orthogonal bases for modulation techniques over the delay-doppler plane. A family of Overloaded Delay-Doppler Modulation (ODDM) techniques is proposed based on the TFST, which enhances flexibility and efficiency by expressing modulated signals as a linear combination of basis signals. A Non-Orthogonal Time-Frequency Space (NOTFS) digital modulation is derived for the proposed ODDM techniques, and simulations show that they offer high-mobility communication systems with improved spectral efficiency and low latency, particularly in challenging scenarios such as high overloading factors and Additive White Gaussian Noise (AWGN) channels. A modified sphere decoding algorithm is also presented to efficiently decode the received signal. The proposed modulation and decoding techniques contribute to the advancement of non-orthogonal approaches in the next-generation of mobile communication systems, delivering superior spectral efficiency and low latency, and offering a promising solution towards the development of efficient high-mobility communication systems.
overloaded modulation, NOTFS, Delay-Doppler, inverse systems, sphere decoding, iterative method.
## I Introduction
In the next generation of mobile communication systems, channel impairments, particularly in the case of Doppler channels, pose significant challenges that need addressing. To fill this need, Orthogonal Time Frequency Space (OTFS) modulation was proposed, which compensates for channel impairments [1, 2]. Recent studies have demonstrated that OTFS can achieve the same spectral efficiency performance as Orthogonal Frequency Division Multiplexing (OFDM) based techniques. Moreover, OTFS can be utilized in high-mobility user scenarios, which is one of the proposed goals of the next mobile generation according to 3GPP visions. However, despite its performance being comparable to OFDM, the 2D kernels of OTFS inevitably result in larger latencies during communication procedures. To address these shortcomings, we propose a new modulation technique that retains the advantages of OTFS and OFDM but omits the orthogonality. Our approach introduces a Time-Frequency Space Transformation (TFST) to derive non-orthogonal bases and create a class of modulation techniques over the delay-doppler plane. The class includes previously studied techniques, such as Time Division Multiplexing (TDM), Frequency Division Multiplexing (FDM), Code Division Multiple Access (CDMA), OFDM, and OTFS.
While researchers have studied Faster Than Nyquist (FTN) signaling [3] and overloaded CDMA [4] to address these constraints, an additional proposed solution is Spectrally Efficient FDM (SEFDM) [5], which has demonstrated promising results in increasing spectral efficiency without sacrificing signal quality. However, like FTN signaling, overloaded CDMA and SEFDM have not yet been widely accepted beyond research. Our proposed modulation technique improves upon these existing methods and provides a more efficient and effective solution for high-mobility communication systems. This new technique has the potential to become the standard for high-mobility communication systems such as 6G and beyond, where low latency and high spectral efficiency are vital.
In Section II, we propose the TFST and derive a new class of Delay-Doppler (DD) modulation techniques. These modulation techniques offer the benefits of both OTFS and OFDM, without the orthogonality constraints. Section III is dedicated to introducing a 2D version of Sphere Decoding (SD) to improve the performance of the proposed approach. In Section IV, we showcase the performance of the proposed technique using simulations. Finally, Section V concludes the paper by summarizing
the advantages of the proposed modulation technique and its potential to become the standard for high-mobility communication systems in the future.
## II Overloading Delay-Doppler Modulation Techniques
In this section, we introduce a novel class of 2D modulation techniques, which are facilitated by a newly proposed transform called TFST. This transform enables the analysis of an arbitrary complex continuous time signal in a 2D format, specifically in the delay-Doppler domain. We examine various properties of the TFST, such as its shift invariance characteristics and its direct connections to both time and Fourier signal representations.
By considering the TFST, we derive a corresponding group of signal bases that can effectively span the domain. We further restrict the bases to specific time and frequency ranges, through which we can establish a set of bases for the proposed category of modulation techniques. These techniques offer enhanced flexibility and efficiency compared to traditional methods, as they provide comprehensive representation of signals in a 2D framework.
The TFST of an arbitrary complex-valued signal \(x(t)\) is defined as (\(-\infty<\tau<\infty\,,\,-\infty<\nu<\infty\)):
\[\mathcal{M}_{x}^{\lambda,\mu}(\tau,\nu)\triangleq\sqrt{\lambda T}\,\sum\limits _{n=-\infty}^{+\infty}\,x(\tau+n\lambda T)\,e^{-j2\pi\frac{n\nu T}{\mu}},\]
where \(\tau\) represents the delay parameter, \(\nu\) the Doppler frequency parameter, and \((\lambda,\mu)\) the transform parameters.
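As a numerical sanity check on this definition, the (necessarily truncated) sum can be evaluated directly; the truncation length and the windowed-tone test signal below are our own illustrative choices:

```
import numpy as np

def tfst(x, tau, nu, lam=1.0, mu=1.0, T=1.0, n_max=200):
    """Truncated TFST M_x^{lam,mu}(tau, nu) of a signal given as a callable x(t)."""
    n = np.arange(-n_max, n_max + 1)
    samples = np.array([x(tau + k * lam * T) for k in n])
    phases = np.exp(-1j * 2 * np.pi * n * nu * T / mu)
    return np.sqrt(lam * T) * np.sum(samples * phases)

# Example: evaluate the TFST of a windowed tone at one delay-Doppler point.
sig = lambda t: np.exp(1j * 2 * np.pi * 3.0 * t) * (abs(t) < 100)
value = tfst(sig, tau=0.2, nu=0.5)
```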
Shift invariance: The shift invariance property of the TFST can be shown by considering a signal that has undergone a delay and a Doppler shift. Let \(r(t)=x(t-\tau_{0})e^{j2\pi\nu_{0}(t-\tau_{0})}\), where \(\tau_{0}\) and \(\nu_{0}\) represent the delay and Doppler shift parameters, respectively. The TFST of \(r(t)\) can be computed as:
\[\mathcal{M}_{r}^{\lambda,\mu}(\tau,\nu)=\mathcal{M}_{x}^{\lambda,\mu}(\tau- \tau_{0},\nu-\lambda\mu\nu_{0})e^{-j2\pi\nu_{0}(\tau-\tau_{0})}.\]
Periodicity: Another important property of the TFST is its periodicity in both the time and frequency domains. Specifically, for any arbitrary complex-valued signal \(x(t)\), we have:
\[\mathcal{M}_{x}^{\lambda,\mu}(\tau+\lambda T,\nu) = e^{j2\pi\nu T/\mu}\,\mathcal{M}_{x}^{\lambda,\mu}(\tau,\nu),\] \[\mathcal{M}_{x}^{\lambda,\mu}\left(\tau,\nu+\mu\Delta f\right) = \mathcal{M}_{x}^{\lambda,\mu}(\tau,\nu),\]
where \(\Delta f=1/T\). Together, the shift invariance and periodicity properties of the TFST enable efficient analysis and processing of time-varying signals in the delay-Doppler domain.
Multiplication property: The TFST of the product of two signals \(a(t)\) and \(b(t)\), denoted \(c(t)=a(t)\,b(t)\), is given by \(\mathcal{M}_{c}^{\lambda,\mu}(\tau,\nu)=\frac{\sqrt{\lambda T}}{\lambda\mu}\,I\), where \(I\) is defined as

\[I \triangleq \int\limits_{0}^{\mu\Delta f}\mathcal{M}_{a}^{\lambda,\mu}(\tau,\nu-\nu^{\prime})\mathcal{M}_{b}^{\lambda,\mu}(\tau,\nu^{\prime})\,d\nu^{\prime} = \int\limits_{0}^{\mu\Delta f}\mathcal{M}_{a}^{\lambda,\mu}(\tau,\nu^{\prime})\mathcal{M}_{b}^{\lambda,\mu}(\tau,\nu-\nu^{\prime})\,d\nu^{\prime}.\]
Convolution property: The TFST of the convolution of two signals \(a(t)\) and \(b(t)\), denoted \(c(t)=a(t)\star b(t)\), is given by
\[\mathcal{M}_{c}^{\lambda,\mu}(\tau,\nu)=\frac{1}{\sqrt{\lambda T }}\int\limits_{0}^{\lambda T}\mathcal{M}_{a}^{\lambda,\mu}(\tau-\tau^{\prime},\nu)\,\mathcal{M}_{b}^{\lambda,\mu}(\tau^{\prime},\nu)\,d\tau^{\prime}\] \[=\frac{1}{\sqrt{\lambda T}}\int\limits_{0}^{\lambda T}\mathcal{M} _{a}^{\lambda,\mu}(\tau^{\prime},\nu)\,\mathcal{M}_{b}^{\lambda,\mu}(\tau- \tau^{\prime},\nu)\,d\tau^{\prime}.\]
The time domain signal \(x(t)\) and its Fourier transform \(\mathcal{F}_{x}(f)=\int\limits_{-\infty}^{\infty}\,x(t)e^{-j2\pi ft}\,dt\) can be obtained from its TFST representation \(\mathcal{M}_{x}(\tau,\nu)\) using the following equations:
\[x(t) = \frac{1}{\lambda\mu}\sqrt{\lambda T}\int\limits_{0}^{\mu\Delta f }\mathcal{M}_{x}^{\lambda,\mu}(t,\nu)\,d\nu,\] \[\mathcal{F}_{x}(f) = \frac{1}{\sqrt{\lambda T}}\int\limits_{0}^{\lambda T}\mathcal{M} _{x}^{\lambda,\mu}(\tau,\lambda\mu f)\,e^{-j2\pi f\tau}\,d\tau.\]
### _Derivation of the Modulation Technique_
A signal in the delay domain, located at \(\tau_{0}\) and in the Doppler domain, located at \(\nu_{0}\) (\(0\leq\tau_{0}<\lambda T\), \(0\leq\nu_{0}<\mu\Delta f\)), can be expressed as:
\[\mathcal{M}_{(p,\tau_{0},\nu_{0})}^{\lambda,\mu}(\tau,\nu)\triangleq\delta(\tau-\tau_{0})\,\delta(\nu-\nu_{0}),\qquad 0\leq\tau<\lambda T,\;0\leq\nu<\mu\Delta f,\]

extended to the whole delay-Doppler plane through the quasi-periodicity relations above; in the time domain this corresponds to the impulse train

\[p_{(\tau_{0},\nu_{0})}^{\lambda,\mu}(t)=\frac{\sqrt{\lambda T}}{\lambda\mu}\sum_{n=-\infty}^{+\infty}e^{j2\pi\frac{n\nu_{0}T}{\mu}}\,\delta(t-\tau_{0}-n\lambda T).\]
It can be shown that the time domain signals \(p_{(\tau_{0},\nu_{0})}(t)\), where \(0\leq\tau_{0}<\lambda T\) and \(0\leq\nu_{0}<\mu\Delta f\), form a basis for the space of time domain signals. Any time domain signal \(x(t)\) can be expressed as a linear combination of the basis signals \(p_{(\tau_{0},\nu_{0})}(t)\), i.e.:
\[x(t)=\int_{0}^{\lambda T}\int_{0}^{\mu\Delta f}c_{x}^{\lambda, \mu}(\tau_{0},\nu_{0})\,p_{(\tau_{0},\nu_{0})}^{\lambda,\mu}(t)\,d\tau_{0}\,d \nu_{0},\] \[c_{x}^{\lambda,\mu}(\tau_{0},\nu_{0})=\int_{-\infty}^{+\infty}p_ {(\tau_{0},\nu_{0})}^{\lambda,\mu}(t)^{*}\,x(t)\,dt.\]
And the coefficient \(c_{x}(\tau_{0},\nu_{0})\) corresponding to the basis signal \(p_{(\tau_{0},\nu_{0})}(t)\) is the value of the TFST representation of \(x(t)\) at \(\tau=\tau_{0}\) and \(\nu=\nu_{0}\), i.e.:
\[c_{x}^{\lambda,\mu}(\tau_{0},\nu_{0})=\frac{1}{\lambda\mu}\mathcal{M}_{x}^{ \lambda,\mu}(\tau_{0},\nu_{0}).\]
### _Non-Orthogonal Time Frequency Space_
The derivation of Non-Orthogonal Time Frequency Space (NOTFS) modulation begins by defining the basis signals \(\psi_{(\tau_{0},\nu_{0})}^{(q,s),(\lambda,\mu)}(t)\) as a product of the time and frequency pulses \(q(t)\) and \(S(f)\), and the TD signal \(p_{(\tau_{0},\nu_{0})}^{\lambda,\mu}(t)\), as follows:
\[\psi_{(\tau_{0},\nu_{0})}^{(q,s),(\lambda,\mu)}(t)\triangleq\left(p_{(\tau_{0 },\nu_{0})}^{\lambda,\mu}(t)\,q(t)\right)\star s(t),\begin{cases}0\leq\tau_{0} <\lambda T\\ 0\leq\nu_{0}<\mu\Delta f\end{cases}\]
where \(q(t)\approx 0\,,\,t\notin[\,0\,,\,N\epsilon T)\) and \(|\mathcal{F}_{s}(f)|\!=\!\left|\int_{-\infty}^{+\infty}\!\!s(t)e^{-j2\pi ft}\,dt\right|\approx 0\,,\,f\notin[\,0\,,\,M\kappa\Delta f)\). When rectangular pulses are used, a simplified expression for the basis signals is obtained as:
\[\psi_{(\tau_{0},\nu_{0})}^{(q,s),(\lambda,\mu)}(t)=\frac{\sqrt{\lambda T}}{ \lambda\mu}\sum_{n=0}^{N^{\prime}-1}e^{j2\pi\nu_{0}n\mu^{-1}T}\,s(t-\tau_{0}- n\lambda T),\]
where \(N^{\prime}\approx\frac{\epsilon}{\lambda}\times N\). Applying the TFST representation to the basis signals leads to
\[\mathcal{M}_{\psi,\tau_{0},\nu_{0}}^{\lambda,\mu}(\tau,\nu)=\frac{1}{\lambda \mu}\mathcal{M}_{q}^{\lambda,\mu}\left(\tau_{0},\nu-\nu_{0}\right)\,\mathcal{ M}_{s}^{\lambda,\mu}\left(\tau-\tau_{0},\nu\right),\]
which relates the TFST representation of the basis signals to that of the pulses \(q(t)\) and \(s(t)\).
Now we can perform a concentration analysis to determine the magnitude of an arbitrary modulation base. As shown in
\[\left|\mathcal{M}_{\psi,\tau_{0},\nu_{0}}^{\lambda,\mu}(\tau,\nu) \right|^{2}=\frac{1}{(\lambda\mu)^{2}}\frac{\sin^{2}\left(\pi N^{\prime\prime} \frac{(\nu-\nu_{0})}{\mu\Delta f}\right)}{\sin^{2}\left(\pi\frac{(\nu-\nu_{0 })}{\mu\Delta f}\right)}\times\ldots\] \[\frac{\sin^{2}\left(\pi M^{\prime}\frac{(\tau-\tau_{0})}{\lambda T }\right)}{\sin^{2}\left(\pi\frac{(\tau-\tau_{0})}{\lambda T}\right)},\]
we can observe that the signal is concentrated around \((\tau_{0},\nu_{0})\). With this in mind, each modulation base can be defined by the following equation:
\[\chi_{(k,l)}^{\lambda,\mu}(t) \triangleq \frac{1}{\sqrt{MN}}\,\psi_{\left(\tau_{0}=\frac{l\,\Phi T}{M},\;\nu_{0}=\frac{k\,\theta\Delta f}{N}\right)}^{(q,s),(\lambda,\mu)}(t).\]
This suggests an Overloaded Delay-Doppler Modulation (ODDM) where the modulated signal is expressed as:
\[x(t)\!=\!\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}x[k,l]\,\chi_{(k,l)}^{\lambda,\mu}(t).\]
By digitally implementing the ODDM, the group of NOTFS modulations can be shown as:
\[x(t) \approx \frac{\rho}{\mu}\frac{1}{\sqrt{MN}}\sum_{n=0}^{N^{\prime\prime}-1}\sum_{m=0}^{M^{\prime\prime}-1}g(t-n\lambda T)\times\ldots\] \[X_{\mathrm{Tr}}[n,m]\,e^{j2\pi m\rho\Delta f(t-n\lambda T)},\] \[X_{\mathrm{Tr}}[n,m] \triangleq \sum_{k=0}^{N^{\prime\prime}-1}\sum_{l=0}^{M^{\prime\prime}-1}x[k,l]\,e^{j2\pi\left(\frac{\theta}{\mu}\frac{nk}{N}-\Phi\frac{ml}{M}\right)},\] \[\begin{cases}n=0,1,\cdots,N^{\prime\prime}-1\\ m=0,1,\cdots,M^{\prime\prime}-1\end{cases},\] \[g(t) \triangleq \begin{cases}\frac{1}{\sqrt{\lambda T}}\,,\,t\in[0\,,\,\lambda T)\\ 0\,\,\,\text{otherwise}.\end{cases}\]
For the rest of the article, we will focus on two typical groups of essentially equivalent parameter settings, based on the overloading parameters (\(\alpha\) and \(\beta\) control compression in the time and frequency domains, respectively), and continue with the latter:

Group 1: \(\theta=\mu,\;\Phi\rho=1,\;\kappa=\rho=\beta,\;\lambda=\epsilon=\alpha\);

Group 2: \(\lambda=\rho=\kappa=\epsilon=1,\;\frac{\theta}{\mu}=\alpha,\;\Phi=\beta\).
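To make the digital implementation concrete, a minimal NumPy sketch of a Group-2 NOTFS modulator is given below (the rectangular \(g\), \(T=\Delta f=1\), and the oversampling factor are simplifying assumptions of ours, not the authors' reference implementation):

```
import numpy as np

def notfs_modulate(x, alpha, beta, samples_per_T=16):
    """Discrete-time NOTFS modulator sketch, Group-2 parameters.

    x : (N, M) delay-Doppler symbol frame; alpha = theta/mu, beta = Phi.
    """
    N, M = x.shape
    # Non-orthogonal transform X_Tr[n, m] with compression factors (alpha, beta).
    F_t = np.exp(1j * 2 * np.pi * alpha * np.outer(np.arange(N), np.arange(N)) / N)
    F_f = np.exp(-1j * 2 * np.pi * beta * np.outer(np.arange(M), np.arange(M)) / M)
    X_tr = F_t @ x @ F_f.T
    # Per-slot multicarrier synthesis with a rectangular pulse over N slots.
    t = np.arange(samples_per_T) / samples_per_T          # normalized time, T = 1
    carriers = np.exp(1j * 2 * np.pi * np.outer(t, np.arange(M)))
    return (carriers @ X_tr.T).T.reshape(-1) / np.sqrt(M * N)

frame = (2 * np.random.randint(0, 2, (8, 16)) - 1).astype(complex)  # BPSK frame
waveform = notfs_modulate(frame, alpha=0.8, beta=0.8)  # alpha*beta < 1: overloaded
```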
## III Sphere Decoding and Inverse Systems
Since the ODDM techniques are two-dimensional in nature, we suggest a modified version of the SD algorithm. To establish a suitable benchmark for comparison, we also offer a brief overview of a family of iterative inverse methods whose output can serve as an initial value for the decoding process.
### _2-D Sphere Decoding_
To describe our communication system depicted in Fig. 1, we use \(\mathbf{X}\) as the transmitted signal. We represent the received signal frame by \(\mathbf{Y}=H_{1}.\mathbf{X}.H_{2}^{\dagger}+\mathbf{Z}\), where \(\mathbf{Z}\) is the additive noise of the communication channel at the receiver. For the sake of brevity, we use the notation \((G,H)\). Specifically, we express \(G\) as \(G=T_{1}^{\dagger}.H_{1}.T_{1}\) and \(H\) as \(H=T_{2}^{\dagger}.H_{2}.T_{2}\), define
the objective function as \(J(\mathbf{S})\triangleq\big{|}\big{|}\mathbf{Y}_{T}-G\mathbf{S}H^{\dagger}\big{|}\big{|}_{F}^{2}\) (where \(\mathbf{Y}_{\mathbf{T}}\triangleq T_{1}.\mathbf{Y}.T_{2}^{\dagger}\)), and set the goal to solve \(\widehat{\mathbf{S}}=\arg\min_{\mathbf{S}\in\mathbb{A}^{M\times N}}J(\mathbf{S})<g^{2}\). Here, \(\mathbb{A}\) is the set of constellation points of sending symbols, and \(g\) is the search radius. To solve this problem, the SD algorithm presented in Alg.1 (with update routine \(\Psi\) as in Alg.2) is used.
We first calculate the QR decompositions of \(H\) and \(G\), denoted as \(G=Q_{G}R_{G}\) and \(H=Q_{H}R_{H}\). Accordingly, the partial objective functions are defined using \(R=R_{G}\), \(L=R_{H}^{\dagger}\), and \(\mathbf{U}=Q_{G}^{\dagger}.\mathbf{Y}_{T}.Q_{H}\), as follows1
Footnote 1: the indexing method for a matrix (or a frame) \(A\) is defined as: \(A_{m:M,n:N}\triangleq\begin{bmatrix}A_{m,n}&\dots&A_{m,N}\\ \vdots&\ddots&\vdots\\ A_{M,n}&\dots&A_{M,N}\end{bmatrix}\), \(A_{m:M,n}\triangleq\begin{bmatrix}A_{m,n}\\ \vdots\\ A_{M,n}\end{bmatrix}\), and \(A_{m,n:N}\triangleq\begin{bmatrix}A_{m,n}&\dots&A_{m,N}\end{bmatrix}\).
\[J_{m,n}(S_{m:M,n:N})\triangleq\left|U_{m,n}-R_{m,m:M}S_{m:M,n:N}L_{m:M,n}\right| ^{2}.\]
Thus, the overall objective function can be rewritten as \(J(\mathbf{S})\triangleq\sum_{m=0}^{M}\sum_{n=0}^{N}J_{m,n}(S_{m:M,n:N})\), which facilitates applying the SD algorithm to solve for the optimal solution \(\widehat{\mathbf{S}}\).
It is worth mentioning that the use of 2-D SD can significantly reduce computational complexity compared to 1D SD, as demonstrated in Table I. This complexity reduction is crucial in effectively optimizing communication systems with large numbers of symbols or frames. Consequently, the 2-D SD algorithm improves system performance while simultaneously mitigating errors.
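A sketch of the QR-based preprocessing and of the full objective \(J(\mathbf{S})\) that the 2-D SD minimizes is shown below (the tree search itself is omitted, and the index ranges are written so that the matrix products are conformable):

```
import numpy as np

def sd_preprocess(G, H, Y_T):
    """QR-based setup for the 2-D sphere decoder: returns (R, L, U)."""
    Q_g, R = np.linalg.qr(G)
    Q_h, R_h = np.linalg.qr(H)
    L = R_h.conj().T                      # lower triangular
    U = Q_g.conj().T @ Y_T @ Q_h
    return R, L, U

def objective(R, L, U, S):
    """Full objective J(S) as the sum of the partial terms J_{m,n}."""
    M, N = U.shape
    J = 0.0
    for m in range(M):
        for n in range(N):
            # Triangularity of R and L restricts each term to a trailing block of S.
            J += abs(U[m, n] - R[m, m:] @ S[m:, n:] @ L[n:, n]) ** 2
    return J
```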
```
Result: \(\hat{S}=\arg\min_{S}J(S)\;s.t.\;S_{m,n}\in\mathbb{A}\)
Initialization: \(\hat{X}\in\{0\}^{M\times N\times\kappa},\;\hat{J}\in\{0\}^{\kappa}\)
Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M,N)\)
for \(i=1:\min(M,N)-1\) do
    for \(k=1:i-1\) do
        Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M-k,N-i)\)
        Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M-i,N-k)\)
    end for
    Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M-i,N-i)\)
end for
if \(M>N\) then
    for \(i=N:M-1\) do
        for \(k=0:N-1\) do
            Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M-i,N-k)\)
        end for
    end for
else if \(M<N\) then
    for \(i=M:N-1\) do
        for \(k=0:M-1\) do
            Update: \(\hat{X},\hat{J}\leftarrow\Psi(\hat{X},\hat{J},M-k,N-i)\)
        end for
    end for
end if
return \(\hat{S}=\hat{X}_{:,:,1}\)
```
**Algorithm 1** 2-D Sphere Decoding.
Fig. 1: Typical block-diagram of Delay-Doppler (DD) modulation techniques.
### _Inverse System_
The Iterative Method (IM) was introduced to address distortion resulting from non-ideal interpolation. By defining \(G\) as a distortion operator, it is possible to recursively implement IM to compute \(G^{-1}\) in order to compensate for its distortion [6, 7]. The IM recursive equation is given by:
\[x_{k}=\lambda(x_{0}-G(x_{k-1}))+x_{k-1},\]
where \(x_{k}\) represents the estimated signal after \(k\) iterations and \(\lambda\) is a relaxation parameter.
In soft decoding, the following steps are taken [5]:
\[d \gets 1-r/\eta\] \[w \leftarrow\lambda(w_{0}-G^{\text{soft}}(w,d))+w,\]
where \(p_{i}\) and \(q_{i}\) denote the real and imaginary components of \(w_{i}\), respectively, and \(s_{i}\) is defined as:

\[s_{i}=\begin{cases}p_{i},&\text{if }|p_{i}|<d\\ \operatorname{sign}(p_{i}),&\text{otherwise}\end{cases}\;+\;j\times\begin{cases}q_{i},&\text{if }|q_{i}|<d\\ \operatorname{sign}(q_{i}),&\text{otherwise}\end{cases}.\]
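A compact sketch of the IM recursion with the optional soft-decision clipping described above (the linear operator \(A\) below is a hypothetical stand-in for the distortion \(G\)):

```
import numpy as np

def iterative_method(G, x0, lam=0.5, iters=200, soft_d=None):
    """IM recursion x_k = lam*(x0 - G(x_{k-1})) + x_{k-1}; at the fixed point
    G(x*) = x0, i.e., x* = G^{-1}(x0). soft_d enables the soft-decoding clip."""
    x = np.array(x0, dtype=complex)
    for _ in range(iters):
        x = lam * (x0 - G(x)) + x
        if soft_d is not None:
            re, im = x.real, x.imag
            re = np.where(np.abs(re) < soft_d, re, np.sign(re))
            im = np.where(np.abs(im) < soft_d, im, np.sign(im))
            x = re + 1j * im
    return x

A = np.eye(4) + 0.1 * np.random.randn(4, 4)   # hypothetical mild distortion
y = np.random.randn(4) + 0j
x_hat = iterative_method(lambda v: A @ v, y)
```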
## IV Simulation Results
In this section, we present an assessment of the performance of the proposed modulation techniques and detection algorithms through simulation results. In order to achieve a better understanding, we define the overloading factor as \(\eta\triangleq\frac{1}{\alpha\beta}-1\). The IM with soft decoding is employed to detect the received NOTFS signals in an AWGN channel. The proposed approach shows promising performance by achieving a low BER (\(10^{-4}-10^{-5}\)) under \(30\%\) overloading, as demonstrated by the results shown in Fig. 2.
Furthermore, Fig. 3 illustrates that the system can attain an acceptable level of performance even with super overloading in small frames. It should be noted that by increasing the number of iterations, the system's performance can be further enhanced. Through the adoption of the proposed 2D SD methodology, it is evident that the system's performance can be further improved. As presented in Fig. 4, low values of BER can be achieved even under a high degree of overloading, specifically at \(66\%\).
## V Conclusion
We proposed a novel modulation technique that combines the advantages of OTFS and OFDM while omitting the orthogonality constraint. This new technique utilizes a Time-Frequency Space Transformation (TFST) to derive non-orthogonal bases, resulting in a family of modulation techniques over the delay-doppler plane. We demonstrated that our proposed method achieves higher spectral efficiency and lower latency while maintaining similar performance to OTFS and OFDM. We presented simulation results for our proposed technique and demonstrated its superior performance compared to
\begin{table}
\begin{tabular}{|c|c|c|}
\hline method & 2d-SD (for \(J_{m,n}\)) & 1d-SD (for \(J_{k}\)) \\
\hline \(QR\) Decomp. & \(M\times M\) and \(N\times N\) & \(MN\times MN\) \\
\hline complex \(\times\) & \(\frac{MN(\min(M,N)+3)}{2}\) & \(\frac{MN(1+MN)}{2}\) \\
\hline complex \(+\) & \(\frac{MN(\min(M,N)+1)}{2}\) & \(\frac{MN(1+MN)}{2}\) \\
\hline \end{tabular}
\end{table} TABLE I: complex operations.
Fig. 2: BER vs EB/N0: NOTFS, AWGN channel, IM and soft decoding (different \(\lambda\)s).
existing methods in various scenarios. We also introduced a novel implementation of sphere decoding in two dimensions that significantly reduces computational complexity compared to one-dimensional sphere decoding. Our proposed technique has the potential to become the standard for high-mobility communication systems such as 6G and beyond, where low latency and high spectral efficiency are vital.
| This paper proposes a Time-Frequency Space Transformation (TFST) as a method to derive non-orthogonal bases over the delay-Doppler plane. A family of Overloaded Delay-Doppler Modulation (ODDM) techniques based on the TFST is proposed, which improves the flexibility and efficiency of modulated signals expressed as linear combinations of basis signals. A Non-Orthogonal Time-Frequency Space (NOTFS) digital modulation is derived for the proposed ODDM techniques, and simulation results show that it offers high-mobility communication with improved spectral efficiency and low latency, in particular under challenging conditions such as high overloading factors and Additive White Gaussian Noise (AWGN) channels. A modified sphere decoding algorithm for efficiently decoding the received signal is also proposed. |
2309.06033 | Energy-Aware Federated Learning with Distributed User Sampling and
Multichannel ALOHA | Distributed learning on edge devices has attracted increased attention with
the advent of federated learning (FL). Notably, edge devices often have limited
battery and heterogeneous energy availability, while multiple rounds are
required in FL for convergence, intensifying the need for energy efficiency.
Energy depletion may hinder the training process and the efficient utilization
of the trained model. To solve these problems, this letter considers the
integration of energy harvesting (EH) devices into a FL network with
multi-channel ALOHA, while proposing a method to ensure both low energy outage
probability and successful execution of future tasks. Numerical results
demonstrate the effectiveness of this method, particularly in critical setups
where the average energy income fails to cover the iteration cost. The method
outperforms a norm based solution in terms of convergence time and battery
level. | Rafael Valente da Silva, Onel L. Alcaraz López, Richard Demo Souza | 2023-09-12T08:05:39 | http://arxiv.org/abs/2309.06033v1 | # Energy-Aware Federated Learning with Distributed User Sampling and Multichannel ALOHA
###### Abstract
Distributed learning on edge devices has attracted increased attention with the advent of federated learning (FL). Notably, edge devices often have limited battery and heterogeneous energy availability, while multiple rounds are required in FL for convergence, intensifying the need for energy efficiency. Energy depletion may hinder the training process and the efficient utilization of the trained model. To solve these problems, this letter considers the integration of energy harvesting (EH) devices into a FL network with multi-channel ALOHA, while proposing a method to ensure both low energy outage probability and successful execution of future tasks. Numerical results demonstrate the effectiveness of this method, particularly in critical setups where the average energy income fails to cover the iteration cost. The method outperforms a norm based solution in terms of convergence time and battery level.
Energy Harvesting, Federated Learning, Multi-channel ALOHA, User Sampling.
## I Introduction
Federated learning (FL) has emerged as a prominent research topic within the wireless communication community, gaining significant attention in recent years [1]. In FL, edge devices collaboratively train a global model by only sharing local model updates, which provides a higher protection against the exposure of sensitive data, such as surveillance camera images, geolocation data, and health information. However, such collaborative training requires multiple communication rounds, raising spectral and energy efficiency concerns [1]. The latter is particularly important for edge devices, given their inherent energy limitations.
The sixth generation (6G) of wireless systems targets 10-100 times more energy efficiency than 5G, which is critical for supporting massive Internet of Things (IoT) networks [2]. Such demanding vision requires a meticulous design of the communication system, where medium access control (MAC) mechanisms play a major role. Grant-free random access protocols, such as slotted ALOHA (SA) with multiple channels, are suitable candidates for massive IoT applications, since control signaling is much reduced. Moreover, energy availability must be considered to support self-sustainable networks, in which _energy neutrality_[3], balancing availability and expenditure of energy resources, is essential.
Existing literature on FL indirectly addresses spectral and energy efficiency by optimizing the convergence time, leveraging informative updates from users [4, 5] or the relationship between local and global models [6], reducing the required number of iterations. These approaches often overlook the initial battery levels of different devices, which can result in energy depletion during the training process and hinder the overall progress. Even if the training process is not impeded, the remaining energy may be insufficient for the execution of future tasks and the utilization of the trained model.
This letter considers the use of EH devices, which eliminate the need for frequent battery replacement [7], while also allowing energy neutrality. Prior works in [8, 9] considered some sort of energy income for FL networks. In [8], a wireless-powered FL system is considered and the tradeoff between model convergence and the transmission power of the access point is derived. The authors in [9] consider EH devices with multiple base stations (BS) and propose a user selection algorithm to minimize the training loss. However, [8, 9] overlook the residual energy in the devices at the end of the training process and the energy imbalances among users, which are considered in this letter. Moreover, they do not consider a random access protocol and massive IoT settings. We present a novel energy-aware user sampling technique for a FL network under a multichannel SA protocol. The proposed method enables users to make informed decisions regarding their participation in an iteration, controlling the computation cost. Numerical results corroborate the effectiveness of our method. In critical energy income setups, lower error and higher energy availability can be achieved compared to [4], which solely considers the informativeness of updates. We can achieve an error 46.72% smaller while maintaining 37% more energy in a network of 100 devices, and the performance gap increases with the number of deployed devices.
## II System Model
Consider a wireless network comprising \(K\) users, indexed as \(k\in\mathcal{K}=\{1,2,\ldots,K\}\), a BS, and \(M\) orthogonal channels. Each user has a dataset \(\mathcal{D}_{k}=\{\mathbf{x}_{k},\mathbf{y}_{k}\}\) associated with its respective local model. Here, \(\mathbf{x}_{k}\) is the unlabeled sample vector, with size \(L\times 1\), and \(\mathbf{y}_{k}\) is the ground truth vector for supervised learning. The common goal of every device is to minimize a global loss function \(F(\mathbf{w})\) as
\[\min_{\mathbf{w}}\frac{1}{K}\sum_{k=1}^{K}f_{k}(\mathbf{w}), \tag{1}\]
where \(f_{k}(\mathbf{w})=\ell(\mathbf{w},\mathbf{x}_{k},\mathbf{y}_{k})\) is the local loss function for the \(k\)-th user and \(\mathbf{w}\) is the global model. In FL, the problem in (1) is tackled by distributively minimizing \(f_{k}(\mathbf{w})\) over iterations, which yields a local model update \(\mathbf{g}_{k}(t)=\nabla f_{k}(\mathbf{w}(t))\) for the stochastic gradient descendent method. To ensure collaborative learning, each user transmits \(\mathbf{g}_{k}(t)\) to the BS, which employs an aggregation function to update the global model. Here, we consider FedAvg [10], thus, the global model is updated as
\[\mathbf{w}(t+1)=\mathbf{w}(t)-\mu\sum_{k\in\mathcal{K}}d_{k}\mathbf{g}_{k}(t), \tag{2}\]
where \(\mu>0\) is the learning rate and \(d_{k}=|\mathcal{D}_{k}|/\sum_{k^{\prime}=1}^{K}|\mathcal{D}_{k^{\prime}}|\). Then, the BS broadcasts \(\mathbf{w}(t+1)\) for all users.
From (2), we can observe that the size of the learning step is directly affected by the norm of the local update \(||\mathbf{g}_{k}(t)||\), which quantifies how informative the update is. In [4], the authors present a method to adaptively decide the transmission probability of users based on the local update norm, given by
\[p_{\text{tx},k}(t)=\max(\min(e\ln||\mathbf{g}_{k}(t)||-\lambda(t),1),0). \tag{3}\]
In this context, \(\lambda(t)\) serves as a feedback signal that ensures an efficient utilization of the \(M\) orthogonal channels in a multichannel SA setup 1. The value of \(\lambda(t)\) is determined by
Footnote 1: As discussed in [11], transmission errors (or collisions) may compromise the FL performance. However, following [4], the considered network maximizes the utilization of the available resources.
\[\lambda(t)=\lambda(t-1)+\mu_{1}(\hat{K}-M), \tag{4}\]
where \(\mu_{1}\) is a step size and \(\hat{K}\leq K\) is the number of transmissions that occurred at the previous iteration.
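In code, the transmission rule (3) and the feedback update (4) admit a direct reading (a minimal interpretation of the displayed formulas, not the reference implementation of [4]):

```
import numpy as np

def tx_probability(grad_norm, lam):
    """Transmission probability of Eq. (3), clipped to [0, 1]; grad_norm > 0."""
    return float(np.clip(np.e * np.log(grad_norm) - lam, 0.0, 1.0))

def update_lambda(lam, k_hat, M, mu1=0.1):
    """Feedback update of Eq. (4): drives the number of transmissions toward M."""
    return lam + mu1 * (k_hat - M)
```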
Note that this method does not consider the potentially limited energy availability at the devices. For instance, an EH user could repeatedly transmit and drain its battery in the process, rendering the execution of future tasks impossible. To mitigate this, we introduce a sleep probability and consider a strategy, depicted in Fig. 1, based on the following steps.
1. **Energy Harvesting:** At the start of an iteration, each device harvests \(\zeta_{k}(t)\) Joules of energy and stores in the battery if its capacity allows, being \(\zeta_{k}(t)\) a random variable with a predefined distribution.
2. **Engagement:** Each user decides whether to engage in the iteration with a sleep probability \[p_{s,k}(t)=1-\alpha\frac{B_{k}(t)}{B_{\max}},\] (5) where \(\alpha\) is a constant, \(B_{k}(t)\) is the current battery level, and \(B_{\max}\) is the battery capacity, which is the same for all devices. We propose this sleep probability to equalize the battery charge of all devices over time. The awaken users receive the global model \(\mathbf{w}(t)\) from the BS and compute their local model updates \(\mathbf{g}_{k}(t)\).
3. **Informative Multi-Channel SA:** Users transmit \(\mathbf{g}_{k}(t)\) with a probability given by (3). Transmissions occur through a randomly chosen channel among \(M\) channels. A transmission is only successful if there is no collision.
4. **Global Model Updates**: Following (2) the BS aggregates the local updates and broadcasts \(\mathbf{w}(t+1)\) and \(\lambda(t+1)\), which are assumed to be collision-free.
Following this procedure, the battery evolution model is
\[B_{k}(t) =B_{k}(t-1)+\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))\] \[-\delta_{\text{e},k}(t)(E_{k}^{\text{cmp}}+E_{k}^{\text{rx}})- \delta_{\text{tx},k}(t)E_{k}^{\text{tx}}, \tag{6}\]
where \(\delta_{\text{e},k}(t)\) and \(\delta_{\text{tx},k}(t)\) are indicator functions representing user engagement and transmission, respectively. They are equal to \(1\) when the corresponding event occurs and \(0\) otherwise. Additionally, \(E_{k}^{\text{cmp}}\), \(E_{k}^{\text{rx}}\), and \(E_{k}^{\text{tx}}\) are the computation, reception, and transmission energy costs, respectively, whose models are presented in Section III. Moreover, it is crucial to choose a precise value for \(\alpha\) in step 2) to ensure the proper functioning of the network, which is discussed in Section IV.
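A minimal sketch of one iteration of the battery recursion (6) together with the sleep probability (5) is given below (the energy costs are taken here as given scalars; their models follow in Section III):

```
def sleep_probability(B, B_max, alpha):
    """Sleep probability of Eq. (5); fuller batteries engage more often."""
    return 1.0 - alpha * B / B_max

def battery_step(B, zeta, engaged, transmitted, E_cmp, E_rx, E_tx, B_max):
    """One iteration of the battery recursion of Eq. (6); engaged/transmitted
    are the 0/1 indicators delta_e and delta_tx."""
    B = B + min(zeta, B_max - B)           # harvest, clipped by battery capacity
    B -= engaged * (E_cmp + E_rx)          # local computation + global model rx
    B -= transmitted * E_tx                # uplink transmission, if any
    return max(B, 0.0)                     # floor at zero for safety
```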
## III Energy Consumption Models
### _Local-Computation Model_
The computation complexity of a machine learning algorithm can be measured by the number of required floating point operations (FLOPs). Let \(W\) denote the number of FLOPs per data sample for a given model. The total number of FLOPs for the \(k\)-th user to perform one local update is
\[G_{k}=W|\mathcal{D}_{k}|. \tag{7}\]
Let \(f_{\text{clk},k}\) be the processor clock frequency (in cycles/s) of the \(k\)-th user and \(C_{k}\) be the number of FLOPs it processes within one cycle. Then, the time required for one local update is
\[t_{k}=\frac{G_{k}}{C_{k}f_{\text{clk},k}},\quad\forall k\in\mathcal{K}. \tag{8}\]
Moreover, for a CMOS circuit, the central processing unit (CPU) power is often modeled by its most predominant part: the dynamic power [12], which is proportional to the square of the supply voltage and to the operating clock frequency. Furthermore, for a low voltage supply, as in our case, the frequency scales approximately linearly with the voltage [12]. Therefore, the CPU power consumption can be written as [8]
\[p_{k}^{\text{cmp}}=\psi_{k}f_{\text{clk},k}^{3}\quad\forall k\in\mathcal{K}, \tag{9}\]
where \(\psi\) is the effective capacitance and depends on the chip architecture. Based on (8) and (9), the energy consumption of the computation phase for the \(k\)-th user is given by
\[E_{k}^{\text{cmp}}=t_{k}P_{k}^{\text{cmp}}=\psi_{k}\frac{G_{k}}{C_{k}}f_{\text {clk},k}^{2}. \tag{10}\]
Fig. 1: Users begin the iteration by harvesting energy. Then, a user may engage by computing its local model update \(\mathbf{g}_{k}(t)\). A user can either transmit or withhold its update. Transmissions occur through one of \(M\) channels using SA. If more than one user access the same channel, there is a collision.
### _Transceiver Model_
The energy consumed by the edge devices' transceivers is
\[E_{k}^{\text{comms}}=E_{k}^{\text{tx}}+E_{k}^{\text{rx}}+E_{k}^{\text{sleep}}, \tag{11}\]
where \(E_{k}^{\text{tx}}\) (\(E_{k}^{\text{rx}}\)) is the energy required to transmit (receive) a local (global) update while \(E_{k}^{\text{sleep}}\) is the consumed energy during the inactive time. Since \(E_{k}^{\text{sleep}}\) is much smaller than \(E_{k}^{\text{tx}}\) and \(E_{k}^{\text{rx}}\), we neglect its impact in the following.
Considering the transmission of local updates with a radiated power \(P_{k}^{\text{tx}}\), the power consumed by the edge transceivers can be modeled as [13]
\[P_{k}^{\text{total}}=\frac{P_{k}^{\text{tx}}}{\eta}+P_{\text{circ}}, \tag{12}\]
where \(\eta\) is the drain efficiency of the power amplifier (PA), and \(P_{\text{circ}}\) is a fixed power consumption that comprises all other transceiver circuits except the PA. Then, the energy required to transmit a local update is
\[E_{k}^{\text{tx}}=\frac{P_{k}^{\text{total}}}{R_{b}^{\text{tx}}}N_{k}, \tag{13}\]
where \(N_{k}\) is the local update size in bits, and \(R_{b}^{\text{tx}}\) is the bit rate in the uplink. Meanwhile, the energy consumed when receiving the global updates is modeled by
\[E_{k}^{\text{rx}}=\frac{P_{k}^{\text{rx}}}{R_{b}^{\text{rx}}}N, \tag{14}\]
where \(N\) is the global update size in bits, \(R_{b}^{\text{rx}}\) is the bit rate in the downlink, and \(P_{k}^{\text{rx}}\) is the receive power consumption, which includes \(P_{\text{circ}}\). Thus, \(P_{k}^{\text{rx}}\) is slightly greater than \(P_{\text{circ}}\), but usually smaller than \(P_{k}^{\text{total}}\).
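As a rough worked example, the cost models of this section can be evaluated with the Section V simulation values (the local dataset size, the update size in bits, and reading the transmit power as dBm are our own assumptions, not values fixed by the letter):

```
L, W = 10, 4 * 10                    # model size and FLOPs per sample, W = 4L
D_k = 10                             # hypothetical number of local samples
G_k = W * D_k                        # Eq. (7)
psi, C_k, f_clk = 1e-20, 20, 0.25e9
E_cmp = psi * (G_k / C_k) * f_clk**2          # Eq. (10): ~12.5 mJ

P_tx = 10 ** (3.3 / 10) * 1e-3                # 3.3 dBm -> ~2.14 mW (assumption)
eta, P_circ = 0.33, 1.33e-3
P_total = P_tx / eta + P_circ                 # Eq. (12)
N_bits, R_b = 32 * (L + 1), 1e6               # hypothetical update size, 1 Mbps
E_tx = P_total / R_b * N_bits                 # Eq. (13): a few microjoules
E_rx = 1.9e-3 / R_b * N_bits                  # Eq. (14) with P_rx = 1.9 mW
```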
## IV Sleep Probability Tuning
To ensure that a device saves enough energy for future tasks while still participating in the model training, we propose a precise selection of parameter \(\alpha\) based on the EH process and the desired battery level at the end of the training. Notice that the expected battery level with respect to \(k\) and assuming equal costs for all devices can be obtained from (6) as
\[\mathbb{E}[B_{k}(t)] =\mathbb{E}[B_{k}(t-1)]+\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[-\mathbb{E}[\delta_{\text{e},k}(t)](E^{\text{cmp}}+E^{\text{rx}})-\mathbb{E}[\delta_{\text{tx},k}(t)]E^{\text{tx}}\] \[=\mathbb{E}[B_{k}(t-1)]+\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[-\alpha\frac{\mathbb{E}[B_{k}(t)]}{B_{\max}}(E^{\text{cmp}}+E^{\text{rx}})-p_{\text{tx},k}(t)E^{\text{tx}}, \tag{15}\]
where \(\mathbb{E}[\delta_{\text{e},k}(t)]=1-p_{\text{s},k}(t)\) and \(\mathbb{E}[\delta_{\text{tx},k}(t)]=p_{\text{tx},k}(t)\). We also consider the expectation of the battery level in \(p_{\text{s},k}\), since we aim to stabilize the average battery level to a fixed threshold \(\xi>0\) over time. Therefore, as \(t\) tends to infinity, \(\mathbb{E}[B_{k}(t)]\) converges to \(\xi\). Using this in (15) leads to
\[\alpha=\left(E_{h}-p_{\text{tx},k}(t)E^{\text{tx}}\right)\frac{B_{\max}}{\xi( E^{\text{cmp}}+E^{\text{rx}})}, \tag{16}\]
where \(E_{h}=\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\) is the average harvested energy. Note that the proposed solution requires knowledge of the \(\zeta_{k}(t)\) and \(B_{k}(t-1)\) distributions. Although it is reasonable to assume that a device has such knowledge, mathematical tractability of the battery level is challenging. Since the required battery knowledge pertains to an earlier time than the energy income, the distributions of these two variables are independent. This allows us to rearrange the expectations and state the average harvested energy as
\[E_{h} =\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]\] \[=\mathbb{E}_{\zeta}[\mathbb{E}_{B}[\min(\zeta_{k}(t),B_{\max}-B_{k}(t-1))]]\] \[\overset{(a)}{\geq}\mathbb{E}_{\zeta}[\min(\zeta_{k}(t),B_{\max}-\mathbb{E}[B_{k}(t-1)])]\] \[\overset{(b)}{=}\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-\xi)]\] \[=\mathbb{E}[\zeta_{k}(t)\mid\zeta_{k}(t)\leq B_{\max}-\xi]\,\text{Pr}\{\zeta_{k}(t)\leq B_{\max}-\xi\}\] \[+(B_{\max}-\xi)\,\text{Pr}\{\zeta_{k}(t)>B_{\max}-\xi\}. \tag{17}\]
Since the minimum function is convex, we employed Jensen's inequality in step (a) and from step (b) onward we consider \(t\rightarrow\infty\), thus \(\mathbb{E}[B_{k}(t-1)]=\xi\).
Since \(p_{\text{tx},k}(t)\) is not known a priori, and to allow deviations of the energy stored in the battery about \(\xi\), we use \(\mathbb{E}[p_{\text{tx},k}(t)]\) in (16) instead of \(p_{\text{tx},k}(t)\). According to (4), out of the \(K\) users, \(M\) updates per iteration are transmitted on average to the BS, thus, \(\mathbb{E}[p_{\text{tx},k}(t)]=M/K\). Then, with (17) and (16) we have
\[\alpha\geq\left(\mathbb{E}_{k}[\min(\zeta_{k}(t),B_{\max}-\xi)]-\frac{M}{K}E^{ \text{tx}}\right)\frac{B_{\max}}{\xi(E^{\text{cmp}}+E^{\text{rx}})}. \tag{18}\]
At the beginning of the training process, the BS broadcasts the value of \(\alpha\) solved by assuming equality in (18).
### _Mean EH Knowledge_
We also consider a simpler variation of the method where we exploit only the average EH information, i.e., we use \(E_{h}=\mathbb{E}[\zeta_{k}(t)]\) and \(\mathbb{E}[p_{\text{tx},k}(t)]=M/K\) in (16), thus
\[\alpha=\left(\mathbb{E}[\zeta_{k}(t)]-\frac{M}{K}E^{\text{tx}}\right)\frac{B_{ \max}}{\xi(E^{\text{cmp}}+E^{\text{rx}})}. \tag{19}\]
The energy mean knowledge (EMK) approach in (19) disregards the impact of the maximum battery capacity, different from the energy distribution knowledge (EDK) in (18).
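Both tunings can be sketched compactly; estimating \(\mathbb{E}[\min(\zeta_{k}(t),B_{\max}-\xi)]\) by Monte Carlo is one possible way to realize the EDK variant:

```
import numpy as np

def alpha_edk(zeta_samples, B_max, xi, E_cmp, E_rx, E_tx, M, K):
    """alpha from Eq. (18) with equality, estimating E[min(zeta, B_max - xi)]
    from samples of the energy income."""
    E_h = np.mean(np.minimum(zeta_samples, B_max - xi))
    return (E_h - (M / K) * E_tx) * B_max / (xi * (E_cmp + E_rx))

def alpha_emk(mean_zeta, B_max, xi, E_cmp, E_rx, E_tx, M, K):
    """alpha from Eq. (19), using only the mean energy income."""
    return (mean_zeta - (M / K) * E_tx) * B_max / (xi * (E_cmp + E_rx))
```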
## V Simulation Results
We analyze the performance of the proposed method compared to the Largest Updates' Norms (LUN) baseline, where users transmit the updates with the largest norms according to [4]. Additionally, to illustrate the necessity of the adaptive control presented in (3) and (4), we include a baseline method that assigns a uniform transmission probability \(p_{\text{tx},k}=M/K\) to all users (to distinguish, we use the acronym AC for adaptive control). We assume a linear regression problem with the following loss function: \(f_{k}(\mathbf{w})=0.5|\mathbf{x}_{k}^{\text{T}}\mathbf{w}(t)-y_{k}|^{2}\) [4], where \(\mathbf{x}_{k}\sim\mathcal{N}(\mathbf{v}_{k},\mathbf{I})\), \(y_{k}=\mathbf{x}_{k}^{\text{T}}\mathbf{w}\), and \(\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Note that \(\mathbf{w}(t)\) are the training weights, while \(\mathbf{w}\) corresponds to the true weights. Also, parameter \(v_{k}\sim\mathcal{N}(0,\beta_{k})\) is utilized to generate a non-IID dataset, with \(\beta_{k}=\mathbf{I}\) indicating the non-IID degree.
Similar to [9], the energy income at each user is modeled by a compound Poisson stochastic process, i.e., the interarrival
time is modeled by an exponential distribution with rate \(r\) and the amount of energy harvested in each arrival is modeled by a Poisson process with parameter \(m/r\), thus, \(\mathbb{E}_{t}[\zeta_{k}(t)]=m\). This model is defined by discrete units of energy. We scale one unit of energy to the total cost of an iteration in J, i.e., \(E_{k}^{\text{comms}}+E_{k}^{\text{cmp}}\). Unless stated otherwise, we set \(r=0.02\) and \(m=0.2\) units of energy. Note that \(1/r\) is the mean interarrival time, corresponding to an energy arrival every 50 iterations on average, similar to [14]. Moreover, we set \(K=100\), \(M=10\), \(L=10\), \(\mu=0.01\), and \(\mu_{1}=0.1\) as in [4], while \(P_{k}^{\text{tx}}=3.3\) dB, \(P_{k}^{\text{rx}}=1.9\) mW, \(\eta=0.33\), \(P_{\text{circ}}=1.33\) mW, which correspond to a BLE transceiver [15]. Moreover, \(R_{b}^{\text{tx}}=R_{b}^{\text{rx}}=1\) Mbps, \(W=4L\), \(f_{\text{clk},k}=0.25\) GHz, \(C_{k}=20\) [16], and the effective capacitance is \(\psi_{k}=10^{-20}\) [17], while the initial battery level of the devices is given by a uniform distribution \(U(0,B_{\text{max}})\), where \(B_{\text{max}}=10^{-1}\) J.
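A sketch of this income process is shown below (the per-iteration Bernoulli approximation of the exponential interarrival times is our simplification):

```
import numpy as np

def energy_income(n_iters, r=0.02, m=0.2, seed=None):
    """Per-iteration energy income: an arrival occurs with probability r
    (about one every 1/r iterations) and delivers Poisson(m/r) energy units,
    so the mean income per iteration is m."""
    rng = np.random.default_rng(seed)
    arrivals = rng.random(n_iters) < r
    units = rng.poisson(m / r, size=n_iters)
    return arrivals * units
```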
First, we set the desired threshold to \(\xi=0.4B_{\text{max}}\) and analyze the average stored energy over iterations in Fig. 2(a), which converges to the threshold when we exploit full knowledge of the energy income distribution (EDK; EDK-AC) or just its mean (EMK; EMK-AC). For the LUN approach, the average stored energy stabilizes near zero, as most users run out of energy. The network naturally reaches a stable state since all users, including those that run out of energy, continue to harvest energy. However, only users with sufficient energy actively participate in the training. Fig. 2(b) shows that relying solely on the energy income source, without energy management, directly affects the learning process. Indeed, the LUN approach starts the training well, but soon devices die and are unable to resume learning until enough energy is harvested. Meanwhile, with the proposed energy management, devices can participate more frequently, resulting in a smaller error for EDK-AC and EMK-AC. Also, the error without the adaptive control is much higher, since it does not consider the norm of local updates, a persistent trend throughout the simulations.
Next we investigate the effect of the mean of the energy income process on the energy availability when \(\xi=0.4B_{\text{max}}\). Fig. 3(a) displays the results for \(t=1000\), revealing that the EDK, EDK-AC, EMK, and EMK-AC curves stay fairly close to the threshold. The variation is due to the inequality in (17), which, similar to the EMK approach, cannot fully incorporate the battery capacity considerations within this operational region. As we increase \(m\), the EDK and EDK-AC curves depart from the EMK and EMK-AC curves, since the battery capacity limitation is more relevant. Besides, an energy surplus occurs within the network with respect to the threshold, since only \(M\) devices transmit on average. In Fig. 3(b), we plot the corresponding average error. For a small \(m\), the threshold is too demanding, resulting in similar errors for all AC approaches. However, as the energy income increases, the proposed method with adaptive control outperforms LUN. As the energy levels continue to rise, the differences between the AC methods and the LUN approach diminish.
In Fig. 4(a) we set \(m=0.2\), \(\xi=0.4B_{\text{max}}\), \(t=1000\), for a varying number of devices. The average battery level remains relatively unaffected, which is not true for the average error in Fig. 4(b). Here, more users are able to engage in the learning process when using the proposed approaches. In contrast, the LUN method shows limited improvement with the number of users, since it lacks energy awareness, different from the methods that consider the average network energy. Thus, many users continue to consume energy by performing computations without transmitting, leading to rapid battery depletion. Moreover, since users in methods without AC have the same transmission probability, i.e., the methods disregard the informativeness of updates, the same performance improvements exhibited by methods with AC cannot be observed.
Finally, we examine the impact of the energy threshold.
Fig. 2: (a) Normalized average battery level and (b) average error, i.e., \(\sum_{k}||\mathbf{w}_{k}(t)-\mathbf{w}||/K\), as a function of the number of iterations for \(\xi=0.4B_{\text{max}}\), \(m=0.2\), and \(K=100\).
In Fig. 5(a) it can be observed that the average battery level follows a nearly linear trend for EDK and EDK-AC, with slight variations due to (17). When the threshold is set to lower or higher values, where the constraint is either insignificant or more dominant, the battery level precisely aligns with the threshold when using EDK and EDK-AC. However, with EMK and EMK-AC the battery cannot stabilize at the expected level for higher thresholds. As for the error, in Fig. 5(b), it becomes apparent that an optimal threshold exists when considering the AC methods. If the threshold is too low, some devices deplete their energy and the error increases, while if the threshold is very demanding, the error rises since devices are often saving energy, reaching a point where LUN outperforms the proposed methods. It is worth mentioning that in the exceptional case where all users must maintain full battery, no training occurs as (energy-consuming) transmissions are not allowed.
## VI Conclusion
We proposed an energy-aware method for FL networks under the principle of energy neutrality. Our approach mitigates battery depletion and achieves convergence to a sustainable energy level, enabling the execution of future tasks. The method requires distribution knowledge of the energy income, but relying only on average information was shown to be sufficient. In critical energy income regions and reasonable energy thresholds, our method outperforms the typical norm-based strategy, in terms of convergence time and battery level. In future works, we aim to include physical layer modeling and assess the impact of non-orthogonal multiple access techniques in the power domain and rate allocation procedures.
| Distributed learning on edge devices has attracted increased attention with the advent of federated learning (FL). In particular, edge devices often have limited battery capacity and heterogeneous energy availability, and since multiple rounds are required for FL to converge, the need for energy efficiency is intensified. Energy depletion may hinder the training process and the efficient utilization of the trained model. To solve these problems, this letter integrates energy harvesting (EH) devices into an FL network with multi-channel ALOHA and proposes a method that guarantees a low energy outage probability and the execution of future tasks. Numerical results demonstrate the effectiveness of this method, particularly in critical setups where the average energy income cannot cover the iteration cost. The method outperforms a norm-based solution in terms of convergence time and battery level.
2309.09300 | AutoAM: An End-To-End Neural Model for Automatic and Universal Argument
Mining | Argument mining is to analyze argument structure and extract important
argument information from unstructured text. An argument mining system can help
people automatically gain causal and logical information behind the text. As
argumentative corpus gradually increases, like more people begin to argue and
debate on social media, argument mining from them is becoming increasingly
critical. However, argument mining is still a big challenge in natural language
tasks due to its difficulty, and relative techniques are not mature. For
example, research on non-tree argument mining needs to be done more. Most works
just focus on extracting tree structure argument information. Moreover, current
methods cannot accurately describe and capture argument relations and do not
predict their types. In this paper, we propose a novel neural model called
AutoAM to solve these problems. We first introduce the argument component
attention mechanism in our model. It can capture the relevant information
between argument components, so our model can better perform argument mining.
Our model is a universal end-to-end framework, which can analyze argument
structure without constraints like tree structure and complete three subtasks
of argument mining in one model. The experiment results show that our model
outperforms the existing works on several metrics in two public datasets. | Lang Cao | 2023-09-17T15:26:21 | http://arxiv.org/abs/2309.09300v1 | # AutoAM: An End-To-End Neural Model for Automatic and Universal Argument Mining
###### Abstract
Argument mining is to analyze argument structure and extract important argument information from unstructured text. An argument mining system can help people automatically gain causal and logical information behind the text. As argumentative corpus gradually increases, like more people begin to argue and debate on social media, argument mining from them is becoming increasingly critical. However, argument mining is still a big challenge in natural language tasks due to its difficulty, and relative techniques are not mature. For example, research on non-tree argument mining needs to be done more. Most works just focus on extracting tree structure argument information. Moreover, current methods cannot accurately describe and capture argument relations and do not predict their types. In this paper, we propose a novel neural model called AutoAM to solve these problems. We first introduce the argument component attention mechanism in our model. It can capture the relevant information between argument components, so our model can better perform argument mining. Our model is a universal end-to-end framework, which can analyze argument structure without constraints like tree structure and complete three subtasks of argument mining in one model. The experiment results show that our model outperforms the existing works on several metrics in two public datasets.
Keywords:Argument Mining Information Extraction Natural Language Processing
## 1 Introduction
Argument mining (AM) is a technique for analyzing argument structure and extracting important argument information from unstructured text, which has gained popularity in recent years [12]. An argument mining system can help people automatically gain causal and logical information behind the text. Argument mining techniques benefit many fields, such as the legal domain [31], public opinion analysis [19], finance, etc. Argument mining is beneficial to human society, but there is still much room for development. Argument mining consists of several tasks and has a variety of different paradigms [12]. In this paper, we focus on the most common argument structure: the monologue. It is an argumentative text from one side, rather than an argument between two sides. The monologue argument structure emphasizes the microscopic structure of argumentation, which primarily draws out the internal relations of reasoning.
In this setting, an argumentative paragraph can be viewed as an argument graph. An argument graph can efficiently describe and reflect logical information and reasoning paths behind the text. An example of AM result after extraction is shown in Figure 1. The two important elements in an argument graph are the argument component (AC) and the argument relation (AR). ACs are nodes in this graph, and ARs are edges. The goal of an AM system is to construct this argument graph from unstructured text automatically. The process of the AM system definition we use is as following steps:
1. Argument Component Identification (ACI): Given an argumentative paragraph, AM systems will detect ACs from it and separate this text.
2. Argument Component Type Classification (ACTC): AM systems will determine the types of these ACs.
3. Argument Relation Identification (ARI): AM systems will identify the existence of a relationship between any ACs.
4. Argument Relation Type Classification (ARTC): AM systems will determine the type of ARs, which are the existing relations between ACs.
Subtask 1) is a token classification task, which is also a named entity recognition task. This task has a large amount of research work on it. Most of the previous argument mining works [25][10][3] assume that the subtask 1) argument component identification has been completed, which is the argument component has been identified and can be obtained from the argumentative text. Therefore, the emphasis of argument mining research is placed on other subtasks. Following previous works, we also make such assumptions in this paper. On this basis, we design an end-to-end model to complete ACTC, ARI, and ARTC subtasks simultaneously.
ARI and ARTC are the hardest parts of the whole argument mining process. An AR is represented by two ACs. It is difficult to represent AR precisely and
Figure 1: An example of argument mining result after extraction in the CDCP dataset [19]. It forms an argument graph. In this graph, every node AC represents an argument component. Fact, Value, and Policy are three types of ACs. Every edge AR denotes argument relation, and Reason is one type of AR.
ARI and ARTC are the hardest parts of the whole argument mining process: an AR connects a pair of ACs, and it is difficult to represent an AR precisely and capture this relation. Most AC pairs do not have a relationship at all, which leads to a serious sample imbalance problem. Within the whole process, ARI and ARTC build on the results of ACI and ACTC, so the performance of these tasks will be influenced. Due to these reasons, many previous works give up and ignore the classification of ARs. Besides, much research imposes argument constraints when doing argument mining: in most cases, they assume the argument information forms a tree structure, so that the characteristics of the tree can be exploited during extraction. Tree-structured argument information is common in argumentative essays. However, argument information with no constraints is more common in the real world, e.g., in the huge amount of corpus on social media. Such information corresponds to the general argument graphs mentioned before and needs to be extracted with good quality.
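To make these objects concrete, a minimal sketch of the involved data structures is given below (field names are illustrative, not taken from the AutoAM implementation); enumerating all ordered AC pairs also makes explicit the quadratic blow-up behind the sample imbalance:

```
from dataclasses import dataclass
from itertools import permutations

@dataclass
class ArgComponent:
    text: str
    ac_type: str          # e.g., "Fact", "Value", "Policy" (CDCP types)

@dataclass
class ArgRelation:
    src: int              # index of the source AC
    dst: int              # index of the target AC
    ar_type: str          # e.g., "Reason", "Evidence"

def candidate_pairs(n_components):
    """All ordered AC pairs the ARI stage must score: n(n-1) candidates for n
    components, most of which hold no relation (hence the class imbalance)."""
    return list(permutations(range(n_components), 2))
```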
In this paper, we solve the above problems with a novel model called **AutoAM** (the abbreviation of **Auto**matic and Universal **A**rgument **M**ining model). This is an efficient and accurate argument mining model that completes the entire argument mining process. The model does not rely on a domain-specific corpus and does not need special syntactic constraints, etc., to construct argument graphs from argumentative text. To improve the performance of non-tree structured argument mining, we first introduce the argument component attention mechanism (**ArguAtten**) in this model, which can better capture the relevant information of argument components in an argumentative paragraph and benefits the overall performance of argument mining. We use a distance matrix to add the key distance feature to the representation of ARs. A stratified learning rate is also a critical strategy in the model to balance multi-task learning. To the best of our knowledge, we are the first to propose an end-to-end universal AM model without structure constraints to complete argument mining. Meanwhile, we combine our novelties with some successful experience to achieve the state of the art on two public datasets.
In summary, our contributions are as follows:
* We propose a novel model **AutoAM** for argument mining, which can efficiently perform argument mining on all kinds of argumentative corpora.
* We introduce **ArguAtten** (an argument component attention mechanism) to better capture the relations between argument components and improve overall argument mining performance.
* We conduct extensive experiments on two public datasets and demonstrate that our method substantially outperforms existing works. The experimental results show that the proposed model achieves the best results to date on several metrics; in particular, there is a great improvement over previous studies on the ARI (argument relation identification) and ARTC (argument relation type classification) tasks.
## 2 Related Work
Since argument mining was first proposed [16], much research has been conducted on it. Early work used rule-based methods or traditional machine learning. With the help of deep learning, good performance was achieved on several tasks, and attention began to shift to non-tree-structured argument mining. We discuss related work following the development of AM.
### Early Argument Mining
The majority of earlier work assumed that the argument structure can be seen as a tree or forest, which made the problem simpler to tackle because various tree-based methods with structural restrictions could be used. In the early stage of the development of argument mining, rule-based structural constraints and traditional machine learning methods were commonly used. In 2007, Moens et al. [16] conducted the first argument mining research on texts in the legal field, while Kwon et al. [11] conducted related research on commentary texts in another domain. However, the former only identified argumentative content and did not classify the argument components. Although the latter further completed the classification of argument components, it still did not extract the relations between argument components and could not explore the argument structure in the text; it only completed part of the argument mining process.
### Tree Structured Argument Mining with Machine Learning
According to the argumentation paradigm theory of Van Eemeren et al. [6], Palau and Moens [15] modeled the argument information in legal texts as a tree structure and used hand-made Context-Free Grammars (CFG) to parse and identify the tree-structured argument. This method is not very general and requires a different context-free grammar for each structural constraint on the argument. The tree-structured Persuasive Essay (PE) dataset by Stab and Gurevych [27][28] has been applied in many argument mining studies and practices. On this dataset, Persing and Ng [23] and Stab and Gurevych [28] used the Integer Linear Programming (ILP) framework to jointly predict the types of argument components and argument relations, with several structural constraints defined to ensure a tree structure. The arg-microtext (MT) dataset created by Peldszus [21] is another tree-structured dataset; studies using it frequently adopt tree-based decoding techniques, such as Minimum Spanning Tree (MST) [22] and ILP [1].
### Neural Network Model in Argument Mining
With the popularity of deep learning, neural network models have been applied to various natural language processing tasks. Among deep learning methods based on neural networks, Eger et al. [7] studied argument mining as a sequence labeling problem relying on multiple neural parsers. Potash et al. [25] applied a sequence-to-sequence pointer network [30] to argument mining and identified the types of argument components and the presence of argument relations using the outputs of the encoder and decoder, respectively. Kuribayashi et al. [10] developed a span-representation-based argumentation structure parsing model that employs ELMo [24] to derive representations for ACs.
### Recent Non-Tree Structured Argument Mining
Recently, more works have focused on non-tree-structured argument mining. The US Consumer Debt Collection Practices (CDCP) dataset [18][19] has greatly promoted the development of non-tree-structured argument mining, since the argument structures it contains are non-tree structures. On this dataset, Niculae et al. [18] carried out a structured learning method based on a factor graph. This method can also handle tree-structured datasets such as PE, but the factor graph needs a specific design for each type of argument structure. Galassi et al. [8] used a residual network on the CDCP dataset. Morio et al. [17] developed an argument mining model that uses a task-specific parameterized module to encode argument components, together with a bi-affine attention module [5] to capture argument relations. Recently, Bao et al. [2] tried to handle both tree-structured and non-tree-structured arguments by introducing a transition-based dependency parsing method [4][9]. This work achieved relatively good performance on the CDCP dataset, but it did not complete the ARTC task within one model and did not report experimental results for ARTC.

However, these methods either do not cover the whole argument mining process with good performance or impose a variety of argument constraints. No end-to-end model for automatic and universal argument mining existed before. Thus, we address all the problems above in this paper.
## 3 Methodology
As shown in Figure 2, we propose a new model called AutoAM. This model adopts a joint learning approach, using one model to simultaneously learn the three argument mining subtasks ACTC, ARI, and ARTC. For argument component extraction, the main task is to classify the argument component types, since argument component identification is completed by default on both the PE and CDCP datasets. For argument relation extraction, the model regards ARI and ARTC as one task: it classifies the relation between argument components with a single classifier and then produces the prediction results for the two tasks by post-processing the predicted labels.
### Task Formulation
The input data contains two parts: a) a set of \(n\) argumentative texts \(T=\{T_{1},T_{2},...,T_{n}\}\); b) for the \(i\)th argumentative text, \(m\) argument component spans \(S=\{S_{1},S_{2},...,S_{m}\}\), where every span marks the start and end scope
of each AC \(S_{i}=(start_{i},end_{i})\). Our aim is to train an argument mining model and use it to get output data: a) types of \(m\) ACs provided in the input data \(ACs=\{AC_{1},AC_{2},...,AC_{m}\}\), b) \(k\) existing ARs \(ARs=\{AR_{1},AR_{2},...,AR_{k}\}\) and their types, where \(AR_{i}=(AC_{a}\to AC_{b})\).
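To make this contract concrete, the following is a minimal sketch of the input/output data structures implied by the task formulation; all field names are our own illustration, not the paper's released code.

```python
# A minimal sketch of the task's input/output contract (illustrative names).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AMInput:
    text: str                          # one argumentative text T_i
    spans: List[Tuple[int, int]]       # (start_i, end_i) for each given AC

@dataclass
class AMOutput:
    ac_types: List[str]                # one label per AC (e.g. Fact/Value/Policy)
    ars: List[Tuple[int, int, str]]    # (source AC index, target AC index, AR type)
```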
### Argument Component Extraction
By default, the argument component identification task has been completed. The input of the whole model is an argumentative text and a list of positional spans corresponding to each argument component \(S_{i}=(start_{i},end_{i})\).
We input argumentative text \(T\) into pre-trained language models (PLMs) to get contextualized representations \(H\in\mathbb{R}^{m\times d_{b}}\), where \(d_{b}\) is the dimension of the last hidden state from PLMs. Therefore, we represent argumentative text as \(H=(h_{1},h_{2},...,h_{m})\), where \(h_{i}\) denotes the \(i\)th token contextualized representation.
We separate argument components from the paragraph using the argument component spans \(S\). In the PE dataset, the argument components do not appear continuously. We use mean pooling to get the representation of each argument component. Specifically, the \(i\)th argument component can be represented as:

\[AC_{i}=\frac{1}{end_{i}-start_{i}+1}\sum_{j=start_{i}}^{end_{i}}h_{j}, \tag{1}\]
where \(AC_{i}\in\mathbb{R}^{d_{b}}\). Therefore, all argument components in the argumentative text can be represented as \(ACs=(AC_{1},AC_{2},\ldots,AC_{n})\).

Figure 2: The framework of our proposed model, AutoAM.

We input each argument component, in order, into the AC Type Classifier \(MLP_{a}\), which consists of a multi-layer perceptron followed by a Softmax layer. The probability of each argument component type is obtained by:
\[p(y_{i}|AC_{i})=Softmax(MLP_{a}(AC_{i})), \tag{2}\]
where \(y_{i}\) denotes the type label of the \(i\)th argument component. We get the final predicted label of the argument component as:
\[\hat{y}_{i}=Argmax(p(y_{i}|AC_{i})). \tag{3}\]
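For concreteness, here is a minimal PyTorch sketch of Eqs. (1)-(3): mean-pooling token states over each AC span, followed by an MLP and Softmax classifier. The tensor shapes and layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# A sketch of Eqs. (1)-(3); dimensions (d_b=768, hidden=256) are assumptions.
import torch
import torch.nn as nn

class ACTypeClassifier(nn.Module):
    def __init__(self, d_b=768, n_types=5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_b, 256), nn.ReLU(), nn.Linear(256, n_types))

    def forward(self, H, spans):
        # H: (seq_len, d_b) token states from the PLM; spans: [(start, end), ...]
        acs = torch.stack([H[s:e + 1].mean(dim=0) for s, e in spans])  # Eq. (1)
        probs = torch.softmax(self.mlp(acs), dim=-1)                   # Eq. (2)
        return acs, probs, probs.argmax(dim=-1)                        # Eq. (3)
```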
### Argument Relation Extraction
This model treats ARI and ARTC as one task and distinguishes them by post-processing the predictions. We classify every ordered argument component pair \((AC_{i}\to AC_{j})\); note that \((AC_{i}\to AC_{j})\) and \((AC_{j}\to AC_{i})\) are different pairs. We add an extra label 'none' here, which represents that there is no relation \(AC_{i}\to AC_{j}\).

In the argument relation extraction part, we use an enumeration method: utilizing the AC representations from the ACTC step, we combine two argument components and input them into the AR Type Classifier to get the predicted output.
First, the model uses ArguAtten (the argument component attention mechanism) to enhance the semantic representation of argument components. The self-attention mechanism was first proposed in the Transformer [29]. The core of this mechanism is the ability to capture how each element in a sequence relates to the other elements, i.e., how much attention each of the other elements pays to that element. When applied to natural language processing tasks, self-attention can often capture the interrelationship of all lexical elements in a sentence and strengthen the contextual semantic representation. In argument mining, all argument components in an argumentative text share this characteristic: the basic task is to construct an argument graph of nodes and edges, where the nodes are argument components and the edges are argument relations. Before the argument relation extraction task, self-attention over argument components can therefore capture the mutual attention of argument components, i.e., better consider and capture the argument information of the full text. This mechanism is conducive to argument relation extraction and the construction of the argument graph. We define ArguAtten as:
\[ArguAtten(Q,K,V)=Softmax(\frac{QK^{T}}{\sqrt{d_{k}}})\times V, \tag{4}\]
where \(Q\), \(K\), \(V\) are obtained by multiplying \(ACs\) with three parameter matrices \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{d_{k}\times d_{k}}\), and \(d_{k}\) is the dimension of the attention layer. Besides, we also use a residual connection and layer normalization (LN) after the attention layer to avoid gradient explosion:

\[ResNetOut=LN(ACs+ArguAtten(ACs)). \tag{5}\]
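The following is a minimal sketch of this module, implementing Eq. (4) plus the residual connection and LN; the single-head design and the dimension \(d_{k}=768\) are our own illustrative assumptions.

```python
# A sketch of ArguAtten: single-head self-attention over the AC sequence.
import torch
import torch.nn as nn

class ArguAtten(nn.Module):
    def __init__(self, d_k=768):
        super().__init__()
        self.W_q = nn.Linear(d_k, d_k, bias=False)
        self.W_k = nn.Linear(d_k, d_k, bias=False)
        self.W_v = nn.Linear(d_k, d_k, bias=False)
        self.ln = nn.LayerNorm(d_k)
        self.scale = d_k ** 0.5

    def forward(self, acs):                                      # acs: (num_ACs, d_k)
        q, k, v = self.W_q(acs), self.W_k(acs), self.W_v(acs)
        attn = torch.softmax(q @ k.T / self.scale, dim=-1) @ v   # Eq. (4)
        return self.ln(acs + attn)                               # residual + LN, Eq. (5)
```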
Through the self-attention of argument components, we obtain a better contextualized representation of argument components and then begin to construct argument pairs to perform argument relation extraction.
We consider that the relative distance between two argument components has a decisive influence on the type of argument relation between them. By observing the dataset, we find that there is usually no argument relation between two argument components that are relatively far apart, and this feature can significantly help the model classify argument relation types. Therefore, we incorporate it into the representation of argument relations. First, the distance vector is introduced, defined as:
\[V_{dist}=(i-j)\times W_{dist}, \tag{6}\]
where \((i-j)\) is the relative distance, which can be positive or negative, \(W_{dist}\in\mathbb{R}^{1\times d_{dist}}\) is a distance transformation matrix that transforms a distance scalar into a distance vector, and \(d_{dist}\) is the length of the distance vector.

Each candidate argument relation is built from the source argument component (Src AC), the target argument component (Trg AC), and the distance vector (Dist Vec). We concatenate them to get the representation of an argument relation as:
\[AR_{i,j}=[AC_{i},AC_{j},V_{dist}], \tag{7}\]
where \(AR_{i,j}\in\mathbb{R}^{2d_{b}+d_{dist}}\). Therefore, the argument relations in an argumentative text can be represented as \(ARs=(AR_{1,2},AR_{1,3},\ldots,AR_{n,n-1})\), containing \(n\times(n-1)\) potential argument relations in total; we do not consider self-relations of the form \(AR=(AC_{i}\to AC_{i})\).

We input each potential argument relation separately and sequentially into the AR Type Classifier \(MLP_{b}\). The classifier is a multi-layer perceptron (MLP) containing a hidden layer of 512 dimensions, whose last layer is followed by a Softmax layer to obtain the probability of each possible type label for the argument relation, as shown in:
\[p(y_{i,j}|AR_{i,j})=Softmax(MLP_{b}(AR_{i,j})), \tag{8}\]
where \(y_{i,j}\) denotes the predicted label of the argument relation from the \(i\)th argument component to the \(j\)th argument component. The final predicted labels are:
\[\hat{y}_{i,j}=Argmax(p(y_{i,j}|AR_{i,j})). \tag{9}\]
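A minimal sketch of Eqs. (6)-(9) follows: each ordered AC pair is encoded as [Src AC; Trg AC; distance vector] and scored by the MLP. The convention that label index 0 is 'none', and all layer sizes except the 512-dimensional hidden layer, are our own assumptions.

```python
# A sketch of the AR candidate encoder and classifier, Eqs. (6)-(9).
import torch
import torch.nn as nn

class ARTypeClassifier(nn.Module):
    def __init__(self, d_b=768, d_dist=64, n_rel_types=2):
        super().__init__()
        self.W_dist = nn.Linear(1, d_dist, bias=False)                 # Eq. (6)
        self.mlp = nn.Sequential(nn.Linear(2 * d_b + d_dist, 512),
                                 nn.ReLU(),
                                 nn.Linear(512, n_rel_types + 1))      # +1 for 'none' (index 0)

    def forward(self, acs):                                            # acs: (n, d_b)
        n = acs.size(0)
        pairs, logits = [], []
        for i in range(n):
            for j in range(n):
                if i == j:                                             # no self-relations
                    continue
                dist = self.W_dist(torch.tensor([[float(i - j)]]))     # Eq. (6)
                ar = torch.cat([acs[i], acs[j], dist.squeeze(0)])      # Eq. (7)
                logits.append(self.mlp(ar))
                pairs.append((i, j))
        return pairs, torch.softmax(torch.stack(logits), dim=-1)       # Eq. (8)
```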
To get the predicted labels for ARI and ARTC, we post-process the model predictions. The existence of an argument relation in the ARI task is defined as:
\[\hat{y}_{ARI}=\begin{cases}0&\text{if }\hat{y}_{AR}=0\\ 1&\text{if }\hat{y}_{AR}\neq 0\end{cases} \tag{10}\]
where \(\hat{y}_{AR}\) is the predicted label from the model output.
To obtain the type of an existing argument relation in the ARTC task, we set the probability of 'none' to zero and select the remaining label with the highest probability:
\[\hat{y}_{ARTC}=Argmax(p(y_{AR}|AR_{i,j})),\quad y^{none}=0, \tag{11}\]
where \(y^{none}\) is the model output of the label 'none'.
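A short sketch of this post-processing, continuing the previous sketch's convention that label index 0 is 'none' (an assumption on our part):

```python
# A sketch of Eqs. (10)-(11): probs is a (num_pairs, 1 + n_rel_types) tensor.
def post_process(probs):
    y_ar = probs.argmax(dim=-1)
    y_ari = (y_ar != 0).long()               # Eq. (10): relation exists iff label != 'none'
    probs_no_none = probs.clone()
    probs_no_none[:, 0] = 0.0                # y^none = 0
    y_artc = probs_no_none.argmax(dim=-1)    # Eq. (11)
    return y_ari, y_artc
```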
### Loss Function Design
This model jointly learns argument component extraction and argument relation extraction. Combining the two tasks, the training objective and loss function of the final model are obtained as:

\[L(\theta)=\sum_{i}\log p(y_{i}|AC_{i})+\sum_{i,j}\log p(y_{i,j}|AR_{i,j})+\frac{ \lambda}{2}||\theta||^{2}, \tag{12}\]

where \(\theta\) represents all the parameters in the model, and \(\lambda\) is the coefficient of L2 regularization. The parameters of the model are updated iteratively according to this objective until training converges to a well-performing model.
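As a sketch, Eq. (12) is typically realized in the minimized (negative log-likelihood) form, with the L2 term supplied by the optimizer's weight decay; this rendering is our assumption about the implementation, not the paper's released code.

```python
# A sketch of the joint objective of Eq. (12) in its commonly minimized form.
import torch.nn.functional as F

def joint_loss(ac_logits, ac_labels, ar_logits, ar_labels):
    # Cross-entropy = negative log-likelihood of the Softmax outputs.
    return F.cross_entropy(ac_logits, ac_labels) + F.cross_entropy(ar_logits, ar_labels)

# The (lambda/2)*||theta||^2 term is usually supplied by the optimizer, e.g.
# AdamW(model.parameters(), lr=2e-5, weight_decay=lam).
```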
## 4 Experiments
### Datasets
We evaluate our proposed model on two public datasets: Persuasive Essays (PE) [28] and Consumer Debt Collection Practices (CDCP) [18].
The PE dataset only has tree structure argument information. It has three types of ACs: _Major-Claim_, _Claim_, and _Premise_, and two types of AR: _support_ and _attack_.
The CDCP dataset has general structure argument information, not limited to a tree structure. It is different from the PE dataset and is more difficult. The argument information in this dataset is more similar to the real world. There are five types of ACs (propositions): _Reference_, _Fact_, _Testimony_, _Value_, and _Policy_. Between these ACs, there are two types of ARs: _reason_ and _evidence_.
We use the original train-test splits of both datasets in our experiments.
### Setups
In model training, we fine-tune roberta-base [13] and use the AdamW optimizer [14] to optimize the model parameters. We apply a stratified learning rate to obtain both a better contextual semantic representation from BERT and better downstream task performance. The stratified learning rate is important here because the multi-task learning is complex, with three subtasks, and the ARI and ARTC modules need a relatively larger learning rate to fit the data well. The initial learning rate of the BERT layers is set to 2e-5, while the learning rates of the AC extraction module and the AR extraction module are set to 2e-4 and 2e-3, respectively. After the BERT output, the dropout rate [26] is set to 0.2. The maximum sequence length of a single piece of data is 512; ACs and ARs in over-length text are cut off. The batch size in each training step is set to 16 on the CDCP dataset and 2 on the PE dataset, because one argumentative text from the PE dataset contains more ACs than one from the CDCP dataset.
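A stratified learning rate is commonly implemented with optimizer parameter groups, as in the following sketch; the attribute names (model.bert, model.ac_classifier, model.ar_classifier) are assumptions for illustration.

```python
# A sketch of the stratified learning rate via AdamW parameter groups,
# using the rates quoted above; module attribute names are assumed.
from torch.optim import AdamW

optimizer = AdamW([
    {"params": model.bert.parameters(),          "lr": 2e-5},   # PLM encoder
    {"params": model.ac_classifier.parameters(), "lr": 2e-4},   # AC extraction
    {"params": model.ar_classifier.parameters(), "lr": 2e-3},   # AR extraction
])
```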
In training, we use an early stopping strategy with a patience of 5 epochs, and set the minimum number of training epochs to 15 to wait for the model to become stable. We use \(MacroF1_{ARI}\) as the monitoring indicator in the early stopping strategy, because AR extraction is our main direction of improvement and ARI sits between ACTC and ARTC, so monitoring it best balances the three tasks' performance in the multi-task learning scenario.
The code implementation of our model is mainly written using PyTorch [20] library, and the pre-trained model is loaded using Transformers [32] library. In addition, model training and testing were conducted on one NVIDIA GeForce RTX 3090.
### Compared Methods
We compare our model with several baselines to evaluate the performance:
* **Joint-ILP**[28] uses Integer Linear Programming (ILP) to extract ACs and ARs. We compare our model with it in the PE dataset.
* **St-SVM-full**[18] uses full factor graph and structured SVM to do argument mining. We compare our model with it in both the PE and the CDCP datasets.
* **Joint-PN**[25] employs a Pointer Network with an attention mechanism to extract argument information. We compare our model with it in the PE dataset.
* **Span-LSTM**[10] uses an LSTM-based span representation with ELMo to perform argument mining. We compare our model with it on the PE dataset.
* **Deep-Res-LG**[8] uses Residual Neural Network on AM tasks. We compare our model with it in the CDCP dataset.
* **TSP-PLBA**[17] introduces task-specific parameterization and bi-affine attention to AM tasks. We compare our model with it in the CDCP dataset.
* **BERT-Trans**[2] uses a transition-based dependency parsing method to solve AM problems. We compare our model with it on both the PE and the CDCP datasets. It is also the state of the art on the two datasets.
### Performance Comparison
The evaluation results are summarized in Table 1 and Table 2. In both tables, '-' indicates that the original paper does not report the performance of its model on this metric. The best results are in bold, and the second-best results are in italics.
On the CDCP dataset, our model achieves the best performance on all metrics of the ACTC, ARI, and ARTC tasks. We are the first to complete all the tasks and obtain solid results on the CDCP dataset; our model outperforms the state of the art with an improvement of 2.1 in ACTC and 0.6 in ARI. The method BERT-Trans does not perform ARTC together with the other tasks in one model and does not report ARTC results, possibly due to unsatisfactory performance. In particular, compared with previous work, we greatly improve the performance on the ARTC task and achieve strong results.
On the PE dataset, our model also performs well, although we obtain the second-best scores on several metrics. The first reason is that the PE dataset is tree-structured, so many previous works impose structural constraints; their models incorporate this extra information, whereas our model treats the data as general argument graphs. Another reason is that the models BERT-Trans, Span-LSTM, and Joint-PN combine extra features to represent ACs, such as paragraph types, BoW features, and position embeddings. Such information changes across corpora (for example, there is no paragraph type information in the CDCP dataset), and since we want to build an end-to-end universal model, we do not use these features. Even without these factors, we achieve results similar to the state of the art.
\begin{table}
\begin{tabular}{l|c c c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{ACTC} & \multicolumn{4}{c|}{ARI} & \multicolumn{4}{c|}{ARTC} & \multirow{2}{*}{AVG} \\ & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{Value} & \multicolumn{1}{c}{Policy} & \multicolumn{1}{c}{Testi} & \multicolumn{1}{c}{Fact} & \multicolumn{1}{c|}{Refer.} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c|}{Rel.} & \multicolumn{1}{c|}{Non-rel.} & \multicolumn{1}{c|}{Macro} & \multicolumn{1}{c|}{Reason} & \multicolumn{1}{c|}{Evidence} \\ \hline St-SVM-strict & 73.2 & 76.4 & 76.8 & 71.5 & 41.3 & 100.0 & - & 26.7 & - & - & - & - & - \\ Deep-Res-LG & 65.3 & 72.2 & 74.4 & 72.9 & 40.3 & 66.7 & - & 29.3 & - & 15.1 & 30.2 & 0.0 & - \\ TSP-PLBA & 78.9 & - & - & - & - & - & - & 34.0 & - & - & - & 18.7 & - \\ BERT-Trans & 82.5 & 83.2 & 86.3 & 84.9 & 58.3 & 100.0 & 67.8 & 37.3 & 98.3 & - & - & - & - \\ \hline
**AutoAM (Ours)** & **84.6** & **85.0** & **86.8** & **86.1** & **65.9** & **100.0** & **68.4** & **38.5** & **98.4** & **71.3** & **98.1** & **44.4** & **74.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The results of comparison experiments on the CDCP dataset. All numbers in the table are f1 scores (%). The best scores are in bold. ‘-’ represents that the original paper does not report.
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{ACTC} & \multicolumn{4}{c|}{ARI} & \multicolumn{4}{c|}{ARTC} & \multirow{2}{*}{AVG} \\ & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{MC} & \multicolumn{1}{c}{Claim} & \multicolumn{1}{c|}{Premise} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c}{Rel.} & \multicolumn{1}{c|}{Non-rel.} & \multicolumn{1}{c}{Macro} & \multicolumn{1}{c|}{Support} & \multicolumn{1}{c}{Attack} \\ \hline Joint-ILP & 82.6 & 89.1 & 68.2 & 90.3 & 75.1 & 58.5 & 91.8 & 68.0 & 94.7 & 41.3 & 75.2 \\ St-SVM-strict & 77.6 & 78.2 & 64.5 & 90.2 & - & 60.1 & - & - & - & - & - \\ Joint-PN & 84.9 & 89.4 & 73.2 & 92.1 & 76.7 & 60.8 & 92.5 & - & - & - & - \\ Span-LSTM & 85.7 & 91.6 & 73.3 & 92.1 & 80.7 & 68.8 & 93.7 & _79.0_ & _96.8_ & **61.1** & 81.8 \\ BERT-Trans & _88.4_ & **93.2** & _78.8_ & _93.1_ & **82.5** & **70.6** & _94.3_ & **81.0** & - & - & **83.4** \\ \hline
**AutoAM (Ours)** & **88.7** & _91.9_ & **80.3** & **93.9** & _81.6_ & _65.8_ & **98.5** & 75.4 & **97.6** & _53.2_ & _81.9_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: The results of comparison experiments on the PE dataset. All numbers in the table are f1 scores (%). The best results are in bold. The second best results are in italics. ‘-’ represents that the original paper does not report.
### Ablation Study
The ablation study results are summarized in Table 3. We conduct the ablation study on the CDCP dataset to see the impact of the key modules in our model. It can be observed that the stratified learning rate is the most critical component, which verifies the viewpoint that multi-task learning is complex in this model and that the AR extraction module needs a larger learning rate to perform well. ArguAtten improves the ACTC and ARTC performance by 1.7 and 13.6, respectively, while the ARI metric decreases slightly. Even though the numbers are small, we think the reason is that the interrelationship between ACs has little impact on predicting the existence of ARs; ArguAtten mainly takes effect in predicting the types of ARs. From this table, we can also find that the distance matrix brings the important distance feature into the AR representation, with an overall improvement of 6.5.
## 5 Conclusion and Future Work
In this paper, we propose a novel method for argument mining and introduce, for the first time, the argument component attention mechanism. Ours is the first end-to-end argument mining model that can extract argument information without any structural constraints and obtain argument relations of good quality. In the model, ArguAtten better captures the correlated information of argument components in an argumentative paragraph, so as to better explore argumentative relationships. Our experimental results show that our method achieves the state of the art. In the future, we will continue to explore better model designs for describing and capturing the elements and relationships in argument graphs.
Argument mining analyzes argumentative structure and extracts key argumentative information from unstructured text, so that an argument mining system can automatically provide the causal and logical information of a text. As discussions and debates increase on social media and elsewhere, argument mining is becoming ever more important. However, argument mining, a challenging natural language processing task, still lacks mature techniques owing to its difficulty. For example, research on non-tree-structured argument mining is still insufficient, and many studies concentrate on extracting tree-structured argument information. Furthermore, current methods cannot describe argument relations accurately or predict their types. In this paper, to solve these problems, we propose a new neural model called AutoAM. First, our model introduces an argument component attention mechanism, which can capture the relevant information between argument components.
2309.11229 | Trace Monomial Boolean Functions with Large High-Order Nonlinearities | Exhibiting an explicit Boolean function with a large high-order nonlinearity
is an important problem in cryptography, coding theory, and computational
complexity. We prove lower bounds on the second-order, third-order, and
higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions
$\mathrm{tr}_n(x^7)$ and $\mathrm{tr}_n(x^{2^r+3})$ where $n=2r$. Among all
trace monomials, our bounds match the best second-order nonlinearity lower
bounds by \cite{Car08} and \cite{YT20} for odd and even $n$ respectively. We
prove a lower bound on the third-order nonlinearity for functions
$\mathrm{tr}_n(x^{15})$, which is the best third-order nonlinearity lower
bound. For any $r$, we prove that the $r$-th order nonlinearity of
$\mathrm{tr}_n(x^{2^{r+1}-1})$ is at least
$2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}- O(2^{\frac{n}{2}})$. For $r \ll
\log_2 n$, this is the best lower bound among all explicit functions. | Jinjie Gao, Haibin Kan, Yuan Li, Jiahua Xu, Qichun Wang | 2023-09-20T11:40:19 | http://arxiv.org/abs/2309.11229v1 | # Trace Monomial Boolean Functions with Large High-Order Nonlinearities
###### Abstract
Exhibiting an explicit Boolean function with a large high-order nonlinearity is an important problem in cryptography, coding theory, and computational complexity. We prove lower bounds on the second-order, third-order, and higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{2^{r}+3})\) where \(n=2r\). Among all trace monomials, our bounds match the best second-order nonlinearity lower bounds by [1] and [20] for odd and even \(n\) respectively. We prove a lower bound on the third-order nonlinearity for functions \(\operatorname{tr}_{n}(x^{15})\), which is the best third-order nonlinearity lower bound. For any \(r\), we prove that the \(r\)-th order nonlinearity of \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\) is at least \(2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}})\).
For \(r\ll\log_{2}n\), this is the best lower bound among all explicit functions.
**Keywords**: high-order nonlinearity, trace monomial, lower bound, Boolean function, linear kernel
## 1 Introduction
Exhibiting an _explicit_ Boolean function with a large _high-order nonlinearity_ is an important task in areas including cryptography, coding theory, and computational complexity. In cryptography, high nonlinearity is an important criterion for Boolean functions used in symmetric-key cryptosystems to resist correlation attacks [1]. In coding theory, the largest \(r\)-th order nonlinearity among all \(n\)-variable Boolean functions is exactly the covering radius of the Reed-Muller code \(\operatorname{RM}(r,n)\); computing (high-order) nonlinearity is related to the problem of decoding Reed-Muller codes. In computational complexity, one _must_ prove a large enough nonlinearity lower bound (for a function in NP) in order to prove that NP does not have circuits of quasi-polynomial size [12, 21]. In addition, this problem is related to pseudorandom generators, communication complexity, and circuit complexity; we refer interested readers to the survey by Viola [20].
Known techniques for proving nonlinearity lower bounds include Hilbert functions [22, 23], the "squaring trick" [1, 18, 21, 22], and XOR lemmas. A widely used approach, due to Carlet [1], lower bounds the \(r\)-th order nonlinearity via the (first-order) nonlinearities of the function's derivatives, combined with a result of Canteaut, Charpin, and Kyureghyan that determines the Walsh spectrum of any _quadratic_ function by the dimension of its _linear kernel_. In this way, the problem of lower bounding nonlinearity essentially reduces to the problem of estimating the number of roots of certain equations over finite fields. Along this line, nonlinearity lower bounds for trace monomial Boolean functions are proved in a long series of works [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. We summarize the second-order nonlinearity lower bounds in Table 1.
\begin{table}
\begin{tabular}{c|c} \hline
**Function** & \(\mathrm{nl}_{2}\) **lower bound** \\ \hline
\(\mathrm{tr}_{n}(\mu x^{2^{t}-1}+g(x))\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\), \(t\leq n\) [14]\({}^{\mathrm{a}}\) & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)(2^{t}-4)2^{\frac{n}{2}}+2^{n}}=2^{n-1}-2^{\frac{3n}{4}+\frac{t}{2}-1}-O(2^{\frac{n}{4}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{r}+3})\), \(n=2r+1\) [14] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)2^{\frac{n+5}{2}}+2^{n}}=2^{n-1}-2^{\frac{3n+1}{4}}-O(2^{\frac{n}{4}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{r}+3})\), \(n=2r-1\) [14] & \(\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+1}{2}}+2^{\frac{3n-1}{2}}+2^{n}-2^{\frac{n+3}{2}}},&\text{if }3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3\cdot 2^{n+\frac{1}{2}}-2^{\frac{n+3}{2}}},&\text{if }3\mid n\end{cases}=2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{n}-2})\) [14] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)2^{\frac{n}{2}+2}+3\cdot 2^{n}}=2^{n-1}-2^{\frac{3n}{4}}-O(2^{\frac{n}{4}})\) \\ \hline
\(\mathrm{tr}_{\frac{n}{2}}(xy^{2^{\frac{n}{2}}-2})\), \(x,y\in\mathbb{F}_{2^{\frac{n}{2}}}^{*}\), if \(n\) is even [14] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{n}+(2^{n+2}+2^{\frac{3n}{4}+1}+2^{\frac{n}{2}+1})(2^{\frac{n}{2}}-1)}=2^{n-1}-2^{\frac{3n}{4}}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+1})\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\) [15] & \(\begin{cases}2^{n-1}-2^{\frac{3n+2i-4}{4}},&\text{if }n\text{ is even}\\ 2^{n-1}-2^{\frac{3n+2i-5}{4}},&\text{if }n\text{ is odd}\end{cases}\) \\ \hline
\(\mathrm{tr}_{n}(\mu x^{2^{2i}+2^{i}+1})\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\), \(\gcd(n,i)=1\), \(n>4\) [15] & \(\begin{cases}2^{n-1}-2^{\frac{3n}{4}},&\text{if }n\text{ is even}\\ 2^{n-1}-2^{\frac{3n-1}{4}},&\text{if }n\text{ is odd}\end{cases}\) \\ \hline
\(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=6r\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) [16] & \(2^{n-1}-2^{\frac{3n}{4}+r-1}-O(2^{\frac{n}{4}})\) \\ \hline
\(\mathrm{tr}_{r}(xy^{2^{i}+1})\), \(n=2r\), \(x,y\in\mathbb{F}_{2^{r}}\), \(1\leq i<r\), \(\gcd(2^{r}-1,2^{i}+1)=1\), \(\gcd(i,r)=j\) [16] & \(2^{n-1}-2^{\frac{3n}{4}+\frac{j}{2}-1}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=5r\), \(\lambda\in\mathbb{F}_{2^{r}}^{*}\) [15] & \(2^{n-1}-2^{\frac{3n+3r-4}{4}}\) \\ \hline
\(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=3r\), \(\lambda\in\mathbb{F}_{2^{r}}^{*}\) [Sin11] & \(2^{n-1}-2^{\frac{3n+r-4}{4}}\) \\ \hline
\(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=4r\), \(\lambda\in\mathbb{F}_{2^{r}}^{*}\) [SW11] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{7n}{4}}+2^{\frac{5n}{4}}-2^{n}}=2^{n-1}-2^{\frac{7n}{8}-1}-O(2^{\frac{3n}{8}})\) \\ \hline
\(\mathrm{tr}_{n}(\lambda x^{2^{2r}+2^{r}+1})\), \(n=6r\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) [Tan+20]\({}^{\mathrm{b}}\) & \(2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{5n}{3}}+2^{\frac{4n}{3}}-2^{\frac{5n}{6}}+2^{n}-2^{\frac{5n}{6}}}=2^{n-1}-2^{\frac{5n}{6}-1}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), \(n=2r\) [YT20] & \(\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}},&\text{if }r\text{ is odd}\\ 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+\frac{1}{3}\cdot 2^{\frac{5n}{4}+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3n}{4}+2}},&\text{if }r\text{ is even}\end{cases}=2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{r}+2^{\frac{r+1}{2}}+1})\), \(n=2r\), for odd \(r\) [YT20] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}}=2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{2r}+2^{r+1}+1})\), \(n=4r\), for even \(r\) [YT20] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+\frac{1}{3}\cdot 2^{\frac{5n}{4}+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3n}{4}+2}}=2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}})\) \\ \hline
\(\mathrm{tr}_{n}(x^{2^{r+1}+2^{r}+1})\), \(n=2r+2\), for even \(r\) [Liu21] & \(2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}}=2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}})\) \\ \hline
\end{tabular}
\end{table}
Table 1: Second-order nonlinearity lower bounds.

\({}^{\mathrm{a}}\) \(g(x)\) is a univariate polynomial of degree \(\leq 2^{t}-2\) over \(\mathbb{F}_{2^{n}}\).

\({}^{\mathrm{b}}\) \(\lambda\in\{yz^{d}:y\in U,z\in\mathbb{F}_{2^{n}}^{*}\}\), \(U=\{y\in\mathbb{F}_{2^{3r}}^{*}:\mathrm{tr}_{\mathbb{F}_{2^{3r}}/\mathbb{F}_{2^{r}}}(y)=0\}\), where \(\mathrm{tr}_{\mathbb{F}_{2^{3r}}/\mathbb{F}_{2^{r}}}\) is the trace mapping from \(\mathbb{F}_{2^{3r}}\) to \(\mathbb{F}_{2^{r}}\).
Among all trace monomials, the best second-order nonlinearity lower bound was proved for the functions \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r-1\), by Carlet [1], when \(n\) is odd, and for the functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\), by Yan and Tang [15], when \(n\) is even. Note that the best second-order nonlinearity lower bound overall, \(2^{n-1}-2^{\frac{2}{3}n}+2^{\frac{1}{3}n-1}\), was proved by Kolokotronis and Limniotis [16] for the Maiorana-McFarland cubic functions (which are _not_ trace monomials).
For the third-order nonlinearity, lower bounds have been proved for the inverse function \(\mathrm{tr}_{n}(x^{2^{n}-2})\), the Kasami functions \(\mathrm{tr}_{n}(\mu x^{57})\), and functions of the form \(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+2^{k}+1})\). Prior to our results, the best third-order nonlinearity lower bound was proved for the functions \(\mathrm{tr}_{n}(\mu x^{2^{3i}+2^{2i}+2^{i}+1})\), where \(\mu\in\mathbb{F}_{2^{n}}^{*}\) and \(\gcd(i,n)=1\), by Singh [16]. Please see Table 2 for a summary.
Garg and Khalyavin [1] proved that the \(r\)-th order nonlinearity of the Kasami function \(f(x)=\mathrm{tr}_{n}(\lambda x^{k})\), where \(k=2^{2r}-2^{r}+1\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\), \(n\geq 2r\) and \(\gcd(n,r)=1\), is at least
\[\begin{cases}2^{n-r}-2^{\frac{n+2r-2}{2}},&\text{for even $n$}\\ 2^{n-r}-2^{\frac{n+2r-3}{2}},&\text{for odd $n$}\end{cases}.\]
Garg [1] proved that the \((\frac{n}{2}-1)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{2^{\frac{n}{2}}-1})\) for \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) is at least \(2^{\frac{n}{2}}\). Tiwari and Sharma [15] proved that the \((\frac{n}{2}-1)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{d})\), where \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) and \(d=3(2^{\frac{n}{2}}-1)+1\) for even \(n\), is at least \(2^{\frac{n}{2}+1}-2^{\frac{n}{4}+1}\); the \((\frac{n}{2}-2)\)-th order nonlinearity of \(\mathrm{tr}_{n}(\lambda x^{d})\), where \(d=2^{\frac{n}{2}}-2\), is at least \(2^{\frac{n}{2}+2}-2^{\frac{n}{4}+\frac{3}{2}}\). Saini and Garg [1] proved that the \(\frac{n}{4}\)-th order nonlinearity of the functions \(\mathrm{tr}_{n}(\alpha_{1}x^{d_{1}}+\alpha_{2}x^{d_{2}})\) is at least \(2^{\frac{n}{4}}-2^{\frac{n}{4}-2}\), where \(\alpha_{1},\alpha_{2}\in\mathbb{F}_{2^{n}}\), \(d_{1}=\frac{1}{2}\cdot(2^{\frac{n}{2}}-1)+1\), \(d_{2}=\frac{1}{6}\cdot(2^{\frac{n}{2}}-1)+1\), and \(4\mid n\).
Proving a large high-order nonlinearity lower bound for any explicit function is an outstanding open problem in computational complexity. For example, whether there exists a function in NP with \(\log_{2}n\)-th order nonlinearity at least \(2^{n-1}(1-\frac{1}{\sqrt{n}})\) is open [14]. For the majority and mod functions, Razborov and Smolensky [13, 14, 15] proved that their \(r\)-th order nonlinearities are at least \(2^{n-1}(1-O(\frac{r}{\sqrt{n}}))\). For \(r\ll\log n\), Babai, Nisan and Szegedy [14] proved that the generalized inner product function has \(r\)-th order nonlinearity at least \(2^{n-1}(1-\exp(-\Omega(\frac{n}{r\cdot 4^{r}})))\). Bourgain [1] proved a similar result for the \(\mathrm{mod}_{3}\) function; a mistake in his proof was corrected by Green, Roy and Straubing [12]. An improvement was achieved by Viola and Wigderson [14, 15], who exhibited a polynomial-time computable function with \(r\)-th order nonlinearity lower bounded by \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\), where the constant \(\alpha<\frac{1}{4}\cdot\log_{2}e\). Gopalan, Lovett and Shpilka [1] proved that if the mod-\(p\) degree of \(f\), for any prime \(p>2\), is \(d=o(\log n)\), then the \(r\)-th order nonlinearity of \(f\) is at least \(2^{n-1}(1-p^{-O(d)})\). Chattopadhyay _et al._ [1] proved that the \(O(1)\)-th order nonlinearity of the XOR of \(k\) copies of the majority function is lower bounded by \(2^{kn-1}\left(1-\left(\frac{\mathrm{poly}(k,\log n)}{\sqrt{n}}\right)^{k}\right)\). Chen and Lyu [1] proved that there exists a function \(f\in\mathrm{E}^{\mathrm{NP}}\) which has
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Function** & \(\mathrm{nl}_{3}\) **lower bound** \\ \hline
\(\mathrm{tr}_{n}(x^{2^{n}-2})\) [1] & \(2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{3n}{2}+3}+3\cdot 2^{n+1}-2^{\frac{n}{2}+3}+16}+2^{n}}=2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}}-O(2^{\frac{3n}{8}})\) \\ \hline
\(\mathrm{tr}_{n}(\mu x^{57})\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\), \(n>10\) [GG10] & \(\begin{cases}2^{n-3}-2^{\frac{n+4}{2}},&\text{if }n\text{ is even}\\ 2^{n-3}-2^{\frac{n+3}{2}},&\text{if }n\text{ is odd}\end{cases}\) \\ \hline
\(\mathrm{tr}_{n}(\mu x^{2^{i}+2^{j}+2^{k}+1})\), \(i>j>k\geq 1\), \(n>2i\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\) [14] & \(\begin{cases}2^{n-3}-2^{\frac{n+2i-6}{2}},&\text{if }n\text{ is even}\\ 2^{n-3}-2^{\frac{n+2i-7}{2}},&\text{if }n\text{ is odd}\end{cases}\) \\ \hline
\(\mathrm{tr}_{n}(\mu x^{2^{3i}+2^{2i}+2^{i}+1})\), \(\mu\in\mathbb{F}_{2^{n}}^{*}\), \(\gcd(i,n)=1\), \(n>6\) [Sin14] & \(\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{3n}{2}+3}+2^{n+1}-2^{\frac{n}{2}+4}}+2^{n}}=2^{n-1}-2^{\frac{7n-2}{8}}-O(2^{\frac{3n}{8}}),&\text{if }n\text{ is even}\\ 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{\frac{3n+5}{2}}+2^{n+1}-2^{\frac{n+2}{2}}}+2^{n}}=2^{n-1}-2^{\frac{7n-3}{8}}-O(2^{\frac{3n}{8}}),&\text{if }n\text{ is odd}\end{cases}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Third-order nonlinearity lower bounds.
\(r\)-th order nonlinearity at least \(2^{n-1}(1-2^{-r})\) for \(r\leq o(\frac{n}{\log n})^{\frac{1}{2}}\).
### Our results
In this work, we prove lower bounds on the high-order nonlinearities of certain trace monomial Boolean functions. We exhibit some trace monomial functions with large second-order, third-order or higher-order nonlinearities.
**Theorem 1**.: _Let \(f(x)=tr_{n}(x^{7})\). For even \(n\), we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3 }{2}n-1}+2^{n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}},&3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^{n+2}-\frac{1}{3} \cdot 2^{\frac{n}{2}+3}},&3\mid n\end{cases}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
_For odd \(n\), we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^ {n}-2^{\frac{n+3}{2}}},&3\nmid n\\ 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3\cdot 2^{n+\frac{1}{2}}-2^ {\frac{n+3}{2}}},&3\mid n\end{cases}\] \[= 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Theorem 1 gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{7})\). Among all trace monomials, it matches the best lower bound when \(n\) is odd (i.e., the bound for the modified Welch function [1]).
**Theorem 2**.: _Let \(f(x)=\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\). Then we have_
\[\mathrm{nl}_{2}(f) \geq \begin{cases}2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac{1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}},&2\nmid r\\ 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3}{2}n+1}+\frac{1}{3}\cdot 2^{\frac{5}{4}n+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3}{4}n+2}},&2\mid r\end{cases}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\]
Theorem 2 gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\). When \(n\) is even, it matches the largest known lower bound on the second-order nonlinearity among all trace monomial Boolean functions; that is, it is the same as the bound for the functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\) [13]. Note that a larger lower bound is known for Maiorana-McFarland type functions: Kolokotronis and Limniotis proved that the second-order nonlinearity of a cubic Maiorana-McFarland type function \(g(x)y^{t}\), where \((x,y)\in\mathbb{F}_{2^{n}}\times\mathbb{F}_{2^{m}}\), \(g(x)\) is a quadratic perfect nonlinear function, and \(m\leq\frac{n}{2}\), is at least \(2^{n+m-1}-2^{n-1}-2^{\frac{n}{2}+m-1}+2^{\frac{n}{2}-1}\) [12].
We would like to point out that the class of functions \(\mathrm{tr}_{n}(x^{2^{r}+3})\), where \(n=2r\), is studied for the first time in our work. A similar type of functions, \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\) with \(n=2r\), was studied in [13]; the lower bound proved there is exactly the same as in Theorem 2.
**Theorem 3**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\). Then we have_
\[\mathrm{nl}_{3}(f)\geq\begin{cases}2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{ \frac{1}{3}\cdot 2^{\frac{3}{2}n+4}+\frac{7}{3}\cdot 2^{n+1}-\frac{1}{3}\cdot 2^{ \frac{n}{2}+5}}+2^{n}}\\ =2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}\log_{2}3}-O(2^{\frac{3n}{8}}),&2\mid n \\ \\ 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{29}{8}\cdot 2^{\frac{3n+1}{2}}+2^ {n+1}-7\cdot 2^{\frac{n+5}{2}}}+2^{n}}\\ =2^{n-1}-2^{\frac{7n}{8}-\frac{13}{8}+\frac{1}{4}\log_{2}29}-O(2^{\frac{3n}{8}}),&2\nmid n\end{cases}\]
_for \(n\geq 6\)._
Theorem 3 gives a lower bound on the third-order nonlinearity of the functions \(\operatorname{tr}_{n}(x^{15})\); it is the largest lower bound on the third-order nonlinearity among all trace monomial Boolean functions.
**Theorem 4**.: _Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\) and \(r\geq 2\)._
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{ \frac{n}{2}}).\]
For \(r\ll\log_{2}n\), the lower bound in Theorem 4 is better than all previous results for explicit functions in P, not only trace monomials.
Similarly, we prove the following lower bound on the \(r\)-th order nonlinearity for the inverse function, which is studied in [1]. We credit this to Carlet, who claims that the \(r\)-th order nonlinearity for the inverse function is asymptotically lower bounded by \(2^{n-1}-2^{(1-2^{-r})n}\).
**Theorem 5**.: _Let \(f_{\operatorname{inv}}=\operatorname{tr}_{n}(x^{2^{n}-2})\). For any \(r\geq 1\), we have \(\operatorname{nl}_{r}(f_{\operatorname{inv}})\geq 2^{n-1}-2^{(1-2^{-r})n-2^{-( r-1)}}-O(2^{\frac{n}{2}})\)._
**Techniques.** Our proof of the lower bounds follows from Carlet's methods [1]. That is, to lower bound the \(r\)-th order nonlinearity, we estimate the (first-order) nonlinearity of its \((r-1)\)-th order derivatives. Taking a (nontrivial) \((r-1)\)-th order derivative, our target function becomes a quadratic function. Then, we rely on a result by Canteaut _et al._[1] that relates the nonlinearity of a quadratic function with the dimension of its _linear kernel_. As such, the problem essentially reduces to estimating _the number of roots_ of certain equations over the finite field \(\mathbb{F}_{2^{n}}\).
As for Theorem 1, we use the following ingredients to estimate the number of roots of a certain equation (associated with the linear kernel): we factor the equation into irreducible ones; we apply the known results concerning the number of roots of \(q\)_-polynomials_, and the number of roots of _quadratic equations_ and _quartic equations_ (over finite fields); we use the Weil bound to estimate the weight of trace monomial functions. As for Theorem 2, our proof is similar to [2], and the lower bounds are exactly the same. (The target function, which has a simple form and good behavior, is somehow missed by previous works.)
As for the third-order nonlinearity lower bound, i.e., Theorem 3, our strategy is, again, to estimate the number of roots of a certain equation (associated with the linear kernel). We factor the equation into irreducible ones, and analyze the number of roots for each component separately. The proof relies on the known results about the number of roots of \(q\)-polynomials, and _quartic equations_ (over finite fields). A critical step is to estimate the algebraic degree of a (trace) equation over \(\mathbb{F}_{2^{n}}\). (With the algebraic degree known, we can apply the well-known fact that the number of roots is bounded by the degree.)
In Theorem 4, we study the \(r\)-th order nonlinearity of functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\), a natural generalization of \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{15})\). We prove a lower bound on the (first-order) nonlinearity of all nontrivial \((r-1)\)-th order derivatives of the target function, and the \(r\)-th order nonlinearity lower bound follows from the methods articulated by [1]. The equation (associated with the linear kernel for the derivative) turns out to have a nice explicit form, whose degree is at most \(2^{2r}\). Thus, the nonlinearity bound follows from a result in [1] (that relates the dimension of the kernel with the nonlinearity for any quadratic function).
The proof of Theorem 5 closely follows from [1], who already claimed that the lower bound is asymptotically \(2^{n-1}-2^{(1-2^{-r})n}\). We credit the result to Carlet, who obviously can, but did not have the occasion to write down the details.
## 2 Preliminary
Let \(\mathbb{F}_{2}\) be the finite field of size \(2\). Let \(\mathcal{B}_{n}\) denote the set of all \(n\)-variable Boolean functions. Any \(n\)-variable Boolean function can be represented as a unique polynomial in \(\mathbb{F}_{2}[x_{1},x_{2},\ldots,x_{n}]/\{x_{i}^{2}+x_{i}\}_{1\leq i\leq n}\), that is,
\[f(x_{1},x_{2},\ldots,x_{n})=\sum_{S\subseteq[n]}c_{S}\prod_{i\in S}x_{i},\]
which is called _algebraic normal form_ (ANF). The _algebraic degree_ of \(f\), denoted by \(\deg(f)\), is the number of variables in the highest order term with nonzero coefficient.
The _Hamming weight_ of a vector \(x\in\mathbb{F}_{2}^{n}\), denoted by \(\operatorname{wt}(x)\), is the number of nonzero coordinates. The _weight_ of a Boolean function \(f\), denoted by \(\operatorname{wt}(f)\), is the cardinality of the set \(\{x\in\mathbb{F}_{2}^{n}:f(x)=1\}\). The _distance_ between two functions \(f\) and \(g\) is the cardinality of the set \(\{x\in\mathbb{F}_{2}^{n}:f(x)\neq g(x)\}\), denoted by \(\operatorname{d}(f,g)\).
Let \(\mathbb{F}_{2^{n}}\) be the finite field of size \(2^{n}\). The _absolute trace function_ from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}\) can be defined as
\[\operatorname{tr}_{n}(x)=x+x^{2}+x^{2^{2}}+\ldots+x^{2^{n-1}},\]
where \(x\in\mathbb{F}_{2^{n}}\). Let \(K=\mathbb{F}_{2^{r}}\) be a subfield of \(L=\mathbb{F}_{2^{n}}\) (so \(r\mid n\)). More generally, the trace function defined with respect to the field extension \(L/K\) is

\[\operatorname{tr}_{L/K}(\alpha)=\alpha+\alpha^{2^{r}}+\alpha^{2^{2r}}+\ldots+\alpha^{2^{r(\frac{n}{r}-1)}},\]
where \(\alpha\in\mathbb{F}_{2^{n}}\). It is well known that (for instance, Theorem 2.23 in [10]) the trace function satisfies the following properties
* \(\operatorname{tr}_{L/K}(x+y)=\operatorname{tr}_{L/K}(x)+\operatorname{tr}_{L/ K}(y)\) for any \(x,y\in\mathbb{F}_{2^{n}}\).
* \(\operatorname{tr}_{L/K}(x^{2})=\operatorname{tr}_{L/K}(x)\) for any \(x\in\mathbb{F}_{2^{n}}\).
* For any \(\alpha\in\mathbb{F}_{2^{r}}\), there are exactly \(2^{n-r}\) elements \(\beta\) with \(\operatorname{tr}_{L/K}(\beta)=\alpha\).
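For concreteness, the following minimal Python sketch checks these properties of the absolute trace on the small field \(\mathbb{F}_{2^{4}}\); the field size and the irreducible modulus \(x^{4}+x+1\) are illustrative choices on our part.

```python
# A minimal sketch of the absolute trace over GF(2^4), modulus x^4 + x + 1.
N, MOD = 4, 0b10011

def gmul(a, b):
    """Multiply two field elements (bitmask representation), reducing mod MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= MOD
    return r

def tr(x):
    """Absolute trace tr_n(x) = x + x^2 + ... + x^(2^(n-1)); the result is 0 or 1."""
    t, y = 0, x
    for _ in range(N):
        t ^= y
        y = gmul(y, y)   # Frobenius: y -> y^2
    return t

F = range(1 << N)
assert all(tr(x) in (0, 1) for x in F)                         # trace lands in F_2
assert all(tr(gmul(x, x)) == tr(x) for x in F)                 # tr(x^2) = tr(x)
assert all(tr(x ^ y) == tr(x) ^ tr(y) for x in F for y in F)   # additivity
assert sum(tr(x) for x in F) == 1 << (N - 1)                   # 2^(n-1) preimages each
```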
Any \(n\)-variable Boolean function can be written as \(f(x)=\operatorname{tr}_{n}(g(x))\), where \(g(x)=\sum_{i=0}^{2^{n}-1}\beta_{i}x^{i}\) is a mapping from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\) for \(\beta_{i}\in\mathbb{F}_{2^{n}}\). A trace _monomial_ Boolean function is of the form \(\operatorname{tr}_{n}(\lambda x^{d})\) where \(\lambda\in\mathbb{F}_{2^{n}}^{*}\) and \(d\) is an integer. It is well known that the degree of the trace monomial function \(\operatorname{tr}_{n}(\lambda x^{d})\) is the Hamming weight of the binary representation of \(d\)[1].
For \(1\leq r\leq n\), the \(r\)-th order nonlinearity of an \(n\)-variable Boolean function \(f\), denoted by \(\operatorname{nl}_{r}(f)\), is the minimum distance between \(f\) and functions with degree at most \(r\), i.e.,
\[\operatorname{nl}_{r}(f)=\min_{\deg(g)\leq r}\operatorname{d}(f,g).\]
We denote by \(\operatorname{nl}(f)\) the first-order nonlinearity of \(f\).
The _Walsh transform_ of \(f\in\mathcal{B}_{n}\) at \(\alpha\in\mathbb{F}_{2^{n}}\) is defined as
\[W_{f}(\alpha)=\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{f(x)+\operatorname{tr}_{n}( \alpha x)}.\]
The _Walsh spectrum_ of \(f\) is the multi-set consisting of the values \(W_{f}(\alpha)\) for all \(\alpha\in\mathbb{F}_{2^{n}}\). The nonlinearity of any Boolean function in \(n\) variable can be calculated as
\[\operatorname{nl}(f)=2^{n-1}-\frac{1}{2}\max_{\alpha\in\mathbb{F}_{2^{n}}}|W_ {f}(\alpha)|. \tag{1}\]
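As a sanity check of Eq. (1), here is a brute-force sketch computing the Walsh transform and first-order nonlinearity of \(f(x)=\operatorname{tr}_{n}(x^{7})\) on the illustrative field \(\mathbb{F}_{2^{4}}\) (same assumed modulus as in the previous sketch).

```python
# A brute-force sketch of the Walsh transform and nonlinearity on GF(2^4).
N, MOD = 4, 0b10011

def gmul(a, b):
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1; a <<= 1
        if a >> N: a ^= MOD
    return r

def gpow(x, e):
    r = 1
    for _ in range(e): r = gmul(r, x)
    return r

def tr(x):
    t, y = 0, x
    for _ in range(N):
        t ^= y; y = gmul(y, y)
    return t

f = {x: tr(gpow(x, 7)) for x in range(1 << N)}                 # f(x) = tr(x^7)

def walsh(alpha):
    return sum((-1) ** (f[x] ^ tr(gmul(alpha, x))) for x in range(1 << N))

nl = (1 << (N - 1)) - max(abs(walsh(a)) for a in range(1 << N)) // 2   # Eq. (1)
print("nl(f) =", nl)
```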
We denote by \(D_{a}f\) the _derivative_ of \(f\in\mathcal{B}_{n}\) with respect to \(a\in\mathbb{F}_{2^{n}}\), which is defined to be
\[D_{a}f(x)=f(x)+f(x+a).\]
The \(k\)_-th order derivative_ of \(f\), denoted by \(D_{a_{1}}D_{a_{2}}\ldots D_{a_{k}}f\), is obtained by applying such derivation successively to the function \(f\) with respect to \(a_{1},a_{2},\ldots,a_{k}\in\mathbb{F}_{2^{n}}\).
In [1], Carlet provided a method to lower bound the \(r\)-th order nonlinearity relying on the \((r-1)\)-th order nonlinearity of all its derivatives.
**Proposition 1**.: _[_1_]_ _Let \(f\) be any \(n\)-variable Boolean function and \(r\) a positive integer smaller than \(n\). We have_
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in \mathbb{F}_{2^{n}}}\operatorname{nl}_{r-1}(D_{a}f)}.\]
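The following sketch applies Proposition 1 numerically with \(r=2\) to \(f(x)=\operatorname{tr}_{n}(x^{7})\) on \(\mathbb{F}_{2^{4}}\): the second-order nonlinearity is bounded below via the first-order nonlinearities of all derivatives \(D_{a}f\). The field and modulus are, again, illustrative assumptions.

```python
# A numeric sketch of Proposition 1 with r = 2 for f(x) = tr(x^7) on GF(2^4).
import math

N, MOD = 4, 0b10011
def gmul(a, b):
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1; a <<= 1
        if a >> N: a ^= MOD
    return r
def gpow(x, e):
    r = 1
    for _ in range(e): r = gmul(r, x)
    return r
def tr(x):
    t, y = 0, x
    for _ in range(N):
        t ^= y; y = gmul(y, y)
    return t

def nl1(g):
    """First-order nonlinearity of g via the Walsh transform."""
    size = 1 << N
    wmax = max(abs(sum((-1) ** (g[x] ^ tr(gmul(a, x))) for x in range(size)))
               for a in range(size))
    return (size >> 1) - wmax // 2

f = {x: tr(gpow(x, 7)) for x in range(1 << N)}
# Sum of nl(D_a f) over all a; D_a f(x) = f(x) + f(x + a), addition = XOR.
s = sum(nl1({x: f[x] ^ f[x ^ a] for x in range(1 << N)}) for a in range(1 << N))
print("nl_2(f) >=", (1 << (N - 1)) - math.sqrt(2 ** (2 * N) - 2 * s) / 2)
```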
The _quadratic functions_ are the Boolean functions of algebraic degree at most 2. The _linear kernel_ is the central object for calculating the nonlinearity of quadratic functions.
**Definition 1**.: _[_10_]_ _Let \(q:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) be a quadratic function. The linear kernel of \(q\), denoted by \(\mathcal{E}_{q}\), can be defined as_
\[\mathcal{E}_{q}=\mathcal{E}_{0}\cup\mathcal{E}_{1}\]
_where_
\[\mathcal{E}_{0}=\{b\in\mathbb{F}_{2^{n}}\mid D_{b}q=q(x)+q(x+b)=0,\text{ for all }x\in\mathbb{F}_{2^{n}}\},\]
\[\mathcal{E}_{1}=\{b\in\mathbb{F}_{2^{n}}\mid D_{b}q=q(x)+q(x+b)=1,\text{ for all }x\in\mathbb{F}_{2^{n}}\}.\]
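Definition 1 can be checked by brute force on a small field. The sketch below computes the linear kernel of the quadratic function \(q(x)=\operatorname{tr}_{n}(x^{3})\) on the illustrative field \(\mathbb{F}_{2^{4}}\); the kernel found has dimension 2, in line with the parity statement of Lemma 2 below.

```python
# A brute-force sketch of the linear kernel of q(x) = tr(x^3) on GF(2^4).
N, MOD = 4, 0b10011
def gmul(a, b):
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1; a <<= 1
        if a >> N: a ^= MOD
    return r
def tr(x):
    t, y = 0, x
    for _ in range(N):
        t ^= y; y = gmul(y, y)
    return t

q = {x: tr(gmul(x, gmul(x, x))) for x in range(1 << N)}   # q(x) = tr(x^3), quadratic
# E_q = all b for which the derivative D_b q is constant (Definition 1).
kernel = [b for b in range(1 << N)
          if len({q[x] ^ q[x ^ b] for x in range(1 << N)}) == 1]
print("E_q =", kernel, " dim =", len(kernel).bit_length() - 1)
```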
The _bilinear form_ associated with a quadratic function \(q\) is defined as
\[B(x,y)=q(0)+q(x)+q(y)+q(x+y).\]
The _linear kernel_\(\mathcal{E}_{q}\) of a quadratic function \(q\) is the _linear kernel_ of its associated bilinear form \(B(x,y)\) by definition, that is
\[\mathcal{E}_{q}=\{x\in\mathbb{F}_{2^{n}}\mid B(x,y)=0\text{ for any }y\in \mathbb{F}_{2^{n}}\}.\]
**Lemma 1**.: _[_10_]_ _Let \(q:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) be an \(n\)-variable Boolean function of degree at most 2. Then the Walsh spectrum of \(q\) depends on the dimension \(k\) of the linear kernel of \(q\). Moreover, for any \(\mu\in\mathbb{F}_{2^{n}}\), we have_
\begin{tabular}{c|c} \hline
\(W_{q}(\mu)\) & The number of \(\mu\in\mathbb{F}_{2^{n}}\) \\ \hline
\(0\) & \(2^{n}-2^{n-k}\) \\ \hline
\(2^{\frac{n+k}{2}}\) & \(2^{n-k-1}+(-1)^{q(0)}2^{\frac{n-k-2}{2}}\) \\ \hline
\(-2^{\frac{n+k}{2}}\) & \(2^{n-k-1}-(-1)^{q(0)}2^{\frac{n-k-2}{2}}\) \\ \hline
\end{tabular}
**Lemma 2**.: _[_10_]_ _Let \(V\) be a vector space over the field \(\mathbb{F}_{2}\) and \(Q:V\rightarrow\mathbb{F}_{2}\) be a quadratic form. Then the dimension of \(V\) and the dimension of the kernel of \(Q\) have the same parity._
That is, if \(f:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) is a quadratic function, then the parity of the dimension of its linear kernel is the same as the parity of \(n\).
A _q-polynomial_ over \(\mathbb{F}_{q^{n}}\) is the polynomial in the form
\[P(x)=\sum_{i=0}^{n-1}a_{i}x^{q^{i}},\]
where the coefficients \(a_{i}\in\mathbb{F}_{q^{n}}\). It is a _linearized polynomial_ which satisfies the following properties [11, page 108]:
\[P(b+c)=P(b)+P(c),\quad\text{for all }b,c\in\mathbb{F}_{q^{n}} \tag{2}\]
\[P(tb)=tP(b),\quad\text{for all }t\in\mathbb{F}_{q},\,\text{all }b\in\mathbb{F}_{q^{n}}. \tag{3}\]
Equation (2) follows from the fact that \((a+b)^{q^{i}}=a^{q^{i}}+b^{q^{i}}\) for \(a,b\in\mathbb{F}_{q^{n}}\) and \(i\geq 0\)[11, Theorem 1.46]; equation (3) follows from that \(t^{q^{i}}=t\) for \(t\in\mathbb{F}_{q}\) and any \(i\geq 0\). Hence, if \(\mathbb{F}_{q^{n}}\) is regarded as a vector space over \(\mathbb{F}_{q}\), then a \(q\)-polynomial is a linear map of this vector space.
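The linearity of \(q\)-polynomials, and the root-count statement of Lemma 3 below, can both be observed concretely. Here is a small sketch for the 2-polynomial \(P(x)=x^{2}+x\) on the illustrative field \(\mathbb{F}_{2^{4}}\).

```python
# A sketch checking that a 2-polynomial acts F_2-linearly on GF(2^4),
# and that its root set has size a power of 2 (cf. Lemma 3).
N, MOD = 4, 0b10011
def gmul(a, b):
    r = 0
    while b:
        if b & 1: r ^= a
        b >>= 1; a <<= 1
        if a >> N: a ^= MOD
    return r

P = lambda x: gmul(x, x) ^ x          # P(x) = x^2 + x, a 2-polynomial
F = range(1 << N)
assert all(P(x ^ y) == P(x) ^ P(y) for x in F for y in F)   # additivity, property (2)
roots = [x for x in F if P(x) == 0]                         # here: {0, 1}
assert len(roots) & (len(roots) - 1) == 0                   # |roots| is a power of 2
```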
Second-order nonlinearity
In this section, we derive lower bounds on the second-order nonlinearity for two classes of trace monomial Boolean functions, \(\operatorname{tr}_{n}(x^{7})\) and \(\operatorname{tr}_{n}(x^{2^{r}+3})\) with \(n=2r\).
### The functions \(\operatorname{tr}_{n}(x^{7})\)
We will lower bound the second-order nonlinearity of the monomial cubic functions \(\operatorname{tr}_{n}(x^{7})\). The algebraic degree of the derivatives of \(\operatorname{tr}_{n}(x^{7})\) is at most \(2\) since the degree of \(\operatorname{tr}_{n}(x^{7})\) is exactly \(3\). By using Carlet's method (i.e., Proposition 1), our goal is to calculate the nonlinearities of all its derivatives.
**Proposition 2**.: _Let \(f:\mathbb{F}_{2^{n}}\to\mathbb{F}_{2}\) be a quadratic function. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), we have_
\[\mathcal{E}_{f}=\mathcal{E}_{f(ax)},\]
_where \(\mathcal{E}_{f}\) denotes the linear kernel of \(f\) and \(\mathcal{E}_{f(ax)}\) denotes the linear kernel of the function \(x\mapsto f(ax)\)._
Proof.: Let us prove \(\mathcal{E}_{f}\subseteq\mathcal{E}_{f(ax)}\) first. By definition, if \(b\in\mathcal{E}_{f}\), then \(f(x)+f(x+b)=0\) for all \(x\in\mathbb{F}_{2^{n}}\) or \(f(x)+f(x+b)=1\) for all \(x\in\mathbb{F}_{2^{n}}\). Note that \(x\mapsto ax\) is a bijection over \(\mathbb{F}_{2^{n}}\) for any \(a\in\mathbb{F}_{2^{n}}^{*}\), then we have
\[f(ax)+f(ax+b)=0\text{ for all }x\in\mathbb{F}_{2^{n}}\]
or
\[f(ax)+f(ax+b)=1\text{ for all }x\in\mathbb{F}_{2^{n}}\.\]
So \(b\in\mathcal{E}_{f(ax)}\).
Now let us prove \(\mathcal{E}_{f(ax)}\subseteq\mathcal{E}_{f}\). Let \(g(x)=f(ax)\). From the above, we have \(\mathcal{E}_{g}\subseteq\mathcal{E}_{g(a^{-1}x)}\), that is, \(\mathcal{E}_{f(ax)}\subseteq\mathcal{E}_{f}\).
We will need the following lemmas in the proof of Theorem 6.
**Lemma 3**.: _[_15_, Theorem 3.50]_ _Let \(q\) be a prime. Let \(P(x)=\sum_{i=0}^{n-1}a_{i}x^{q^{i}}\) be a \(q\)-polynomial, where \(a_{i}\in\mathbb{F}_{q^{n}}\). Then the number of distinct roots of \(P(x)\) in \(\mathbb{F}_{q^{n}}\) is a power of \(q\)._
**Lemma 4**.: _[_16_, page 37]_ _The number of solutions in \(\mathbb{F}_{2^{n}}\) of the quartic equation_
\[x^{4}+ax+b=0,\ \ a,b\in\mathbb{F}_{2^{n}},\ a\neq 0. \tag{4}\]
* _If_ \(n\) _is odd, then (_4_) has either no solution or exactly two solutions._
* _If_ \(n\) _is even and_ \(a\) _is not a cube, then (_4_) has exactly one solution._
* _If_ \(n\) _is even, and_ \(a\) _is a cube, then (_4_) has four solutions if_ \(\operatorname{tr}_{\mathbb{F}_{2^{n}}/\mathbb{F}_{4}}(\frac{b}{a^{4/3}})=0\)_, and no solutions if_ \(\operatorname{tr}_{\mathbb{F}_{2^{n}}/\mathbb{F}_{4}}(\frac{b}{a^{4/3}})\neq 0\)_._
We need some properties of trace functions in the proof.
**Theorem 6**.: _Let \(f(x)=\operatorname{tr}_{n}(x^{7})\). Let \(\mathcal{E}_{D_{a}f}\) be the linear kernel of \(D_{a}f\), and denote by \(\dim(\mathcal{E}_{D_{a}f})\) its dimension. The distribution of \(\dim(\mathcal{E}_{D_{a}f})\) over \(a\in\mathbb{F}_{2^{n}}^{*}\) is given in Table 3._
Proof.: For any \(a\in\mathbb{F}_{2^{n}}^{*}\), we have
\[(D_{a}f)(ax) = \mathrm{tr}_{n}((ax)^{7})+\mathrm{tr}_{n}((ax+a)^{7})\] \[= \mathrm{tr}_{n}((ax)^{7}+(ax+a)^{7})\] \[= \mathrm{tr}_{n}(a^{7}(x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1)).\]
Let \(g(x)=D_{a}f(ax)\). By Proposition 2, we know \(\mathcal{E}_{g}=\mathcal{E}_{D_{a}f}\), and \(\dim(\mathcal{E}_{g})\) equals the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(D_{b}g\) is a constant. Note that
\[\begin{split}D_{b}g(x)&=\mathrm{tr}_{n}(a^{7}(\sum_{i=0}^{6}x^{i}))+\mathrm{tr}_{n}(a^{7}(\sum_{i=0}^{6}(x+b)^{i}))\\ &=\mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2})x))+\mathrm{tr}_{n}(a^{7}\sum_{i=1}^{6}b^{i}).\end{split}\]
So \(\dim(\mathcal{E}_{g})\) equals the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(\mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{2}+b^{4})x))\) is a constant.
Using the properties of the trace function, we have
\[\begin{split}&\mathrm{tr}_{n}(a^{7}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2})x))\\ &=\mathrm{tr}_{n}(a^{7}(b^{2}+b)x^{4})+\mathrm{tr}_{n}(a^{7}(b^{4}+b)x^{2})+\mathrm{tr}_{n}(a^{7}(b^{4}+b^{2})x)\\ &=\mathrm{tr}_{n}((a^{7})^{2^{-2}}(b^{2}+b)^{2^{-2}}x)+\mathrm{tr}_{n}((a^{7})^{2^{-1}}(b^{4}+b)^{2^{-1}}x)+\mathrm{tr}_{n}(a^{7}(b^{4}+b^{2})x)\\ &=\mathrm{tr}_{n}\left(\left((a^{7})^{2^{-2}}(b^{2}+b)^{2^{-2}}+(a^{7})^{2^{-1}}(b^{4}+b)^{2^{-1}}+a^{7}(b^{4}+b^{2})\right)x\right).\end{split}\tag{5}\]
Thus, (5) is a constant if and only if the coefficient of \(x\) is zero, that is,
\[(a^{7})^{2^{-2}}(b^{2}+b)^{2^{-2}}+(a^{7})^{2^{-1}}(b^{4}+b)^{2^{-1}}+a^{7}(b^{4}+b^{2})=0. \tag{6}\]
Raising both sides of (6) to the 4th power, we have
\[0 = a^{7}(b^{2}+b)+a^{14}(b^{4}+b)^{2}+a^{28}(b^{4}+b^{2})^{4}\] \[= a^{28}(b^{2}+b)^{8}+a^{14}(b^{2}+b)^{4}+a^{14}(b^{2}+b)^{2}+a^{7 }(b^{2}+b)\] \[= \left((a^{7})^{2}(b^{2}+b)^{4}\right)^{2}+\left((a^{7})^{2}(b^{2} +b)^{4}\right)+\left(a^{7}(b^{2}+b)\right)^{2}+\left(a^{7}(b^{2}+b)\right)\] \[= \left((a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\right)\left((a^{7}) ^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)+1\right).\]
\begin{table}
\begin{tabular}{c|c|c|c} \hline \(n\) & & \(\dim(\mathcal{E}_{D_{a}f})\) & The number of \(a\in\mathbb{F}_{2^{n}}^{*}\) \\ \hline \multirow{4}{*}{even \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & 2 & \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\) \\ \cline{3-4} & & 4 & \(\frac{1}{3}\cdot 2^{n-2}-\frac{1}{3}\) \\ \cline{2-4} & \multirow{2}{*}{\(3\mid n\)} & 2 & \(\frac{2}{3}(2^{n}-1)+\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & 4 & \(\frac{1}{3}(2^{n}-1)-\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \multirow{4}{*}{odd \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & 1 & \(2^{n-1}\) \\ \cline{3-4} & & 3 & \(2^{n-1}-1\) \\ \cline{2-4} & \multirow{2}{*}{\(3\mid n\)} & 1 & \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & 3 & \(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \end{tabular}
\end{table}
Table 3: The distribution of \(\dim(\mathcal{E}_{D_{a}f})\)
For convenience, let \(P(a,b)=\left((a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\right)\left((a^{7})^{2}(b^{2} +b)^{4}+a^{7}(b^{2}+b)+1\right)=Q(a,b)(Q(a,b)+1)\), where \(Q(a,b)=(a^{7})^{2}(b^{2}+b)^{4}+a^{7}(b^{2}+b)\).
We denote by \(\mathrm{N}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(P(a,b)=0\); denote by \(\mathrm{N}_{1}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(Q(a,b)=0\); denote by \(\mathrm{N}_{2}(a)\) the number of \(b\in\mathbb{F}_{2^{n}}\) such that \(Q(a,b)+1=0\). Obviously, \(\mathrm{N}(a)=\mathrm{N}_{1}(a)+\mathrm{N}_{2}(a)\).
It is clear that \(b=0,1\) are two solutions of \(Q(a,b)=0\). If \(b^{2}+b\neq 0\), \(Q(a,b)=0\) is equivalent to
\[(b^{2}+b)^{3}=(a^{7})^{-1}. \tag{7}\]
Observe that the degree (in the variable \(b\)) of the polynomial \(P(a,b)\) is 16, so \(\mathrm{N}(a)\leq 16\). Since \(b=0\) and \(b=1\) are two distinct roots of \(Q(a,b)=0\), we have \(\mathrm{N}(a)\geq 2\). For any fixed \(a\in\mathbb{F}_{2^{n}}^{*}\), note that \(P(a,b)\) is a 2-polynomial in the variable \(b\). By Lemma 3, \(\mathrm{N}(a)=2^{k}\) for some \(1\leq k\leq 4\). By Lemma 2, we know that \(\dim(\mathcal{E}_{g})\) and \(n\) have the same parity. Hence, we have \(\mathrm{N}(a)\in\{2^{2},2^{4}\}\) when \(n\) is even, and \(\mathrm{N}(a)\in\{2^{1},2^{3}\}\) when \(n\) is odd.
Next, we will consider the cases according to the parity of \(n\) to determine the distribution of \(N(a)\), i.e., the distribution of \(\dim(\mathcal{E}_{D_{a}f})\).
**Case 1:**\(n\) is even. In this case, \(\mathrm{N}(a)\in\{2^{2},2^{4}\}\); it suffices to count the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) where \(\mathrm{N}(a)=16\). Note that the degree of \(Q(a,b)\), for any fixed \(a\in\mathbb{F}_{2^{n}}^{*}\), is 8. So we have \(\mathrm{N}_{1}(a)\leq 8\) and \(\mathrm{N}_{2}(a)\leq 8\). Hence, \(\mathrm{N}(a)=16\) if and only if \(\mathrm{N}_{1}(a)=\mathrm{N}_{2}(a)=8\).
For even \(n\), we have \(\gcd(2^{n}-1,3)=3\) and \(\gcd(2^{n}-2,3)=1\) since \(2^{n}\equiv 1\pmod{3}\). Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\}\) be the multiplicative group of order \(\frac{2^{n}-1}{3}\), where \(g\) is a primitive element of \(\mathbb{F}_{2^{n}}^{*}\). If \(a^{7}\notin G\), there is no solution to (7), which implies that \(\mathrm{N}_{1}(a)=2\). If \(a^{7}\in G\), letting \(a^{7}=g^{3s}\), where \(0\leq s\leq\frac{2^{n}-4}{3}\), we have
\[b^{2}+b=g^{-s+\frac{(2^{n}-1)i}{3}}, \tag{8}\]
for \(i=0,1,2\). If \(\mathrm{N}_{1}(a)=8\), then (8) must have 2 solutions for each \(i=0,1,2\). As a result, \(\mathrm{tr}_{n}(g^{-s+\frac{(2^{n}-1)i}{3}})=0\) must hold for each \(i\). (It is known that \(x^{2}+x=b\) has two solutions if and only if \(\mathrm{tr}_{n}(b)=0\), for instance, see the theorem in [1, page 536].)
Let \(c=g^{-s}\) and \(d=g^{\frac{2^{n}-1}{3}}\). We have \(g^{2^{n}-1-s}=c\), \(g^{\frac{2^{n}-1}{3}-s}=cd\) and \(g^{\frac{2(2^{n}-1)}{3}-s}=cd^{2}\). Furthermore, we have
\[\begin{split}&\mathrm{tr}_{n}(c)+\mathrm{tr}_{n}(cd)+\mathrm{tr}_{n}(cd^{2})\\ &=\mathrm{tr}_{n}(c(1+d+d^{2}))\\ &=\mathrm{tr}_{n}(c(1+d+d^{2})(1+d)(1+d)^{-1})\\ &=\mathrm{tr}_{n}(c(1+d^{3})(1+d)^{-1})\\ &=0,\end{split}\]
since \(d^{3}=g^{2^{n}-1}=1\). In other words,
\[\mathrm{tr}_{n}(g^{2^{n}-1-s})+\mathrm{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})+ \mathrm{tr}_{n}(g^{2\frac{2^{n}-1}{3}-s})=0 \tag{9}\]
always holds for any \(0\leq s\leq\frac{2^{n}-4}{3}\). By (9), there are two possibilities:
* \(\mathrm{tr}_{n}(g^{2^{n}-1-s})=\mathrm{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})= \mathrm{tr}_{n}(g^{2\frac{2^{n}-1}{3}-s})=0\),
* \(\mathrm{tr}_{n}(g^{\frac{(2^{n}-1)i_{1}}{3}-s})=\mathrm{tr}_{n}(g^{\frac{(2^{n} -1)i_{2}}{3}-s})=1\) and \(\mathrm{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0\) for distinct \(i_{1},i_{2},i_{3}\in\{0,1,2\}\).
To proceed, we consider the following two subcases.
**Subcase 1.1**.: \(3\nmid n\) and \(n\) is even. In this case, \(\gcd(2^{n}-1,7)=1\), so the function \(a\mapsto a^{7}\) is a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(N_{1}(a)=8\) is exactly the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\mathrm{tr}_{n}(g^{-s+\frac{(2^{n}-1)i}{3}})=0\) for \(i=0,1,2\}\).
Denote by \(s_{1}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{2^{n}-1-s})=\operatorname{tr}_{n}(g^{\frac{2^{n}-1}{3}-s})=\operatorname{tr}_{n}(g^{\frac{2(2^{n}-1)}{3}-s})=0\}\), and by \(s_{2}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{1}}{3}-s})=\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{2}}{3}-s})=1\text{ and }\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0\text{ for distinct }i_{1},i_{2},i_{3}\in\{0,1,2\}\}\). Observe that \(\operatorname{wt}(\operatorname{tr}_{n}(x))=2^{n-1}\) because \(\operatorname{tr}_{n}(x)\) is a nonzero linear (hence balanced) function, and the set \(\{g^{-s+\frac{(2^{n}-1)i}{3}}\mid 0\leq s\leq\frac{2^{n}-4}{3},0\leq i\leq 2\}\) is exactly \(\mathbb{F}_{2^{n}}^{*}\). So we have
\[\begin{cases}3(s_{1}+s_{2})=2^{n}-1\\ 2s_{2}=2^{n-1},\end{cases} \tag{10}\]
where \(2s_{2}=\operatorname{wt}(\operatorname{tr}_{n}(x))=2^{n-1}\) because each \(s\) counted by \(s_{2}\) contributes exactly two trace-one elements. Solving equations (10), we have \(s_{1}=\frac{2^{n-2}-1}{3}\) and \(s_{2}=2^{n-2}\). Thus the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=8\) is \(\frac{2^{n-2}-1}{3}\). Therefore, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=16\) is \(\frac{2^{n-2}-1}{3}\), and the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=4\) is \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\).
**Subcase 1.2**. \(3\mid n\) and \(n\) is even. In this case, we have \(7\mid 2^{n}-1\); thus \(a\mapsto a^{7}\) is a 7-to-1 mapping from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). So \(\{a^{7}\mid a^{7}\in G\}=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\text{ and }7\mid s\}\). Denote by \(s_{1}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{-s+\frac{(2^{n}-1)i}{3}})=0\text{ for all }i=0,1,2,\text{ and }7\mid s\}\), and by \(s_{2}\) the size of the set \(\{0\leq s\leq\frac{2^{n}-4}{3}\mid\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{1}}{3}-s})=\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{2}}{3}-s})=1\text{ and }\operatorname{tr}_{n}(g^{\frac{(2^{n}-1)i_{3}}{3}-s})=0\text{ for distinct }i_{1},i_{2},i_{3}\in\{0,1,2\},\text{ and }7\mid s\}\). One can verify that
\[\begin{cases}s_{1}+s_{2}=\frac{2^{n}-1}{21},\\ 14s_{2}=\operatorname{wt}(\operatorname{tr}_{n}(x^{7})).\end{cases} \tag{11}\]
Solving equations (11), we have \(s_{1}=\frac{2^{n}-1}{21}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{14}\) and \(s_{2}=\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{14}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=8\) is \(7s_{1}=\frac{2^{n}-1}{3}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\). The number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=16\) is \(\frac{2^{n}-1}{3}-\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\), and the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=4\) is \(\frac{2}{3}(2^{n}-1)+\frac{\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))}{2}\).
**Case 2:**\(n\) is odd. In this case, we have \(\operatorname{N}(a)\in\{2^{1},2^{3}\}\); it suffices to count the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}(a)=8\). For odd \(n\), we have \(3\mid(2^{n}-2)\) and \(\gcd(3,2^{n}-1)=1\). So \(a\mapsto a^{3}\) is a bijection in \(\mathbb{F}_{2^{n}}^{*}\). By (7), we have
\[b^{2}+b=(a^{7})^{\frac{2^{n}-2}{3}}. \tag{12}\]
When \(b\not\in\{0,1\}\), equation (12) has two distinct solutions if and only if \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=0\). Hence, the number of solutions of \(Q(a,b)=0\) is at most 4, i.e., \(\operatorname{N}_{1}(a)\leq 4\).
Note that \(Q(a,b)\) is a 2-polynomial (in variable \(b\)) of degree 8 and \(b=0,1\) are two roots of \(Q(a,b)=0\). So we have \(\operatorname{N}_{1}(a)\in\{2,2^{2}\}\). By Lemma 4, for odd \(n\), the number of distinct \(b^{2}+b\) satisfying \(Q(a,b)+1=0\) is 0 or 2. So the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(Q(a,b)+1=0\) is \(0,2,4\), that is, \(\operatorname{N}_{2}(a)\in\{0,2,4\}\). Thus \(\operatorname{N}(a)=\operatorname{N}_{1}(a)+\operatorname{N}_{2}(a)=8\) if and only if \(\operatorname{N}_{1}(a)=4\).
**Subcase 2.1**: \(3\nmid n\) and \(n\) is odd. In this case, we have \(\gcd(2^{n}-1,7)=1\), so the mapping \(a\mapsto a^{7}\) is a bijection from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{n}}\). Since \(\gcd(2^{n}-1,\frac{2^{n}-2}{3})=1\), the mapping \(a\mapsto a^{\frac{2^{n}-2}{3}}\) is also a bijection from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). Note that \(\operatorname{N}_{1}(a)=4\) if and only if \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=0\). As such, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=4\) equals the size of the set \(\{x\in\mathbb{F}_{2^{n}}^{*}\mid\operatorname{tr}_{n}(x)=0\}\). Since \(\operatorname{tr}_{n}(x)\) is a nonzero linear function, the number of \(x\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}(x)=0\) is \(2^{n-1}-1\). Thus the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{N}_{1}(a)=4\) equals \(2^{n-1}-1\).
**Subcase 2.2**: \(3\mid n\) and \(n\) is odd. In this case, we have \(\gcd(2^{n}-1,2^{n}-2)=1\). So the mapping \(a\mapsto a^{\frac{2^{n}-2}{3}}\) is a bijection from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). Since \(\gcd(2^{n}-1,7)=7\), the mapping \(a\mapsto a^{7}\) is a 7-to-1 mapping from \(\mathbb{F}_{2^{n}}^{*}\) to \(\mathbb{F}_{2^{n}}^{*}\). Hence, the number of \(a\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}((a^{7})^{\frac{2^{n}-2}{3}})=\operatorname{tr}_{n}((a^{\frac{2^{n}-2}{3}})^{7})=0\) equals the number of \(c\in\mathbb{F}_{2^{n}}^{*}\) such that \(\operatorname{tr}_{n}(c^{7})=0\), which is \(2^{n}-1-\operatorname{wt}(\operatorname{tr}_{n}(x^{7}))\). This completes the proof.
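The distribution in Table 3 can be verified by brute force for small \(n\). The sketch below (our own illustration; it assumes the irreducible polynomial \(x^{5}+x^{2}+1\) for \(\mathrm{GF}(2^{5})\)) computes \(\dim(\mathcal{E}_{D_{a}f})\) for every \(a\in\mathbb{F}_{2^{5}}^{*}\) directly from Definition 1; since \(n=5\) is odd with \(3\nmid n\), the expected outcome is dimension 1 for \(2^{n-1}=16\) values of \(a\) and dimension 3 for \(2^{n-1}-1=15\) values.

```python
# Theorem 6 for n = 5, checked exhaustively.
from collections import Counter

N, MOD = 5, 0b100101                         # GF(2^5) from x^5 + x^2 + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def tr(x):
    # absolute trace x + x^2 + ... + x^(2^(n-1)); the value lies in {0, 1}
    t = 0
    for _ in range(N):
        t ^= x
        x = mul(x, x)
    return t

f = [tr(power(x, 7)) for x in range(1 << N)]
dims = Counter()
for a in range(1, 1 << N):
    Daf = [f[x] ^ f[x ^ a] for x in range(1 << N)]
    size = sum(len({Daf[x] ^ Daf[x ^ b] for x in range(1 << N)}) == 1
               for b in range(1 << N))       # size of the linear kernel
    dims[size.bit_length() - 1] += 1
print(dims)                                  # expected: Counter({1: 16, 3: 15})
```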
By Lemma 1 and Theorem 6, the following corollary is immediate.
**Corollary 1**.: _Let \(f=\mathrm{tr}_{n}(x^{7})\). Denote by \(\mathrm{nl}(D_{a}f)\) the nonlinearity of \(D_{a}f\). The distribution of \(\mathrm{nl}(D_{a}f)\) over \(a\in\mathbb{F}_{2^{n}}^{*}\) is given in Table 4._
**Theorem 7**.: _(The Weil bound, for example, Theorem 5.38 in [15]) Let \(f\in\mathbb{F}_{q}[x]\) be of degree \(d\geq 1\), where \(\gcd(d,q)=1\). Let \(\mathcal{X}\) be a nontrivial additive character of \(\mathbb{F}_{q}\). Then_
\[\left|\sum_{x\in\mathbb{F}_{q}}\mathcal{X}(f(x))\right|\leq(d-1)q^{\frac{1}{2}}.\]
**Lemma 5**.: _Let \(d\geq 1\) be an odd number. We have \(\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))\geq 2^{n-1}-\frac{d-1}{2}\cdot 2^{\frac{n}{2}}\)._
Proof.: Let \(\mathcal{X}(x)=e^{\frac{2\pi i\mathrm{tr}_{n}(x)}{p}}=(-1)^{\mathrm{tr}_{n}(x)}\) for \(p=2\). Applying the Weil bound, i.e., Theorem 7, we have
\[\left|\sum_{x\in\mathbb{F}_{2^{n}}}\mathcal{X}(x^{d})\right| = \left|\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{tr}_{n}(x^{d})}\right|\] \[\leq (d-1)2^{\frac{n}{2}}.\]
Since \(\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))=2^{n-1}-\frac{1}{2}\mid\sum_{x\in\mathbb{ F}_{2^{n}}}(-1)^{\mathrm{tr}_{n}(x^{d})}\mid\), we have
\[\mathrm{wt}(\mathrm{tr}_{n}(x^{d}))\geq 2^{n-1}-\frac{d-1}{2}\cdot 2^{\frac{n}{2}}.\]
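Lemma 5 is likewise easy to check numerically on a small field. The sketch below (ours, assuming the irreducible polynomial \(x^{6}+x+1\)) evaluates \(\operatorname{wt}(\operatorname{tr}_{6}(x^{7}))\) by brute force and compares it with the bound for \(d=7\).

```python
# Lemma 5 for d = 7 on GF(2^6): wt(tr_n(x^7)) >= 2^(n-1) - 3 * 2^(n/2).
N, MOD = 6, 0b1000011

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def tr(x):
    t = 0
    for _ in range(N):
        t ^= x
        x = mul(x, x)
    return t

d = 7
wt = sum(tr(power(x, d)) for x in range(1 << N))
bound = 2 ** (N - 1) - (d - 1) // 2 * 2 ** (N // 2)
assert wt >= bound
print(f"wt(tr_{N}(x^{d})) = {wt} >= {bound}")
```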
Now we are ready to prove Theorem 1, which gives a lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{7})\).
\begin{table}
\begin{tabular}{c|c|c|c} \hline \(n\) & & \(\mathrm{nl}(D_{a}f)\) & The number of \(a\in\mathbb{F}_{2^{n}}^{*}\) \\ \hline \multirow{4}{*}{even \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & \(2^{n-1}-2^{\frac{n}{2}}\) & \(\frac{11}{3}\cdot 2^{n-2}-\frac{2}{3}\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+2}{2}}\) & \(\frac{1}{3}\cdot 2^{n-2}-\frac{1}{3}\) \\ \cline{2-4} & \multirow{2}{*}{\(3\mid n\)} & \(2^{n-1}-2^{\frac{n}{2}}\) & \(\frac{2}{3}(2^{n}-1)+\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+2}{2}}\) & \(\frac{1}{3}(2^{n}-1)-\frac{1}{2}\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \multirow{4}{*}{odd \(n\)} & \multirow{2}{*}{\(3\nmid n\)} & \(2^{n-1}-2^{\frac{n-1}{2}}\) & \(2^{n-1}\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+1}{2}}\) & \(2^{n-1}-1\) \\ \cline{2-4} & \multirow{2}{*}{\(3\mid n\)} & \(2^{n-1}-2^{\frac{n-1}{2}}\) & \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \cline{3-4} & & \(2^{n-1}-2^{\frac{n+1}{2}}\) & \(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\) \\ \hline \end{tabular}
\end{table}
Table 4: The distribution of \(\mathrm{nl}(D_{a}f)\)
Proof.: (of Theorem 1) By Proposition 1 and Corollary 1, when \(n\) is even and \(3\nmid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2^{n}}} \mathrm{nl}(D_{a}f)}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2((2^{n-1}-2^{\frac{n}{2}})( \frac{11}{3}\cdot 2^{n-2}-\frac{2}{3})+(2^{n-1}-2^{\frac{n+2}{2}})(\frac{1}{3} \cdot 2^{n-2}-\frac{1}{3}))}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^ {n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Similarly, when \(n\) is even and \(3\mid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{\frac{1}{3}\cdot 2^{\frac{3}{2}n+3}+2^ {n}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}-\mathrm{wt}(\mathrm{tr}_{n}(x^{7})) \cdot 2^{\frac{n}{2}}}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{\frac{13}{3}\cdot 2^{\frac{3}{2}n-1}+2^ {n+2}-\frac{1}{3}\cdot 2^{\frac{n}{2}+3}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{3}{2}+\frac{1}{2}\log_{2}13-\frac {1}{2}\log_{2}3}-O(2^{\frac{n}{4}}),\]
where the second step is because \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\geq 2^{n-1}-3\cdot 2^{\frac{n}{2}}\) by Lemma 5.
By Proposition 1 and Corollary 1, for odd \(n\) and \(3\nmid n\) we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2^{n}}} \mathrm{nl}(D_{a}f)}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2((2^{n-1}-2^{\frac{n-1}{2}})(2^ {n-1})+(2^{n-1}-2^{\frac{n+1}{2}})(2^{n-1}-1))}\] \[= 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+1}{2}}+2^{\frac{3n-1}{2}}+2 ^{n}-2^{\frac{n+3}{2}}}\] \[\geq 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}).\]
Similarly, when \(n\) is odd and \(3\mid n\), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n+3}{2}}+2^{n}-2^{\frac{n+3}{ 2}}-\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\cdot 2^{\frac{n+1}{2}}}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{3\cdot 2^{\frac{3n-1}{2}}+2^{n}+3 \cdot 2^{n+\frac{1}{2}}-2^{\frac{n+3}{2}}}\] \[\geq 2^{n-1}-2^{\frac{3n-5}{4}+\frac{1}{2}\log_{2}3}-O(2^{\frac{n}{4}}),\]
where the second step is because \(\mathrm{wt}(\mathrm{tr}_{n}(x^{7}))\geq 2^{n-1}-3\cdot 2^{\frac{n}{2}}\) by Lemma 5.
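For concreteness, the exact pre-asymptotic expression in the first display of the proof can be evaluated numerically; the snippet below (our own illustration) prints the resulting lower bounds for a few even \(n\) with \(3\nmid n\).

```python
# nl_2(tr_n(x^7)) >= 2^(n-1) - sqrt(13/3 * 2^(3n/2 - 1) + 2^n - 2^(n/2 + 3)/3)/2
import math

for n in (8, 10, 14, 16):
    inner = 13 / 3 * 2 ** (1.5 * n - 1) + 2 ** n - 2 ** (n / 2 + 3) / 3
    print(f"n = {n:2d}: nl_2 >= {2 ** (n - 1) - math.sqrt(inner) / 2:.1f}")
```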
### Functions of the type \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\)
In [20], Yan and Tang proved lower bounds on the second-order nonlinearity of the functions \(\mathrm{tr}_{n}(x^{2^{r+1}+3})\), where \(n=2r\). This class of functions was first studied by Cusick and Dobbertin [1]. We study a similar, but different, class of functions, that is, \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\). In terms of techniques, our proof is similar to [20], and the lower bound is the same as that in [20]. Our main contribution is to _identify_ this class of functions for the first time.
Let \(f=\mathrm{tr}_{n}(x^{2^{r}+3})\). By Proposition 1, we can estimate the second-order nonlinearity \(\mathrm{nl}_{2}(f)\) by calculating the nonlinearity of the derivatives of \(f\), denoted by \(D_{a}f\). We have
\[D_{a}f(x) = \mathrm{tr}_{n}(x^{2^{r}+3}+(x+a)^{2^{r}+3})\] \[= \mathrm{tr}_{n}(a^{2^{r}}x^{3}+a^{2}x^{2^{r}+1}+ax^{2^{r}+2})+ \mathrm{tr}_{n}(a^{3}x^{2^{r}}+a^{2^{r}+1}x^{2}+a^{2^{r}+2}x+a^{2^{r}+3}),\]
where \(\mathrm{tr}_{n}(a^{3}x^{2^{r}}+a^{2^{r}+1}x^{2}+a^{2^{r}+2}x+a^{2^{r}+3})\) is an affine function.
**Theorem 8**.: _Let \(\mathcal{E}_{D_{a}f}\) be the linear kernel of \(D_{a}f(x)\). For odd \(r\), we have_
\[\dim(\mathcal{E}_{D_{a}f})=\begin{cases}r+1,&a\in\mathbb{F}_{2^{r}}^{*},\\ 2,&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}.\end{cases}\]
_Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\), where \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). For even \(r\), we have_
\[\dim(\mathcal{E}_{D_{a}f})=\begin{cases}r+2,&a\in G,\\ r,&a\in\mathbb{F}_{2^{r}}^{*}\setminus G,\\ 2,&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}.\end{cases}\]
Proof.: Let \(g_{a}(x)=\mathrm{tr}_{n}(a^{2^{r}}x^{3}+a^{2}x^{2^{r}+1}+ax^{2^{r}+2})\). By the definition of the linear kernel, we have
\[\mathcal{E}_{D_{a}f}=\mathcal{E}_{g_{a}}=\{x\in\mathbb{F}_{2^{n}}\mid B(x,y)= g_{a}(0)+g_{a}(x)+g_{a}(y)+g_{a}(x+y)=0,\text{ for all }y\in\mathbb{F}_{2^{n}}\}.\]
Using the properties of the trace function and the fact that \(n=2r\), we have
\[0 = B(x,y) \tag{13}\] \[= g_{a}(0)+g_{a}(x)+g_{a}(y)+g_{a}(x+y)\] \[= \mathrm{tr}_{n}(a^{2^{r}}(x^{2}y+xy^{2})+a(x^{2^{r}}y^{2}+x^{2}y^ {2^{r}})+a^{2}(x^{2^{r}}y+xy^{2^{r}}))\] \[= \mathrm{tr}_{n}((a^{2^{r}}x^{2}+a^{2}x^{2^{r}})y+(a^{2^{r}}x+ax^{ 2^{r}})y^{2}+(ax^{2}+a^{2}x)y^{2^{r}})\] \[= \mathrm{tr}_{n}((a^{2^{r}}x^{2}+a^{2}x^{2^{r}}+a^{2^{r-1}}x^{2^{ n-1}}+a^{2^{n-1}}x^{2^{r-1}}+a^{2^{r}}x^{2^{r+1}}+a^{2^{r+1}}x^{2^{r}})y).\]
Equation (13) holds for all \(y\in\mathbb{F}_{2^{n}}\) if and only if the coefficient of \(y\) is zero, that is,
\[a^{2^{r}}x^{2}+a^{2}x^{2^{r}}+a^{2^{r-1}}x^{2^{n-1}}+a^{2^{n-1}}x^{2^{r-1}}+a^ {2^{r}}x^{2^{r+1}}+a^{2^{r+1}}x^{2^{r}}=0. \tag{14}\]
Let \(\begin{cases}y=x^{2^{r}}\\ b=a^{2^{r}}\end{cases}\). Thus \(\begin{cases}x=y^{2^{r}}\\ a=b^{2^{r}}\end{cases}\). Equation (14) becomes
\[bx^{2}+a^{2}y+b^{\frac{1}{2}}x^{2^{n-1}}+a^{2^{n-1}}y^{\frac{1}{2}}+by^{2}+b^ {2}y=0.\]
Squaring both sides of the above equation, we have
\[0 = b^{2}x^{4}+a^{4}y^{2}+bx+ay+b^{2}y^{4}+b^{4}y^{2} \tag{15}\] \[= b^{2}(x+y)^{4}+y^{2}(a^{4}+b^{4})+bx+ay\] \[= a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})+a^ {2^{r}}x+ax^{2^{r}}. \tag{16}\]
Thus \(\mathcal{E}_{D_{a}f}\) is the set of \(x\in\mathbb{F}_{2^{n}}\) such that (15) is satisfied. We consider the following cases.
**Case 1**: \(a\notin\mathbb{F}_{2^{r}}\), i.e., \(a\neq b\).
**Subcase 1.1**: \(x\in\mathbb{F}_{2^{r}}\), i.e. \(x=y\). In this case, (15) is equivalent to
\[0 = (a^{4}+b^{4})x^{2}+(a+b)x \tag{17}\] \[= (a+b)x((a+b)^{3}x+1).\]
The solutions to (17) are \(x\in\{0,(a+b)^{2^{n}-4}\}\).
**Subcase 1.2**: \(x\notin\mathbb{F}_{2^{r}}\). Since \((a^{2^{r}}x+ax^{2^{r}})^{2^{r}}=a^{2^{r}}x+ax^{2^{r}}\), we have \(a^{2^{r}}x+ax^{2^{r}}\in\mathbb{F}_{2^{r}}\). From (16), we have
\[a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})=a^{2^{r}}x+ax^{2^ {r}},\]
which implies that \(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}})\in\mathbb{F}_{2^{r}}\). Since any element \(\alpha\in\mathbb{F}_{2^{r}}\) satisfies equation \(\alpha^{2^{r}}=\alpha\), we have
\[0 = \left(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+2}}) \right)^{2^{r}}+\left(a^{2^{r+1}}(x^{4}+x^{2^{r+2}})+x^{2^{r+1}}(a^{4}+a^{2^{r+ 2}})\right) \tag{18}\] \[= (a^{2}+a^{2^{r+1}})(x^{2}+x^{2^{r+1}})^{2}+(x^{2}+x^{2^{r+1}})(a^{ 2}+a^{2^{r+1}})^{2}\] \[= (a^{2}+a^{2^{r+1}})(x^{2}+x^{2^{r+1}})(a^{2}+a^{2^{r+1}}+x^{2}+x^{ 2^{r+1}}).\]
Since \(a,x\notin\mathbb{F}_{2^{r}}\), we have \(a^{2}+a^{2^{r+1}}\neq 0\) and \(x^{2}+x^{2^{r+1}}\neq 0\). Thus, by (18), we have \(a^{2}+a^{2^{r+1}}+x^{2}+x^{2^{r+1}}=0\), that is, \((x+a)^{2}=(x+a)^{2^{r+1}}\). As such, we have \(x+a=(x+a)^{2^{r}}\), that is, \(y=x+a+b\). So we claim that if \(x\in\mathcal{E}_{D_{a}f}\), then we must have \(y=x+a+b\). On the other hand, let us solve (15) assuming \(y=x+a+b\) is satisfied, which, in fact, must be satisfied, as we have shown. Plugging \(y=x+a+b\) into equation (15), we have
\[0 = b^{2}(a+b)^{4}+(x+a+b)^{2}(a+b)^{4}+bx+a(x+a+b)\] \[= (a+b)^{4}x^{2}+a^{2}(a+b)^{4}+(a+b)x+a(a+b)\] \[= (a+b)^{4}(x+a)^{2}+(a+b)(x+a)\] \[= (a+b)(x+a)((a+b)^{3}(x+a)+1),\]
which implies that \(x=a\) or \(x=(a+b)^{2^{n}-4}+a\).
In Case 1 where \(a\notin\mathbb{F}_{2^{r}}\), we conclude that \(\mathcal{E}_{D_{a}f}=\{0,(a+b)^{2^{n}-4},a,(a+b)^{2^{n}-4}+a\}\) and \(\dim(\mathcal{E}_{D_{a}f})=2\).
**Case 2**: \(a\in\mathbb{F}_{2^{r}}^{*}\). In this case, \(b=a\), equation (15) becomes
\[0 = a^{2}(x+y)^{4}+a(x+y) \tag{19}\] \[= a(x+y)(a(x+y)^{3}+1),\]
which implies that \(y=x\) or \((x+y)^{3}=a^{2^{r}-2}\).
**Subcase 2.1**: \(y=x\), i.e., \(x^{2^{r}}=x\). In this case, \(x^{2^{r}}=x\) if and only if \(x\in\mathbb{F}_{2^{r}}\). Thus \(\mathbb{F}_{2^{r}}\subseteq\mathcal{E}_{D_{a}f}\).
**Subcase 2.2**: \((x+y)^{3}=a^{2^{r}-2}\). In this case, we consider the following two subcases according to the parity of \(r\).
* If \(r\) is odd, we have \(2^{r}-1\equiv 1\pmod{3}\) and \(\gcd(2^{r}-2,3)=3\). Thus \((x+y)^{3}=a^{2^{r}-2}\) implies \[x^{2^{r}}+x=a^{\frac{2^{r}-2}{3}}.\] (20) Since \(\mathbb{F}_{2^{n}}\) is a field extension of \(\mathbb{F}_{2^{r}}\) of degree \(2\), \(x^{2^{r}}+x\) is the trace function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{r}}\), which is a \(2^{r}\)-to-\(1\) mapping. So the number of solutions to (20) is \(2^{r}\). Combining with Subcase 2.1, we conclude that when \(r\) is odd and \(a\in\mathbb{F}_{2^{r}}^{*}\), we have \(\dim(\mathcal{E}_{D_{a}f})=r+1\).
* If \(r\) is even, we have \(\gcd(2^{r}-1,3)=3\). Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\) be the multiplicative group of order \(\frac{2^{r}-1}{3}\) and \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). If \(a\notin G\), then \(a^{2^{r}-2}\) is not a cube, that is, \((x+y)^{3}=a^{2^{r}-2}=a^{-1}\) has no roots. Combining with Subcase 2.1, we deduce that \(\dim(\mathcal{E}_{g_{a}})=r\) when \(a\notin G\) and \(r\) is even. If \(a\in G\), we have \[x+y=x^{2^{r}}+x=g^{-s+\frac{2^{r}-1}{3}i},\text{ for }i=0,1,2,\] (21) for some \(0\leq s\leq\frac{2^{r}-4}{3}\). Similarly, we can prove, for each \(i=0,1,2\), equation (21) has exactly \(2^{r}\) solutions. Thus we have \(\dim(\mathcal{E}_{g_{a}})=r+2\) when \(a\in G\) and \(r\) is even.
Summarizing all the cases above, we complete the proof.
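As with Theorem 6, the dimension distribution can be confirmed exhaustively on a small instance. The sketch below (ours, assuming \(x^{6}+x+1\) for \(\mathrm{GF}(2^{6})\)) checks the odd-\(r\) case with \(r=3\), where the expected outcome is dimension \(r+1=4\) for the \(2^{r}-1=7\) elements \(a\in\mathbb{F}_{2^{3}}^{*}\) and dimension 2 for the remaining 56 elements.

```python
# Theorem 8 with r = 3, n = 6: f = tr_6(x^11), checked exhaustively.
from collections import Counter

N, MOD = 6, 0b1000011                        # GF(2^6) from x^6 + x + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def tr(x):
    t = 0
    for _ in range(N):
        t ^= x
        x = mul(x, x)
    return t

f = [tr(power(x, (1 << 3) + 3)) for x in range(1 << N)]   # exponent 2^r + 3 = 11
dims = Counter()
for a in range(1, 1 << N):
    Daf = [f[x] ^ f[x ^ a] for x in range(1 << N)]
    size = sum(len({Daf[x] ^ Daf[x ^ b] for x in range(1 << N)}) == 1
               for b in range(1 << N))
    dims[size.bit_length() - 1] += 1
print(dims)                                  # expected: Counter({2: 56, 4: 7})
```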
Combining Proposition 1 and Theorem 8, we can prove the lower bound on the second-order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\).
Proof.: (of Theorem 2) When \(r\) is odd, by Lemma 1 and Theorem 8, we have
\[\mathrm{nl}(D_{a}f(x))=\begin{cases}2^{n-1}-2^{\frac{n+r-1}{2}},&a\in\mathbb{F }_{2^{r}}^{*},\\ 2^{n-1}-2^{\frac{n}{2}},&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}. \end{cases} \tag{22}\]
By Proposition 1 and (22), we have
\[\mathrm{nl}_{2}(f) \geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3n}{2}+1}+2^{\frac{5n}{4}+\frac {1}{2}}-2^{n}-2^{\frac{3n}{4}+\frac{1}{2}}}\] \[= 2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\]
When \(r\) is even, similarly, we can prove
\[\mathrm{nl}(D_{a}f(x))=\begin{cases}2^{n-1}-2^{\frac{n+r}{2}},&a\in G,\\ 2^{n-1}-2^{\frac{n+r}{2}-1},&a\in\mathbb{F}_{2^{r}}^{*}\setminus G,\\ 2^{n-1}-2^{\frac{n}{2}},&a\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{r}}. \end{cases} \tag{23}\]
where \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{r}-4}{3}\}\) is the multiplicative group of order \(\frac{2^{r}-1}{3}\) and \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\). By Proposition 1 and (23), we have
\[\begin{split}\mathrm{nl}_{2}(f)&\geq 2^{n-1}-\frac{1}{2}\sqrt{2^{\frac{3}{2}n+1}+\frac{1}{3}\cdot 2^{\frac{5}{4}n+2}-2^{n}-\frac{1}{3}\cdot 2^{\frac{3}{4}n+2}}\\ &=2^{n-1}-2^{\frac{3n}{4}-\frac{1}{2}}-O(2^{\frac{n}{2}}).\end{split}\]
## 4 Third-order nonlinearity
The following proposition is proved by applying Proposition 1 twice.
**Proposition 3**.: _[_1_]_ _Let \(f\) be any \(n\)-variable function and \(r\) a positive integer smaller than \(n\). We have_
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{\sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_{2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{b}f)}}.\]
By the above proposition, our goal is to estimate the nonlinearities of the second-order derivatives of \(\mathrm{tr}_{n}(x^{15})\). Observe that
\[\sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{b}f)} = \sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{a}D_{ab}f)}\] \[= \sum_{a\in\mathbb{F}_{2^{n}}}\sqrt{2^{2n}-2\sum_{b\in\mathbb{F}_ {2^{n}}}\mathrm{nl}_{r-2}(D_{ab}D_{a}f)}.\]
Thus it is equivalent to estimate the first-order nonlinearity of \(D_{ab}D_{a}f\) for all \(a,b\in\mathbb{F}_{2^{n}}\).
**Lemma 6**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\). For any \(a\in\mathbb{F}_{2^{n}}\) and \(b\in\mathbb{F}_{2^{n}}\), element \(x\in\mathbb{F}_{2^{n}}\) is in the linear kernel of \(D_{ab}D_{a}f\) if and only if \(P(x,a,b)=0\), where_
\[P(x,a,b)=Q(x,a,b)(Q(x,a,b)+1) \tag{24}\]
_and_
\[Q(x,a,b)=(b^{2}+b)^{-4}R(x,a,b)(R(x,a,b)+1) \tag{25}\]
_and_
\[R(x,a,b)=a^{30}(b^{2}+b)^{6}\left((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\right)^{4}+a ^{15}(b^{2}+b)^{5}\left((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\right). \tag{26}\]
Proof.: For any \(a,b\in\mathbb{F}_{2^{n}}\), we have
\[(D_{a}f)(ax) = \mathrm{tr}_{n}((ax)^{15})+\mathrm{tr}_{n}((ax+a)^{15})\] \[= \mathrm{tr}_{n}((ax)^{15}+(ax+a)^{15})\] \[= \mathrm{tr}_{n}(a^{15}(\sum_{i=0}^{14}x^{i})),\]
and
\[D_{b}((D_{a}f)(ax))\] \[= (D_{ab}D_{a}f)(ax)\] \[= \mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{12}+(b^{4}+b)x^{10}+(b^{4}+b^{ 2})x^{9}+(b^{8}+b)x^{6}+(b^{8}+b^{2})x^{5}+(b^{8}+b^{4})x^{3}))+l(x),\]
where \(l(x)\) is an affine function. By Proposition 2, we have \(\mathcal{E}_{D_{ab}D_{a}f(ax)}=\mathcal{E}_{D_{ab}D_{a}f}\) for any \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) and \(a\in\mathbb{F}_{2^{n}}^{*}\). (When \(a=0\) or \(b\in\{0,1\}\), \(D_{ab}D_{a}f\) becomes \(0\), so the conclusion holds obviously.)
For convenience, let \(g(x)=D_{ab}D_{a}f(ax)\). We have \(\mathcal{E}_{D_{ab}D_{a}f(ax)}=\{x\in\mathbb{F}_{2^{n}}\mid B(x,y)=g(0)+g(x)+g(y)+g(x+y)=0\text{ for all }y\in\mathbb{F}_{2^{n}}\}\) by definition. By a somewhat tedious computation, we have
\[B(x,y) = g(0)+g(x)+g(y)+g(x+y)\] \[= \mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2 })x)y^{8})\] \[+\mathrm{tr}_{n}(a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2 })x)y^{4})\] \[+\mathrm{tr}_{n}(a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4 })x)y^{2})\] \[+\mathrm{tr}_{n}(a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^ {8}+b^{4})x^{2})y).\]
Using the properties of the trace function, we have
\[B(x,y) = \mathrm{tr}_{n}(((a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{ 2})x))^{2^{-3}}\] \[+(a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2})x))^{2^{-2}}\] \[+(a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4})x))^{2^{-1}}\] \[+a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^{8}+b^{4})x^{2})) y).\]
It is clear that \(B(x,y)=0\) for all \(y\in\mathbb{F}_{2^{n}}\) if and only if the coefficient of \(y\) is zero, that is,
\[0 = (a^{15}((b^{2}+b)x^{4}+(b^{4}+b)x^{2}+(b^{4}+b^{2})x))^{2^{-3}}\] \[+ (a^{15}((b^{2}+b)x^{8}+(b^{8}+b)x^{2}+(b^{8}+b^{2})x))^{2^{-2}}\] \[+ (a^{15}((b^{4}+b)x^{8}+(b^{8}+b)x^{4}+(b^{8}+b^{4})x))^{2^{-1}}\] \[+ a^{15}((b^{4}+b^{2})x^{8}+(b^{8}+b^{2})x^{4}+(b^{8}+b^{4})x^{2}).\]
Raising both sides of the above equation to the 8th power, we get \(P(x,a,b)=0\), as desired.
Let \(\mathrm{N}_{P}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(P(x,a,b)=0\), where \(a\neq 0\) and \(b\neq 0,1\); let \(\mathrm{N}_{Q}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(Q(x,a,b)=0\); let \(\mathrm{N}_{Q+1}(a,b)\) denote the number of \(x\in\mathbb{F}_{2^{n}}\) such that \(Q(x,a,b)+1=0\); let \(\mathrm{N}_{R}(a,b)\) and \(\mathrm{N}_{R+1}(a,b)\) denote the numbers of \(x\in\mathbb{F}_{2^{n}}\) such that \(R(x,a,b)=0\) and \(R(x,a,b)+1=0\), respectively.
**Lemma 7**.: _Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), and let polynomials \(P(x,a,b)\), \(Q(x,a,b)\) and \(R(x,a,b)\) be defined as in Lemma 6. We have \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\), and \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\). In addition,_
* \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\) _and_ \(\mathrm{N}_{Q+1}(a,b)\leq 32\)_._
* \(\mathrm{N}_{R}(a,b)\in\{2^{2},2^{3},2^{4}\}\) _and_ \(\mathrm{N}_{R+1}(a,b)\leq 16\)_._
* _When_ \(n\) _is even,_ \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\)_; when_ \(n\) _is odd,_ \(\mathrm{N}_{P}(a,b)\in\{2^{1},2^{3},2^{5}\}\)_._
Proof.: Notice that \(P(x,a,b)\) is a \(2\)-polynomial (in variable \(x\)) of degree \(64\); the number of roots for equation \(P(x,a,b)=0\) is at most \(64\). By Lemma 2, we know that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\) and \(n\) have the same parity. Therefore, when \(n\) is even, \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\); when \(n\) is odd, \(\mathrm{N}_{P}(a,b)\in\{2^{1},2^{3},2^{5}\}\). (Note that, when \(b\notin\{0,1\}\), \(R(x,a,b)=0\) has at least \(4\) roots \(0,1,b,b+1\), which implies that \(P(x,a,b)=0\) has at least \(4\) roots.)
Since \(P(x,a,b)=Q(x,a,b)(Q(x,a,b)+1)\), we have \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\). Observe that \(Q(x,a,b)\) is a \(2\)-polynomial of degree \(32\), we have \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\).
From (25), we have \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\), when \(b\notin\{0,1\}\). Clearly, \(R(x,a,b)\) is a \(2\)-polynomial of degree \(16\) in variable \(x\). Note that \(x=0,1,b,b+1\) are the four different roots whenever \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), then \(\mathrm{N}_{R}(a,b)\in\{2^{2},2^{3},2^{4}\}\). On the other hand, the degree of \(R(x,a,b)+1\) is \(16\), so \(\mathrm{N}_{R+1}(a,b)\leq 16\). Since \(Q(x,a,b)\) is a \(2\)-polynomial of degree \(32\), we have \(\mathrm{N}_{Q}(a,b)\in\{2^{2},2^{3},2^{4},2^{5}\}\). In the following, we lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)\leq 16\) for a fixed \(a\in\mathbb{F}_{2^{n}}^{*}\).
**Lemma 8**.: _Let \(n\) be even. Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\). If \(\mathrm{N}_{R}(a,b)=4\), then \(\mathrm{N}_{P}(a,b)\leq 16\)._
Proof.: Since \(\mathrm{N}_{R}(a,b)=4\), we have \(\mathrm{N}_{Q}(a,b)\leq 4+\deg(R+1)=20\). Note that \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\in\{2^{2},2^{3},2^{4},2^{5}\}\) by Lemma 7. So we have \(\mathrm{N}_{Q}(a,b)\leq 16\).
Note that \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\), and \(\mathrm{N}_{P}(a,b)\in\{2^{2},2^{4},2^{6}\}\) by Lemma 7. So we have \(\mathrm{N}_{P}(a,b)\leq 16\).
By Lemma 8, when \(n\) is even, to lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) where \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 4\), it suffices to lower bound the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) where \(\mathrm{N}_{R}(a,b)=4\).
**Theorem 9**.: _Let \(n\) be even. For any \(a\in\mathbb{F}_{2^{n}}^{*}\), there are at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{R}(a,b)=4\)._
Proof.: When \(x\notin\{0,1,b,b+1\}\), we have \((x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\neq 0\). Let \(y=(x^{2}+x)^{2}+(x^{2}+x)(b^{2}+b)\). Since \(R(x,a,b)=0\), we can deduce that
\[y^{3}=\frac{1}{a^{15}(b^{2}+b)}. \tag{27}\]
Let \(G=\{g^{3s}\mid 0\leq s\leq\frac{2^{n}-4}{3}\}\), where \(g\) is a primitive element. If \(b^{2}+b\not\in G\), it is clear that (27) has no solution, which implies that \(\mathrm{N}_{R}(a,b)=4\). Next, we prove there are at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) elements \(b\) such that \(b^{2}+b\not\in G\), which will complete our proof.
We estimate the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\notin G\). Let \(s_{1}\) denote the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\notin G\); let \(s_{2}\) denote the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(b^{2}+b\in G\).
Consider equation
\[x^{3}=b^{2}+b. \tag{28}\]
Observe that
* Equation (28) has (at least) a solution in variable \(x\) if and only if \(b^{2}+b\in G\).
* It is well known that (for example, see page 536 in [11]) equation \(b^{2}+b=c\) has two solutions if and only if \(\mathrm{tr}_{n}(c)=0\), otherwise the equation has no solution. Thus, equation (28) in variable \(b\) has a solution if and only if \(\mathrm{tr}_{n}(x^{3})=0\).
For any fixed \(b\), denote the set of solutions by \(X_{b}\), and let \(X=\cup_{b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}}X_{b}\). Consider the mapping \(\phi(x):X\to\mathbb{F}_{2^{n}}^{*}\), where \(\phi(x)=x^{3}\). Notice that \(\phi(x):\mathbb{F}_{2^{n}}^{*}\to\mathbb{F}_{2^{n}}^{*}\) is a 3-to-1 mapping on \(\mathbb{F}_{2^{n}}^{*}\). Furthermore, mapping \(\phi:X\to\mathbb{F}_{2^{n}}^{*}\) is also a 3-to-1 mapping on \(X\). Otherwise, there exist \(x_{1}\in X\), \(x_{2}\notin X\) such that \(x_{1}^{3}=x_{2}^{3}\) and \(\mathrm{tr}_{n}(x_{1}^{3})\neq\mathrm{tr}_{n}(x_{2}^{3})\), which is a contradiction.
Recall that there exists \(b\) such that \(x^{3}=b^{2}+b\) if and only if \(\mathrm{tr}_{n}(x^{3})=0\). Therefore, \(|X|=2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))\). Combining with the fact that \(\phi(x):X\to\mathbb{F}_{2^{n}}^{*}\) is a 3-to-1 mapping, we have
\[|\{b^{2}+b\mid b^{2}+b\in G,\ b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\}|=\frac{1}{3}\cdot(2^{n}-1-\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))).\]
Since \(b\mapsto b^{2}+b\) is a 2-to-1 mapping on \(\mathbb{F}_{2^{n}}\setminus\{0,1\}\), we have \(s_{2}=2\cdot|\{b^{2}+b:b^{2}+b\in G\}|=\frac{1}{3}\cdot(2^{n+1}-2-2\mathrm{wt}(\mathrm{tr}_{n}(x^{3})))\). So \(s_{1}=2^{n}-2-s_{2}=\frac{1}{3}\cdot(2^{n}-4+2\mathrm{wt}(\mathrm{tr}_{n}(x^{3})))\). By Lemma 5, we have \(\mathrm{wt}(\mathrm{tr}_{n}(x^{3}))\geq 2^{n-1}-2^{\frac{n}{2}}\), so \(s_{1}\geq\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\). Hence the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that (27) has no solution is at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\), as desired.
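The count in Theorem 9 can be confirmed exhaustively on a small field. The sketch below (ours, assuming \(x^{4}+x+1\) for \(\mathrm{GF}(2^{4})\)) transcribes \(R(x,a,b)\) from (26) and checks the stated bound for every \(a\in\mathbb{F}_{2^{4}}^{*}\).

```python
# Theorem 9 on GF(2^4): for each a != 0, count b outside {0,1} with N_R(a,b) = 4.
N, MOD = 4, 0b10011                          # x^4 + x + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def R(x, a, b):                              # equation (26)
    s = mul(b, b) ^ b                        # b^2 + b
    t = mul(x, x) ^ x                        # x^2 + x
    z = mul(t, t) ^ mul(t, s)                # (x^2+x)^2 + (x^2+x)(b^2+b)
    return (mul(power(a, 30), mul(power(s, 6), power(z, 4)))
            ^ mul(power(a, 15), mul(power(s, 5), z)))

bound = (2 ** (N + 1) - 2 ** (N // 2 + 1) - 4) / 3
for a in range(1, 1 << N):
    good = sum(sum(R(x, a, b) == 0 for x in range(1 << N)) == 4
               for b in range(2, 1 << N))
    assert good >= bound
print("Theorem 9 bound holds for all a on GF(2^4)")
```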
By Lemma 8 and Theorem 9, the following theorem is immediate.
**Theorem 10**.: _Let \(n\) be even and let \(f=\mathrm{tr}_{n}(x^{15})\). For any \(a\in\mathbb{F}_{2^{n}}^{*}\), the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 4\) is at least \(\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\), and the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})=6\) is at most \(\frac{1}{3}\cdot(2^{n}+2^{\frac{n}{2}+1}-2)\)._
**Lemma 9**.: _Let \(n\) be odd. Let \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\). If \(\mathrm{N}_{R}(a,b)=4\), then \(\mathrm{N}_{P}(a,b)\leq 8\)._
Proof.: Let \(n\) be odd and let \(\mathrm{N}_{R}(a,b)=4\). We will prove the following claims step by step:
* \(\mathrm{N}_{R+1}(a,b)\leq 8\).
* \(\mathrm{N}_{Q}(a,b)\leq 8\).
* \(\mathrm{N}_{Q+1}(a,b)\leq 16\).
* \(\mathrm{N}_{P}(a,b)\leq 8\).
First, let us prove \(\mathrm{N}_{R+1}(a,b)\leq 8\). If \(R(x,a,b)+1=0\), we have
\[\begin{split}0&=a^{30}(b^{2}+b)^{6}\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)}^{4}+a^{15}(b^{2}+b)^{5}\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)}+1\\ &=a^{30}(b^{2}+b)^{6}y^{4}+a^{15}(b^{2}+b)^{5}y+1,\end{split}\tag{29}\]
where \(y=(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\). Equation (29) can be converted to
\[y^{4}+\frac{y}{a^{15}(b^{2}+b)}+\frac{1}{a^{30}(b^{2}+b)^{6}}=0. \tag{31}\]
By Lemma 4, since \(n\) is odd, equation (31) in the variable \(y\) has no solution or exactly two solutions. Furthermore, since \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\) is a polynomial of degree 4, the number of \(x\in\mathbb{F}_{2^{n}}\) such that \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=y\), for any fixed \(y\), is at most 4. So equation (29) in the variable \(x\) has at most 8 solutions, that is, \(\mathrm{N}_{R+1}(a,b)\leq 8\).
Second, we prove \(\mathrm{N}_{Q}(a,b)\leq 8\). Note that \(\mathrm{N}_{Q}(a,b)=\mathrm{N}_{R}(a,b)+\mathrm{N}_{R+1}(a,b)\leq 12\). On the other hand, by Lemma 7, \(\mathrm{N}_{Q}(a,b)\in\{2,2^{2},2^{3},2^{4},2^{5}\}\). So we have \(\mathrm{N}_{Q}(a,b)\leq 8\).
Next, we prove \(\mathrm{N}_{Q+1}(a,b)\leq 16\). Suppose \(Q(x,a,b)+1=0\). We have \((b^{2}+b)^{4}Q(x,a,b)+(b^{2}+b)^{4}=0\), that is,
\[R(x,a,b)(R(x,a,b)+1)+(b^{2}+b)^{4}=0. \tag{32}\]
Viewing (32) as a quadratic equation in variable \(R\), we know that (32) has at most 2 solutions, denoted by \(c_{1},c_{2}\). We shall prove that, for each \(i=1,2\), \(R(x,a,b)=c_{i}\) has at most 8 solutions.
Let \(R(x,a,b)=c_{i}\) and let \(y=(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\), where \(c_{i}\in\mathbb{F}_{2^{n}}^{*}\). Then we have
\[a^{30}(b^{2}+b)^{6}y^{4}+a^{15}(b^{2}+b)^{5}y+c_{i}=0,\]
that is
\[y^{4}+\frac{y}{a^{15}(b^{2}+b)}+\frac{c_{i}}{a^{30}(b^{2}+b)^{6}}=0. \tag{33}\]
By Lemma 4, equation (33), in the variable \(y\), has no solution or exactly two solutions. Furthermore, since \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\) is a polynomial of degree 4, the number of \(x\in\mathbb{F}_{2^{n}}\) such that \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=y\) is at most 4 for each solution \(y\). In total, equation \(R(x,a,b)=c_{i}\) has at most 8 solutions. Thus, \(Q(x,a,b)+1=0\) has at most 16 solutions, that is, \(\mathrm{N}_{Q+1}(a,b)\leq 16\).
Finally, we prove \(\mathrm{N}_{P}(a,b)\leq 8\). Note that \(\mathrm{N}_{P}(a,b)=\mathrm{N}_{Q}(a,b)+\mathrm{N}_{Q+1}(a,b)\leq 24\). By Lemma 7, \(\mathrm{N}_{P}(a,b)\in\{2,8,32\}\). So we have \(\mathrm{N}_{P}(a,b)\leq 8\).
**Theorem 11**.: _Let \(n\geq 7\) be odd and let \(f=\mathrm{tr}_{n}(x^{15})\). For any \(a\in\mathbb{F}_{2^{n}}^{*}\), the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 3\) is at least \(3\cdot 2^{n-4}+10\), and the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})=5\) is at most \(13\cdot 2^{n-4}-12\)._
Proof.: Since \(n\) is odd, we have \(3\mid(2^{n}-2)\). When \(x\notin\{0,1,b,b+1\}\), we have \((x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\neq 0\). If \(R(x,a,b)=0\) for \(a\in\mathbb{F}_{2^{n}}^{*}\) and \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\), we have
\[a^{15}(b^{2}+b)\big{(}(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)\big{)}^{3}=1,\]
which is
\[(x^{2}+x)^{2}+(b^{2}+b)(x^{2}+x)=a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}}. \tag{34}\]
Multiplying \(\frac{1}{(b^{2}+b)^{2}}\) to both sides of (34), we get
\[\left(\frac{x^{2}+x}{b^{2}+b}\right)^{2}+\frac{x^{2}+x}{b^{2}+b}=a^{-5}(b^{2} +b)^{\frac{2^{n}-2}{3}-2}. \tag{35}\]
If \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\), then \(t^{2}+t=a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2}\) has no solution, where \(t=\frac{x^{2}+x}{b^{2}+b}\). So \(\mathrm{N}_{R}(a,b)=4\). Thus it suffices to lower bound the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\).
Let \(n=2r+1\). Note that \(\frac{2^{n}-2}{3}-2=\sum_{i=1}^{r-1}2^{n-2i}\). So we have
\[(b^{2}+b)^{\frac{2^{n}-2}{3}-2} = (b^{2}+b)^{\sum_{i=1}^{r-1}2^{n-2i}}\] \[= \prod_{i=1}^{r-1}(b^{2}+b)^{2^{n-2i}}\] \[= \prod_{i=1}^{r-1}(b^{2^{n-2i+1}}+b^{2^{n-2i}})\] \[= \sum_{d_{1},d_{2},\ldots,d_{r-1}\in\{0,1\}}b^{\sum_{i=1}^{r-1}2^{n -2i+d_{i}}}.\]
Expanding the trace function using its definition, we have
\[\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2}) \tag{36}\] \[= \mathrm{tr}_{n}(a^{-5}\sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}}})\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\mathrm{tr}_{n}(a^{-5}b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}}})\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5})^{2^ {j}}b^{\sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}\] \[= \sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5\cdot 2 ^{j}})b^{\sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}.\]
For convenience, let
\[h(b)=\sum_{d_{1},\ldots,d_{r-1}\in\{0,1\}}\sum_{j=0}^{n-1}(a^{-5\cdot 2^{j}})b^{ \sum_{i=1}^{r-1}2^{n-2i+d_{i}+j}}. \tag{37}\]
Next, we will analyze the highest and lowest degree terms of the polynomial \(h(b)\) as they are closely related to the number of roots of \(h(b)=0\).
**Lemma 10**.: _The maximum degree of \(h(b)\) is \(\frac{5}{3}\cdot 2^{n-1}-\frac{32}{3}\) for \(n\geq 6\)._
**Lemma 11**.: _The minimum degree among the monomials of \(h(b)\) is \(\frac{1}{3}\cdot(2^{n-4}+1)\), which implies \(h(b)=b^{\frac{1}{3}\cdot(2^{n-4}+1)}p(b)\), where \(b\nmid p(b)\)._
The proofs of these two lemmas can be found in Appendices A and B.
By Lemmas 10 and 11, the number of elements \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) for which \(h(b)=0\) is at most \(13\cdot 2^{n-4}-12\). Hence, the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(h(b)=1\) is at least \(3\cdot 2^{n-4}+10\).
Since the set of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) with \(\mathrm{tr}_{n}(a^{-5}(b^{2}+b)^{\frac{2^{n}-2}{3}-2})=1\) is exactly the solution set of \(h(b)=1\), the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\mathrm{N}_{P}(a,b)\leq 8\) is at least \(3\cdot 2^{n-4}+10\). That is, the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})\leq 3\) is at least \(3\cdot 2^{n-4}+10\), and the number of \(b\in\mathbb{F}_{2^{n}}\setminus\{0,1\}\) such that \(\dim(\mathcal{E}_{D_{ab}D_{a}f})=5\) is at most \(13\cdot 2^{n-4}-12\).
By Theorems 10 and 11, the following corollary is immediate.
**Corollary 2**.: _Let \(f=\mathrm{tr}_{n}(x^{15})\). Denote by \(\mathrm{nl}(D_{ab}D_{a}f)\) the nonlinearity of \(D_{ab}D_{a}f\). For any \(a\in\mathbb{F}_{2^{n}}^{*}\), the distribution of \(\mathrm{nl}(D_{ab}D_{a}f)\) over \(b\in\mathbb{F}_{2^{n}}\) is given in Table 7._
Now we are ready to prove Theorem 3, which gives a lower bound on the third-order nonlinearity of \(\operatorname{tr}_{n}(x^{15})\).
Proof.: (of Theorem 3) By Proposition 3 and Corollary 2, for even \(n\), we have
\[\begin{split}\operatorname{nl}_{3}(f)&\geq 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{2n}-2\left((2^{n-1}-2^{\frac{n}{2}+1})\cdot\frac{2^{n+1}-2^{\frac{n}{2}+1}-4}{3}+(2^{n-1}-2^{\frac{n}{2}+2})\cdot\frac{2^{n}+2^{\frac{n}{2}+1}-2}{3}\right)}+2^{n}}\\ &=2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{1}{3}\cdot 2^{\frac{3}{2}n+4}+\frac{7}{3}\cdot 2^{n+1}-\frac{1}{3}\cdot 2^{\frac{n}{2}+5}}+2^{n}}\\ &\geq 2^{n-1}-2^{\frac{7n}{8}-\frac{1}{4}\log_{2}3}-O(2^{\frac{3n}{8}}).\end{split}\]
By Proposition 3 and Corollary 2, when \(n\) is odd and \(n>6\), we have
\[\begin{split}\operatorname{nl}_{3}(f)&\geq 2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{2^{2n}-2\left((2^{n-1}-2^{\frac{n+1}{2}})(3\cdot 2^{n-4}+10)+(2^{n-1}-2^{\frac{n+3}{2}})(13\cdot 2^{n-4}-12)\right)}+2^{n}}\\ &=2^{n-1}-\frac{1}{2}\sqrt{(2^{n}-1)\sqrt{\frac{29}{8}\cdot 2^{\frac{3n+1}{2}}+2^{n+1}-7\cdot 2^{\frac{n+5}{2}}}+2^{n}}\\ &\geq 2^{n-1}-2^{\frac{7n}{8}-\frac{13}{8}+\frac{1}{4}\log_{2}29}-O(2^{\frac{3n}{8}}).\end{split}\]
### Comparison
We list the lower bound values on the third-order nonlinearity of \(\operatorname{tr}_{n}(x^{15})\) for \(7\leq n\leq 20\) in Tables 8 and 9. Our lower bound outperforms all the existing lower bounds [1, 2, 3], both asymptotically and for all concrete \(n\).
\begin{table}
\begin{tabular}{c|c|c} \hline \(n\) & \(\operatorname{nl}(D_{ab}D_{a}f)\) & The number of \(b\in\mathbb{F}_{2^{n}}\) \\ \hline \multirow{3}{*}{even \(n\)} & \(0\) & \(2\) \\ \cline{2-3} & \(\geq 2^{n-1}-2^{\frac{n}{2}+1}\) & \(\geq\frac{1}{3}\cdot(2^{n+1}-2^{\frac{n}{2}+1}-4)\) \\ \cline{2-3} & \(\leq 2^{n-1}-2^{\frac{n}{2}+2}\) & \(\leq\frac{1}{3}\cdot(2^{n}+2^{\frac{n}{2}+1}-2)\) \\ \hline \multirow{3}{*}{odd \(n\)} & \(0\) & \(2\) \\ \cline{2-3} & \(\geq 2^{n-1}-2^{\frac{n+1}{2}}\) & \(\geq 3\cdot 2^{n-4}+10\) \\ \cline{2-3} & \(\leq 2^{n-1}-2^{\frac{n+3}{2}}\) & \(\leq 13\cdot 2^{n-4}-12\) \\ \hline \end{tabular}
\end{table}
Table 7: The distribution of \(\operatorname{nl}(D_{ab}D_{a}f)\)
## 5 Higher-order nonlinearity
In this section, we lower bound the \(r\)-th order nonlinearity for Boolean functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\) and \(\operatorname{tr}_{n}(x^{2^{n}-2})\).
Applying Proposition 1 \(t\) times, we have
**Proposition 4**.: _[_1_]_ _Let \(f\) be any \(n\)-variable Boolean function and \(r\) a positive integer smaller than \(n\). We have_
\[\operatorname{nl}_{r}(f)\geq 2^{n-1}-\frac{1}{2}\sqrt{\sum_{a_{1}\in\mathbb{F}_{2^{n}}}\sqrt{\sum_{a_{2}\in\mathbb{F}_{2^{n}}}\cdots\sqrt{2^{2n}-2\sum_{a_{t}\in\mathbb{F}_{2^{n}}}\operatorname{nl}_{r-t}(D_{a_{t}}D_{a_{t-1}}\ldots D_{a_{1}}f)}}}.\]
By Proposition 4, to lower bound the \(r\)-th order nonlinearity for functions \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\), our strategy is to lower bound the first-order nonlinearity \(\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\) for all distinct \(a_{1},a_{2},\ldots,a_{r-1}\in\mathbb{F}_{2^{n}}^{*}\). We will need the following lemma in the proof of Lemma 13.
The following lemma is proved in [10]; we state a special case of interest using different notations. Let \(a,b\) be two positive integers, where \(a=\sum_{i\geq 0}2^{i}a_{i}\) and \(b=\sum_{i\geq 0}2^{i}b_{i}\) be the binary representations of \(a\) and \(b\) respectively. Define a partial order \(\preceq\) between two positive integers as follows: \(a\preceq b\) if and only if \(a_{i}\leq b_{i}\) for all \(i\geq 0\); \(a\prec b\) if and only if \(a\preceq b\) and \(a\neq b\). Lucas's theorem says that \(\binom{b}{a}\equiv 1\pmod{2}\) if and only if \(a\preceq b\).
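In code, the partial order and the parity of binomial coefficients reduce to a single bitwise test; a quick exhaustive check (our own note):

```python
# Lucas's theorem mod 2: a preceq b iff a & b == a, and exactly then C(b,a) is odd.
from math import comb

for b in range(64):
    for a in range(b + 1):
        assert (comb(b, a) % 2 == 1) == ((a & b) == a)
print("C(b, a) is odd  <=>  a preceq b  (checked for 0 <= a <= b < 64)")
```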
**Lemma 12**.: _(Lemma 4 in [10]) Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\). For any distinct \(a_{1},a_{2},\ldots,a_{t}\in\mathbb{F}_{2^{n}}^{*}\), where \(1\leq t\leq r\), we have_
\[D_{a_{t}}D_{a_{t-1}}\ldots D_{a_{1}}f(x)=\operatorname{tr}_{n} \Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{t}\prec d_{t-1}\prec\ldots\prec d _{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,t\end{subarray}}x^{d_{t}}\prod _{i=1}^{t}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+p(x), \tag{38}\]
_where \(\deg(p)\leq r-t\)._
The next lemma gives a lower bound on the first-order nonlinearity for the \((r-1)\)-th order derivatives of \(\operatorname{tr}_{n}(x^{2^{r+1}-1})\).
**Lemma 13**.: _Let \(f=\operatorname{tr}_{n}(x^{2^{r+1}-1})\). For any distinct \(a_{1},a_{2},\ldots,a_{r-1}\in\mathbb{F}_{2^{n}}^{*}\), we have_
\[\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\geq 2^{n-1}-2^{ \frac{n+2r-2}{2}}.\]
Proof.: Let \(g(x)=D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f(x)\). Applying Lemma 12 with \(t=r-1\), we have
\[g(x)=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r-1} \prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}x^{d_{r-1}} \prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+p(x),\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \(n\) & 7 & 9 & 11 & 13 & 15 & 17 & 19 \\ \hline \(\operatorname{nl}_{3}\) & 12 & 80 & 429 & 2096 & 9660 & 42923 & 186092 \\ \hline \end{tabular}
\end{table}
Table 8: Lower bounds in Theorem 3 for odd \(n\)
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \(n\) & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ \hline \(\operatorname{nl}_{3}\) & 30 & 183 & 944 & 4484 & 20308 & 89180 & 383411 \\ \hline \end{tabular}
\end{table}
Table 9: Lower bounds in Theorem 3 for even \(n\)
where \(\deg(p)\leq 1\).
Let \(B(x,y)=g(0)+g(x)+g(y)+g(x+y)\). We have
\[\begin{split}B(x,y)&=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}x^{d_{r-1}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}+\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}y^{d_{r-1}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}\\ &\quad+\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r-1\end{subarray}}(x+y)^{d_{r-1}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}\\ &=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}0\prec d_{r}\prec d_{r-1}\prec\ldots\prec d_{1}\prec d_{0}=2^{r+1}-1\\ \operatorname{wt}(d_{k})=r+1-k,\ k=1,2,\ldots,r\end{subarray}}x^{d_{r-1}-d_{r}}y^{d_{r}}\prod_{i=1}^{r-1}a_{i}^{d_{i-1}-d_{i}}\Bigr{)}.\end{split}\tag{39}\]
Let \(e_{i}=d_{i-1}-d_{i}\) for \(i=1,2,\ldots,r\). Let \(e_{r+1}=d_{r}\). Note that
\[0\prec d_{r}\prec d_{r-1}\prec d_{r-2}\prec\ldots\prec d_{1}\prec d_{0}=2^{ r+1}-1\]
and \(\operatorname{wt}(d_{k})=r+1-k\) for \(k=1,2,\ldots,r\). So \(e_{1},e_{2},\ldots,e_{r+1}\) are distinct, and \(\operatorname{wt}(e_{k})=1\) for \(k=1,2,\ldots,r+1\). Rewriting (39), we have
\[B(x,y)=\operatorname{tr}_{n}\Bigl{(}\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}}( \prod_{i=1}^{r-1}a_{i}^{e_{i}})x^{e_{r}}y^{e_{r+1}}\Bigr{)}. \tag{40}\]
According to (40), \(B(x,y)=0\) holds for all \(y\) if and only if the coefficient of \(y\) is zero, that is,
\[\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}}\bigl{(}a_{1}^{e_{1}}a_{2}^{e_{2}}a_{3}^{e_{3}}\ldots a_{r-1}^{e_{r-1}}x^{e_{r}}\bigr{)}^{e_{r+1}^{-1}}=0. \tag{41}\]
Raising both sides of (41) to the \(2^{r}\)th power, we have
\[\sum_{\begin{subarray}{c}\text{distinct }e_{1},e_{2},\ldots,e_{r+1}\in\{2^{0},2^{1},\ldots,2^{r}\}\\ \operatorname{wt}(e_{k})=1,\ \forall k\in\{1,2,\ldots,r+1\}\end{subarray}} \bigl{(}a_{1}^{e_{1}}a_{2}^{e_{2}}a_{3}^{e_{3}}\ldots a_{r-1}^{e_{r-1}}x^{e _{r}}\bigr{)}^{2^{r}\cdot e_{r+1}^{-1}}=0. \tag{42}\]
Observe that each monomial in the left hand side of (42) has degree at most \(2^{2r}\), because \(e_{r}\leq 2^{r}\) and \(2^{r}\cdot e_{r+1}^{-1}\leq 2^{r}\). So the degree of (42) is at most \(2^{2r}\), which implies that (42) has at most \(2^{2r}\) solutions. Therefore, the dimension of the linear kernel of \(B(x,y)\) is at most \(2r\). By Lemma 1, we have
\[\operatorname{nl}(D_{a_{r-1}}D_{a_{r-2}}\ldots D_{a_{1}}f)\geq 2^{n-1}-2^{ \frac{n+2r-2}{2}}.\]
We will need the following lemma in the proof of Theorem 13.
**Lemma 14**.: _Let integer \(r\geq 1\). Let \(\alpha_{1}>\alpha_{2}>\ldots>\alpha_{r}>0\) and \(c_{1},c_{2},\ldots,c_{r}>0\). We have_
\[c_{1}\cdot 2^{\alpha_{1}n}+c_{2}\cdot 2^{\alpha_{2}n}+\ldots+c_{r}\cdot 2^{ \alpha_{r}n}\leq(\sqrt{c_{1}}\cdot 2^{\frac{1}{2}\cdot\alpha_{1}n}+\frac{c_{2}}{2 \sqrt{c_{1}}}\cdot 2^{(\alpha_{2}-\frac{1}{2}\cdot\alpha_{1})n}+\ldots+\frac{c_{r}}{2 \sqrt{c_{1}}}\cdot 2^{(\alpha_{r}-\frac{1}{2}\cdot\alpha_{1})n})^{2}\]
Proof.: By straightforward calculation, we have
\[\begin{split}\mathrm{R.H.S.}&=c_{1}\cdot 2^{\alpha_{1}n}+\ldots+c_{r}\cdot 2^{\alpha_{r}n}+\sum_{i=2}^{r}\frac{c_{i}^{2}}{4c_{1}}\cdot 2^{(2\alpha_{i}-\alpha_{1})n}+\sum_{2\leq i<j\leq r}\frac{c_{i}c_{j}}{2c_{1}}\cdot 2^{(\alpha_{i}+\alpha_{j}-\alpha_{1})n}\\ &\geq\mathrm{L.H.S.}\end{split}\]
In the following, we lower bound the \(r\)-th order nonlinearity for functions \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\).
**Theorem 12**.: _Let \(f=\mathrm{tr}_{n}(x^{2^{r+1}-1})\) and \(r\geq 1\). We have_
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n }{2}}).\]
Proof.: (of Theorem 4) Let \(l_{0}=\mathrm{nl}_{r}(f)\) and
\[l_{i}=\min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n}}^{*}} \mathrm{nl}_{r-i}(D_{a_{i}}\ldots D_{a_{1}}f)\]
for \(i=1,2,\ldots,r-1\).
By Proposition 1, we have
\[l_{i} = \min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n}} ^{*}}\mathrm{nl}_{r-i}(D_{a_{i}}\ldots D_{a_{1}}f) \tag{43}\] \[\geq \min_{\mathrm{distinct}\ a_{1},\ldots,a_{i}\in\mathbb{F}_{2^{n} }^{*}}2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a_{i+1}\in\mathbb{F}_{2^{n}}^{*} \setminus\{a_{1},a_{2},\ldots,a_{i}\}}\mathrm{nl}_{r-i-1}(D_{a_{i+1}}\ldots D_ {a_{1}}f)}\] \[\geq 2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2(2^{n}-(i+1))l_{i+1}},\]
for \(i=0,1,\ldots,r-2\). Let \(u_{i}=2^{n-1}-l_{i}\). Replacing \(l_{i}\) by \(2^{n-1}-u_{i}\) in (43), we have
\[u_{i}\leq\frac{1}{2}\sqrt{2^{n}(i+1)+2^{n+1}u_{i+1}}. \tag{44}\]
**Claim 1**.: \[u_{i}\leq\frac{1}{2}\left(2^{(1-2^{-(r-i)})n+\frac{r}{2^{r-i-1}}}+\sum_{j=1}^{r-i-1}(j+i)\cdot 2^{\frac{2^{j}-1}{2^{r-i}}n-\frac{2^{j}-1}{2^{r-i-1}}r-j}\right).\] (45)
_for \(0\leq i\leq r-2\)._
Proof.: (of Claim 1) We prove by induction on \(i\). For the base step, we prove the claim for \(i=r-2\). By (44), we have
\[u_{r-2}\leq\frac{1}{2}\sqrt{2^{n}(r-1)+2^{n+1}u_{r-1}}. \tag{46}\]
By definition of \(l_{r-1}\) and Lemma 13, we have \(l_{r-1}\geq 2^{n-1}-2^{\frac{n+2r-2}{2}}\), that is, \(u_{r-1}\leq 2^{\frac{n+2r-2}{2}}\). Plugging \(u_{r-1}\leq 2^{\frac{n+2r-2}{2}}\) into (46), we have
\[u_{r-2} \leq \frac{1}{2}\sqrt{2^{\frac{1}{2}(3n+2r)}+(r-1)2^{n}}\] \[\leq \frac{1}{2}(2^{\frac{3}{4}n+\frac{r}{2}}+(r-1)2^{\frac{n}{4}- \frac{r}{2}-1}),\]
where the last step follows from Lemma 14.
For the induction step, assuming inequality (45) holds for \(i+1\), we prove (45) for \(i\), where \(i=r-3,r-4,\ldots,0\). We have
\[u_{i} \leq \frac{1}{2}\sqrt{2^{n}(i+1)+2^{n+1}u_{i+1}}\] \[\leq \frac{1}{2}\sqrt{2^{n}(i+1)+2^{n}\cdot\left(2^{(1-2^{-(r-i-1)})n+\frac{r}{2^{r-i-2}}}+\sum_{j=1}^{r-i-2}(j+i+1)\cdot 2^{\frac{2^{j}-1}{2^{r-i-1}}n-\frac{2^{j}-1}{2^{r-i-2}}r-j}\right)}\] \[\leq \frac{1}{2}\left(2^{(1-2^{-(r-i)})n+\frac{r}{2^{r-i-1}}}+\sum_{j=1}^{r-i-1}(j+i)\cdot 2^{\frac{2^{j}-1}{2^{r-i}}n-\frac{2^{j}-1}{2^{r-i-1}}r-j}\right),\]
as desired, where the third step follows from Lemma 14.
Returning to the proof of Theorem 12: by Claim 1, we have
\[\mathrm{nl}_{r}(f) = 2^{n-1}-u_{0}\] \[\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-\sum_{j=1}^{r-1}j \cdot 2^{\frac{2^{j}-1}{2^{r}}n-\frac{2^{j}-1}{2^{r-1}}r-(j+1)}\] \[\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}}).\]
**Remark 1**.: _By Theorem 12, we deduce that_
\[\mathrm{nl}_{r}(f) \geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n}{2}})\] \[= 2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}})),\]
_where \(\alpha\approx\log_{2}e\) when \(r\ll\log_{2}n\)._
Similarly, for the inverse function, we prove the following nonlinearity lower bound. This is studied by Carlet in [1], who claims that the \(r\)-th order nonlinearity is asymptotically lower bounded by \(2^{n-1}-2^{(1-2^{-r})n}\). We credit the lower bound, i.e., Theorem 13, to Carlet, since our proof closely follows the method in [1] by working out the calculations carefully. The proof of the following theorem is in Appendix C.
**Theorem 13**.: _Let \(f_{\mathrm{inv}}=\mathrm{tr}_{n}(x^{2^{n}-2})\). For any \(r\geq 1\), we have \(\mathrm{nl}_{r}(f_{\mathrm{inv}})\geq 2^{n-1}-2^{(1-2^{-r})n-2^{-(r-1)}}-O(2^{ \frac{n}{2}})\)._
Note that the bound in Theorem 12 is slightly better than that in Theorem 13.
### Comparison
Babai, Nisan and Szegedy [1] proved that the \(r\)-th nonlinearity of the generalized inner product function
\[\mathrm{GIP}_{r+1}(x_{1},x_{2},\ldots,x_{n})=\prod_{i=1}^{r+1}x_{i}+\prod_{i=r +2}^{2(r+1)}x_{i}+\ldots+\prod_{i=n-r}^{n}x_{i}\]
is lower bounded by \(2^{n-1}(1-\exp(-\Omega(\frac{n}{r+4^{r}})))\). Bourgain [1] and Green _et al._[1] proved that the \(r\)-th nonlinearity of the mod\({}_{3}\) function is at least \(2^{n-1}(1-\exp(-\frac{n}{8^{r}}))\); Viola [12] and Chattopadhyay [1] improved this bound to \(2^{n-1}(1-\exp(-\frac{n}{4^{r}}))\). Viola [12] exhibited an explicit function \(f\in P\) (which relies on explicit small-bias generators) with \(r\)-th nonlinearity at least \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\), where \(\alpha<\frac{1}{4}\cdot\log_{2}e\); the lower bound is also proved in [13] using similar argument.
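For concreteness, the generalized inner product function above can be evaluated directly. A minimal Python sketch, assuming the input length is a multiple of \(r+1\) and the bits are given as a 0/1 list:

```python
def gip(x, r):
    """GIP_{r+1}(x): XOR of ANDs over consecutive blocks of r + 1 bits."""
    out = 0
    for i in range(0, len(x), r + 1):
        block = 1
        for bit in x[i:i + r + 1]:
            block &= bit
        out ^= block
    return out

# Example: GIP_2 (the inner product) on 4 bits computes x1*x2 XOR x3*x4.
assert gip([1, 1, 0, 1], 1) == 1
```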
By Theorem 12, we prove that the \(r\)-th order nonlinearity of \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\) is at least \(2^{n-1}(1-\exp(-\frac{\beta\cdot n}{2^{r}}))\), where \(\beta\approx\log_{2}e\) when \(r\ll\log_{2}n\). Previous to our work, the best lower bound is \(2^{n-1}(1-\exp(-\frac{\alpha\cdot n}{2^{r}}))\)[12, 13], where \(\alpha<\frac{1}{4}\cdot\log_{2}e\).
## Conclusion
Using algebraic methods, we lower bound the second-order, third-order, and higher-order nonlinearities of some trace monomial Boolean functions. For the second-order nonlinearity, we study Boolean functions \(\mathrm{tr}_{n}(x^{7})\) and \(\mathrm{tr}_{n}(x^{2^{r}+3})\) for \(n=2r\); the latter class of Boolean functions is studied for the first time. Our lower bounds match the best proven lower bounds on the second-order nonlinearity among all trace monomial functions [1, 10]. For the third-order nonlinearity, we prove the lower bound for functions \(\mathrm{tr}_{n}(x^{15})\), which is the best provable third-order nonlinearity lower bound. For higher-order nonlinearity, we prove the lower bound
\[\mathrm{nl}_{r}(f)\geq 2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}-O(2^{\frac{n }{2}})\]
for functions \(\mathrm{tr}_{n}(x^{2^{r+1}-1})\). When \(r\ll\log n\), this is the best known lower bound, improving on all previous works, e.g., [1, 2, 10, 11, 12].
|
In cryptography, coding theory, and computational complexity, an important problem is to exhibit explicit Boolean functions with high higher-order nonlinearity. We prove lower bounds on the second-order, third-order, and higher-order nonlinearities of several trace monomial Boolean functions. We prove lower bounds on the second-order nonlinearity of the functions $tr_n(x^7)$ and $tr_n(x^{2^r+3})$ for $n=2r$. Among trace monomials, our bounds match the best proven second-order nonlinearity lower bounds of \cite{Car08} and \cite{YT20} for odd and even $n$. We prove a lower bound on the third-order nonlinearity of the function $tr_n(x^{15})$. For $r$, |
2309.09466 | Progressive Text-to-Image Diffusion with Soft Latent Direction | In spite of the rapidly evolving landscape of text-to-image generation, the
synthesis and manipulation of multiple entities while adhering to specific
relational constraints pose enduring challenges. This paper introduces an
innovative progressive synthesis and editing operation that systematically
incorporates entities into the target image, ensuring their adherence to
spatial and relational constraints at each sequential step. Our key insight
stems from the observation that while a pre-trained text-to-image diffusion
model adeptly handles one or two entities, it often falters when dealing with a
greater number. To address this limitation, we propose harnessing the
capabilities of a Large Language Model (LLM) to decompose intricate and
protracted text descriptions into coherent directives adhering to stringent
formats. To facilitate the execution of directives involving distinct semantic
operations-namely insertion, editing, and erasing-we formulate the Stimulus,
Response, and Fusion (SRF) framework. Within this framework, latent regions are
gently stimulated in alignment with each operation, followed by the fusion of
the responsive latent components to achieve cohesive entity manipulation. Our
proposed framework yields notable advancements in object synthesis,
particularly when confronted with intricate and lengthy textual inputs.
Consequently, it establishes a new benchmark for text-to-image generation
tasks, further elevating the field's performance standards. | YuTeng Ye, Jiale Cai, Hang Zhou, Guanwen Li, Youjia Zhang, Zikai Song, Chenxing Gao, Junqing Yu, Wei Yang | 2023-09-18T04:01:25 | http://arxiv.org/abs/2309.09466v2 | # Progressive Text-to-Image Diffusion with Soft Latent Direction
###### Abstract
In spite of the rapidly evolving landscape of text-to-image generation, the synthesis and manipulation of multiple entities while adhering to specific relational constraints pose enduring challenges. This paper introduces an innovative progressive synthesis and editing operation that systematically incorporates entities into the target image, ensuring their adherence to spatial and relational constraints at each sequential step. Our key insight stems from the observation that while a pre-trained text-to-image diffusion model adeptly handles one or two entities, it often falters when dealing with a greater number. To address this limitation, we propose harnessing the capabilities of a Large Language Model (LLM) to decompose intricate and protracted text descriptions into coherent directives adhering to stringent formats. To facilitate the execution of directives involving distinct semantic operations--namely insertion, editing, and erasing--we formulate the Stimulus, Response, and Fusion (SRF) framework. Within this framework, latent regions are gently stimulated in alignment with each operation, followed by the fusion of the responsive latent components to achieve cohesive entity manipulation. Our proposed framework yields notable advancements in object synthesis, particularly when confronted with intricate and lengthy textual inputs. Consequently, it establishes a new benchmark for text-to-image generation tasks, further elevating the field's performance standards. Code is provided at [https://github.com/babahui/Progressive-Text-to-Image](https://github.com/babahui/Progressive-Text-to-Image).
Figure 1: "Object Missing & Wrong Relation" (baselines) vs. "Our Progressive Text-to-Image".
## 1 Introduction
Text-to-image generation is a vital and rapidly evolving field in computer vision that has attracted unprecedented attention from both researchers and the general public. The remarkable advances in this area are driven by the application of state-of-the-art image-generative models, such as auto-regressive [21, 22] and diffusion models [23, 24, 25], as well as the availability of large-scale language-image datasets [26, 27]. However, existing methods face challenges in synthesizing or editing multiple subjects with specific relational and attributive constraints from textual prompts [11]. The typical defects that occur in the synthesis results are missing entities and inaccurate inter-object relations, as shown in fig. 1. Existing work improves the compositional skills of text-to-image synthesis models by incorporating linguistic structures [14] and attention controls [15, 16] within the diffusion guidance process. Notably, Structured Diffusion [14] parses a text to extract numerous noun phrases, and Attend-and-Excite [17] strengthens attention activations associated with the most marginalized subject token. Yet, these remedies still face difficulties when the text description is long and complex, especially when it involves two or more subjects. Furthermore, users may find it necessary to perform subtle modifications to the unsatisfactory regions of the generated image, while preserving the remaining areas.
In this paper, we propose a novel progressive synthesizing/editing operation that successively incorporates entities that conform to the spatial and relational constraints defined in the text prompt, while preserving the structure and aesthetics at each step. Our intuition is based on the observation that text-to-image models tend to handle short-sentence prompts with a limited number of entities (1 or 2) better than long descriptions with more entities. Therefore, we can parse the long descriptions into short-text prompts and craft the image progressively via a diffusion model to prevent semantic leakage and omission.
However, applying such a progressive operation to diffusion models faces two major challenges:
* The absence of a unified method for converting the integrated text-to-image process into a progressive procedure that can handle both synthesis and editing simultaneously. Current strategies can either synthesize [17, 22] or edit [23, 24, 25, 26], leaving a gap in the collective integration of these functions.
* The need for precise positioning and relational entity placement. Existing solutions either rely on user-supplied masks for entity insertion, necessitating manual intervention [27, 28], or introduce supplementary phrases to determine the entity editing direction [15, 14], which inadequately address spatial and relational dynamics.
To overcome these hurdles, we present the Stimulus, Response, and Fusion (SRF) framework, assimilating a stimulus-response generation mechanism along with a latent fusion module into the diffusion process. Our methodology involves employing a fine-tuned GPT model to deconstruct complex texts into structured prompts, including synthesis, editing, and erasing operations governed by a unified SRF framework. Our progressive process begins with a real image or synthesized background, accompanied by the text prompt, and applies the SRF method in a step-by-step approach. Unlike previous strategies that aggressively manipulate the cross-attention map [26, 22], our operation guides the attention map via a soft direction, avoiding brusque modifications that may lead to discordant synthesis. Additionally, when addressing relationships like "wearing" and "playing with", we begin by parsing the positions of the objects, after which we incorporate the relational description into the diffusion process to enable object interactions.
In summary, we unveil a novel, progressive text-to-image diffusion framework that leverages the capabilities of a Large Language Model (LLM) to simplify language descriptions, offering a unified solution for handling synthesis and editing patterns concurrently. This represents an advancement in text-to-image generation and provides a new platform for future research.
## 2 Related Work
Our method is closely related to image manipulation and cross-attention control within diffusion models.
### Image Manipulation
Image manipulation refers to the process of digitally manipulating images to modify or enhance their visual appearance. Various techniques can be employed to achieve this end, such as the use of spatial masks or natural language descriptions to guide the editing process towards specific goals.
Figure 2: We employ a fine-tuned GPT model to deconstruct a comprehensive text into structured prompts, each classified under synthesis, editing, and erasing operations.
One promising line of inquiry involves the application of generative adversarial networks (GANs) for image domain transfer [11, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20] or the manipulation of latent space [13, 14, 15, 16, 17, 18, 19, 21, 22, 23]. More recently, diffusion models have emerged as the mainstream. GLIDE [11], Blended Diffusion [15, 16] and SmartBrush [21] replace masked image regions with predefined objects while preserving the inherent image structure. Additionally, techniques such as Prompt-to-Prompt [16] and InstructPix2Pix [17] enable the modification of image-level objects through text alterations. Contrasting previous methods that solely cater to either synthesis or editing, we construct a unified framework that accommodates both.
### Cross Attention Control
Objects and positional relationships are manifested within the cross attention map of the diffusion model. Inspired by this observation [14], techniques have been devised to manipulate the cross attention map for image synthesis or editing. The Prompt-to-Prompt approach [16] regulates spatial arrangement and geometry through the manipulation of attention maps derived from textual prompts. Structured Diffusion [14] utilizes a text parsing mechanism to isolate numerous noun phrases, enhancing the corresponding attention space channels. The Attend-and-Excite approach [15] amplifies attention activations linked to the most marginalized subject tokens. Directed Diffusion [16] proposes an attention refinement strategy through the utilization of a weak and strong activation approach. Our approach stands apart by guiding the attention through Soft Latent Direction. Instead of just amplifying or refining attention activations, our soft latent direction serves as a gentle guide, ensuring smoother and more natural transitions in the attention space.
## 3 Method
### Problem Formulation
We elaborate upon our innovative progressive text-to-image framework. Given a multifaceted text description \(\mathcal{P}\) and a real or generated background \(\mathcal{I}\), our primary goal is to synthesize an image that meticulously adheres to the modifications delineated by \(\mathcal{P}\) in alignment with \(\mathcal{I}\). The principal challenge emerges from the necessity to decode the intricacy of \(\mathcal{P}\), manifesting across three complex dimensions:
* The presence of multiple entities and attributes escalates the complexity of the scene, imposing stringent demands on the model to generate representations that are not only accurate but also internally coherent and contextually aligned.
* The integration of diverse positional and relational descriptions calls for the model to exhibit an advanced level of understanding and to employ sophisticated techniques to ascertain precise spatial configuration, reflecting both explicit commands and implied semantic relations.
* The concurrent introduction of synthesis, editing, and erasing operations introduces additional layers of complexity to the task. Managing these intricate operations within a unified model presents a formidable challenge, requiring a robust and carefully designed approach to ensure seamless integration and execution.
We address these challenges through a unified progressive text-to-image framework that: (1) employs a fine-tuned GPT model to distill complex texts into short prompts, categorizing each as synthesis, editing, or erasing mode, and accordingly generating the object mask; (2) sequentially processes these prompts within the same framework, utilizing attention-guided generation to capture position-aware features with soft latent direction, and subsequently integrates them with the previous stage's outcomes in a subtle manner. This approach synthesizes the intricacies of text-to-image transformation into a coherent, positionally aware procedure.
### Text Decomposition
Since \(\mathcal{P}\) may involve multiple objects and relations, we decompose \(\mathcal{P}\) into a set of short prompts, which produces an image accurately representing \(\mathcal{P}\) when executed sequentially. As illustrated in fig. 2, we fine-tune a GPT model (OpenAI 2023) to decompose \(\mathcal{P}\) into multiple structured prompts, denoted as \(\{\mathcal{P}_{1},\mathcal{P}_{2},...,\mathcal{P}_{n}\}\). Each \(\mathcal{P}_{i}\) falls into one of three distinct modes: **Synthesis mode:** "[object 1] [relation] [object 2] [position] [object 3]", **Editing mode:** "change [object 1] to [object 2]", and **Erasing mode:** "delete [object]". In pursuit of this aim, we start by collecting full texts using ChatGPT [1] and then manually deconstruct them into atomic prompts. Each prompt has a minimal number of relations and is labeled with its synthesis/editing/erasing mode. Using these prompts and their corresponding modes for model supervision, we fine-tune the GPT model to enhance its decomposition and generalization ability.
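For illustration only, here is a minimal rule-based sketch of how the three modes could be distinguished for already-decomposed prompts; the regular expressions are hypothetical stand-ins for the fine-tuned GPT model, which is not reproduced here:

```python
import re

EDIT_PATTERN = re.compile(r"^change (?P<src>.+) to (?P<dst>.+)$", re.IGNORECASE)
ERASE_PATTERN = re.compile(r"^delete (?P<obj>.+)$", re.IGNORECASE)

def classify_prompt(prompt: str) -> str:
    """Map a structured prompt to its operation mode."""
    p = prompt.strip()
    if EDIT_PATTERN.match(p):
        return "editing"
    if ERASE_PATTERN.match(p):
        return "erasing"
    # e.g. "[object 1] [relation] [object 2] [position] [object 3]"
    return "synthesis"

assert classify_prompt("change apples to oranges") == "editing"
assert classify_prompt("delete the oranges") == "erasing"
```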
**Operational Layouts.** For the synthesis operation, as shown in fig. 3, we feed both the prompt and a reference bounding box into a frozen GPT-4 API.
Figure 3: For the synthesis operation, we generate the layout indicated in the prompt from a frozen GPT-4 model, which subsequently yields the new bounding box coordinates for object insertion.
This procedure produces bounding boxes for the target entity that will be used in the subsequent phase. We exploit GPT-4's ability to extract information from positional and relational text descriptors. For example, the phrase "cat and dog play together" indicates a close spatial relationship between the "cat" and "dog". Meanwhile, "on the right side" suggests that both animals are positioned to the right of the "yard". For the editing and erasing operations, we employ Diffusion Inversion [11] to obtain the cross-attention map of the target object, which serves as the layout mask. For example, when changing "apples" to "oranges", we draw upon the attention corresponding to "apples". On the other hand, to "delete the oranges", we focus on the attention related to "oranges". Notably, this approach avoids the need to retrain the diffusion model and is proficient in managing open vocabularies. We denote the generated layout mask as \(\mathcal{M}\) for all operations in the following sections.
In the following section, we provide a complete introduction to the synthesis operation. Finally, we show that the editing and erasing operations differ from the synthesis operation only in their parameter settings.
### Stimulus & Response
Given the synthesis prompt \(\mathcal{P}_{i}\) to be executed and its mask configuration \(\mathcal{M}_{i}\), the goal of Latent Stimulus & Response is to enhance the positional feature representation on \(\mathcal{M}\). As illustrated in fig. 4, this is achieved by guided cross-attention generation. Differing from the approaches [11, 12], which manipulate attention through numerical replacement, we modulate the attention within mask regions associated with the entity in \(\mathcal{P}_{i}\) in a soft manner. Rather than directly altering the attention, we introduce a stimulus to ensure that the object attention converges to the desired scores. Specifically, we formulate a stimulus loss function between the object mask \(\mathcal{M}\) and the corresponding attention \(A\) as:
\[\mathcal{L}_{s}=\sum_{i=1}^{n}(\text{softmax}(A_{t}^{i})-\delta\cdot\mathcal{M }^{i}) \tag{1}\]
where \(A_{t}^{i}\) signifies the cross-attention map of the \(i\)-th object at the \(t\)-th timestep, \(\mathcal{M}^{i}\) denotes the mask of the \(i\)-th object, and \(\delta\) represents the stimulus weights. The stimulus steers the attention towards a spatially aware generation process. This is achieved by backpropagating the gradient of the stimulus loss function, as defined in Eq. 1, to update the latent code. This process serves as a latent response to the stimulated attention, which can be formally expressed as:
\[z_{t}^{*}\gets z_{t}-\alpha_{t}\cdot\nabla_{z_{t}}\mathcal{L}_{s} \tag{2}\]
In the above equation, \(z_{t}^{*}\) represents the updated latent code and \(\alpha_{t}\) denotes the learning rate. Finally, we execute another forward pass of the stable diffusion model using the updated latent code \(z_{t}^{*}\) to compute \(z_{t-1}^{*}\) for the subsequent denoising step. Based on eq. (1) and eq. (2), we observe consistent spatial behavior in both the cross-attention and latent spaces. For a more detailed analysis, we refer to fig. 5 and find this property contributes to producing faithful and position-aware image representations.
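A minimal PyTorch-style sketch of one stimulus-and-response step follows. We read the deviation in Eq. 1 as a squared norm so that its gradient is informative; that reduction, and the shapes of the attention maps, are our assumptions rather than details fixed by the text:

```python
import torch

def stimulus_response_step(z_t, attention_fn, masks, delta=0.8, alpha_t=40.0):
    """One latent update (Eq. 2) driven by the stimulus loss (Eq. 1).

    z_t:          latent code with requires_grad enabled
    attention_fn: maps z_t to a list of per-object cross-attention maps A_t^i
    masks:        layout masks M^i with the same spatial shape as the maps
    """
    attn_maps = attention_fn(z_t)
    loss = torch.zeros((), device=z_t.device)
    for A_i, M_i in zip(attn_maps, masks):
        probs = torch.softmax(A_i.flatten(), dim=0).view_as(A_i)
        loss = loss + ((probs - delta * M_i) ** 2).sum()  # squared-norm reading of Eq. 1
    grad = torch.autograd.grad(loss, z_t)[0]
    return z_t - alpha_t * grad  # latent response (Eq. 2)
```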
Figure 4: Overview of our unified framework emphasizing progressive synthesis, editing, and erasing. In each progressive step, a random latent \(z_{t}\) is directed through the cross-attention map in inverse diffusion. Specifically, we design a soft stimulus loss that evaluates the positional difference between entity attention and the target mask region, leading to a gradient that updates the latent to \(z_{t}^{*}\) as a latent response. Subsequently, another forward diffusion pass is applied to denoise \(z_{t}^{*}\), yielding \(z_{t-1}^{*}\). In the latent fusion phase, we transform the previous \(i\)-th image into a latent code \(z_{t-1}^{bg}\) using DDIM inversion. The blending of \(z_{t-1}^{*}\) with \(z_{t-1}^{bg}\) incorporates a dynamically evolving mask, which starts with a layout box and gradually shifts to cross-attention. Finally, \(z_{t-1}^{*}\) undergoes multiple diffusion reverse steps and results in the \((i+1)\)-th image.
### Latent Fusion
Recalling that \(z_{t-1}^{*}\) denotes the latent feature of the target object, our next task is to integrate it seamlessly with the image from the preceding stage. For this purpose, we first convert the previous image into a latent code by DDIM inversion, denoted as \(z^{bg}\). Then, for timestep \(t\), we adopt a latent fusion strategy [11] between \(z_{t}^{bg}\) and \(z_{t}^{*}\), which is formulated as:
\[z_{t-1}=\widehat{\mathcal{M}}\cdot z_{t-1}^{*}+(1-\widehat{\mathcal{M}})\cdot z _{t-1}^{bg} \tag{3}\]
where \(\widehat{\mathcal{M}}\) acts as a latent mask to blend the features of target objects with the background. In the synthesis operation, employing a uniform mask across all steps can be too restrictive, potentially destroying the object's semantic continuity. To mitigate this, we introduce a softer mask, ensuring both object integrity and spatial consistency. Specifically, during the initial steps of diffusion denoising, we use the layout mask \(\mathcal{M}\) to provide spatial guidance. Later, we shift to an attention mask \(\mathcal{M}_{\text{attn}}\), generated by averaging and setting a threshold on the cross-attention map, to maintain object cohesion. This process is denoted as:
\[\widehat{\mathcal{M}}(\mathcal{M}_{\text{attn}},\mathcal{M},t)=\begin{cases} \mathcal{M}&\text{if }t\leq\tau\\ \mathcal{M}_{\text{attn}}&\text{if }t>\tau\end{cases} \tag{4}\]
Here, \(\tau\) serves as a tuning parameter balancing object integrity with spatial coherence. The above response and fusion process is repeated for a subset of the diffusion timesteps, and the final output serves as the image for the next round of generation.
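A minimal sketch of the fusion step, following eq. (3) with the mask schedule of eq. (4); the step index \(t\) is taken exactly as written in eq. (4):

```python
import torch

def fuse_latents(z_obj, z_bg, layout_mask, attn_mask, t, tau=40):
    """Blend the object latent with the DDIM-inverted background latent (eq. 3)."""
    # eq. (4): layout mask for spatial guidance, attention mask for object cohesion.
    mask = layout_mask if t <= tau else attn_mask
    return mask * z_obj + (1.0 - mask) * z_bg
```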
**Editing and Erasing Specifications.** Our editing and erasing operations differ only in parameter settings: we set \(\mathcal{M}\) in eq. (1) to the editing/erasing reference attention, and we set \(\widehat{\mathcal{M}}\) in eq. (3) to the editing/erasing mask in all diffusion steps for detailed, shape-specific modifications.
## 4 Experiment
**Baselines and Evaluation.** Our experimental comparison primarily concentrates on Single-Stage Generation and Progressive Generation baselines. (1) We refer to **Single-Stage Generation** methods as those that directly generate images from input text in a single step. Current methods include Stable Diffusion [12], Attend-and-excite [13], and Structured Diffusion [14]. We compare these methods to analyze the efficacy of our progressive synthesis operation. We employ GPT to construct 500 text prompts that contain diverse objects and relationship types. For evaluation, we follow [23] to compute **Object Recall**, which quantifies the percentage of objects successfully synthesized. Moreover, we measure **Relation Accuracy** as the percentage of spatial or relational text descriptions that are correctly identified, based on 8 human evaluations. (2) We define **Progressive Generation** as a multi-turn synthesis and editing process that builds on images from preceding rounds. Our comparison encompasses our comprehensive progressive framework against other progressive methods, which include Instruct-based Diffusion models [15] and mask-based diffusion models [12, 13]. To maintain a balanced comparison, we source the same input images from SUN [21] and text descriptions via the GPT API [12]. Specifically, we collate five scenarios totaling 25 images from SUN, a dataset that showcases real-world landscapes. Each image is paired with a text description, which ensures: 1. Integration of synthesis, editing, and erasing paradigms; 2. Incorporation of a diverse assortment of synthesized objects; 3. Representation of spatial relations (e.g., top, bottom, left, right) and interactional relations (e.g., "playing with", "wearing"). For evaluation, we utilize Amazon Mechanical Turk (AMT) to assess image fidelity. Each image is evaluated based on the fidelity of the generated objects, their relationships, the execution of editing instructions, and the alignment of erasures with the text descriptions. Images are rated on a fidelity scale from 0 to 2, where 0 represents the lowest quality and 2 signifies the highest. With two evaluators assessing each generated image, the cumulative score for each aspect can reach a maximum of 100.
**Implementation Details.** Our framework builds upon Stable Diffusion (SD) V-1.4. During the Stimulus & Response stage, we set \(\delta=0.8\) in eq. (1), and \(t=25\) and \(\alpha_{t}=40\) in eq. (2). We implement the stimulus procedure over the 16 \(\times\) 16 attention units and integrate the Iterative Latent Refinement design [13]. In the latent fusion stage, the parameter \(\tau\) is set to 40.
### Qualitative and Quantitative Results
**Qualitative and Quantitative Comparisons with Single-Generation Baselines.** Fig. 6 reveals that traditional baseline methods often struggle with object omissions and maintaining spatial and interactional relations. In contrast, our progressive generation process offers enhanced image fidelity and controllability. Additionally, we maintain finer details in the generated images, such as the shadows of the "beach chair".
Figure 5: Visual results generated by Stable Diffusion and Stimulus & Response. Stable Diffusion shows noticeable problems in positional generation (top), semantic and attribute coupling (middle), and object omission (bottom), while ours delivers precise outcomes.
"beach chair". Result in table 1 indicates that our method outperforms the baselines in both object recall and relation accuracy.
**Qualitative and Quantitative Comparisons with Progressive Generation Baselines.** In fig. 8, baseline methods often fail to synthesize full objects and may not represent relationships as described in the provided text. Moreover, during editing and erasing operations, these methods tend to produce outputs with compromised quality, showcasing unnatural characteristics. It's worth noting that any missteps or inaccuracies in the initial stages, such as those seen in InstructPix2Pix, can cascade into subsequent stages, exacerbating the degradation of results. In contrast, our proposed method consistently yields superior results through every phase. The results in table 2 further cement our method's dominant performance in synthesis, editing, and erasing operations, as underscored by the impressive rating scores.
### Ablation Study
**Ablation study of method components is shown in table 3.** Without latent fusion, we lose continuity from prior generation stages, leading to inconsistencies in object synthesis and placement. On the other hand, omitting the Stimulus & Response process results in a lack of positional awareness, making the synthesis less precise. Both omissions manifest as significant drops in relation and entity accuracies, emphasizing the synergistic importance of these components in our approach.
**The analysis of Stimulus & Response in the editing operation is highlighted in fig. 7**. Compared to Stable Diffusion, Stimulus & Response not only enhances object completeness and fidelity but also demonstrates a broader diversity in editing capabilities. The loss curve indicates that Stimulus & Response aligns more closely with the reference cross-attention, emphasizing its adeptness in preserving the original structure.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Method** & Object Recall \(\uparrow\) & Relation Accuracy \(\uparrow\) \\ \hline
Stable Diffusion (CVPR, 2022) & 40.7 & 19.8 \\ \hline
Structured Diffusion (ICLR, 2023) & 43.5 & 21.6 \\ \hline
Attend-and-excite (SIGGRAPH, 2023) & 50.3 & 23.4 \\ \hline
Ours & **64.4** & **50.8** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative comparison with Single-Stage Generation baselines.
Figure 6: Qualitative comparison with Single-Stage baselines. Common errors in the baselines include missing objects and mismatched relations. Our method demonstrates the progressive generation process.
## 5 Conclusion
In this study, we addressed the prevailing challenges in the rapidly advancing field of text-to-image generation, particularly the synthesis and manipulation of multiple entities under specific constraints. Our innovative progressive synthesis and editing methodology ensures precise spatial and relational representations. Recognizing the limitations of existing diffusion models with increasing entities, we integrated the capabilities of a Large Language Model (LLM) to dissect complex text into structured directives. Our Stimulus, Response, and Fusion (SRF) framework, which enables seamless entity manipulation, represents a major stride in object synthesis from intricate text inputs.
One major limitation of our approach is that not all text can be decomposed into a sequence of short prompts. For instance, our approach finds it challenging to sequentially parse text such as "a horse under a car and between a cat and a dog." We plan to gather more training data and labels of this nature to improve the parsing capabilities of GPT.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
\multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Synthesis**} & **Editing** & **Erasing** \\ \cline{2-3}
 & Object \(\uparrow\) & Relation \(\uparrow\) & & \\ \hline
InstructPix2Pix (CVPR, 2023) & 19 & 24 & 32 & 29 \\ \hline
Stable-inpainting (CVPR, 2022) & 64 & 54 & 65 & 45 \\ \hline
Blended Latent (SIGGRAPH 2023) & 67 & 52 & 67 & 46 \\ \hline
Ours & **74** & **60** & **72** & **50** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Quantitative comparison of our method against Progressive Generation baselines, using rating scores.
Figure 8: Qualitative comparison with Progressive Generation baselines. The first two phases illustrate object synthesis operation, where target objects are color-coded in both the text and layout. Subsequent phases depict object editing and erasing processes, wherein a cat is first transformed into a rabbit and then the rabbit is removed.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Method Variant** & Object Recall \(\uparrow\) & Relation Accuracy \(\uparrow\) \\ \hline
w/o Latent Fusion & 38.8 & 21.8 \\ \hline
w/o Stimulus \& Response & 58.3 & 45.2 \\ \hline
Ours & **64.4** & **50.8** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Ablation study of method components.
Figure 7: The analysis of Stimulus & Response in the editing operation. The left side shows a visual comparison between SD (Stable Diffusion) and S&R (Stimulus & Response). The right side presents the convergence curve of cross-attention loss during diffusion sampling steps. The loss is computed as the difference between reference attention and model-generated attention. | In spite of the rapidly evolving landscape of text-to-image generation, the synthesis and manipulation of multiple entities while adhering to specific relational constraints pose enduring challenges. This paper introduces an innovative progressive synthesis and editing operation that systematically incorporates entities into the target image, ensuring adherence to spatial and relational constraints at each sequential step. Our key insight stems from the observation that a pre-trained text-to-image diffusion model adeptly handles one or two entities but often falters with a greater number. To address this limitation, we focus on harnessing a Large Language Model (LLM) to convert complex text descriptions into clear directives
2309.16546 | Correcting for heterogeneity in real-time epidemiological indicators | Auxiliary data sources have become increasingly important in epidemiological
surveillance, as they are often available at a finer spatial and temporal
resolution, larger coverage, and lower latency than traditional surveillance
signals. We describe the problem of spatial and temporal heterogeneity in these
signals derived from these data sources, where spatial and/or temporal biases
are present. We present a method to use a ``guiding'' signal to correct for
these biases and produce a more reliable signal that can be used for modeling
and forecasting. The method assumes that the heterogeneity can be approximated
by a low-rank matrix and that the temporal heterogeneity is smooth over time.
We also present a hyperparameter selection algorithm to choose the parameters
representing the matrix rank and degree of temporal smoothness of the
corrections. In the absence of ground truth, we use maps and plots to argue
that this method does indeed reduce heterogeneity. Reducing heterogeneity from
auxiliary data sources greatly increases their utility in modeling and
forecasting epidemics. | Aaron Rumack, Roni Rosenfeld, F. William Townes | 2023-09-28T15:57:18 | http://arxiv.org/abs/2309.16546v1 | Correcting for heterogeneity in real-time epidemiological indicators
## Abstract
Auxiliary data sources have become increasingly important in epidemiological surveillance, as they are often available at a finer spatial and temporal resolution, larger coverage, and lower latency than traditional surveillance signals. We describe the problem of spatial and temporal heterogeneity in these signals derived from these data sources, where spatial and/or temporal biases are present. We present a method to use a "guiding" signal to correct for these biases and produce a more reliable signal that can be used for modeling and forecasting. The method assumes that the heterogeneity can be approximated by a low-rank matrix and that the temporal heterogeneity is smooth over time. We also present a hyperparameter selection algorithm to choose the parameters representing the matrix rank and degree of temporal smoothness of the corrections. In the absence of ground truth, we use maps and plots to argue that this method does indeed reduce heterogeneity. Reducing heterogeneity from auxiliary data sources greatly increases their utility in modeling and forecasting epidemics.
## 1 Introduction
Understanding the burden of epidemics is a critical task for both public health officials and modelers. However, traditional surveillance signals are often not available in real-time, due to delays in data collection as well as data revisions. Alternative data sources can provide more timely information about an epidemic's current state, which can be useful for modeling and forecasting. We can use these data sources to create _indicators_, which provide a single number quantifying some measure of epidemic burden for a given location and time. An indicator usually estimates the disease burden at a certain severity level (e.g. symptomatic infections, hospitalizations) when the ground truth is unobserved. During the COVID-19 pandemic, the Delphi group published a repository of several real-time indicators of COVID-19 activity [1].
Many, if not all, of these indicators suffer from heterogeneity. That is, the relationship between the indicator and unobserved ground truth changes over space or time. To define heterogeneity, let \(X\in\mathbb{R}^{N\times T}\) be the matrix containing the indicator values for \(N\) locations and \(T\) time values, and \(Z\in\mathbb{R}^{N\times T}\) be the matrix containing the corresponding ground truth values. We say that spatial heterogeneity is present when
\[\mathbb{E}[X_{i_{1}t}]-Z_{i_{1}t}\neq\mathbb{E}[X_{i_{2}t}]-Z_{i_{2}t}\text{ for some }i_{1}\neq i_{2},t.\]
Likewise, temporal heterogeneity is present when
\[\mathbb{E}[X_{it_{1}}]-Z_{it_{1}}\neq\mathbb{E}[X_{it_{2}}]-Z_{it_{2}}\text{ for some }i,t_{1}\neq t_{2}.\]
Note that we define heterogeneity not simply as a bias in the indicator, but rather that the bias is dependent on location or time. The causes of heterogeneity vary depending on the indicator, but we can consider as an example an indicator based on insurance claims that seeks to estimate incidence of COVID-19 outpatient visits. Insurance claims could be higher relative to COVID-19 incidence in locations where the population in the insurance dataset is older, or where the doctors have more liberal coding policies in labeling a probable COVID case. Even the signal of reported cases, which purportedly reflects COVID-19 infections directly, will suffer from heterogeneity. If a few locations suffer from a shortage of tests, or from a new strain which tests are less accurate in detecting or that has a different fraction of symptomatic cases, those locations will have a different relationship between reported cases and true cases. Similar causes can result in temporal heterogeneity. Test shortages, changing demographics, and coding practices can also vary over time within a single location. For example, spatial heterogeneity has been documented in CDC's ILINet due to different mixtures of reporting healthcare provider types in the network [2].
We use real-time indicators for three main functions: modeling the past, mapping the present, and forecasting the future. Correcting for heterogeneity is important for all of these applications. Any statistical conclusions we make about spatiotemporal spread of a disease may be distorted if the underlying data is subject to heterogeneity. In the presence of spatial heterogeneity, the indicator values are not comparable across locations, and a choropleth map displaying the current values of the indicator will be misleading. Similarly, in the presence of temporal heterogeneity, displaying a time series of the indicator may be misleading. Heterogeneity affects forecasts as well, as biases in the features of a forecasting model will lead to forecast inaccuracy. Our goal is to remove heterogeneity in an indicator in order to make it more reliable for these three uses.
Heterogeneity has been described and modeled in the field of econometrics [3]. Nearly all of the work involving heterogeneity in econometrics deals with the implications in regression. If only spatial heterogeneity is present, then a fixed or random effects model can be used [4, 5]. Others have developed parametric methods that assume heterogeneity is also time-varying [6]. The main reason that these methods cannot be transferred to our domain is that they identify heterogeneity only through strict assumptions on the error terms in the regression model. Additionally, we are not performing regression in our application. Rather, we are trying to remove the heterogeneity in the indicator.
A challenge of correcting for heterogeneity is that the problem doesn't have a clear formulation. In nearly every practical application, we lack access to the ground truth and our best option is to compare our indicator with another signal that is a noisy estimate of the ground truth, and often suffers from heterogeneity itself. We will call this signal a "guide" to emphasize that it is not a target for prediction. We believe that the indicator is strongly related with the guide, so they should be correlated across time and space. However, they don't measure the same value, so the correlation should not be 1 even in the absence of noise. Another challenge is that we present the problem in a retrospective setting, without a clear division for training and testing.
In this paper, we investigate removing heterogeneity from two indicators using a different guide for each. The first indicator is based on insurance claims data, and we use reported cases as a guide signal. The second indicator is based on Google search trends of specific queries related to COVID-19. We use the COVID-19 Trends and Impact Surveys (CTIS) as a guide. All of these signals (indicators and guides) are available in COVIDCast [1].
Because heterogeneity is present in a wide variety of indicators, we desire a solution that is general and flexible. Another desired property is that the temporal corrections are smooth across time, because we want to accommodate situations where the relationship between the indicator and guide can drift slowly over time. The model should be flexible enough to allow for abrupt changes, but these should be limited in number. If the corrections are jagged in time, the model may be overadjusting to the guide signal rather than identifying and removing the true heterogeneity.
Lastly, the method should generalize well to a variety of indicators and guides. It should not rely on specific domain knowledge of a single indicator-guide pair because we want the method to be applicable to any current or future indicator and guide. If we believe the indicator and guide have a stronger relationship, then we might want the model to use the guide matrix more and make a stronger bias correction. If we believe that there is more noise in the guide variable, that heterogeneity is mild, or that the inherent signals are more divergent, we might want the model to make a weaker bias correction. Additionally, the temporal smoothing constraint will be stronger or weaker, depending on the application.
The model should have hyperparameters to control the strength of the guide signal in fitting as well as the strength of the temporal smoothness constraint. These can be conceptualized as "knobs". For the indicator-guide relationship, the knob turns between one extreme of not using the guide signal at all and the other extreme of fitting maximally to the guide signal (in some models, fitting exactly to the guide signal). For the temporal smoothness constraint, the knob turns between the extremes of applying no smoothing and enforcing a constant temporal correction factor across time.
In the rest of this paper, we will provide three methods to correct for heterogeneity for a general indicator and guide signal. We then demonstrate their performance in simulated experiments and on several actual epidemiological data sources.
## 2 Methods
Let \(X\in\mathbb{R}^{N\times T}\) be the matrix containing the indicator values for \(N\) locations and \(T\) time points, and \(Y\in\mathbb{R}^{N\times T}\) be the matrix containing the corresponding guide values. We want to transform \(X\) to a matrix \(\tilde{X}\), with the spatial and temporal biases mitigated. As mentioned above, the simplest way to do so is to set \(\tilde{X}=Y\), but this is the most extreme version of overadjustment and removes any unique information contained in \(X\). We will present three methods to remove heterogeneity by using \(Y\) as a guide. The first uses a simple low-rank approximation, and the second and third add elements which ensure that the biases removed are smooth in time. In all of our methods, we detect heterogeneity by examining the difference \(Y-X\). We assume that the signal in this difference matrix is the heterogeneity between \(X\) and \(Y\).
### Bounded Rank Approach
In this approach, we assume that the heterogeneity between \(X\) and \(Y\) is of low rank. We begin without making any assumptions on the smoothness of the temporal biases. Therefore, we solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T})-Y\|_{F}^{2},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{T\times K}\), \(K\leq\min(N,T)\), and \(\|\cdot\|_{F}\) is the Frobenius norm. This optimization can be solved by performing singular value decomposition on the difference matrix \(Y-X\) and keeping the vectors with the \(K\) highest singular values. The corrected matrix is \(\tilde{X}=X+AB^{T}\).
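Concretely, the solution is a rank-\(K\) truncated SVD of \(Y-X\); a minimal NumPy sketch:

```python
import numpy as np

def bounded_rank_correction(X, Y, K):
    """Best rank-K additive correction X_tilde = X + A @ B.T (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(Y - X, full_matrices=False)
    A = U[:, :K] * s[:K]   # N x K: scaled left singular vectors
    B = Vt[:K].T           # T x K: right singular vectors
    return X + A @ B.T, A, B
```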
### Fused Lasso Approach
In addition to the low rank assumption, here we further assume that the temporal biases are mostly piecewise constant over time. Therefore, we solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T})-Y\|_{F}^{2}+\lambda\|\Delta_{t}B\|_{1},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{T\times K}\), and \(K\leq\min(N,T)\), and \(\Delta_{t}B\) contains the first differences of B along the time axis. The \(\Delta_{t}B\) penalty is inspired by the fused lasso [7] and encourages \(B\) to be piecewise constant along the time axis.
We solve this optimization using penalized matrix decomposition algorithms described in [8]. We reproduce the algorithm as applicable to our case here:
1. Let \(Z^{1}=Y-X\).
2. For \(k=1,\ldots,K\):
    (a) Initialize \(v_{k}\) to have \(L_{2}\) norm 1.
    (b) Iterate until convergence:
        i. If \(v_{k}=0\), then \(u_{k}=0\). Otherwise, let \(u_{k}=\frac{Z^{k}v_{k}}{\|Z^{k}v_{k}\|_{2}}\).
        ii. Let \(v_{k}\) be the solution to \[\min_{v}\frac{1}{2}\|Z^{kT}u_{k}-v\|_{2}^{2}+\lambda\sum_{j=2}^{T}\|v_{j}-v_{j-1}\|_{1}.\]
    (c) Let \(d_{k}=u_{k}^{T}Z^{k}v_{k}\).
    (d) Let \(Z^{k+1}=Z^{k}-d_{k}u_{k}v_{k}^{T}\).
3. \(A\) is the matrix whose \(k^{th}\) column is \(d_{k}u_{k}\), and \(B\) is the matrix whose \(k^{th}\) column is \(v_{k}\).
Step 2b) ii) is a fused lasso problem and can be solved using the alternating direction method of multipliers (ADMM) [9]. All of the other steps are trivial to compute.
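For illustration, here is a minimal ADMM sketch for the fused lasso subproblem in step 2b) ii), with \(a=Z^{kT}u_{k}\); the penalty parameter \(\rho\) and iteration count are arbitrary choices, not values from the paper:

```python
import numpy as np

def fused_lasso_1d(a, lam, rho=1.0, n_iter=500):
    """ADMM for min_v 0.5 * ||a - v||^2 + lam * sum_j |v_j - v_{j-1}|."""
    T = len(a)
    D = np.diff(np.eye(T), axis=0)        # (T-1) x T first-difference operator
    M = np.eye(T) + rho * D.T @ D         # fixed v-update system matrix
    z = np.zeros(T - 1)
    u = np.zeros(T - 1)                   # scaled dual variable
    for _ in range(n_iter):
        v = np.linalg.solve(M, a + rho * D.T @ (z - u))
        Dv = D @ v
        z = np.sign(Dv + u) * np.maximum(np.abs(Dv + u) - lam / rho, 0.0)  # soft-threshold
        u = u + Dv - z
    return v
```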
This optimization has two hyperparameters which can be considered as "knobs": \(K\) and \(\lambda\). The matrix rank \(K\) controls the degree to which we match the guiding signal \(Y\). When \(K=0\), we keep \(X\) exactly as is and apply no correction. As \(K\) increases, we use more information from \(Y\), and when \(K=\min(N,T)\), we transform \(X\) to equal \(Y\) exactly (when \(\lambda=0\)). The lasso penalty \(\lambda\) enforces smoothness along the time axis of \(B\). At \(\lambda=0\), we apply no smoothing at all, and the model is equivalent to the Bounded Rank Model above. As \(\lambda\) approaches \(\infty\), \(B\) contains a constant value across each row.
### Basis Spline Approach
An alternative way to enforce smoothness on the temporal bias correction is to transform the temporal corrections by using B-spline basis functions. These functions \(S\) are determined by setting the polynomial degree \(d\) and a set of knots \(\{t_{1},\ldots,t_{m}\}\)[10]:
\[S_{i,0}(x)=1,\text{ if }t_{i}\leq x<t_{i+1},\text{ otherwise }0,\]
\[S_{i,k}(x)=\frac{x-t_{i}}{t_{i+k}-t_{i}}S_{i,k-1}(x)+\frac{t_{i+k+1}-x}{t_{i+k +1}-t_{i+1}}S_{i+1,k-1}(x),\]
for \(i\in\{1,\ldots,m\}\) and \(k\in\{1,\ldots,d\}\). We can use these basis functions to create a fixed spline transformation matrix \(C\in\mathbb{R}^{L\times T}\), where \(C_{i,t}\equiv S_{i,d}(t)\) and \(L\) is a function of \(d\) and \(m\).
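A direct NumPy implementation of this recursion, building \(C\) for evaluation times \(0,\ldots,T-1\) (knot placement is left to the caller):

```python
import numpy as np

def bspline_design_matrix(T, knots, d=3):
    """Build C with C[i, t] = S_{i,d}(t) via the recursion above."""
    ts = np.arange(T, dtype=float)
    # Degree-0 basis functions: indicators of [t_i, t_{i+1}).
    S = [((ts >= knots[i]) & (ts < knots[i + 1])).astype(float)
         for i in range(len(knots) - 1)]
    for k in range(1, d + 1):
        S_new = []
        for i in range(len(S) - 1):
            left = np.zeros(T)
            if knots[i + k] > knots[i]:
                left = (ts - knots[i]) / (knots[i + k] - knots[i]) * S[i]
            right = np.zeros(T)
            if knots[i + k + 1] > knots[i + 1]:
                right = (knots[i + k + 1] - ts) / (knots[i + k + 1] - knots[i + 1]) * S[i + 1]
            S_new.append(left + right)
        S = S_new
    return np.vstack(S)  # L x T, with L = len(knots) - d - 1
```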
We now solve the following optimization:
\[\hat{A},\hat{B}=\arg\min_{A,B}\|(X+AB^{T}C)-Y\|_{F}^{2},\]
where \(A\in\mathbb{R}^{N\times K}\), \(B\in\mathbb{R}^{L\times K}\), and \(K\leq\min(N,L)\), and \(C\) is the spline transformation matrix determined by the given polynomial degree and knots. This problem can be reformulated and solved by reduced rank regression, using the algorithm described in [11]. In this approach, we do not need to apply a penalty to the components of \(B\); the spline basis transformation will ensure that the temporal correction matrix \(B^{T}C\) is smooth.
In this approach, the hyperparameter \(K\) is understood the same way as above. The temporal smoothing hyperparameters are different, however. The degree of smoothing is determined by the polynomial degree \(d\) and knots \(t\). For simplicity, we will set \(d\) as a constant 3; this results in the commonly used cubic spline transformation. We will also enforce that the knots are uniformly spaced, leaving us with the knot interval as the only temporal hyperparameter. The larger the knot interval, the smoother the temporal corrections will be. Note that due to the transformation matrix \(C\), we are no longer able to fit \(\tilde{X}=Y\) exactly, even with unbounded \(K\).
We note that we can parameterize the Basis Spline Approach to be equivalent to the Fused Lasso Approach. By setting the basis spline degree to be \(d=0\), the spline transformation matrix \(C\) results in a vector that is piecewise constant. If we place a knot at every time point and apply an \(\ell_{1}\) penalty to the first differences of the spline components, then the Basis Spline Approach is equivalent to the Fused Lasso Approach. Analogous equivalences hold for higher order splines. If the basis spline degree is \(d=1\), the method is equivalent to trend filtering [12], and so on for higher polynomial degrees.
### Preprocessing Indicator Values
All of the models above assume that the heterogeneity corrections should be additive, that is, \(\tilde{X}=X+AB^{T}\). Depending on the application, it may be more reasonable to apply a multiplicative correction. In such a case, we can fit the models using \(\log X\) and \(\log Y\). If \(X\) or \(Y\) contain zeros, then we can add a pseudocount and fit using \(\log(X+\epsilon)\). We optimize
\[\min_{A,B}\|(\log X+AB^{T})-\log Y\|_{F}^{2}\]
for the Bounded Rank Model, and the temporal penalties are straightforward for the Fused Lasso and Basis Spline models. Our corrected indicator is \(\tilde{X}=X\odot\exp(AB^{T})\), where \(\odot\) represents the Hadamard product and exponentiation is element-wise. One caveat to note is that the optimization minimizes the mean squared error between the indicator and guide on the log scale.
### Hyperparameter Selection
Each of our three models has one or two hyperparameters that control how the guide signal is used. A user may have domain knowledge which suggests that a certain rank is appropriate, in which case, \(K\) can be selected manually. A rank could also be selected via various heuristics, such as an elbow plot of the principal components of \(Y-X\). Alternatively, multiple values of \(K\) could be selected for sensitivity analysis. In this section, we provide a quantitative method of selecting hyperparameters as a default option, as an alternative to manual selection.
In our setting, several factors complicate the usually straightforward application of cross validation. First, the data is structured in a two dimensional matrix. Our optimization method does not allow missingness in the matrices, so we cannot simply remove a random subset of data and run the optimization procedure. We can remove entire columns (time points) either randomly or in blocks, but we will need to interpolate the values for the missing time points. We can use mean squared error between \(\tilde{X}\) and \(Y\) as the error metric, but it is not clear that this is an ideal choice. The indicator and guide measure different quantities and we do not believe or wish that success is defined as matching \(\tilde{X}\) and \(Y\).
Despite these challenges, we will select hyperparameters by using a cross validation framework with mean squared error as the error metric. In order to reduce the temporal dependencies inherent in the data, we leave out blocks of time for testing, as illustrated in Fig 1. We use linear interpolation to populate the rows of \(B\) in the test set, as illustrated in Fig 2. Our error metric is the mean squared error between \(\tilde{X}\) and \(Y\) on the test set.
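A minimal sketch of the interpolation step: rows of \(B\) at held-out time points are filled in linearly from the training time points (cf. Fig 2). Note that `np.interp` holds endpoint values constant outside the training range:

```python
import numpy as np

def interpolate_B_rows(B_train, train_times, T):
    """Linearly interpolate a temporal matrix B (fit on training columns) to all T times."""
    B_full = np.empty((T, B_train.shape[1]))
    for k in range(B_train.shape[1]):
        B_full[:, k] = np.interp(np.arange(T), train_times, B_train[:, k])
    return B_full
```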
In the penalized regression context, it is common to apply the "one standard error rule" to cross validation, in which we select the most parsimonious model whose cross validation error is within one standard error of the minimum cross validation error across all models [13]. A common justification for this rule is that the cross validation errors are calculated with variance, and it is preferable to take a conservative approach against more complex models [13].
Fig 1: We use cross-validation for hyperparameter selection. The red blocks (ten days each) are held out for testing, and the yellow blocks (five days each) are held out to reduce dependencies between the training data and the test data. We repeat for 6 folds.
Fig 2: We need to interpolate the test indices of the temporal adjustment matrix \(B\) in order to calculate \(\tilde{X}\). We do this by linear interpolation between the values of \(B\) on the boundaries of the blocks of training indices. This figure shows interpolation for a single column of matrix \(B\), as an example.
Our setup provides further motivation to apply this rule. Unlike in standard cross validation, our goal is not to find the model which fits best to \(Y\), but rather to use \(Y\) as a guiding signal to mitigate heterogeneity. Additionally, there is likely a slight dependence between the training data and test data due to the temporal structure of the data. Applying the "one standard error rule" will prevent overadjustment to \(Y\).
In order to use the "one standard error rule", we will need to calculate the number of parameters for a given model. For the Bounded Rank and Basis Spline models, this is straightforward. For the Bounded Rank Model, the number of degrees of freedom is \(K(N+T-1)\), and for the Basis Spline Model, it is \(K(N+L-1)\), where \(L\) is the dimensionality of the basis spline transformation matrix \(C\). For the Fused Lasso Model, we cannot simply calculate the number of entries in the matrices \(A\) and \(B\). We will use a result that applies to generalized lasso problems under weak assumptions [14]. In our case, we will estimate the degrees of freedom in matrix \(B\) as \(\|\Delta_{t}B\|_{0}\), or the count of non-zero successive differences along the time axis of \(B\). The total degrees of freedom for the Fused Lasso Model is \(K(N-1)+\|\Delta_{t}B\|_{0}\).
We note that the theorem in [14] applies only to generalized lasso problems, and in our case, we use an iterative approach of which the fused lasso is just a subroutine. Therefore, the results may not hold precisely in our case. However, we are using the "one standard error rule" merely as a heuristic, and we do not require absolute accuracy in estimating the degrees of freedom.
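For reference, the degrees-of-freedom counts described above, collected in a small Python sketch (the zero tolerance is an arbitrary numerical choice):

```python
import numpy as np

def degrees_of_freedom(model, N, T, K, L=None, B=None, tol=1e-8):
    """Parameter counts used with the one standard error rule."""
    if model == "bounded_rank":
        return K * (N + T - 1)
    if model == "basis_spline":
        return K * (N + L - 1)
    if model == "fused_lasso":
        nonzero_diffs = int((np.abs(np.diff(B, axis=0)) > tol).sum())
        return K * (N - 1) + nonzero_diffs
    raise ValueError(f"unknown model: {model}")
```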
We reiterate that this is a general rule, and the user can use any rule to select hyperparameters. If a user has domain knowledge which suggests that a certain rank is appropriate, then they could simply select that rank. If a user wants a more parsimonious model, they could use a two standard error rule or a three standard error rule. In some cases, the cross validation error may have a clear elbow, which could suggest an ideal rank. The "one standard error rule" is used simply as a baseline when no obvious choice exists.
## 3 Results
### Simulation Experiments
We first performed experiments on simulated data, where the true rank of the difference matrix \(Y-X\) was known. We fit each of the three models to the difference matrix and evaluated performance through cross validation. The simulation setup is as follows (a code sketch follows the list):
1. Generate \(A\) as a \(N\times K\) matrix, where \(A_{ij}\stackrel{{\text{iid}}}{{\sim}}\text{Unif}(-1,1)\).
2. Generate \(B\) as a \(T\times K\) matrix. For each column \(k\), select nine random breakpoints \((b_{1}^{k},...,b_{9}^{k})\) between \(1\) and \(T-1\). Set \(b_{0}^{k}=0\) and \(b_{10}^{k}=T\). Set \(B_{b_{i}^{k}:b_{i+1}^{k},k}\) to be a random constant between \(0\) and \(1\). Thus each column of \(B\) is piecewise constant with \(10\) pieces.
3. Let \(C=AB^{T}\), then normalize to have standard deviation \(1\) across all elements.
4. Let the simulated difference matrix \(Y-X\) be \(D\), where \(D_{ij}=C_{ij}+\epsilon_{ij}\) and \(\epsilon_{ij}\stackrel{{\text{iid}}}{{\sim}}N(0,\sigma=0.1)\). Note that we do not have to simulate \(X\) or \(Y\) individually, only the difference.
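A compact Julia sketch of this generator follows; it is our own rendering of the steps above (the function name and defaults are ours).

```
using Random, Statistics

# Generate the simulated difference matrix D = C + ε described above.
function simulate(N, T, K; σ=0.1, pieces=10)
    A = 2 .* rand(N, K) .- 1                    # A_ij ~ Unif(-1, 1)
    B = zeros(T, K)
    for k in 1:K
        # pieces - 1 random breakpoints plus the endpoints 0 and T
        bps = sort(vcat(0, randperm(T - 1)[1:pieces-1], T))
        for i in 1:pieces                        # piecewise-constant column
            B[bps[i]+1:bps[i+1], k] .= rand()
        end
    end
    C = A * B'                                   # N × T rank-K signal
    C ./= std(C)                                 # unit standard deviation
    return C, C .+ σ .* randn(N, T)              # signal and observed D
end
```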
In our simulations, we used \(N=51\) and \(T=699\) as in our real-world analysis. We experimented with \(K\in\{5,10,40\}\). For the simulations, we will discuss the Fused Lasso Model (including \(\lambda=0\), which is equivalent to the Bounded Rank Model). The Basis
Spline Model does not perform well in the simulations, as will be discussed in the next section.
For \(K=5\), the rank selected by cross validation is 4, as shown in Fig 3. In applications where the signal-to-noise ratio is low, our methods will have difficulty detecting all of the heterogeneity. In this simulation, we can decompose the signal and noise exactly, and perform SVD on the signal and noise separately to determine the signal-to-noise ratio. The first 4 singular values of the signal matrix \(C\) are larger than those of the noise matrix \(\epsilon\), but the remainder are smaller. We do not see a strong signal in all 5 singular values, partially because the rank-one components of \(AB^{T}\) were not constructed to be orthogonal. This supports the hypothesis that the true rank was not recovered due to the low signal-to-noise ratio. We see a similar pattern for \(K=10\) (Fig 4), where the rank selected by cross validation is 8 and the first 8 singular values of \(C\) are larger than those of \(\epsilon\). Once \(K\) exceeds the optimal rank, the cross validation error slowly increases, just as would be expected if the model were overfitting.
Fig 5 shows that for \(K=40\), the rank with the minimum cross validation error is indeed 40, using the Fused Lasso Model with \(\lambda=1\). Even though the signal-to-noise ratio is greater than 1 for only the first 11 singular values, the correct rank is selected. This is a somewhat surprising result, and it might be attributable to the penalty encouraging the columns of \(B\) to be piecewise constant, which may allow the model to detect even parts of the signal that are weaker than the noise.
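The singular-value diagnostic used above is straightforward to reproduce. Continuing the simulation sketch, with the signal \(C\) and the noise \(\epsilon=D-C\) known exactly (our illustration):

```
using LinearAlgebra

C, D = simulate(51, 699, 5)
ε = D .- C
s_signal = svdvals(C)   # spectrum of the rank-K signal
s_noise  = svdvals(ε)   # spectrum of the Gaussian noise
detectable = count(s_signal[1:5] .> s_noise[1:5])  # components above the noise floor
```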
### COVID-19 Insurance Claims and Reported Cases
Insurance claims are a useful data source in modeling and forecasting epidemics. They provide information about how many people are sick enough to seek medical care, which is potentially more useful than the raw number of infected people, many of whom may be asymptomatic. They can also be available at high geographic and temporal resolution, as well as cover a large proportion of the total population. In this section, we will use a dataset of aggregated insurance claims provided by Optum. The signal is the fraction of all outpatient claims with a confirmed COVID-19 diagnosis code, followed by smoothing and removal of day-of-week effects [15]. Despite the advantages of claims
Fig 3: Although the true rank of the correction matrix is 5, the optimal rank selected is 4 and \(\lambda=1\). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears. Even when applying the one standard error rule (1se), the same model is selected (as denoted by the square and triangle).
Fig 4: Although the true rank of the correction matrix is 10, the optimal rank selected is 8 and \(\lambda=0.1\) (denoted by the square). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears. When applying the one standard error rule (1se), the model selected has rank 7 and \(\lambda=1\) (denoted by the triangle).
Fig 5: The true rank of the correction matrix is 40, and the optimal rank selected is 40 with \(\lambda=1\) (denoted by the square). For the Bounded Rank Model (BR) and small values of \(\lambda\) for the Fused Lasso Model (FL), a clear overfitting curve appears with the minimum significantly lower than the optimal rank. When applying the one standard error rule (1se), the rank selected is 13 with \(\lambda=1\) (denoted by the triangle).
datasets, they are often subject to spatial and temporal heterogeneity, as we will demonstrate.
We used reported COVID-19 cases from Johns Hopkins [16] as our guide signal to correct for heterogeneity in the insurance claims signal. As in the simulation experiments, we used the hyperparameter selection scheme described in Section 2.5. Because we believe that the effects of heterogeneity here are multiplicative rather than additive, we applied preprocessing steps as described in Section 2.4. We set \(X\) to be the log of the insurance claims signal, and \(Y\) to be the log of the reported cases signal, each with a pseudocount of \(\epsilon=1\) to account for zeros.
Unlike in the simulation experiments, we do not see a clear overfitting curve. As shown in Fig 6, the cross validation error decreases as \(K\) increases and \(\lambda\) decreases (as the model's complexity increases) and then flattens. The model with the best cross validation error has \(K=50\), while the difference matrix itself has full rank \(51\). Clearly, we do not want to use this model, since we do not believe that the heterogeneity present in the claims signal has rank \(50\) out of a possible \(51\). This is where the "one standard error rule" is useful. It selects the Fused Lasso Model with rank \(K=12\) and \(\lambda=1\). Although this model still has a higher rank than we may have thought appropriate, it is much simpler than the model which minimizes cross validation error.
The Basis Spline Models perform poorly on this dataset, as shown in Fig 7. For very small knot intervals, some models are candidates for selection under the "one standard error rule", but their degrees of freedom are larger than those of the Fused Lasso Models. We examine the behavior of the Basis Spline Models in Fig 8, where we see that there is overfitting if the knot interval is too short (many parameters) and underfitting if the knot interval is too long (fewer parameters). After performing linear interpolation, the overfitting model ends up with reasonable accuracy. However, the basis splines themselves do not accurately represent the temporal corrections. We conclude that the assumption of cubic splines is too rigid in this case. The splines simply cannot fit well to the data, likely due to abrupt changepoints that the Fused Lasso Models are able to handle better.
In Fig 9, we illustrate the benefit of applying heterogeneity corrections. The raw
Fig 6: Cross validation error is optimized at \(K=50\) using the Bounded Rank Model (BR), i.e. \(\lambda=0\), indicated by the square. However, when applying the one standard error rule, we select the Fused Lasso Model (FL) with \(K=12\) and \(\lambda=1\), indicated by the triangle. This results in a great reduction in parameters with a small decrease in cross validation accuracy.
insurance claims signal is quite different from the reported case signal in late summer 2020. The state with the highest claims signal is New York, even though New York has one of the lowest rates of confirmed cases. After applying heterogeneity correction using cases as a guide, the insurance claims signal looks more similar to the reported case signal, improving the comparability of the insurance claims signal across states.
### Evaluating Preprocessing Assumptions
As mentioned above, we applied a log transform to the data, assuming that the heterogeneity effects are multiplicative rather than additive. We can test that assumption by comparing the following three models.
1. Bounded Rank Model with rank \(k=1\) (BR-1): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}(\log X_{it}+a_{i}\cdot b_{t}-\log Y_{it })^{2}\]
2. Additive Model in log space (AL): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}(\log X_{it}+a_{i}+b_{t}-\log Y_{it})^{2}\]
Fig 9: Applying a rank-2 correction improves similarity between reported COVID-19 cases and the insurance claims signal. On the left, the average daily confirmed COVID-19 cases between August 15 and September 15, 2020 are displayed in a choropleth map. On the right, we display the value of the insurance claims signal for the same time period before (top) and after (bottom) applying a rank-2 heterogeneity correction using the Bounded Rank Model. The pre-correction %CLI map is not similar to the cases map, but the post-correction %CLI map is.
3. Additive Model in count space (AC): \[\min_{a,b}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(\frac{X_{it}+a_{i}+b_{t}}{Y_{it}}-1 \right)^{2}\]
All of these models have \(N+T\) parameters and a total of \(N+T-1\) degrees of freedom, with a single parameter for each location and a single parameter for each day, and no regularization. In the first two models, the heterogeneity is assumed to be additive in the log space, or equivalently multiplicative in the count space. In the AC model, the heterogeneity is assumed to be additive in the count space. In the BR-1 model, the heterogeneity parameters are multiplied together, whereas in the AL model, they are added. Note that we minimize the relative error for the AC model so that its objective is comparable to the log-scale objectives of the other two models.
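The AL model has a closed-form least-squares solution, which makes this comparison cheap to run; the sketch below is ours, derived from the objective above.

```
using Statistics

# Fit the AL model: minimize Σ (D_it - a_i - b_t)^2 with D = log Y - log X.
# The least-squares fit is the classic two-way additive decomposition:
# a_i + b_t = rowmean_i(D) + colmean_t(D) - grandmean(D).
function fit_additive(D::AbstractMatrix)
    μ = mean(D)
    a = vec(mean(D, dims=2)) .- μ   # one offset per location
    b = vec(mean(D, dims=1))        # one offset per day
    return a, b                     # corrected signal: log X̃ = log X .+ a .+ b'
end
```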
We display the mean squared error between \(\log\tilde{X}\) and \(\log Y\) for each of the three models below, with the standard error in parentheses. The models BR-1 and AL, which assume that heterogeneity is multiplicative, perform much better than AC, which assumes that heterogeneity is additive. This supports our initial assumption that the effects of heterogeneity in this particular signal pair are multiplicative.
The AL model performs slightly better than the BR-1 model, which weakly suggests that the spatial and temporal parameters should be added instead of multiplied together. However, the AL model cannot be generalized to higher rank corrections. Therefore, we cannot use this model in practice, as we believe that the effects of heterogeneity are too complex to be modeled solely by a single parameter for each location and for each time point.
| Model | BR-1 | AL | AC |
| --- | --- | --- | --- |
| MSE | 0.2205 (0.00220) | **0.2138 (0.00235)** | 0.7611 (0.00919) |
### Google Trends and CTIS Survey
Google has made public an aggregated and anonymized dataset of Google search queries related to COVID-19 [17]. An indicator derived from this dataset roughly measures the relative prevalence of a specific set of search queries in a given location and time. Ideally, this indicator could tell us approximately how many people are symptomatic at a given time, at very minimal cost. However, search query behavior is affected by many factors other than whether a person is symptomatic. People may be more likely to search for COVID-19 related terms if someone famous was reported as infected, or if public health measures were enacted or lifted. These can create both spatial and temporal heterogeneity in the indicator.
We used the COVID-19 Trends and Impact Survey (CTIS) as a guide signal. Specifically, our guide is the estimated percentage of people with COVID-like illness from the survey. We used the hyperparameter selection scheme as above.
The results here, shown in Fig 10, look more similar to the results on the simulated dataset. The model that performs best in cross validation has a rank of 7, and when applying the one standard error rule, the optimal model is a Fused Lasso Model with rank \(K=4\) and \(\lambda=1\). With increasing \(K\), the cross validation errors increase, indicating that some overfitting can occur. Here as well, the Basis Spline Models perform poorly (not pictured), echoing the pattern seen in the insurance claims experiment.
We examine the temporal components of the optimal model in Fig 11. As expected, the components are piecewise constant across time. The first (most important) component is mostly negative in the beginning of the pandemic and spikes during the
Omicron wave. By using the CTIS survey as a guide, we correct the Google Trends signal downwards in the beginning of the pandemic and upwards during the Omicron wave.
One possible explanation for this heterogeneity is the decline in public attention and anxiety regarding the COVID-19 pandemic. In the beginning and middle of 2020, many asymptomatic people entered COVID-related searches into Google, resulting in a positively biased signal. Throughout most of 2021, minimal corrections are made and the two signals are at their strongest agreement. During the Omicron wave around the beginning of 2022, our method applies a strong positive correction to the Google Trends signal. According to the CTIS signal, COVID-19 cases are highest at this point, but the Google Trends signal does not increase to the same extent, so a further positive correction is needed. One possible explanation is that fewer symptomatic individuals were appearing in the Google signal, potentially because they were more confident that they indeed had a COVID-19 infection, or because they were less anxious. Another explanation could be that fewer non-symptomatic individuals were appearing in the Google signal, potentially because they were less interested in the pandemic. Whatever the exact reason, the corrections show that the Google signal suffers from temporal heterogeneity, which can be corrected by using the CTIS survey as a guide signal.
## 4 Discussion
As explained above, we define heterogeneity as the presence of location-dependent or time-dependent bias between an indicator and its unobserved ground truth. Indicators are useful sources of information for modeling, mapping, and forecasting epidemics, but conclusions derived from the indicators in the presence of heterogeneity may be suspect. The problem of removing heterogeneity is difficult to translate into an optimization problem in the absence of any ground truth data. Therefore, we use another signal as a guide, and present a method that can use the guide strongly or weakly.
Our method appears to be useful on several pairs of COVID-19 indicators. As Fig 9 shows, the raw COVID-19 insurance claims signal gives a very different picture than reported cases. If we were to use the insurance claims signal to understand the current
Fig 10: When correcting the Google Trends signal, cross validation error is optimized at \(K=7\) using the Fused Lasso Model (FL) with \(\lambda=0.1\), indicated by the square. However, when applying the one standard error rule, we select \(K=4\) and \(\lambda=1\), indicated by the triangle.
COVID-19 burden across the United States, we could be very misinformed.
The flexibility of our approach is both its main strength and main weakness. On the one hand, the models discussed in this paper can be used for any generic signal and corresponding guide. The user can choose the appropriate parameters based on domain knowledge, exploratory data analysis (e.g. an elbow plot), or the cross validation scheme described above. Because heterogeneity is not straightforward to quantify, we require flexibility to cover a variety of use cases.
However, this flexibility requires a method to select hyperparameters. In simulations, cross validation yields a reasonable choice of hyperparameters. However, in a real-world setting, the hyperparameters selected by cross validation lead to a model that seems to overadjust. Cross validation might lead to model overadjustment because there are dependencies between the left-out data and the training data. In this case, just as we would expect to overfit in a normal prediction setup, we would expect to overadjust to the guide signal. Additionally, the error metric also encourages overadjustment, since we minimize the squared error between the corrected signal and the guide.
Another significant limitation of this approach is that the guide signal needs to be more reliable than the indicator we are trying to correct. Using Fig 9 as an example again, we see that \(Y\) is low in New York but \(X\) is high, and that after applying our heterogeneity correction, \(\tilde{X}\) is low. This is only an improvement if \(Y\) is correct, i.e., if true COVID-19 activity in New York is actually low. In this case, we have domain knowledge to suggest that reported cases suffer from spatial heterogeneity less than insurance claims. However, were we to treat cases as \(X\) and insurance claims as \(Y\), then our "corrected" case signal would be incorrectly high in New York.
An important extension to this approach would be modifying the hyperparameter selection scheme. A better scheme would not default to overadjustment so strongly and would not use an error metric that is optimized when fit exactly to the guide signal. Another extension would be the use of multiple guide signals \(Y_{1},\ldots,Y_{m}\). A simple start would be to set \(Y=\alpha_{1}Y_{1}+\cdots+\alpha_{m}Y_{m}\) and then apply the heterogeneity correction using \(Y\) as the guide signal. Intuitively, if the sources of heterogeneity in the various guides are uncorrelated, then they will tend to cancel out as a result of this averaging,
Fig 11: We plot the temporal components of the heterogeneity correction between the CTIS and Google Trends signal, using the Fused Lasso model with \(K=4\) and \(\lambda=1\). The most prominent corrections occur around January 2022, corresponding with the Omicron wave.
resulting in a spatially and temporally more homogeneous guide. Alternatively, we could view \(X,Y_{1},Y_{2},\ldots,Y_{m}\) as \(m+1\) different signals, and use them with the models discussed above to jointly estimate the underlying latent quantity to which they are all related. Using multiple guide signals will likely also reduce the overadjustment problem, and a more creative approach to incorporating multiple signals might avoid using the error with the guide signal as a performance metric for hyperparameter selection.
Our current setup fits the adjustment matrix in a batch setting, but a future direction would be to modify the algorithm in an online setting. Indicators are commonly used in real-time, so an online algorithm which makes adjustments as new data arrives may be more appropriate for many use cases.
Of the three models we propose, the Fused Lasso Model performs best in both simulated and real-world experiments. However, it is quite expensive computationally, whereas the other two models can be solved rapidly using SVD-based approaches. Given that the Bounded Rank Model usually performs well, it may be preferable to simply use the Bounded Rank Model in some applications. The Basis Spline Model is slightly more sophisticated without a meaningful increase in computation time. However, the assumptions that lie behind the Basis Spline Model seem to be too strong, specifically when there are abrupt changepoints in temporal heterogeneity.
| Auxiliary data sources are increasingly important in epidemic surveillance because they offer higher spatial and temporal resolution, broader coverage, and lower latency than traditional surveillance signals. We describe spatial and temporal heterogeneity in the signals derived from these data sources: the presence of spatial and/or temporal bias in these signals. We present a method that uses a "guide" signal to correct for spatial and/or temporal bias, producing a more reliable signal that can be used for modeling and forecasting. The method assumes that the heterogeneity can be approximated by a low-rank matrix and that the temporal heterogeneity is smooth over time. We also present a hyperparameter selection algorithm for choosing the matrix rank and degree of temporal smoothness of the correction. In the absence of true ground truth, we use maps and plots to demonstrate that the method mitigates heterogeneity.
2309.08778 | Satisfiability.jl: Satisfiability Modulo Theories in Julia | Satisfiability modulo theories (SMT) is a core tool in formal verification.
While the SMT-LIB specification language can be used to interact with theorem
proving software, a high-level interface allows for faster and easier
specifications of complex SMT formulae. In this paper we present a novel
open-source package for interacting with SMT-LIB compliant solvers in the Julia
programming language. | Emiko Soroka, Mykel J. Kochenderfer, Sanjay Lall | 2023-09-15T21:44:49 | http://arxiv.org/abs/2309.08778v2 | # Satisfiability.jl:
###### Abstract
Satisfiability modulo theories (SMT) is a core tool in formal verification. While the SMT-LIB specification language can be used to interact with theorem proving software, a high-level interface allows for faster and easier specifications of complex SMT formulae. In this paper we discuss the design and implementation of a novel publicly-available interface for interacting with SMT-LIB compliant solvers in the Julia programming language.
Keywords: satisfiability modulo theories · Julia · SMT-LIB · interface
## 1 Introduction
Theorem proving software is one of the core tools of formal verification, model checking, and synthesis. Provers are continually improving in their ability to tackle increasingly large, complex, and specialized problems of real-world significance. While the core of SMT is propositional logic, modern solvers encompass integer and real arithmetic, floating-point arithmetic, strings, and data structures such as bit vectors [12]. Additionally, new theories and heuristics are continually being developed, increasing the practical utility of SMT [9][21].
This paper introduces Satisfiability.jl, a Julia package providing a high-level representation for SMT formulae including propositional logic, integer and real-valued arithmetic, and bit vectors. Julia is a dynamically-typed functional language ideal for scientific computing due to its use of type inference, multiple dispatch, and just-in-time compilation to improve performance [7][20]. Although Julia has a smaller software ecosystem compared to Matlab or scientific Python, it has been used in high-performance applications including machine learning and GPU programming [13][6]. Satisfiability.jl is the first package to provide an interface for SMT solving in idiomatic Julia, taking advantage of language features to simplify the process of specifying and solving an SMT problem.
## 2 Prior Work
Many theorem provers have been developed over the years. Some notable provers include Z3 [19], PicoSAT [8], and CVC5 [3], all of which expose APIs in popular languages including C++ and Python. However, provers are low-level tools intended to be integrated into other software, necessitating the development of higher-level interfaces to improve usability. Such interfaces have been published for other common languages: PySMT uses both solver-specific interfaces and the SMT-LIB language to interact with a variety of solvers [14]. Similar packages exist for JavaScript and for Scala (ScalaSMT) [2][11]. In C++, the SMT-Switch library provides a powerful interface to SMT-LIB solvers by exposing many of the underlying SMT-LIB commands [18].
In comparison, SMT solving in Julia has historically required the use of wrapped C++ APIs to access specific solvers. Z3 and PicoSAT, among others, provide Julia APIs through this method, allowing access to some or all functionality at a lower level of abstraction [16]. Although wrapping is a powerful tool to access code in other languages, wrapped APIs often provide interfaces that do not match the idioms or best practices of a specific language. Thus, a native Julia interface has the potential to greatly improve the accessibility of formal verification in Julia. Satisfiability.jl is the first such interface to be published in the Julia ecosystem.
## 3 The SMT-LIB Specification Language
SMT-LIB is a low-level specification language designed to standardize interactions with theorem provers. At time of writing, the current SMT-LIB standard is V2.6; we used this version of the language specification when implementing our software. To disambiguate between SMT (satisfiability modulo theories) and this specification language, we always refer to the language as SMT-LIB. For an in-depth treatment of computational logic and the associated decision procedures, readers are referred to [10][15]. A shorter overview of SMT is available in [12].
SMT-LIB uses a Lisp-like syntax designed to simplify input parsing. It is intended to provide an interactive interface for theorem proving similar to Julia's REPL. SMT-LIB supports declaring variables, defining functions, making assertions (e.g. requiring that a Boolean formula be true), and issuing solver commands. Figure 1 provides an example of an SMT solver moving between modes.
SMT expressions also have an associated _sort_, which constrains what functions or operations are valid for a given expression. For example, the SMT-LIB variable declaration (declare-fun a () Int) declares a symbol a with sort Int. The function definition (define-fun f () Bool (> a 1)) defines f as the expression a > 1, which has sort Bool. The concept of SMT sorts maps cleanly onto Julia's type system.
Figure 1 shows a simple example of a solver moving between modes as statements are received. One limitation of SMT-LIB is
that certain commands are only valid in specific solver modes. For example, the command (get-model) retrieves the satisfying assignment for a formula and is only valid in sat mode, while (get-unsat-core) is only valid in unsat mode. Issuing a command in the wrong mode yields an error, thus many useful sequences of SMT-LIB commands cannot be scripted in advance.
For a full description of SMT-LIB, readers are referred to [5]. As our software provides an abstraction on top of SMT-LIB, we refrain from an in-depth description of the language. Knowledge of SMT-LIB is not required to use Satisfiability.jl.
## 4 Package Design
Our software facilitates the construction of SMT formulae and the automatic generation of SMT-LIB code, providing a simple interface to SMT-LIB compliant theorem provers. The basic unit of an SMT formula is the expression (Figure 2).
We define the abstract type AbstractExpr and concrete types inheriting from AbstractExpr, which implement a tree structure capable of representing
Figure 1: A simple interaction with an SMT-LIB standard solver. If the formula asserted in assert mode was unsatisfiable, the solver would instead enter unsat mode and a different set of commands would be valid.
Figure 2: The expression and(a > b + 1, b >= 0).
arbitrarily complex nested expressions. Julia's type system prevents expressions with mismatched sorts from being constructed. For example, \(\neg\texttt{x}\) is only valid if x is of type BoolExpr.
Output types of operations follow Julia's type compatibility and promotion rules; for example, if x is a BoolExpr and a is an IntExpr, x + a is an IntExpr.
### Expressions
Variables are declared using the @satvariable macro. Vector- and matrix-valued variables are arrays of single-valued expressions; thus, Julia's built-in array functionality allows operators to be broadcast across arrays of expressions. Julia uses dot syntax for broadcasting: if x is a vector, f.(x) broadcasts f across x[1]...x[n].
julia> @satvariable(x[1:2, 1:3], Bool)
2×3 Matrix{BoolExpr}:
 x_1_1  x_1_2  x_1_3
 x_2_1  x_2_2  x_2_3

julia> @satvariable(y[1:2], Bool)

julia> and.(x, y) # constructs a 2×3 matrix of BoolExprs
Expression naming. Elements of vector and matrix expressions are assigned human-readable names of the form name_i_j. Expressions constructed from operators take names of the format OP_HASH, where OP is the operator name and HASH is computed from the names of child expressions using Julia's deterministic Base.hash function. This ensures all expressions have unique names and allows components of nested expressions to be re-used when generating the SMT-LIB representation of an expression. In future versions of our software, this could also allow for optimization by memoizing large expressions with repeated smaller components. Expressions are simplified where possible: specifically, not(not(expr)) simplifies to expr, and nested conjunctions or disjunctions are flattened.
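For illustration, the naming scheme amounts to something like the following (our sketch, not the package internals):

```
# A composite expression's name: operator plus a hash of its children's names.
child_names = ["x", "y"]
composite_name = string("and_", hash(child_names))  # a name like "and_123456..."
```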
### Constants
Constants are automatically wrapped. Julia's native Bool, Int, and Float64 types interoperate with our BoolExpr, IntExpr and RealExpr types as expected, following type promotion rules.
Numeric constants are simplified and promoted; for example, true + 2 is stored as integer 3 and 1 + 2.5 is promoted to 3.5. Logical expressions involving constants can often be simplified. For example:
julia> @satvariable(z, Bool)

julia> and(false, z)
false

julia> implies(false, z)
true
### SMT-LIB Representation
Satisfiability.jl can automatically generate the SMT-LIB representation of an expression. An example is provided below.
julia> @satvariable(x, Bool)

julia> @satvariable(y, Bool)

julia> expr = or(¬x, and(¬x, y))

julia> print(smt(expr))
(declare-fun x () Bool)
(declare-fun y () Bool)
(assert (or (and (not x) y) (not x)))
### Interacting with Solvers
Internally, our package uses Julia's Process library to interact with a solver via input and output pipes. This supports the interactive nature of SMT-LIB, which necessitates a two-way connection with the solver, and provides several benefits. By transparently documenting how our software manages sessions with solvers, we eliminate many of the difficulties that arise when calling software dependencies. We also unify the process of installing solvers for Satisfiability.jl across operating systems; the user simply ensures the solver can be invoked from their machine's command line. Users may customize the command used to invoke a solver, providing a single mechanism for interacting with any SMT-LIB compatible solver; customizing options using command line flags; and working around machine-specific issues such as a solver being available under a different name or at a specific path.
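For readers curious about the mechanism, the following is a minimal Base-Julia sketch of such a two-way pipe (ours, not the package's internal code); it assumes z3 is available on the PATH.

```
proc = open(`z3 -smt2 -in`, "r+")   # spawn the solver with stdin/stdout pipes
write(proc, "(declare-fun x () Bool)\n(assert x)\n(check-sat)\n")
println(readline(proc))             # prints "sat"
close(proc)
```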
## 5 Usage
### Specifying an SMT Problem
Variables. New variables are declared using the @satvariable macro, which behaves similarly to the @variable macro in JuMP [17]. @satvariable takes two arguments: a variable name with optional size and shape (for creating vector-valued and matrix-valued variables) and the variable type.
@satvariable(x, Bool)           # single BoolExpr
@satvariable(y[1:n], Bool)      # n-vector of BoolExpr
@satvariable(z[1:m, 1:n], Int)  # m×n matrix of IntExpr
#### Uninterpreted functions.
An uninterpreted function is a function where the input-output mapping is not known. When uninterpreted functions appear in SMT formulae, the task of the solver is to construct a representation of the function using operators in the appropriate theories, or to determine that no function exists (the formula is unsatisfiable). Satisfiability.jl supports declaring uninterpreted functions with one input variable and one output variable.
@uninterpreted(f, Int, Bool)
@uninterpreted(g, (BitVector, 32), (BitVector, 32))

Uninterpreted functions are implemented using Julia's metaprogramming capabilities to generate correctly typed functions that return either SMT expressions or, if a satisfying assignment is known, the correct value when evaluated on a constant. Continuing the above example:
julia> @satvariable(x, Int)

julia> typeof(f)
UninterpretedFunc

julia> typeof(f(x))
BoolExpr

julia> sat!(¬f(-1), f(1))
:SAT

julia> (f(-1), f(1))
(false, true)
#### Type promotion.
Integer or real-valued operations +, -, * return integer or real expressions using Julia type promotion rules. Division (/) is only valid for real-valued expressions.
@satvariable(a, Bool)
@satvariable(b, Int)
@satvariable(c, Real)

a + b # returns IntExpr
b + c # returns RealExpr
#### Operators.
Boolean and arithmetic operators are listed in Table 1. The theory of fixed-size BitVectors includes the arithmetic operators +, *, - and comparison operators, which perform unsigned comparisons, in addition to the BitVector-specific operators listed in Table 2. In addition to its SMT-LIB standard meaning, the distinct operator can accept an array or iterable of expressions, in which case distinct(x1,...,xn) constructs a formula where each xi has a unique value. Finally, as == is used to construct equality constraints, one can check whether two AbstractExpr are equivalent using isequal.
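A small example of these behaviors, based on the descriptions above (our illustration):

```
@satvariable(w[1:3], Int)
distinct(w)          # a formula forcing w[1], w[2], w[3] to take unique values
c = w[1] == w[2]     # == builds an equality constraint (a BoolExpr)
isequal(w[1], w[1])  # true: checks structural equivalence, not SMT equality
```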
**Operator precedence.** Julia defines the operator precedence and associativity for all symbolic operators defined in this package. For the most up-to-date description of Julia's operator precedence rules, see the Julia documentation [1]. Users are encouraged to parenthesize expressions, as some precedence rules give unexpected results. For example:
```
@satvariable(x, Bool)
@satvariable(y, Bool)
@satvariable(z, Bool)

x ⇒ y ∧ y ⇒ z
# this is not (x ⇒ y) ∧ (y ⇒ z)
# it's implies(x, implies(y ∧ y, z))
```
**SMT-LIB theory of BitVectors:**

| Operator | Symbol | Notes |
| --- | --- | --- |
| and | & | |
| or | \| | |
| not | ~ | |
| | << | Logical left shift |
| | >>> | Logical right shift |
| div | | Integer division |
| urem | | Unsigned remainder |
| concat | | Concatenate BitVectors |
| bv2int | | Convert BitVector to IntExpr |
| int2bv | | Convert IntExpr to BitVector |

**Additional operators implemented by Z3:**

| Operator | Symbol | Notes |
| --- | --- | --- |
| nor | ⊽ | Added in Julia 1.7 |
| nand | ⊼ | Added in Julia 1.7 |
| xnor | | |
| | >> | Arithmetic right shift |
| smod | | Signed modulus |
| srem | | Signed remainder |
| slt | | Signed < |
| sle | | Signed <= |
| sge | | Signed >= |
| sgt | | Signed > |

Table 2: BitVector-specific operators. With the exception of concat, operators over 2 or more BitVectors are only valid for BitVectors of matching length. Operators in the second table are available using Z3, but are not part of the SMT-LIB theory of BitVectors, thus other solvers may not support them [4].
**Boolean operators:**

| Operator | Symbol |
| --- | --- |
| or | ∨ |
| and | ∧ |
| not | ¬ |
| implies | ⇒ |
| xor | ⊻ |
| iff | ⇔ |
| ite | |
| distinct | != |

**Comparison and arithmetic operators:**

| Operator | Input Type(s) | Return Type |
| --- | --- | --- |
| > | Any numeric expr. | BoolExpr |
| >= | Any numeric expr. | BoolExpr |
| < | Any numeric expr. | BoolExpr |
| <= | Any numeric expr. | BoolExpr |
| == | Any numeric expr. | BoolExpr |
| + | Any numeric expr. | IntExpr, RealExpr |
| - | Any numeric expr. | IntExpr, RealExpr |
| * | Any numeric expr. | IntExpr, RealExpr |
| / | RealExpr | RealExpr |

Table 1: Boolean, comparison, and arithmetic operators. Where "Any numeric expr." is listed, this refers to BoolExpr, IntExpr, RealExpr, or a Bool, Int, or Float constant. The return type of an arithmetic operator is determined by Julia type promotion rules.
x \(\wedge\) y + z # this is equivalent to (x \(\wedge\) y) + z

@satvariable(a, Int)
@satvariable(b, Int)
a \(\geq\) 1 \(\vee\) b \(\geq\) 1 # yields a type error
(a \(\geq\) 1) \(\vee\) (b \(\geq\) 1) # valid expression
### Working with Formulae
The function smt(expr::AbstractExpr) returns the SMT-LIB representation of expression expr as a string. If expr is a Boolean expression, smt will assert expr; otherwise smt will simply generate the SMT-LIB statements defining expr. This behavior can be controlled using the optional keyword argument assert=true|false. The function save(expr::AbstractExpr, io::IO) writes the output of smt(expr) to an open file or other Julia IO object.
The function sat!(expr::BoolExpr, solver::Solver) calls the given solver on expr and returns either :SAT, :UNSAT, or :ERROR. If the status is :SAT, the values of all nested expressions in expr are updated to allow easy retrieval of the satisfying assignment. If it is :UNSAT, the values of all nested expressions are set to nothing.
sat! can also be called on a Julia IO object representing a string of valid SMT-LIB commands, allowing previously-written SMT-LIB files to be used with Satisfiability.jl.
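Putting these functions together, a short end-to-end example (ours, using only the calls described above) looks as follows:

```
@satvariable(x, Bool)
@satvariable(y, Bool)
expr = and(x, ¬y)

open("problem.smt2", "w") do io
    save(expr, io)                 # write the SMT-LIB representation to a file
end

status = sat!(expr, solver=Z3())   # :SAT, :UNSAT, or :ERROR
if status == :SAT
    println((value(x), value(y)))  # retrieve the satisfying assignment
end
```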
### Interactive Solving
The SMT-LIB specification was designed as an interactive interface, allowing users to modify the assumptions of an SMT problem and issue follow-up commands after a sat or unsat response. Satisfiability.jl provides an InteractiveSolver object to support these use cases.
In interactive mode, users can manage the solver assertion stack using push!, pop! and assert! commands. The behavior of sat! also changes; calling sat!(interactive_solver) checks the satisfiability of all currently asserted statements, returning a tuple (status, assignments) where assignments is a dictionary containing the satisfying assignment.
The following example demonstrates interactive solving.
isolver = open(CVC5())          # open an InteractiveSolver
assert!(isolver, exprs...)      # make some assertions

# check satisfiability of asserted expressions
status, assignment = sat!(isolver)
if status == :SAT
    # set the values of exprs
    map((e) -> assign!(e, assignment), exprs)
end

# equivalent to (check-sat-assuming more_exprs)
status, assignment = sat!(isolver, more_exprs...)

# push and pop
push!(isolver)                  # push and pop default to 1
assert!(isolver, expr1)
status, assignment = sat!(isolver)
if status == :UNSAT
    pop!(isolver)
end

close(isolver)                  # clean up processes when done
This example exposes a limitation of interactive solving: because the asserted expressions are not passed to sat!, their values are not automatically set when a satisfying assignment is found. Instead, a dictionary containing the assignment is returned, and users can propagate these values using assign!.
Additionally, Satisfiability.jl exposes a low-level interface, allowing advanced users to send SMT commands and receive prover responses. Users are responsible for ensuring the correctness of these commands, as well as interpreting the results. In this use case, Satisfiability.jl can aid advanced SMT users in automating generation of SMT-LIB statements and programmatically interacting with solvers.
## 6 Examples
To demonstrate the compact syntax of Satisfiability.jl and evaluate its performance on large problems, we selected the "Pigeonhole" benchmark problem from the SMT-LIB QF_LIA benchmark library1. Given an \((n+1)\times n\) integer matrix \(P\), the benchmark problem is to find a satisfying assignment such that each element \(P_{ij}\) is in \(\{0,1\}\), each row \(i\) satisfies \(\sum_{j=1}^{n}P_{ij}\geq 1\), and each column \(j\) satisfies \(\sum_{i=1}^{n+1}P_{ij}\leq 1\). Since there are more pigeons than available spaces, the problem is always unsat. Results are presented in Figure 3.
Footnote 1: [https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_LIA/-/tree/master/pidgeons](https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_LIA/-/tree/master/pidgeons)
The pigeonhole benchmark is defined using the following code.
function pigeonhole(n::Int)
    @satvariable(P[1:n+1, 1:n], Int)
    rows = BoolExpr[sum(P[i, :]) >= 1 for i = 1:n+1]
    cols = BoolExpr[sum(P[:, j]) <= 1 for j = 1:n]
    status = sat!(rows, cols, P .>= 0, P .<= 1, solver=Z3())
    return status # should always return :UNSAT
end
We also tested a graph coloring task in which Satisfiability.jl attempts to find up to 5 colorings for each graph of size \(n\), progressively adding assertions
to exclude previously-found solutions. We then generate an SMT-LIB script containing the same commands. This script provides a baseline for timing Z3 on the same task, although the scripted performance could not be practically achieved without prior knowledge of the results (Figure 4).
The following code defines the graph coloring task given \(n\) (the number of nodes), a list of edges as \((i,j)\) pairs, a maximum number of colorings to find, and the number of available colors.
```
function graph_coloring(n::Int, edges, to_find::Int, colors::Int)
    @satvariable(nodes[1:n], Int)
    limits = and.(nodes .>= 1, nodes .<= colors)
    conns = cat([nodes[i] != nodes[j] for (i, j) in edges], dims=1)
    open(Z3()) do solver
        assert!(solver, limits, conns)
        i = 1
        while i <= to_find
            status, assignment = sat!(solver)
            if status == :SAT
                assign!(nodes, assignment)
                assert!(solver, not(and(nodes .== value(nodes))))
            else
                break
            end
            i += 1
        end
    end
end
```
## 7 Experimental Results
We timed Z3 using Julia's shell command (Cmd) interface and Satisfiability.jl, finding very little overhead when solving a single SMT problem. More overhead is incurred in interactive mode, likely due to the increased demands of two-way communication with Z3 and parsing the result of SMT-LIB (get-model). Timeouts were set at 20 minutes. We do not include precompilation time in any results, as it represents a one-time cost per Julia session.
We also timed the generation of SMT files, presented in Figure 5. The largest file, containing 16,402 commands, was generated in 4.84 seconds.
These results were obtained using Julia 1.9.0 on a Linux x86_64 machine running Mint 19.2 with an Intel i7 2.70GHz processor and 16 GB RAM. The files pigeons_benchmark.jl and graph_coloring_benchmark.jl, available in the Satisfiability.jl repository2, contain the code used to run this analysis. Generated execution logs listing commands and more system information are also provided.
Limitations of this analysis include system interrupts and other spurious resource demands on the core running Z3, which could introduce noise in our measurements. Additionally, for small problem sizes we noticed a \(\sim\)20 ms difference in the time required to invoke Z3 in a Linux terminal vs using Julia's BenchmarkTools and Cmd interface. For example, when timing the smallest (\(n=2\)) pigeon benchmark we measure an average of 0.0472 s using the terminal command time timeout 20m z3 -smt2 pigeons_genfiles/pigeonhole_gen_2.smt, vs 0.0245 s using Julia to benchmark the command timeout 20m z3 -smt2 pigeons_genfiles/pigeonhole_gen_2.smt. This difference is negligible for larger problem sizes; however, it shows that minor differences in invoking Z3 introduce small uncertainties. These uncertainties are apparent in Figure 3, where Satisfiability.jl achieves a faster solve time than Z3 called with Julia's Cmd for small \(n\).
Figure 4: Timing results for interactive graph coloring. For sizes above \(n=2^{12}\), Z3 timed out on the test machine. On the right the increase appears relatively large for small problem sizes (0.1 seconds), however the absolute increase is under 0.5 seconds.
Figure 3: Timing results for the pigeonhole benchmark. For \(n>11\), Z3 timed out.
## 8 Conclusions
We have developed a Julia package providing a simple, high-level interface to SMT-LIB compatible solvers. Our package takes advantage of Julia's functionality to construct a simple and extensible interface; we use multiple dispatch to optimize and simplify operations over constants, the type system to enforce the correctness of SMT expressions, and Base.Process to interact with SMT-LIB compliant solvers.
Satisfiability.jl is a registered Julia package and can be installed with the command using Pkg; Pkg.add("Satisfiability"). The package documentation can be found at [https://elsoroka.github.io/Satisfiability.jl/](https://elsoroka.github.io/Satisfiability.jl/).
#### Acknowledgments.
The software architecture of Satisfiability.jl was inspired by Convex.jl and JuMP [22][17].
| Satisfiability modulo theories (SMT) is a core tool in formal verification. While the SMT-LIB specification language can be used to interact with theorem-proving software, a high-level interface allows complex SMT formulae to be specified faster and more easily. In this paper we present an open-source package for interacting with SMT-LIB compliant solvers in the Julia programming language. |
2309.10137 | Spatz: Clustering Compact RISC-V-Based Vector Units to Maximize
Computing Efficiency | The ever-increasing computational and storage requirements of modern
applications and the slowdown of technology scaling pose major challenges to
designing and implementing efficient computer architectures. In this paper, we
leverage the architectural balance principle to alleviate the bandwidth
bottleneck at the L1 data memory boundary of a tightly-coupled cluster of
processing elements (PEs). We thus explore coupling each PE with an L0 memory,
namely a private register file implemented as Standard Cell Memory (SCM).
Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact
64-bit floating-point-capable vector processor based on RISC-V's Vector
Extension Zve64d. Unlike typical vector processors, whose VRF are hundreds of
KiB large, we prove that Spatz can achieve peak energy efficiency with a VRF of
only 2 KiB. An implementation of the Spatz-based cluster in GlobalFoundries'
12LPP process with eight double-precision Floating Point Units (FPUs) achieves
an FPU utilization just 3.4% lower than the ideal upper bound on a
double-precision, floating-point matrix multiplication. The cluster reaches 7.7
FMA/cycle, corresponding to 15.7 GFLOPS-DP and 95.7 GFLOPS-DP/W at 1 GHz and
nominal operating conditions (TT, 0.80V, 25^oC) with more than 55% of the power
spent on the FPUs. Furthermore, the optimally-balanced Spatz-based cluster
reaches a 95.0% FPU utilization (7.6 FMA/cycle), 15.2 GFLOPS-DP, and 99.3
GFLOPS-DP/W (61% of the power spent in the FPU) on a 2D workload with a 7x7
kernel, resulting in an outstanding area/energy efficiency of 171
GFLOPS-DP/W/mm^2. At equi-area, our computing cluster built upon compact vector
processors reaches a 30% higher energy efficiency than a cluster with the same
FPU count built upon scalar cores specialized for stream-based floating-point
computation. | Matheus Cavalcante, Matteo Perotti, Samuel Riedel, Luca Benini | 2023-09-18T20:26:25 | http://arxiv.org/abs/2309.10137v1 | # Spatz: Clustering Compact RISC-V-Based Vector Units to Maximize Computing Efficiency
###### Abstract
The ever-increasing computational and storage requirements of modern applications and the slowdown of technology scaling pose major challenges to designing and implementing efficient computer architectures. In this paper, we leverage the architectural balance principle to alleviate the bandwidth bottleneck at the L1 data memory boundary of a tightly-coupled cluster of Processing Elements (PEs). We thus explore coupling each PE with an L0 memory, namely a private register file implemented as Standard Cell Memory (SCM). Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact 64-bit floating-point-capable vector processor based on RISC-V's Vector Extension Zve64d. Unlike typical vector processors, whose VRFs are hundreds of KiB large, we prove that Spatz can achieve peak energy efficiency with a VRF of only 2 KiB. An implementation of the Spatz-based cluster in GlobalFoundries' 12LPP process with eight double-precision Floating Point Units (FPUs) achieves an FPU utilization just 3.4% lower than the ideal upper bound on a double-precision, floating-point matrix multiplication. The cluster reaches 7.7 FMA/cycle, corresponding to 15.7 GFLOPS-DP and 95.7 GFLOPS-DP/W at 1 GHz and nominal operating conditions (TT, 0.80 V, 25\({}^{\circ}\)C), with more than 55% of the power spent on the FPUs. Furthermore, the optimally-balanced Spatz-based cluster reaches a 95.0% FPU utilization (7.6 FMA/cycle), 15.2 GFLOPS-DP, and 99.3 GFLOPS-DP/W (61% of the power spent in the FPU) on a 2D workload with a \(7\times 7\) kernel, resulting in an outstanding area/energy efficiency of \(171\,\)GFLOPS-DP/W/mm\({}^{2}\). At equi-area, our computing cluster built upon compact vector processors reaches a 30% higher energy efficiency than a cluster with the same FPU count built upon scalar cores specialized for stream-based floating-point computation.
RISC-V, Vector Processors, Computer Architecture, Embedded Systems-on-Chip, Machine Learning.
## I Introduction
The pervasiveness of Artificial Intelligence (AI) and Machine Learning (ML) applications triggered an explosion of computational requirements across many application domains. The compute required by the largest ML models doubles every 3.4 months, while their parameter count doubles every 2.3 months [1, 2]. As a result, large-scale computing systems struggle to keep up with the increasing complexity of such ML models. In fact, the performance of the fastest supercomputers only doubles every 1.2 years [2], while their power budget is capped around 20 MW by infrastructure and operating cost constraints. Furthermore, smart devices running AI applications at the Internet of Things (IoT) edge [3] are also tightly constrained in their power budget due to battery lifetime and passive cooling requirements. Therefore, small and large modern computing architectures alike must optimize their compute and data-movement energy and delay [4].
Another major issue for present computer architectures stems from the drastic slowdown of technology scaling, particularly Static Random-Access Memory (SRAM) area scaling. For example, the SRAM bit cell area on TSMC's cutting-edge N3E technology node did not scale compared to its previous N5 node, still coming at 0.021 \(\upmu\)m\({}^{2}\), while logic area scaled down by 70% [5]. The flattening of SRAM scaling challenges almost every hardware design. However, it is particularly disastrous for AI hardware, which exploits SRAMs to implement high-bandwidth, low-latency on-chip storage. As a result, modern AI accelerators resort to large swaths of Standard Cell Memories (SCMs) [6], which dominate their total area. Furthermore, interconnects have trouble keeping up with transistor scaling [7]. The increasing memory and bandwidth requirements of modern computing applications demand large interconnect networks, leading to a considerable area overhead due to the interconnect between Processing Elements (PEs) and memories.
Designers often rely on the extreme domain specialization of hardware architectures to boost their energy efficiency and performance at the price of loss of flexibility. This is not sustainable given the rapidly evolving nature of applications and AI models. This paper focuses on fully programmable architectures based on instruction processors. Particularly, we tackle the interconnect and memory scaling issue on the shared-L1 cluster of Figure 1, a generic template for programmable computing architectures. Each shared-L1 cluster contains a set of PEs sharing tightly-coupled L1 memory through a low-latency interconnect [8]. The cluster's L1 memory is typically implemented as a multi-banked SRAM data cache or SPM. In addition, each PE includes private high-bandwidth L0 SCM.
Figure 1: A simple shared-L1 cluster with \(C\) PEs and a multi-banked L1 SPM implementation with \(M\) SRAM banks. | The ever-increasing computational and storage requirements of modern applications and the slowdown of technology scaling pose major challenges to the design and implementation of efficient computer architectures. In this paper, we leverage the architectural balance principle to alleviate the bandwidth bottleneck at the L1 data memory boundary of a tightly-coupled cluster of processing elements (PEs). We therefore explore coupling each PE with an L0 memory, a private register file implemented as Standard Cell Memory (SCM). Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact 64-bit floating-point-capable vector processor based on RISC-V's Vector Extension Zve64d. Whereas typical vector processors have VRFs of hundreds of KiB, Spatz keeps its VRF to only 2 KiB and achieves peak energy efficiency.
2309.04142 | Trustworthy and Synergistic Artificial Intelligence for Software
Engineering: Vision and Roadmaps | For decades, much software engineering research has been dedicated to
devising automated solutions aimed at enhancing developer productivity and
elevating software quality. The past two decades have witnessed an unparalleled
surge in the development of intelligent solutions tailored for software
engineering tasks. This momentum established the Artificial Intelligence for
Software Engineering (AI4SE) area, which has swiftly become one of the most
active and popular areas within the software engineering field.
This Future of Software Engineering (FoSE) paper navigates through several
focal points. It commences with a succinct introduction and history of AI4SE.
Thereafter, it underscores the core challenges inherent to AI4SE, particularly
highlighting the need to realize trustworthy and synergistic AI4SE.
Progressing, the paper paints a vision for the potential leaps achievable if
AI4SE's key challenges are surmounted, suggesting a transition towards Software
Engineering 2.0. Two strategic roadmaps are then laid out: one centered on
realizing trustworthy AI4SE, and the other on fostering synergistic AI4SE.
While this paper may not serve as a conclusive guide, its intent is to catalyze
further progress. The ultimate aspiration is to position AI4SE as a linchpin in
redefining the horizons of software engineering, propelling us toward Software
Engineering 2.0. | David Lo | 2023-09-08T05:53:24 | http://arxiv.org/abs/2309.04142v2 | # Trustworthy and Synergistic Artificial Intelligence for Software Engineering: Vision and Roadmaps
###### Abstract
For decades, much software engineering research has been dedicated to devising automated solutions aimed at enhancing developer productivity and elevating software quality. The past two decades have witnessed an unparalleled surge in the development of intelligent solutions tailored for software engineering tasks. This momentum established the Artificial Intelligence for Software Engineering (AI4SE) area, which has swiftly become one of the most active and popular areas within the software engineering field.
This Future of Software Engineering (FoSE) paper navigates through several focal points. It commences with a succinct introduction and history of AI4SE. Thereafter, it underscores the core challenges inherent to AI4SE, particularly highlighting the need to realize trustworthy and synergistic AI4SE. Progressing, the paper paints a vision for the potential leaps achievable if AI4SE's key challenges are surmounted, suggesting a transition towards Software Engineering 2.0. Two strategic roadmaps are then laid out: one centered on realizing trustworthy AI4SE, and the other on fostering synergistic AI4SE. While this paper may not serve as a conclusive guide, its intent is to catalyze further progress. The ultimate aspiration is to position AI4SE as a linchpin in redefining the horizons of software engineering, propelling us toward Software Engineering 2.0.
AI4SE, Trustworthy AI, Human-AI Collaboration, Software Engineering 2.0, Vision, Roadmaps
## I Introduction and Brief History of AI4SE
_"Study the past if you would define the future."_ - Confuciusius
Software engineering encompasses many tasks spanning the various phases of software development, from requirement gathering and design to coding, testing, and deployment. To boost developer productivity and ensure high-quality software, extensive research in software engineering has aimed to automate some of these manual tasks. While initial automation efforts centered around the development of program analysis methods, e.g., linters [1], model checkers [2, 3], fuzzers [4], etc., the past two decades have witnessed a rapid rise in the design and deployment of AI-powered solutions to assist software practitioners in their tasks.
AI-powered solutions have been employed to analyze a myriad of software artifacts, both products and by-products of software engineering activities. These artifacts encompass source code, execution traces, bug reports, and posts on question-and-answer sites, among others. A variety of AI methods underpin the development of these assistive solutions. In this paper, the term "AI" is employed in a broad sense, encompassing a range of techniques from data mining and information retrieval to meta-heuristics, natural language processing, and machine learning. Today, the boundaries between these fields are becoming increasingly blurred, as many emerging techniques span across them.
As illustrated in Fig. 1, this paper first describes the history and key challenges of AI for Software Engineering (AI4SE) in Sections I and II, respectively. It then describes a vision for AI4SE in Section III. Next, it highlights two roadmaps toward trustworthy and synergistic AI4SE in Sections IV and V, respectively. The paper concludes by providing a summary and a call for action in Section VI.
The remainder of this section offers a concise overview of AI4SE's history after "AI winters" ended in the mid-2000s.1 Even focusing from the mid-2000s to today, there are too many studies to cover. Thus, this section does not attempt to provide comprehensive coverage. For more coverage of the
Fig. 1: Overview of the Paper with Its Six Sections
history of AI4SE, please refer to other reviews and surveys. An example review of the emergence of AI4SE after the "AI winters" is by Xie et al. [6] on data mining for software engineering. The article described the applications of four AI techniques - frequent pattern mining, pattern matching, clustering, and classification - to analyze software engineering artifacts such as execution traces, source code, and bug reports, which can be represented in forms like sequences, graphs, and text. Notably, it described specialized AI techniques adept at inferring software's formal specifications (e.g., [7]), synthesizing bug signatures (e.g., [8]), and identifying duplicate bug reports (e.g., [9]). Beyond this review, there are others covering various AI4SE topics, e.g., [10, 11].2
Footnote 2: For a review on studies in the intersection of AI and software engineering research before the end of “AI winters”, see, for instance, [12].
Over the past two decades, significant efforts have been made to solidify AI4SE as a recognized research area within the software engineering field. A prominent one is the Mining Software Repositories (MSR) conference series, inaugurated as a workshop at the 26th ACM/IEEE International Conference on Software Engineering (ICSE 2004). This workshop "brought together researchers and practitioners in order to consider methods that use data stored in software repositories (such as source control systems, defect tracking systems, and archived project communications) to further understanding of software development practices." [13] Another prominent conference series is the Symposium on Search Based Software Engineering (SSBSE), which started in 2009 [14], and serves as a forum that focuses on solving software engineering tasks by formulating them as optimization or search problems.
A number of AI4SE workshops have also made significant contributions. One that ran for many years (2012 to 2018) was the International Workshop on Software Mining (SoftMine). SoftMine "facilitated researchers who are interested in mining various types of software-related data and in applying data mining techniques to support software engineering tasks." [15] By being hosted at both SE and AI conferences - including the IEEE/ACM International Conference on Automated Software Engineering (ASE) and the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) - SoftMine bridged the gap between the SE and AI communities. Additionally, numerous AI4SE tutorials and summer/winter schools have been organized over the years.
Today, these community efforts have borne fruit, evidenced by the prominence of AI4SE at leading software engineering conferences. It is noteworthy that at the 45th ACM/IEEE International Conference on Software Engineering (ICSE 2023), "Artificial Intelligence and Software Engineering" and "Software Analytics" were not only among the conference's seven primary areas but also among its most popular.
There have been multiple "waves" that positively influence the area of AI4SE:
_Wave 1 (Big Software Engineering Data)_
By the late 2000s, an increasing volume of open-source data became accessible to researchers, marking a significant shift from the time when most software artifacts remained confined within corporate boundaries. Notably, 2008 witnessed the launch of platforms like GitHub and Stack Overflow. The repositories on GitHub have expanded consistently over the years, and Stack Overflow's post count has shown a similar trajectory. These platforms supply vast amounts of data for AI4SE. As early as 2013, SE researchers had analyzed tens of thousands of version control and issue tracking systems from GitHub repositories [16, 17, 18]. Additionally, these platforms have presented unique challenges that demand AI4SE solutions. For example, the high number of posts in Stack Overflow requires the design of solutions that can empower the community to better browse [19, 20], locate [21, 22], maintain [23, 24], and comprehend [25, 26] answers to software engineering questions.
_Wave 2 (Deep Learning for Software Engineering)_
By the 2010s, deep learning had gained much traction across various domains beyond computer vision. This evolution significantly influenced the AI4SE landscape. Initial forays into integrating deep learning with software engineering include constructing deep learning solutions for defect prediction [27] and code suggestion [28]. Subsequent research expanded upon various deep learning architectures, such as the Recurrent Neural Network and Transformer, aiming to automate a wider range of software engineering tasks. Moreover, considerable effort went into devising methods to learn effective distributed representations of diverse software artifacts, exemplified by works like [29, 30]. Comprehensive surveys on this topic were written by Yang et al. [31] and Watson et al. [32].
While there have been notable successes in marrying deep learning and software engineering, there are documented instances of limitations. Some studies, such as [33, 34, 35], demonstrate that, in certain situations and tasks, the efficacy of simpler techniques may be comparable to, or even surpass, deep learning models. Moreover, concerns about the generalizability of representations derived via deep learning are also highlighted by some studies such as [36].
_Wave 3 (Large Language Model or Foundation Model for Software Engineering)_
This wave continues from Wave 2. In 2018, Google introduced the Bidirectional Encoder Representations from Transformers (BERT), which underwent pre-training on the Toronto Book Corpus and English Wikipedia [37]. Often considered the first Large Language Model (LLM) or Foundation Model (FM)3, BERT can significantly reduce the necessity for large amounts of high-quality labeled data for downstream tasks. By 2020, researchers had applied BERT to automate software engineering tasks [38, 39]. That same year, Microsoft unveiled CodeBERT, which is a variant of BERT pre-trained
on a corpus containing a mixture of source code and natural language text [40]. After BERT, ever larger LLMs have been developed, with models like GPT-4 [41] currently leading the way. These LLMs have been leveraged to automate many software engineering tasks [42]. Interestingly, many years before the development of LLMs, Hindle et al. [43] have underscored the inherent naturalness of software and the prospective utility of language models in automating software engineering tasks.
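To make the "naturalness" observation concrete, Hindle et al. quantified it as the cross-entropy of code under an n-gram language model: repetitive, predictable code scores low. The following minimal sketch (an illustration, not the authors' implementation) trains a bigram model with add-one smoothing over a toy corpus; the crude tokenizer and the corpus are assumptions made here for brevity.

```python
import math
import re
from collections import Counter

def tokens(code):
    # Crude lexer: identifiers, numbers, and single non-space symbols.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

def train_bigram(corpus):
    # Unigram and bigram counts over a small code corpus.
    uni, bi = Counter(), Counter()
    for snippet in corpus:
        ts = ["<s>"] + tokens(snippet)
        uni.update(ts)
        bi.update(zip(ts, ts[1:]))
    return uni, bi

def cross_entropy(snippet, uni, bi):
    # Bits per token under the bigram model, with add-one smoothing.
    ts = ["<s>"] + tokens(snippet)
    vocab = len(uni) + 1
    log_prob = sum(
        math.log2((bi[(a, b)] + 1) / (uni[a] + vocab))
        for a, b in zip(ts, ts[1:])
    )
    return -log_prob / max(len(ts) - 1, 1)

uni, bi = train_bigram(["for i in range(10): s += i", "for j in range(n): s += j"])
print(cross_entropy("for k in range(m): s += k", uni, bi))  # low = "natural"
```

Modern LLM4SE solutions replace the bigram counts with neural language models, but the underlying intuition - exploiting the statistical regularity of code - is the same.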
The most recent wave has significantly advanced the adoption of AI4SE solutions among practitioners. Today, solutions like GitHub Copilot, Amazon CodeWhisperer, and OpenAI ChatGPT, built upon LLMs, are used by many professional and aspiring software practitioners. While, as of this paper's writing, their primary applications have been in coding, it is easy to envision their expansion into a broader spectrum of software engineering tasks in the near future, such as design, requirement elicitation, verification, and bug report management, among others.
## II Key Challenges of AI4SE: Trust and Synergy
"_Victory comes from finding opportunities in problems_"
- Sun Tzu
While there has been a surge in AI4SE research and its widespread adoption, numerous challenges remain, offering ample opportunities for future research. Many of these challenges can be put into two broad categories: ensuring **trustworthy AI4SE** and promoting **synergistic AI4SE**. If AI4SE solutions are not trusted by practitioners, they will not be adopted. Trust is dynamic; AI4SE solutions need to maintain practitioners' confidence in them over time. Moreover, an effective AI4SE solution should not only be trustworthy but also synergize seamlessly with practitioners. If not, such AI4SE solutions risk becoming obstacles rather than facilitators. While trust and synergy are interconnected concepts, each has its unique characteristics.
### _Need for Trust_
In 2015, a study involving 512 Microsoft practitioners was carried out to assess their perceptions of the relevance of software engineering research [44]. The goal was to identify potential gaps between academic research and its practical application. The results underscored concerns from practitioners, including those revolving around trust. For instance, one respondent stated "It seems that there could be potentially disastrous results if the automation does not [do things] correctly." A follow-up study in 2016, involving 386 practitioners from more than 30 countries across 5 continents, highlighted similar findings [45]. For example, one respondent stated, "I doubt any automated software can explain the reason for things...", highlighting a reason behind the lack of trust.
Fast forward to 2023, and although AI4SE research has undoubtedly advanced since 2015, challenges persist. A 2022 study revealed that many code snippets produced by GitHub Copilot contain security vulnerabilities [46]. Similarly, a 2023 article pointed out that code generated by ChatGPT often had compilation and runtime errors, especially when applied to newer programming tasks that might not have been present in its training data [47]. Recent news articles further emphasize these concerns, such as "Friend or foe: Can computer coders trust ChatGPT?" [48] and "ChatGPT creates mostly insecure code, but won't tell you unless you ask" [49]. There are also concerns about significant variations in ChatGPT's efficacy over time [50], including in the code generation task. Such inconsistencies can undermine trust as practitioners seek stability in the efficacy of AI4SE solutions [51].
If these trust issues are not adequately addressed, the current enthusiasm surrounding AI4SE can possibly diminish, reminiscent of the declining interest in AI experienced during the "AI winters." [52]. Another side of the trust spectrum is over-reliance. Novices may mistakenly place excessive trust in these AI4SE solutions, expecting flawless results, while remaining oblivious to their inherent limitations, which can prove detrimental.
### _Need for Synergy_
Synergy is typically defined as the collaboration of two or more entities to create an outcome that exceeds the sum of their individual contributions. In the context of AI4SE solutions, there are two primary usage scenarios: 1:1, where a single software practitioner interacts with an AI4SE solution, and N:M, involving multiple software practitioners collaborating with multiple AI4SE solutions. Presently, the majority of research and existing AI4SE solutions focus on the 1:1 scenario, which is intuitively more straightforward than the N:M scenario. However, even within this simpler setting, there is no guarantee that software practitioners and AI4SE solutions will achieve seamless synergy.
#### Interlink between synergy and trust
Synergy and trust are intrinsically linked; it is challenging for two entities to work together seamlessly without trust. As an example, Parnin and Orso conducted a controlled experiment showing that while fault localization solutions4 can achieve favorable results by certain metrics, they do not necessarily expedite human debugging processes [55]. Other research, although not centered on AI4SE, underscores that many professionals avoid static analysis solutions due to their frequent false positives, among other reasons [56, 57]. These studies underscore the "boy who cried wolf" phenomenon: when AI4SE solutions (or any automated solution for that matter) repeatedly produce unreliable results, software practitioners become skeptical and may disengage, preventing any synergy.
Footnote 4: A fault localization solution produces a ranked list of program locations that are likely to be faulty [53, 54]. They typically take as input a collection of program spectra describing program locations that are executed by failing and successful test cases.
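To make footnote 4 concrete, a widely studied family of such techniques is spectrum-based fault localization, which scores each program location from pass/fail coverage data. The sketch below uses the well-known Ochiai formula; the data structures and location names are simplified assumptions for illustration, not any specific tool's API.

```python
import math

def ochiai_ranking(spectra, failed):
    """Rank program locations by Ochiai suspiciousness.

    spectra: list of sets; spectra[i] = locations executed by test i
    failed:  list of bools; failed[i] = True when test i failed
    """
    total_failed = sum(failed)
    scores = {}
    for loc in set().union(*spectra):
        n_ef = sum(1 for cov, f in zip(spectra, failed) if f and loc in cov)
        n_ep = sum(1 for cov, f in zip(spectra, failed) if not f and loc in cov)
        denom = math.sqrt(total_failed * (n_ef + n_ep))
        scores[loc] = n_ef / denom if denom else 0.0
    # Most suspicious locations first.
    return sorted(scores, key=scores.get, reverse=True)

spectra = [{"L1", "L2"}, {"L1", "L3"}, {"L2", "L3"}]
failed = [True, False, False]
print(ochiai_ranking(spectra, failed))
```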
Once trust is firmly established, AI4SE solutions and practitioners can potentially harmonize in ways that yield tangible benefits. For instance, for fault localization, if an AI4SE solution consistently presents accurate
top-5 or top-10 results, Xia et al. found that practitioners can gain a significant performance boost [58]. But the bar for adoption can be steep: Kochhar et al. noted that for 90% of practitioners to adopt fault localization solutions, these solutions must deliver accurate results within the top-5 positions at least 90% of the time [45]. Achieving such a goal is undeniably challenging.
This narrative emphasizes that synergy and trust are closely intertwined, with trust being influenced by the efficacy of AI4SE solutions, specifically their capability and likelihood to yield accurate results. Although slight enhancements in efficacy may not immediately foster trust, there exists a critical threshold that, once exceeded, can serve as a tipping point for both trust and synergy. Therefore, as a community, it is essential to continually push for greater efficacy, even if immediate gains in trust and synergy are not readily visible.
#### Synergy beyond trust
While synergy undoubtedly involves trust, it extends beyond that singular concept. Mere trust does not ensure that the collaboration between two entities will yield outcomes surpassing their separate contributions. Some barriers stand in the way of achieving synergy between software practitioners and AI4SE solutions, including the following:
_Piscem natare doces_: In the survey of Microsoft practitioners mentioned earlier [44], several participants pointed out that they deemed certain research unnecessary, as the resultant solutions were not seen as essential. This sentiment was particularly pronounced among experts; the study found that as experience increased, participants were more critical, rating more studies as unimportant and fewer as essential. This observation was statistically significant with a p-value of 0.01.
Disrupting the "flow": Software practitioners are most effective when they are in a state of "flow", a concept described as "a state in which people are so involved in an activity that nothing else seems to matter" [59]. This state has been proven crucial for the productivity of software practitioners [60, 61]. If AI4SE solutions are introduced inappropriately or at inopportune moments, software practitioners may feel disrupted, much like the annoyance users felt with Microsoft's Clippy, which was deemed "annoying, impolite, and disruptive of a user's workflow" [62].
Resistance to change: People often display an aversion to modifying their established routines. Previous research has highlighted software practitioners' hesitancy to embrace new processes [63] or technologies [64]. Merely introducing an AI4SE solution does not ensure immediate adoption and endorsement, especially if it disrupts familiar practices. Such resistance can arise if AI4SE solutions are not smoothly integrated into the technological environments software practitioners are accustomed to. For instance, many AI4SE solutions have not been integrated into popular IDEs or issue-tracking systems, hampering their adoption. Another possible situation where resistance may happen is when employing an AI4SE solution necessitates practitioners adopting new procedures. For example, to leverage a fault localization solution, practitioners are prompted to consult a ranked list of potentially buggy program locations - a step they may not have practiced before the introduction of such a solution. The need for a change in _modus operandi_ may pose a certain resistance that needs to be effectively managed (c.f., [65]).
Differences in abstraction levels: Practitioners often consider overarching goals, which may involve a workflow of many tasks, each further broken down into micro-tasks. For example, the DevOps workflow consists of multiple phases, and each phase, e.g., coding, includes multiple activities, e.g., navigation, editing, comprehension, etc. [66, 67]. In contrast, current AI4SE solutions usually target specific, narrower micro-tasks, such as fault localization, clone detection, API recommendation, code summarization, duplicate bug report detection, etc. While each of these micro-tasks is important, the lack of understanding of the overarching workflow may be a barrier to effective synergy.
Communication barriers: The way humans communicate with each other when collaborating on tasks differs from human-AI4SE solution interactions. While humans have a wide range of communication means to collaborate with each other - text, code, sketches [68], and so forth - their communication means with AI4SE solutions are much more limited. Moreover, humans engage in multi-round exchanges [69], drawing from both short [70] and long-term [71] memories of past interactions. Many AI4SE solutions, on the other hand, operate in a single interaction mode. For instance, in fault localization [53, 54], most solutions simply allow practitioners to provide a set of program spectra (corresponding to failing and successful test cases). The AI4SE solution then returns a list of potential faulty program locations. A similar observation can be made for many other AI4SE solutions, e.g., code search [72], code summarization [73], etc.
These challenges can hinder effective synergy, making it difficult for practitioners to fully benefit from AI4SE solutions.
## III Vision: Software Engineering 2.0
_"Anything one can imagine, others can make real"_ - Jules Verne
What possibilities can a trustworthy and synergistic AI4SE unveil? This section paints a future shaped by trustworthy and synergistic AI4SE. Different from conventional papers that chronicle past achievements, this section chooses instead to spotlight the potential of what can be, given the necessary leaps in innovation. With trustworthy and synergistic AI4SE maturing, we are steadily advancing toward establishing a _symbiotic partnership between software practitioners and autonomous, responsible, and intelligent AI4SE agents, creating a human-AI hybrid workforce_. The realization of this human-AI hybrid workforce heralds a new era of **Software Engineering 2.0 (SE 2.0)**.5
Footnote 5: Software Engineering 2.0 is distinct from the concept of Software 2.0. Software 2.0 is defined as software that "is written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code." [74] Software Engineering 2.0 focuses on constructing autonomous, responsible, intelligent AI4SE agents that can symbiotically work with software practitioners to collaboratively build software, whether they are Software 1.0, Software 2.0, or possibly future Software X.0. Given the unique advantages and limitations of both Software 1.0 and Software 2.0, it is anticipated that future software systems will integrate both, thereby creating complex composite software systems where the need for Software Engineering 2.0 will be even more significant.
### Current state
Over the past two decades of research in AI4SE, we have witnessed the rise of AI4SE tools. As shown in Fig. 2(a), platforms like GitHub Copilot, Amazon CodeWhisperer, and OpenAI ChatGPT, are now embraced by many professional and aspiring software practitioners. This represents a big shift from AI4SE's nascent stages when AI capabilities were limited, drawing insights only from much smaller datasets for binary predictions (such as defect prediction [75] and failure prediction [76]) or generating contents confined to a strict form or grammar (such as specification mining [7]).
However, to actualize the vision of Software Engineering 2.0, there remain significant challenges to address. SE 2.0 envisions a reality that transcends merely equipping developers with AI tools for rudimentary tasks--like generating standard boilerplate code or commonplace algorithms; tasks that today's tools have already mastered. Rather, it gestures toward a broader, more transformative horizon.
### From smart tools to smart workmates
As research in trustworthy and synergistic AI4SE advances, AI4SE tools will evolve from mere smart tools to smart workmates -- see Fig. 2(b). The defining trait that distinguishes a smart workmate from a smart tool is _responsible autonomy_ - something we expect from a human colleague in software development. When a tool is trustworthy and can synergize well with practitioners, it can operate with increased _autonomy_, with practitioners confident in its ability to _responsibly_ execute tasks. This is a big transition analogous to a transformation from a smartwatch to a colleague, reminiscent of the android Data from the Star Trek series.
Moreover, currently, AI4SE tools predominantly function as assistants. As these tools mature further, they stand to be recognized as _first-class citizens_ within software development processes. As first-class citizens, these tools will evolve into smart workmates that can assume a broader array of roles. They can act as peers, aiding in the development of software modules with limited supervision, or even adopt managerial capacities, performing work planning and coordination tasks - see Fig. 2(c).
Furthermore, AI4SE workmates, acting as intelligent agents, will not be limited to collaborating with just one individual. Instead, they will become integral members of teams. Picture a blended team of software practitioners and AI4SE intelligent agents working cohesively toward shared objectives as illustrated in Fig. 2(d). This setup will involve diverse interactions: human-to-human (H-H), human-to-agent (H-A), and agent-to-agent (A-A). While substantial research exists on H-H and H-A dynamics, the A-A interactions are much less explored, especially in the software engineering field. Central to these interactions is the establishment of symbiosis and synergy, enabling mutual enrichment between practitioners and AI4SE agents.
### Adaptable yet solid
Today's software practitioners navigate a dynamic environment. Team members come in from the job market and, in time, move on. In the envisioned SE 2.0, AI4SE agents must be agile and adaptable to effortlessly integrate into teams as illustrated in Fig. 2(e). They should be capable of discerning the strengths and capabilities of both human members and fellow AI4SE agents, identifying avenues to contribute meaningfully to the team's goals. Moreover, these agents should possess the resilience and flexibility to adjust when either their AI counterparts or software practitioners transition out of the team.
Lastly, the viability of SE 2.0 hinges on solid legal, ethical, and economic foundations as illustrated in Fig. 2(f). There may be a need for new legal frameworks to delineate responsibility when software practitioners and intelligent agents collaborate. Privacy and copyright regulations will likely require adjustments to cater to SE 2.0 dynamics. Ethical concerns must be addressed to ensure that integrating AI4SE agents into software engineering processes yields societal benefits while mitigating potential adverse impacts, such as job losses for software practitioners. From an economic standpoint, aspects like the AI4SE agent market dynamics and vendor profitability models need attention. Therefore, the evolution and implementation of SE 2.0 will necessitate contributions not only from the Software Engineering field and Computer Science discipline but also from broader academic and professional domains.
### Timeline
During the session in which this talk was presented at ICSE 2023, an engaging discussion emerged about the timeline for SE 2.0's realization. The transition from the current SE to SE 2.0 will unfold in phases. Currently, we observe a surge of enthusiasm among software practitioners to harness AI4SE tools. However, there are mixed results and many unresolved issues; current tools are often cumbersome and ineffective for various software engineering tasks. They are also only able to automate some of the many tasks that software practitioners do today.
In the upcoming phase - Now to Now+U years - buoyed by increased AI4SE research and substantial investments from academia, industry, and government, many of the challenges mentioned in the previous paragraph will be addressed. This will transform current AI4SE tools into "power" tools. These "power" tools will still not be autonomous and require practitioners' close supervision. However, they will address and
alleviate many of the frustrations practitioners currently face in using them, and be integrated smoothly across a broad range of software engineering tasks. Drawing a parallel, consider the progression of smartwatches. Their genesis can be traced back to 1976, when they were expensive, offered limited functionality like basic calculations, and had many usability issues. In contrast, the smartwatches of today, nearly half a century later, are versatile and user-friendly gadgets. Given today's accelerated pace of innovation, it is plausible that the evolution of AI4SE tools will occur in a considerably shortened timeframe, although predicting a precise value for U is challenging.
In the subsequent phase - Now+U years to Now+(w\(\times\)U) years - these "power" tools will evolve into intelligent workmates, characterized by responsible autonomy. This progression will likely be incremental too, unfolding as the facets of responsible autonomy depicted in Fig. 2(b) - (f) are actualized and their associated challenges addressed. It is at this juncture that Software Engineering 2.0 will truly come into being. The value for \(w\) will depend on how fast Artificial General Intelligence (AGI) will be realized. Projections vary significantly, with some anticipating some form of AGI realization in a few years, while others expect it to take several decades [77, 78].
Fig. 2: Software Engineering 2.0 (SE 2.0) will see AI4SE solutions transitioning from smart tools to smart workmates. These intelligent agents will exercise responsible autonomy. They will be first-class citizens, taking different roles as assistants, peers, and even managers in software engineering projects. They will be well integrated in dynamic software engineering teams. There will be human-human (H-H), human-agent (H-A), and agent-agent (A-A) interactions. There will also be a new economy surrounding the AI4SE agent market as well as solid foundations in law and ethics supporting SE 2.0.
## IV Roadmap to Trustworthy AI4SE
"_Trust but verify_" - Ronald Reagan
Trust is a pivotal element in successful collaborations between humans and intelligent solutions [79, 80, 81]. Section II has highlighted the challenges in establishing trust between software practitioners and AI4SE solutions. Thus, further research is important. This section outlines nine strategies for achieving trustworthy AI4SE, as depicted in Fig. 3 and detailed further below.
### _Characterize trust factors_
Firstly, a clearer definition is required regarding the factors that influence practitioners' trust in AI4SE solutions. Some studies have looked into measuring trust in automation, e.g., [82, 83], however, mostly not in the context of software engineering. While some studies have delved into practitioners' expectations of specific AI4SE solutions [84, 85, 86, 45, 87, 88, 46, 89, 90, 91], a more holistic characterization of trust factors across software engineering tasks is still needed. Metrics should then be designed to reflect these factors. For instance, one study found that when accurate results appear beyond
the top-\(N\), these do not positively influence practitioners' trust in the solution. Similarly, another study [92] has proposed the number of initial false alarms (IFA) as a metric, as first impressions matter to get a software practitioner to trust an AI4SE solution. Also, in areas like defect prediction, effort-aware metrics have been proposed [93]. These metrics are more aligned with trust factors compared to their non-effort-aware counterparts. This is because practitioners may find their trust dwindling if they do not witness an increase in favorable outcomes that is commensurate with an increased effort in scrutinizing an AI4SE solution's recommendations.
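As a concrete reading of these trust-aligned metrics, the sketch below computes top-\(N\) accuracy (a hit within the first \(N\) inspected locations) and IFA (the number of false alarms before the first hit) for a ranked recommendation list. The function names and data layout are illustrative assumptions, not code from the cited studies.

```python
def acc_at_n(ranked, faulty, n=5):
    # 1 if any truly faulty location appears within the top-n, else 0.
    return int(any(loc in faulty for loc in ranked[:n]))

def initial_false_alarms(ranked, faulty):
    # Number of non-faulty locations inspected before the first correct hit.
    for i, loc in enumerate(ranked):
        if loc in faulty:
            return i
    return len(ranked)

ranked = ["f.c", "g.c", "h.c", "bug.c"]
faulty = {"bug.c"}
print(acc_at_n(ranked, faulty, n=5), initial_false_alarms(ranked, faulty))
```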
### _Build smarter AI4SE solutions with LLM and more_
The area of AI4SE offers much potential for advancement. At present, a prominent research trend within the AI4SE community is the use, adaptation, and design of Large Language Models (LLMs) to automate software engineering tasks. Current studies predominantly harness these LLMs for a subset of software engineering tasks, such as code search and code summarization [94, 95]. However, software engineering is multifaceted, encompassing more than just these tasks. While recent research has begun delving into less-explored areas, e.g., managing bug reports [96], designing software architecture [97], building navigation aids for software engineering Q&A sites [98], etc., there remains significant scope for broader exploration.
Additionally, recent studies have highlighted certain limitations of LLM for software engineering (LLM4SE). For instance, these models are not robust to minor perturbations of inputs [99, 100]. Also, they can be vulnerable to shifts in data distributions, like the evolution of third-party libraries and releases of new ones [101]. Moreover, a prior study has also demonstrated the challenges of LLM4SE in handling data that resides in the tail-end of the distribution [102]. Recognizing and characterizing the limitations of LLMs and devising strategies to overcome those presents compelling directions for future research.
To boost the efficacy of LLM4SE, some recent studies have focused on enhancing the inputs and outputs of LLMs and integrating them with other techniques. For instance, some studies incorporate code graphs (e.g., control flow graph, program dependency graph, etc.) as input to LLMs [103, 72]. Others transform source code to an intermediate representation (IR) that provides a more concise and uniform input to LLMs for more effective and efficient learning [104, 105]. Moreover, other studies emphasize selecting optimal in-context examples to bolster the efficacy of LLMs [106]. There is also an emphasis on coupling LLMs with methods like program analysis and testing to yield superior results [107, 108, 109, 110, 47]. Additionally, a few studies showcase the potential of both one-off and ongoing interactions with LLMs to refine their outputs for several software engineering tasks [112, 47].
Owing to the swift progress in LLM4SE research, this concise summary does not capture its entire scope and may be quickly outdated. As such, Systematic Literature Reviews (SLRs) on LLM4SE, e.g., [42], can prove invaluable. The rapid evolution in LLM4SE warrants multiple SLRs to capture the latest developments and trends. Multiple SLRs can also be conducted to examine LLM4SE from various perspectives.
### _Synthesize task- and user-aware explanations_
Contemporary AI4SE solutions offer diverse recommendations - including patches to fix a bug, source code to write next, third-party library to use, and so on. However, these often come without explanations. This opacity can diminish the trust software practitioners have in these suggestions. As a result, there is a need for explainable AI4SE solutions that can realize effective practitioner-AI4SE solution interactions that engender trust and bring about trusted collective intelligence.
While some efforts have been made in this direction, e.g., [113, 114, 115, 116, 117], more explorations are needed. First, we can extend existing explainable AI4SE solutions to cover more software engineering tasks. Second, we can develop their capabilities to produce _task-specific explanations_ that are tailored to suit specific software engineering tasks and contexts. Moreover, the explanations need to be _user-aware_; they need to consider the specific expertise and experience of a practitioner who uses an AI4SE solution. Producing effective explanations for a recommendation made by an AI4SE solution given a specific task and context for a target software practitioner is challenging for several reasons: software engineering is a complex endeavor including diverse tasks; software engineering knowledge evolves rapidly; also, software practitioners have diverse backgrounds. Thus, there is much room to innovate to tackle these challenges.
Fig. 3: Roadmap to Trustworthy AI4SE
### _Produce arguments, evidence, and guarantees through passive and active interactions_
For greater acceptance by software practitioners, AI4SE solutions must articulate convincing arguments and present pertinent evidence to practitioners, guiding their decisions regarding recommendations made by AI4SE solutions. Furthermore, evidence should not be solely obtained passively from the data present in software artifacts. Active interaction with these artifacts - feeding input data and monitoring the ensuing outputs - can be an effective way of producing evidence. Additionally, it will be desirable if AI4SE solutions are able to provide some guarantees of their efficacy.
Consider an instance where an AI4SE solution suggests to a practitioner that a particular code segment is potentially vulnerable. Its _argument_: the code resembles \(M\) code fragments on GitHub, which experienced practitioners fixed to remove vulnerabilities that match an entry in the Common Weakness Enumeration (CWE).6 To bolster this argument, the AI4SE solution can produce tangible _evidence_ by generating a test input that showcases the exploitability of the flagged code, following techniques proposed in [118, 119, 120].
Footnote 6: [https://cwe.mitre.org/index.html](https://cwe.mitre.org/index.html)
Similarly, an AI4SE solution recommending third-party libraries [121] may execute static and dynamic program analyses. These will result in concrete pieces of evidence, e.g., demonstrable efficiency or a small memory footprint, to support the recommendations. A worst-case execution time guarantee can also be given by leveraging static analysis [122].
### _Enhance Extrinsic Trustworthiness of AI4SE Solutions_
Software practitioners need guarantees that the use of AI4SE solutions will not bring them into conflict with laws. Also, they need assurances that sufficient measures are in place to protect AI4SE solutions from malicious actors. To address these concerns, which impact the trustworthiness of AI4SE solutions, it is essential to explore the following directions.
### _Design privacy-aware AI4SE solutions_
An open avenue of research is in fortifying AI4SE solutions to ensure compliance with privacy regulations more rigorously. Prior works have provided some protection to sensitive data in some software artifacts [123, 124, 125]. They, however, have mainly focused on data that comes in tabular format. They have also only considered some specific software engineering tasks, e.g., testing, defect prediction, and debugging. Thus, more can be done to strengthen this protection and expand it to cover diverse software artifacts and tasks.
Also, privacy-aware AI4SE solutions are important for the adoption of AI4SE solutions that require software practitioners to transmit proprietary code and data to third-party services. Software practitioners (and companies) need certain guarantees that proprietary code and data are not retained by, and do not seep into, the underlying AI4SE model, from which they could potentially be leaked to other users of the service, c.f., [126, 127]. The use of Trusted Execution Environments [128] and the design of suitable protocols may be one way to achieve this guarantee, c.f., [91]. Alternatively, model compression techniques can be used to customize AI4SE models for local deployment on servers with limited memory and processing power [129]. Such local deployment eliminates the need to transfer proprietary data to third-party vendors, improving privacy. Yet another possibility is to design AI4SE solutions that employ federated learning [130].
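As an illustration of the federated-learning option, the core aggregation step of FedAvg combines locally trained parameters so that proprietary code and data never leave each company's servers; only model updates are shared. Below is a minimal sketch of that single step, assuming model parameters are represented as NumPy arrays and that the privacy caveats of sharing updates are handled elsewhere.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """One FedAvg round: size-weighted average of client model parameters."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Hypothetical parameter vectors trained locally at three companies.
params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
sizes = [1000, 3000, 2000]
print(fed_avg(params, sizes))  # global model update; no raw data is shared
```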
Moreover, strategies are needed to help AI4SE solutions respect the EU General Data Protection Regulation (GDPR)'s provision of "right to be forgotten" without necessitating extensive retraining, c.f., [131]. The current AI4SE solutions do not readily allow the removal of specific contributions from open-source projects when the corresponding software practitioners invoke their "right to be forgotten" under GDPR.
### _Design license-aware AI4SE solutions_
In 2022, Microsoft, GitHub, and OpenAI were sued for issues related to privacy and copyright [132]. Moreover, an AI-powered coding assistant can inadvertently replicate GPL v3 licensed code from GitHub, risking copyright infringements when integrated into proprietary software. Holistic integration of licensing information and constraints [133] into AI4SE solutions' training or fine-tuning processes can be designed to address the aforementioned problems. Concurrently, runtime checking methods can be developed to flag AI4SE recommendations that potentially infringe on licensing terms. Code clone detection methods, e.g., [134, 135, 136, 137], can be employed as part of such runtime checks.
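As a sketch of what such a runtime licensing check could look like, the snippet below flags generated code that is near-identical to entries in a corpus of license-restricted files, using token shingles and Jaccard similarity. Real clone detectors such as those cited above are far more robust; the tokenizer, the 0.8 threshold, and the function names here are illustrative assumptions.

```python
import re

def shingles(code, k=5):
    # k-token windows over a crude lexing of the snippet.
    toks = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    return {tuple(toks[i:i + k]) for i in range(max(len(toks) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def license_risk(generated, restricted_corpus, threshold=0.8):
    # Flag near-duplicates of license-restricted code before suggesting them.
    g = shingles(generated)
    return any(jaccard(g, shingles(ref)) >= threshold for ref in restricted_corpus)

lic = ['int main() { puts("GPL code"); return 0; }']
print(license_risk('int main() { puts("GPL code"); return 0; }', lic))  # True
```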
### _Design attack-resistant AI4SE solutions_
As AI4SE solutions become integral to software engineering processes, their ability to withstand attacks from malicious actors is of high importance. Recent studies [138, 139, 140, 141, 142, 143] have examined specific attack strategies, yet a broader spectrum awaits exploration. A comprehensive threat assessment is vital, paired with detection, quantification, and mitigation strategies, aiming to design and develop AI4SE solutions that are resilient to multifaceted attacks. Strengthening AI4SE defenses, devising corrective algorithms for real-time "self-healing", and probing the ramifications of data poisoning on software artifacts should be prioritized. Some strides have been made in this direction, but more is warranted.
## V Roadmap to Synergistic AI4SE
"_The whole is greater than the sum of its parts_" - Aristotle
Section II highlighted a number of barriers that impede synergistic interactions between software practitioners and AI4SE solutions. Thus, further research is essential. This section outlines six strategies for achieving synergistic AI4SE, as illustrated in Fig. 4 and elaborated upon below.
### _Understand synergy_
S1_Characterize strengths and weaknesses for better practitioner-AI4SE solution synergy_
To ensure that collaborations between software practitioners and AI4SE solutions produce outcomes greater than the sum of their individual contributions, it is essential to recognize the strengths of one party that can offset the weaknesses of the other. In software engineering, tasks often form lengthy chains, with each main task comprising several micro-tasks. Software practitioners may excel in certain micro-tasks but falter in others. Similarly, AI4SE solutions may perform very well in certain micro-tasks but perform poorly in others. The sets of micro-tasks that software practitioners and AI4SE solutions excel in may be substantially different. Hence, there is a need for empirical research to determine which micro-tasks are better assigned to practitioners - bearing in mind that the aptitude for micro-tasks can differ among individuals and can evolve with their experience. Similarly, it is important to identify the tasks best assigned to AI4SE solutions, acknowledging that these too can vary based on the capabilities of specific AI4SE solutions and the characteristics of practitioners using them.
Also, AI4SE solutions cannot help in all situations. To underscore this point, a past study at Adobe revealed that junior software practitioners have a lower adoption threshold for a bug localization solution7[144]. On the other hand, experienced practitioners indicated less need for such automated assistance and have a higher threshold for adoption. Therefore, for AI4SE bug localization solutions to genuinely complement these experienced software practitioners, they need to surpass a certain level of efficacy. Failing to do so, these solutions might prove more obstructive than beneficial. One possibility is for such solutions to provide recommendations sparingly, only in cases for which they are likely to exceed the experienced practitioners' expectations, c.f., [145, 146, 147].
Footnote 7: A bug localization solution produces a ranked list of potentially buggy source code files given a bug report.
As AI4SE matures and AI4SE solutions become first-class citizens and autonomous agents, synergizing software practitioners and AI4SE agents can be conceptualized as an optimization problem. Specifically, assuming that we can quantitatively characterize the strengths and weaknesses of different software practitioners and AI4SE solutions for different tasks as weights, the problem of synergizing software practitioners and AI4SE solutions to achieve results that best exceed the sum of their individual contributions is a task assignment problem. In essence, we need to assign tasks to the most suitable software practitioners and/or AI4SE agents to maximize overall reward or efficacy.
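Under the (strong) assumption that such efficacy weights can be quantified, a one-shot version of this assignment problem reduces to weighted bipartite matching, solvable with the Hungarian algorithm. The sketch below uses SciPy's linear_sum_assignment on hypothetical scores; a realistic setting would add task dependencies, workloads, and changing team compositions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: workers (two practitioners, one AI4SE agent); columns: micro-tasks.
# Entry (i, j) is a hypothetical estimate of worker i's efficacy on task j.
efficacy = np.array([
    [0.9, 0.4, 0.7],   # senior practitioner
    [0.5, 0.8, 0.3],   # junior practitioner
    [0.6, 0.7, 0.9],   # AI4SE agent
])

# The solver minimizes cost, so negate the scores to maximize total efficacy.
workers, tasks = linear_sum_assignment(-efficacy)
for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} (efficacy {efficacy[w, t]})")
print("total efficacy:", efficacy[workers, tasks].sum())
```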
S2_Understand flow and reimagine processes_
Software practitioners occasionally need assistance, but not incessantly. Ill-timed assistance can be counterproductive and break software practitioners' "flow." While we can allow practitioners to manually toggle assistance on and off, they may not always discern the optimal moments for aid. Consequently, further studies are vital to pinpoint when AI4SE solutions are most beneficial, integrate them smoothly into the software development process to enhance software practitioners' "flow", tailor them to individual practitioners with varied preferences, and measure the benefits they bring to the table.
Also, many of the software engineering processes today do not prominently feature AI4SE solutions. As AI4SE solutions transition from tools to smart workmates, we may need to re-examine existing processes and identify limitations that may impede synergy. There may be a need for new processes that facilitate symbiotic partnerships between practitioners and AI4SE agents. In such a partnership, both entities mutually benefit, working toward shared goals more effectively.
S3_Understand change process toward synergy_
_Satir change model_[148] delineates several stages that emerge when a change is introduced: previous status quo, resistance, chaos, integration, and new status quo. This model has been influential in organizational behavior and even in the adoption of new software engineering methodologies. For instance, Lindstrom and Beck noted that following the Satir change model, adoption of eXtreme Programming does not guarantee instant positive outcomes and benefits may appear gradually [149].
Similarly, the introduction of AI4SE solutions may require individuals to undergo a change process to achieve synergy. This change process may not instantly lead to improved outcomes. Departing from a familiar status quo, the introduction of an AI4SE solution - which is perceived as an external element - can incite resistance and even chaos. But this phase can subsequently give way to transformation and integration, ultimately establishing a new status quo.
In AI4SE research, there have been limited studies on this change process. There is a need for such studies to answer questions such as: How can we measure the efficacy of the change process? What strategies can accelerate the change process, allowing us to reach the new status quo more rapidly? How can we increase the likelihood that the new status quo significantly surpasses the previous one in terms of quality (of the software developed) and productivity (of software practitioners)? And lastly, what automated solutions can be developed to alleviate the challenges software practitioners face during this transformation?
### _Build synergistic AI4SE capabilities_
S4_Design holistic and workflow-aware AI4SE solutions_
Most AI4SE solutions are tailored for specific micro-tasks, such as code generation, code summarization, fault localization, or duplicate bug report detection. In contrast, as highlighted in Section II, software practitioners often operate at a different level of abstraction, considering higher-level and broader objectives. This difference may be a barrier to synergy.
To address this challenge, a shift toward holistic, workflow-aware AI4SE solutions can be beneficial. For example, as a start, AI4SE solutions can transfer information and insights from one micro-task to others, c.f. [150, 151]. Additionally, they can leverage practitioners' interactions and feedback in one micro-task to improve their capabilities in others. Moreover, we can build AI4SE solutions that pivot from assisting isolated micro-tasks to improving the entire workflow, ensuring that they capture the broader picture rather than just the individual micro-tasks - seeing the "forest" instead of "trees".
S5_Mitigate communication barriers_
Many AI4SE solutions currently offer limited interactivity. Though solutions built on top of ChatGPT have begun bridging these interaction gaps, they still fall short of replicating the deeply collaborative experience akin to pair programming with a trusted colleague. Also, in spite of the strides made by LLM-powered AI4SE solutions, challenges persist, such as interactions that become stagnant or unproductive toward solving a software engineering task [47].
To truly capitalize on the potential of AI4SE solutions, we must enhance the communication capabilities between software practitioners and AI4SE solutions. The ideal scenario will allow practitioners to interact with AI4SE solutions as naturally and effectively as they do with their peers. Achieving seamless communication is pivotal for fostering synergy. Moreover, the means of communication should extend beyond just text and code to encompass visuals (like diagrams and sketches), gestures (captured via videos or wearables), and other modes of interaction. Advances in Foundation Models that go beyond LLMs and consider modalities such as images and videos, e.g., [152], may pave the way toward better communication between practitioners and AI4SE solutions.
S6_Innovate on N:M and A-A interactions_
Most existing AI4SE solutions focus on the 1:1 usage scenario, wherein a single software practitioner engages with one AI4SE solution. To more fully unlock the potential of AI4SE, we need to explore the N:M usage scenario, wherein several software practitioners synergistically interact with multiple AI4SE solutions in a team. Research on this N:M usage scenario is limited. Considering that most software engineering teams consist of more than one practitioner, it is important to explore how a team of interacting practitioners and AI4SE solutions can collaboratively and synergistically complete tasks.
As AI4SE evolves to take a central role in software engineering, it becomes imperative to investigate the synergy among AI4SE solutions. Currently, limited research delves into how these solutions can recognize the capabilities of their counterparts, distribute tasks, and collaboratively operate by capitalizing on each other's strengths and addressing each other's weaknesses. These collaborative capabilities should be developed for both static (where collaborations occur among a predefined set of AI4SE solutions) and dynamic (where collaborations involve previously unknown AI4SE solutions) settings.
## VI Summary and Concluding Remarks
#### 4.1.1 "If you want to go far, go together"
- African Proverb
The area of AI for Software Engineering (AI4SE) has witnessed exponential growth over the past two decades. Evolving from a niche segment centered around a handful of software engineering tasks, AI4SE has grown into a key pillar within the software engineering field, underscored by the proliferation of specialized AI4SE solutions across a wide range of software engineering tasks. Three distinct waves of innovation have shaped the trajectory of AI4SE: the surge of software engineering big data, the incorporation of deep learning into the design of AI4SE solutions, and, more recently, the development of AI4SE solutions based on large language models. The latter has propelled AI4SE into the limelight, spawning industry-grade solutions that have found widespread adoption.
Figure 4: Roadmap to Synergistic AI4SE
This paper highlights two key challenges still confronting AI4SE: the need for trust and synergy. These intertwined principles are important for harnessing the full potential of AI4SE. Looking forward, trustworthy and synergistic AI4SE solutions can transform our present AI4SE tools into truly intelligent workmates, characterized by _responsible autonomy_. These AI4SE entities, acting as intelligent agents, will seamlessly integrate as key autonomous contributors, performing varied roles adeptly and responsibly in software engineering workflows. Moreover, their adaptability will empower them to operate symbiotically not just alongside individuals but within larger, dynamic teams where members - both humans and intelligent agents - can dynamically change. This transformation, fueled by the progression of AI to Artificial General Intelligence (AGI), will mark the inception of Software Engineering 2.0. While advancements in software engineering and AI research are clearly important, the successful realization of the Software Engineering 2.0 vision also hinges on strong foundational frameworks in the legal, ethical, and economic domains.
To achieve the aforementioned vision of Software Engineering 2.0, this paper presents two roadmaps steering toward trustworthy and synergistic AI4SE. The roadmaps enumerate 15 open challenges that await further attention of the AI4SE community. Addressing these could move AI4SE ever closer to Software Engineering 2.0.
This paper seeks to motivate more to join and contribute to the AI4SE research journey. AI4SE is currently in a "_Belle Époque_" mirroring the aeronautics revolution of the early 1900s. Just as the Wright brothers' groundbreaking 12-second flight in 1903 redefined the boundaries of travel, the promising advances of AI4SE hold the potential to reshape the future of software engineering. Similar to aviation after its inaugural flight, sustained collaboration and dedication over many years by numerous contributors is pivotal to fully realizing the power of AI4SE in revolutionizing the software engineering landscape.
## Acknowledgments
I would like to express my thanks and appreciation to the many people who provided insights and assistance:
* Special thanks to Xing Hu and Hoa Khanh Dam for their excellent organization of the FoSE track at ICSE 2023 and the compilation of the post-proceedings.
* The perspectives presented in this paper are shaped by discussions that I had with many colleagues, insightful papers that I read, and great talks that I attended. Notably, I learned valuable insights from the keynote of Margaret-Anne (Peggy) Storey at ASE 2022 titled "From Automating Software Engineering to Empowering Developers." I also learned much from fellow speakers at the ICSE 2023 Future of Software Engineering (FoSE) track, particularly speakers at the "AI & SE and Debt" session, including Thomas Zimmermann, Mark Harman, and Paris Avgeriou. The lively discussion at the end of the session also contributed to many points described in this paper.
* Parts of this paper were adapted from a successful grant proposal submitted to the National Research Foundation, Singapore under its Investigatorship Grant Call. Many colleagues provided valuable input for that proposal.
* Ratnadira Widyasari, Kisub Kim, and Chengran Yang greatly helped to enhance the aesthetics of Fig. 2, 3, and 4.
* The paper was enriched by valuable feedback from many people who reviewed a preliminary draft, including (in alphabetical order): Bowen Xu, Ferdian Thung, Hong Jin Kang, Jieke Shi, Shaowei Wang, Ting Zhang, Xin Zhou, Yuan Tian, and Zhou Yang.
The writing and editing of this paper exemplify a synergistic collaboration between humans and AI-powered tools. For example, many icons in Fig. 1 and 2 were created using Microsoft Image Creator8 and DreamStudio9, both of which offer generative-AI-powered text-to-image functionality. Also, the paper's punctuation, spelling, grammar, clarity, and engagement were improved using Grammarly and ChatGPT. While ACM permits such usage of Grammarly and ChatGPT without obligatory acknowledgement10, it is acknowledged here for the sake of completeness.
Footnote 8: [https://www.bing.com/create](https://www.bing.com/create)
Footnote 9: [https://dreamstudio.ai](https://dreamstudio.ai)
Footnote 10: [https://www.acm.org/publications/policies/frequently-asked-questions](https://www.acm.org/publications/policies/frequently-asked-questions)
Although I consider the perspectives presented in this paper to be well-informed, they are neither comprehensive nor without potential flaws.
This research / project is supported by the National Research Foundation, under its Investigatorship Grant (NRF-NRFI08-2022-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
| For decades, much time and effort has been devoted to software engineering research, concentrating on the development of automated solutions and contributing to improvements in developer productivity and software quality. Over the past 20 years, the development of advanced solutions for software engineering tasks has shown unprecedented momentum. This momentum has established the field of artificial intelligence (AI) specialized for software engineering (AI4SE). AI4SE has rapidly become one of the most active and popular areas within the software engineering field. This paper discusses the key points of AI4SE. It first gives a concise introduction to and history of AI4SE. It then highlights the key challenges present in AI4SE, showing the need to realize trustworthy and synergistic AI4SE. Furthermore, AI4SE's |
2309.14351 | To build or not to build -- A queueing-based approach to timetable independent railway junction infrastructure dimensioning | Many infrastructure managers have the goal to increase the capacity of their railway infrastructure due to an increasing demand. While methods for performance calculations of railway line infrastructure are already well established, the determination of railway junction capacity remains a challenge. This work utilizes the concept of queueing theory to develop a method for the capacity calculation of railway junctions, solely depending on their infrastructure layout along with arrival and service rates. The implementation of the introduced approach is based on advanced model-checking techniques. It can be used to decide which infrastructure layout to build, i.e. whether an overpass for the analysed railway junction is needed. The developed method hence addresses the need for fast and reliable timetable independent junction evaluation in the long-term railway capacity calculation landscape. | Tamme Emunds, Nils Nießen | 2023-09-20T12:51:10 | http://arxiv.org/abs/2309.14351v1 | # To build or not to build - A queueing-based approach to timetable independent railway junction infrastructure dimensioning
###### Abstract
Many infrastructure managers have the goal to increase the capacity of their railway infrastructure due to an increasing demand. While methods for performance calculations of railway line infrastructure are already well established, the determination of railway junction capacity remains a challenge. This work utilizes the concept of queueing theory to develop a method for the capacity calculation of railway junctions, solely depending on their infrastructure layout along with arrival and service rates. The implementation of the introduced approach is based on advanced model-checking techniques. It can be used to decide which infrastructure layout to build, i.e. whether an overpass for the analysed railway junction is needed. The developed method hence addresses the need for fast and reliable timetable independent junction evaluation in the long-term railway capacity calculation landscape.
keywords: junction capacity, Markov Chain, queueing, model-checking, railway infrastructure
## 1 Introduction
The amount of goods and passengers transported by rail is predicted to increase significantly, hence infrastructure managers not only need to build new railway infrastructure but also to find possibilities to increase traffic volume on already existing relations. Both of these tasks require sufficient methods to determine the capacity of all sub-units, i.e. lines, junctions and stations, in a railway network.
While Capacity Determination techniques for railway lines, junctions and stations have already been developed and continue to contribute to more efficient rail transportation today, only some methods actually describe timetable independent approaches, allowing for sophisticated infrastructure-dimensioning
in early stages of the planning process, which is traditionally done in multiple steps (UIC, 2013). Strategical design decisions on a network level may take place decades before operation and are an ongoing process for most infrastructure managers. The technical infrastructure planning does take some time and is done several years in advance, while the definitive timetable construction may be done several months or years prior to operation. Therefore, timetable independent methods determining the capacity of railway infrastructure are of particular importance for the early process-stages including 'strategical network design' and 'technical infrastructure planning'.
In this work, classical queueing-based capacity analysis methodology is extended to railway junctions by introducing a Continuous-Time Markov Chain formulation based on routes through the infrastructure of a railway junction. Dependent only on resource conflicts and arrival/service rates of the trains travelling along considered routes, the number of trains waiting for the assignment of their requested infrastructure are estimated by calculating state-probabilities in the introduced Continuous-Time Markov Chain. To the best of our knowledge, this work is the first to utilize state-of-the-art probabilistic model-checking software to perform the computations on models describing railway junction infrastructure. The presented approach does not require timetable data to determine the capacity of a railway junction, enabling early-stage infrastructure dimensioning. Consequently, technical planning decisions, e.g. whether a railway overpass is to be built, can be assisted by capacity calculations with the described approach. With its versatile range of parameters, multiple applications of capacity research, e.g. infrastructure dimensioning for a fixed traffic demand or capacity determination of a fixed junction infrastructure, may be realised. In comparison with other approaches to determine timetable independent capacity, the presented method does not rely on simulations or sampling of possible sequences for timetable-compression, but rather on queueing-systems, formulated for multiple service channels. Further separating this approach from other analytical methods, no deep railway-specific knowledge for a preceding analysis of the considered junction infrastructure is needed, only conflicting routes and either necessary service times or planned operating programs are required as input.
While Section 2 includes a literature review regarding other approaches to the determination of railway capacity, Section 3 introduces formal problem formulations and proposes the new capacity calculation method. In Section 4, the proposed method is compared to a simulation approach with respect to computation times and accuracy of the obtained solutions. A Case Study, highlighting the applicability of the introduced approach, is performed in Section 5. This work is concluded with a summary and discussion in Section 6.
## 2 Related Work
This section provides an overview regarding the state-of-the-art for railway capacity analysis. After the definition of some key types of performance analysis
and frequently used terms, selected examples of literature are matched regarding their associated methodology and capacity definition.
### Terminology of railway performance analysis
With their various requirements to the grade of detail, different stages of the planning process can require an analysis of diverse definitions of railway capacity. While a first definition of railway capacity as the _maximal number of trains traversing a given infrastructure in a given time period under some fixed assumptions_ summarizes the concept in a straightforward manner, the interpretation of the included assumptions determines different levels of capacity. In detail, three types of railway capacity may be distinguished (see also Jensen et al., 2020):
* _Theoretical capacity_: The maximum number of requests (i.e. trains, train-route enquiries), that can be scheduled without conflicts on the given infrastructure under consideration of driving dynamics and installed railway control systems.
* _Timetable capacity_ (sometimes referred to as _maximal capacity_): The maximum number of requests, that can traverse the given infrastructure in acceptable quality when compared to a specified threshold, not only considering driving dynamics, railway control systems, but also operating program specific settings, such as train-mix and arrival processes.
* _Operational capacity_ (sometimes referred to as _practical capacity_): The maximum number of trains, that can traverse the given infrastructure in acceptable operational quality when compared to a specified threshold, considering driving dynamics, railway control systems, operating programs, and additionally respecting disturbances and (knock-on) delays.
Additionally, research has not only been focused on determining the capacity (of any kind), but also on the calculation of the _capacity utilization_, i.e., the amount of available capacity consumed in a given timetable.
While methods determining capacity utilization are mostly dependent on a timetable (_timetable dependent_) or at least a fixed order of a given train set, some methods are capable of determining performance indicators without the need for a previously set timetable (_timetable independent_), making them valuable for infrastructure dimensioning in early stages of infrastructure planning processes. Some approaches build on a given timetable and find maximal subsets of a set of trains that can additionally be scheduled (_timetable saturation_), building a category in between both dependency expressions. In contrast to a timetable, an _operating program_ specifies the demand of train types on given lines or routes, hence being indispensable for most performance analyses.
Depending on the stage of railway infrastructure planning, a timetable may already be available, such that methodologies like _timetable compression_, i.e. routing trains through the infrastructure with a separation of only the minimum headway possible, are useable.
The various railway infrastructure planning stages make use of varying capacity definitions and hence distinct methodologies for the analysis of railway infrastructure. They additionally differ in terms of timetable dependency or granularity of the described infrastructure. During the following subsection, the methodologies _max-plus-algebra_, _optimisation_, _operational data analysis_, _simulation_ and _analytical_ are differentiated.
Some additional distinctions of relevant railway performance analyses are made regarding their analysed infrastructure - lines, junctions, stations or networks -, their infrastructure decomposition and their utilized solution methods, e.g. _mixed integer programming_ (MIP) or matrix calculations.
### Literature Review
The described terminology is used in Table 1 to partition some relevant related research regarding their associated category. Additionally, selected utilization, optimisation and delay-propagation methods are briefly introduced in the following, while our contribution is classified along the strongly related analytical approaches. An additional table categorizing the considered literature can be found in the Appendix in Table A.5.
Describing the determination of **capacity utilization** and introducing threshold values for sufficient operational quality, the UIC Code 406 (UIC, 2004, 2013) is widely used internationally for capacity assessments on railway lines (Abril et al., 2008; Landex, 2009; Goverde et al., 2013) and stations (Landex and Jensen, 2013; Weik et al., 2020). Utilizing the theory of capacity occupation, Max-Plus Automata (Goverde, 2007; Besinovic and Goverde, 2018) give measures for the assessment of railway infrastructure and timetable structure. While
[Table 1: Literature considering railway performance estimations. The table compares the method groups timetable compression, randomized timetable compression, max-plus-algebras, timetable saturation, capacity optimisation, operational data analysis, simulation, analytical line capacity and (prior) analytical node capacity with the approach introduced here, along the dimensions capacity type, timetable independence, microscopic junction evaluations, stochastic arrival and service processes, the need for a specific infrastructure decomposition and the application-specific technique. Remarks: a statement in parentheses means that the feature is partially supported, i.e. for some methods within the group or in an incomplete interpretation only; the capacity types utilization (util), theoretical (theo), operational (op) and timetable (tt) are abbreviated.]
traditional compression methods (UIC, 2004; Goverde, 2007; Abril et al., 2008; Landex, 2009; Goverde et al., 2013) require a timetable to calculate capacity consumption, other approaches have been proposed utilizing randomly generated sequences of train types to overcome timetable dependencies (Jensen et al., 2017, 2020; Weik et al., 2020), focusing on the determination of timetable capacity. An overview and comparison with other capacity consumption methods can be found in Zhong et al. (2023).
Furthermore, **optimisation methods** for the estimation of theoretical capacity have been developed. They mostly formulate (linear) mixed integer programming problems, including approaches for railway lines (Harrod, 2009; Yaghini et al., 2014), stations (Zwaneveld et al., 1996, 2001; Delorme et al., 2001) and networks (Burdett and Kozan, 2006; Burdett, 2015). Some approaches rely on solutions to the railway timetabling problem (Cacchiani and Toth, 2012; Leutwiler and Corman, 2022), building timetables utilizing a given infrastructure in a 'best', as defined by objective functions, manner. Methods may also estimate the capacity occupation of its solution while creating a timetable (Zhang and Nie, 2016). Other approaches are based on the saturation of given timetables, which may also include empty schedules, and optimise the amount of additional traffic (Burdett and Kozan, 2006; Harrod, 2009; Liao et al., 2021).
Going even further, optimisation methods may incorporate rolling stock information (Liao et al., 2021) to the construction of a saturated timetable or estimate the effects of emerging first-order delays to following trains (Mussone and Wolfler Calvo, 2013), handling the propagation of so called _knock on_ delays. For this, Mussone and Wolfler Calvo (2013) extend an approach by De Kort et al. (2003), utilizing max-plus algebras to calculate capacity on lines with some single-track sections, i.e. tunnels.
More detailed insights into operational parameters of specific timetables and detailed infrastructure, rolling stock and delay distribution data can be obtained by utilizing **simulations**. D'Acierno et al. (2019) provide a comprehensive literature review. While being subject to large computational times, simulations are versatile in their use case. As such, the influence on model performance of different parameters can be analysed, i.e. different buffer time distributions to operational capacity estimations (Zieger et al., 2018), analysing the propagation of delays on railway infrastructure.
An investigation of **delay propagation** properties has also been part of further analytical (Goverde, 2010; Buker, 2011) research and subject of machine-learning models (Sahin, 2017; Corman and Kecman, 2018), trained on historical operational data. Sahin (2017) calculates probabilities for different delay states with the help of a Markov Chain, while Corman and Kecman (2018) utilize Bayesian networks. Interested readers are referred to Spanninger et al. (2023) for a detailed review.
Recent approaches make additional use of **operational data** to identify capacity bottlenecks. While some research introduces train traffic data mining approaches to analyse actual operation on railway lines (Graffagnino, 2012) or stations (Armstrong and Preston, 2017), others (Weik, 2022; Corman and Henken, 2022) discuss the applicability of macroscopic fundamental diagrams.
For this, Weik (2022) introduces a simulation approach to highlight their benefit for further macroscopic research and Corman and Henken (2022) provide an overview of open research applications.
Taking randomly distributed inter-arrival and service times into consideration, **analytical methods**, based on queueing theory, have been developed for an efficient timetable or operational capacity analysis in the early planning stages. In (Potthoff, 1970) railway lines and stations are dimensioned, analysing loss probabilities depending on filling and service functions utilizing probability distributions for the arrival process of trains to the analysed station. While Schwanhauser (1974); Wakob (1984); Wendler (2007) introduce measures for an analytical determination of the capacity of lines, Schwanhauser (1978); Niessen (2008, 2013); Schmitz et al. (2017) analyse junctions, incorporating randomly distributed inter-arrival and service durations.
Schwanhauser (1974) formulates the STRELE method to calculate expected values for the waiting times of trains in a line infrastructure, which is extended to junction infrastructure in Niessen (2008). They hence give measures to calculate the **operational capacity** of lines and junctions by comparing the estimated waiting times with threshold values (Schwanhauser and Schultze, 1982).
Potthoff (1970); Schwanhauser (1978); Wendler (2007); Wakob (1984); Niessen (2008, 2013); Schmitz et al. (2017); Weik (2020), however, implement results for the **timetable capacity** of the analysed infrastructure by matching corresponding limits with estimated waiting times for trains without taking delays into account.
Calculating the expected waiting times via probabilities for the loss of an arriving train, Potthoff (1970); Niessen (2008, 2013) formulate methods for station (Potthoff, 1970) and junction infrastructure. For junction capacity estimations, Niessen (2008, 2013) tackles queueing systems with multiple channels, while Schwanhauser (1978) approximates the waiting times in a junction via a single-channel system. This single-channel system approximation is based on route-exclusion probabilities and is adapted in Weik (2020), joining it with line-capacity advancements in Wendler (2007), therefore utilising multi-state service processes and hence more flexible probability distributions when modelling the service process.
_Parameter estimations_ have been done (Wakob, 1984) to obtain a closed formula for approximating the actual waiting times on a railway line with general independent service and inter-arrival time distributions.
Also directly modelling general independent service and arrival processes, Schmitz et al. (2017) formulate multi-dimensional Markov Chains and include phase-type distributions in their approximation. However, their approach is limited to already partitioned junction infrastructure, hence dependent on additional input and deep system knowledge of the algorithm operator. Additionally, they distinguish between train and request types in a queue, resulting in issues with computational memory and scaling, when analysing a more complex infrastructure and/or operating program.
Overall, the methodology landscape is still lacking research combining major advantages of analytical junction capacity methods: Calculating timetable
capacity for multi-channel railway junctions while being easily applicable to a broad range of problem formulations, e.g. infrastructure dimensioning for a fixed traffic demand, and utilizing multi-dimensional Markov Chains, enabling the use of advanced queueing theory methodology, such as probabilistic model-checking.
In this work, infrastructure performance is measured by such an approach, analysing timetable capacity by modelling railway junction infrastructure with a multi-dimensional Continuous-Time Markov Chain. This later discussed model features a route-based infrastructure decomposition, hence being easily applicable to more complex junctions, while respecting arising resource conflicts without the need for additional infrastructure partitioning.
## 3 Methods
### Junction Layout
To evaluate the capacity of a railway system, all sub-units of the network need to be assessed. A common differentiation is made between line and node capacity, while methods describing node capacity can additionally be partitioned describing junction and station capacity.
A _railway line_ is a connection between two origins, usually equipped with single- or double-track infrastructure. If some traffic shares only part of the route and connects to a third origin, a _railway junction_ will be installed to divide the traffic regarding its destination. Unlike at junctions, trains may start and end their journey at _railway stations_. Stations can be used for passenger and goods exchange or fulfil primarily operational needs, such as ensuring a possibility for over-takings.
In Figure 1 an infrastructure design for a railway junction of two double-track lines is given. The _main line_ spans between the two origins A and B, while the _branching line_ connects C with A.
As in most European countries, we consider operation to be defaulting to right-hand traffic, which suggests the usage of route \(r_{1}\) for the direction A to B and routing traffic from B to A via route \(r_{3}\) for the main line. Consequently, routes \(r_{2}\) and \(r_{4}\) are used accordingly for traffic between A and C.
Figure 1: Track-layout for a double-track junction
When dividing traffic on the junction into routes, these four different routes (Table 2) may be considered for both junction layouts. The infrastructure can be abstracted along those routes to model its operation.
### Problem Formulation
A railway junction infrastructure \(J=(R,C)\) consists of a set \(R\) of \(k\) routes \(r\in R\) and a _conflict-matrix_\(C\in\{0,1\}^{k\times k}\). Two routes \(r_{i},r_{j}\in R\) are described as _conflicting_, i.e. they cannot be used at the same time, by denoting \(C_{i,j}=1\), and as not conflicting by \(C_{i,j}=0\). Per default, the same route cannot be used twice at a time, hence \(C_{i,i}=1\) for all \(i\in\{1,\ldots,k\}\).
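For illustration, the junction of Figure 1 can be encoded directly as such a pair \(J=(R,C)\). The following minimal Python sketch (variable names are our own) states the routes of Table 2 and their conflict matrix explicitly:

```python
# A minimal sketch of the junction definition J = (R, C) for the
# Figure 1 layout; routes and conflicts follow Table 2 (0-based indices).
import numpy as np

routes = ["r1: A->B", "r2: A->C", "r3: B->A", "r4: C->A"]

# Conflict matrix C: C[i, j] = 1 iff routes i and j cannot be used
# at the same time; the diagonal is 1, as a route cannot be used twice.
C = np.array([
    [1, 1, 0, 0],  # r1 conflicts with r2
    [1, 1, 1, 0],  # r2 conflicts with r1 and r3
    [0, 1, 1, 1],  # r3 conflicts with r2 and r4
    [0, 0, 1, 1],  # r4 conflicts with r3
])

assert (C == C.T).all()  # conflicts are symmetric
```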
An _operating program_ on the railway junction \(J\) is a set \(A\) of _demands_\(a=(r,n)\) corresponding to a route \(r\in R\) and a total number of trains \(n\in\mathbb{N}\) of that demand in the time horizon \(U\). In order to service a train of demand \(a=(r,n)\in A\), the service time \(t_{\text{service}}(a)\) may be calculated using microscopic railway software. The service time of demand \(a\) is dependent on the infrastructure and safety technology in \(J\) (and adjacent railway lines and junctions), as well as some train-specific characteristics, such as braking percentages, acceleration capabilities, or total mass.
The problem of dimensioning of railway junctions can be formulated by two inverse statements:
1. Given a railway junction \(J\) and a set of possible operating programs \(\{A_{1},\ldots,A_{l}\}\). Which operating program \(\hat{A}\in\{A_{1},\ldots,A_{l}\}\) is the _largest_ that can be completed on the junction \(J\) with acceptable quality?
2. Given an operating program \(A\) and a set of possible infrastructure layouts \(\{J_{1},\ldots J_{l}\}\). Which infrastructure \(\hat{J}\in\{J_{1},\ldots J_{l}\}\) is the most affordable one that is sufficient for acceptable quality in the completion of the desired operating program?
Note that for both problem statements an ordering of some sort can be given for the set of possible solutions. The set of possible operating programs \(\{A_{1},\ldots,A_{l}\}\) can be sorted by the total number of trains in the operating program for statement (I). Hence, the _largeness_ (as in statement (I)) of an operating program may be evaluated by a given order. Regarding statement (II), the volume of funds needed for the construction of the infrastructure layouts may be the metric for the set of possible infrastructure layouts \(\{J_{1},\ldots J_{l}\}\).
While problem statement (I) can be used to assess the theoretical maximal capacity of a railway junction, railway infrastructure operators could be mostly
| Name | Origin | Destination | Conflicts |
| --- | --- | --- | --- |
| \(r_{1}\) | A | B | \(r_{2}\) |
| \(r_{2}\) | A | C | \(r_{1}\), \(r_{3}\) |
| \(r_{3}\) | B | A | \(r_{2}\), \(r_{4}\) |
| \(r_{4}\) | C | A | \(r_{3}\) |

Table 2: Routes through the junction
interested in solutions to the statement (II): The operational program can often be assessed beforehand, i.e. when fixed by external factors, such as governments, promoting a modal shift to railways (Europe's Rail, 2015), or increasing demands when establishing new industry locations.
With the described approach, both problem statements may be solved by using the infrastructure and operating program in the formulation of a queueing system and analyzing its characteristics.
### Queueing System
Queueing theory (Bolch et al., 2006; Zukerman, 2013) has been extensively used to describe the processes in a transportation system. An analysed entity is usually divided into a _service_ and a _waiting_ part (_queue_). Incoming duties may either be assigned to a processing _channel_ or, if none is available, to the next slot in the queue. The Kendall notation (Kendall, 1953) has been developed to abbreviate definitions of Queueing Systems with
\[A/B/n/m.\]
Arrival (\(A\)) and service processes (\(B\)) can be modelled with arbitrary probability distributions. The described Queueing System can contain one or more service channels (\(n\)) and any natural number of waiting slots (\(m\)) - including none or infinitely many. In this work, Exponential (\(M\)) and General independent (\(GI\)) distributions are considered to describe arrival and service processes, but generally, other probability distributions can be utilized as well.
Modelled Queueing Systems are analysed by a set of relevant parameters. For _Markovian_ (or _Exponentially distributed_) models (_Markov Chains_), the future evolution of the process depends only on the state at the current point in time (see Zukerman (2013, Ch. 2.4)). Hence, transition rates between the states in a Markov Chain can be given. We denote _arrival rates_ by \(\lambda\) and _service rates_ by \(\mu\). They can be calculated with the use of expected values for the inter-arrival time \(ET_{A}\)
\[\lambda=\frac{1}{ET_{A}}, \tag{1}\]
and service times \(ET_{S}\)
\[\mu=\frac{1}{ET_{S}}. \tag{2}\]
Further, we describe the _occupancy rate_ of a queueing system by
\[\rho=\frac{\lambda}{\mu}. \tag{3}\]
Furthermore, variance coefficients of the arrival process \(v_{A}\) and of the service process \(v_{B}\) may be used for estimating supplementary characteristics. Additional parameters include the _estimated length of the queue_ \(EL_{W}\) and the _probability of loss_ \(p_{loss}\), describing the probability that an incoming request cannot be admitted to the system because all service and waiting slots are occupied.
An example of the use of a Queueing System is the determination of the capacity of a railway line (Wakob, 1984; Wendler, 2007), which is usually described with a \(GI/GI/1/\infty\) system. As an analytical solution to those systems has not yet been found, one can model it as an \(M/M/1/\infty\) system and use an approximation formula (Gudehus, 1976) for the calculation of the expected queue length
\[EL_{W}(GI/GI/1/\infty)\approx EL_{W}(M/M/1/\infty)\cdot c=\frac{\rho^{2}}{1- \rho}\cdot\frac{v_{A}^{2}+v_{B}^{2}}{2} \tag{4}\]
with a factor \(c=\frac{v_{A}^{2}+v_{B}^{2}}{2}\).
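For reference, a minimal Python sketch of this approximation (the function name is our own):

```python
def expected_queue_length_gi(rho: float, v_a: float, v_b: float) -> float:
    """Approximate E[L_W] of a GI/GI/1/inf queue via equation (4):
    the M/M/1/inf value rho^2 / (1 - rho), scaled by the factor
    c = (v_a^2 + v_b^2) / 2 (Gudehus, 1976)."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("occupancy rate must satisfy 0 <= rho < 1")
    return rho ** 2 / (1.0 - rho) * (v_a ** 2 + v_b ** 2) / 2.0

# Markovian processes (v_a = v_b = 1) recover the exact M/M/1 result:
print(expected_queue_length_gi(0.5, 1.0, 1.0))  # 0.5
```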
Further approximations for the lengths of the queue in general independent arrival and/or service processes have been developed in Wakob (1984); Fischer and Hertel (1990); Wendler (2007); Weik (2020).
Figure 2 introduces a graphical representation of a Continuous-Time Markov Chain (see Ross (2014) for a definition) modelling the \(M/M/1/\infty\) system. It consists of a root state (without a label) as well as states with one currently serviced unit (denoted by 's') and some units in the queue (each denoted by 'w'). Ordering the states by the number of units in the system, i.e. the currently serviced and waiting units, transitions between subsequent states correspond to the arrival of a unit into the Queueing System (denoted by the arrival rate '\(\lambda\)') or the termination of the service of a unit (denoted by the service rate '\(\mu\)').
The following Section generalizes this concept to railway junctions, which allow more complex resource allocations than on a railway line without multiple possible routes.
### Modelling railway junctions
In this work, railway junctions are considered, which differ from railway lines in one central aspect: It may be feasible for multiple trains to use the infrastructure at the same time - depending on the routes they are scheduled to take.
#### 3.4.1 States
When modelling those infrastructures as a Continuous-Time Markov Chain \(MC=(S,T)\), the used states \(s\in S\) need to be distinguishable regarding the currently serviced and waited-for route(s).
Figure 2: Markov Chain for a \(M/M/1/\infty\) Queueing System
The general set of possible states
\[\hat{S}=\left\{\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)|q_{i}\in\{0,\ldots,m\},b_{i}\in\{0,1\}\right\} \tag{5}\]
can be obtained by utilizing the information in the set of routes \(R\) of a junction \(J=(R,C)\). A state \(s=\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in\hat{S}\) contains information regarding the number of trains \(q_{i}\in\{0,\ldots,m\}\) waiting in the queue for route \(r_{i}\) and whether route \(r_{i}\) is currently serviced, \(b_{i}\in\{0,1\}\).
The state-space \(\hat{S}\) can be further restricted to

\[S=\left\{\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in\hat{S}\,\middle|\,\sum_{i=1}^{k}\sum_{j\neq i}\left(C_{i,j}b_{i}b_{j}\right)=0\right\} \tag{6}\]

by applying the conflicts described in the conflict-matrix \(C\); the diagonal entries of \(C\) are excluded from the sum, as a single serviced train does not conflict with itself. In the following, entries \(q_{i}\) or \(b_{i}\) in a state \(s=\left(q_{1},b_{1},\ldots,q_{k},b_{k}\right)\in S\) are also referenced by \(q_{s,i}\) or \(b_{s,i}\).
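A small Python sketch of this construction (function and variable names are our own; the conflict matrix is the one of the Figure 1 junction) reproduces the state count reported in Section 3.4.3:

```python
# Enumerate the restricted state space S of equations (5)-(6):
# states (q_1, b_1, ..., q_k, b_k) with queue lengths q_i <= m and
# service indicators b_i, keeping only conflict-free service patterns.
from itertools import product

def build_state_space(C, m):
    k = len(C)
    states = []
    for b in product((0, 1), repeat=k):            # service indicators
        # discard patterns servicing two conflicting routes at once
        if any(C[i][j] and b[i] and b[j]
               for i in range(k) for j in range(k) if i != j):
            continue
        for q in product(range(m + 1), repeat=k):  # queue lengths
            states.append(tuple(x for pair in zip(q, b) for x in pair))
    return states

C = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]]
print(len(build_state_space(C, m=1)))  # 128 states, cf. Section 3.4.3
```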
#### 3.4.2 Transitions
Transitions \((u,v)=t\in T\) between two states \(u,v\in S\) can correspond to either the _arrival_ of a train for route \(r_{i}\), the _completion of service_ on route \(r_{i}\) or the _choice_ which holding train to service next.
The used transition rates of arrival-transitions are given by the arrival rate \(\lambda_{i}\), of service-transitions by the service rate \(\mu_{i}\), and of choice-transitions by the maximum rate \(M\). Those transition rates can be obtained from the described operating program \(A\) and the time horizon \(U\).
An arrival transition between
\[u=\left(q_{1},b_{1},\ldots,q_{i},b_{i},\ldots,q_{k},b_{k}\right) \tag{7}\]
and
\[v=\left(q_{1},b_{1},\ldots,q_{i}+1,b_{i},\ldots,q_{k},b_{k}\right) \tag{8}\]
utilizes the arrival rate
\[\lambda_{i}=\frac{\sum_{(r_{i},n)\in A}(n)}{U}, \tag{9}\]
where the number of trains using route \(r_{i}\), \(\sum_{(r_{i},n)\in A}(n)\), is divided by the time horizon \(U\).
Transitions for a service process can be modeled between
\[u=\left(q_{1},b_{1},\ldots,q_{i},1,\ldots,q_{k},b_{k}\right) \tag{10}\]
and
\[v=\left(q_{1},b_{1},\ldots,q_{i},0,\ldots,q_{k},b_{k}\right), \tag{11}\]
utilizing the service rate
\[\mu_{i}=\frac{1}{\frac{\sum_{(r_{i},n)\in A}(t_{\text{service}}((r_{i},n)) \cdot n)}{\sum_{(r,n)\in A}(n)}}, \tag{12}\]
which corresponds to the reciprocal of the average service times of all routes, weighted by the amount of trains on each route.
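The rate computations (9) and (12) can be sketched as follows; representing a demand as a (route, trains, service time) triple, and the concrete numbers, are our own assumptions:

```python
def arrival_rate(A, route, U):
    """lambda_i of equation (9): trains demanding `route` over horizon U."""
    return sum(n for r, n, _ in A if r == route) / U

def service_rate(A):
    """mu of equation (12): reciprocal of the train-weighted
    average service time over all demands."""
    total_trains = sum(n for _, n, _ in A)
    weighted_time = sum(t * n for _, n, t in A)
    return total_trains / weighted_time

# Illustrative operating program (route, trains, service time in minutes):
A = [("r1", 5, 2.5), ("r3", 5, 2.5), ("r2", 4, 3.0), ("r4", 4, 3.0)]
print(arrival_rate(A, "r3", U=60.0))  # ~0.083 trains per minute
print(service_rate(A))                # averaged service rate
```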
Choice-transitions \(t=(u,v)\) exclusively start at states \(u\in S\) with
\[\sum_{i=1}^{k}b_{u,i}=0 \tag{13}\]
and at least two conflicting sets of routes \(R_{i},R_{j}\subset R\), with
\[\begin{split} q_{u,o}>0,\ \forall r_{o}\in R_{i}\\ q_{u,p}>0,\ \forall r_{p}\in R_{j},\end{split} \tag{14}\]
which are not operable simultaneously, and end at states \(v\in S\), with at least one serviced route. They correspond to the choice of which route to operate next when multiple options are possible.
Since the choice-transitions should not induce additional time in the system, the maximum rate \(M\) should be large enough such that the additional expected time in the system per choice-transition \(1/M\) is sufficiently small. In this work, choice-transitions with identical transition rates are included to model the different decisions between the route(s) to be serviced next. Hence, the obtained results can be assumed to be independent of disposition strategies. The maximum rate is further chosen as \(M=600\), corresponding to a rate of \(10\) per second, which is deemed a sufficiently accurate approximation.
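The arrival and service transitions can be generated mechanically from the state space; the following Python sketch builds on the state-space sketch above and omits the choice-transitions with rate \(M\) for brevity:

```python
def transitions(states, lam, mu, m):
    """Arrival and service transitions of Section 3.4.2 as triples
    (u, v, rate); states are tuples (q_1, b_1, ..., q_k, b_k)."""
    state_set = set(states)
    k = len(lam)
    T = []
    for u in states:
        q, b = list(u[0::2]), list(u[1::2])
        for i in range(k):
            if q[i] < m:            # arrival of a train for route i
                q2 = q[:i] + [q[i] + 1] + q[i + 1:]
                v = tuple(x for pair in zip(q2, b) for x in pair)
                if v in state_set:
                    T.append((u, v, lam[i]))
            if b[i] == 1:           # completion of service on route i
                b2 = b[:i] + [0] + b[i + 1:]
                v = tuple(x for pair in zip(q, b2) for x in pair)
                if v in state_set:
                    T.append((u, v, mu))
    return T

# e.g. with the state space S from the previous sketch:
# T = transitions(S, lam=[0.1, 0.1, 0.1, 0.1], mu=0.4, m=1)
```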
#### 3.4.3 Example
The introduced Continuous-Time Markov Chain \(MC=(S,T)\) can be used to model Queueing Systems with more complex service regulations. In Figure 3, two different examples of railway track layouts are presented. While layout 3a corresponds to a short single-track segment with two possible routes, layout 3b illustrates the infrastructure of a crossover segment with three routes, two of which (\(r_{1}\) and \(r_{3}\)) are operable simultaneously.
Figures 4 and 5 give graphical representations of continuous-time Markov Chains modelling the single track segment (Figure 3a) and the crossover segment (Figure 3b). Starting at the most left state, arrivals to their two or three routes \(r_{i}\) are possible with their respective arrival rate \(\lambda_{i}\). Since the Queueing System is empty, an arriving train starts with its service time immediately after arriving. While every other train, arriving in the single track system (Figure 3a) during the service time of the first train, will be allocated to a waiting slot, the assignment of a second train in the Queueing System for the crossover segment does depend on the used routes. If the first train uses \(r_{1}\) or \(r_{3}\) and the second train the other of both, the second train can be serviced immediately. If the two trains share the same route or one of them is using route \(r_{2}\), the second train has to be assigned to a waiting slot.

Figure 3: Conflicting routes in different infrastructure
Those implications of route exclusions continue in both graphs and all combinations of arriving and serviced trains are modelled, with one limitation: While in theory, unlimited state spaces and therefore the modelling of \(m=\infty\) waiting positions are possible, an analysis of those models is very complex, and in practical operation, the number of waiting trains will always be limited due to space restrictions. Hence, the examples in Figures 4 and 5 contain only one waiting slot per route. Trains arriving at a route while all waiting slots are occupied are therefore neglected.
Table 3 lists the number of states for the two models, rising with the number of waiting positions \(m\). Since the probability of loss \(p_{loss}=p_{loss}(m)\) is decreasing with the number of waiting positions per route, a sufficient number of waiting positions may be calculated by identifying a limit value \(p_{loss}^{*}\) and determining the lowest \(m\in\mathbb{N}\) satisfying
\[p_{loss}(m)\leq p_{loss}^{*} \tag{15}\]
Figure 4: Markov Chain modelling the Queueing System for the single track segment
| number of waiting positions \(m\) | 1 | 2 | 4 | 8 | 16 |
| --- | --- | --- | --- | --- | --- |
| single track segment | 10 | 23 | 67 | 227 | 835 |
| crossover segment | 28 | 89 | 397 | 2261 | 15013 |

Table 3: Number of states by the number of waiting positions in Queueing Systems for the examples in Figure 3.
iteratively or by testing relevant expressions of \(m\).
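A sketch of this search, assuming a callable `p_loss(m)` that wraps the model-checking computation for a chain built with \(m\) waiting slots per route:

```python
def minimal_waiting_slots(p_loss, p_loss_limit, m_max=64):
    """Smallest m with p_loss(m) <= p_loss_limit, cf. condition (15)."""
    for m in range(1, m_max + 1):
        if p_loss(m) <= p_loss_limit:
            return m
    raise ValueError("no m <= m_max satisfies the loss-probability limit")
```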
A model for the junction in Figure 1 does incorporate more complex route conflicts and additional parameters, leading to substantially higher state numbers, i.e. \(|S|=128\) for an instance with only one waiting slot \(m=1\) per route. Hence, we refrain from including a graphical representation of the induced Continuous-Time Markov Chain. In the online repository (Emunds and Niessen, 2023) however, a full description of the used model, obtained with the definition in Section 3.4, is given.
### Determination of the estimated length of the queue
While closed-form analytical solutions exist for the estimated length of the queue \(E_{LW}\) for some Queueing Systems (e.g. equation (4); more examples in Fischer and Hertel (1990)), the analysis of more complex systems remains challenging. Since Continuous-Time Markov Chains may be used to model the behavior of probabilistic programs, the calculation of state-probabilities is a substantial part of the verification of software systems. Furthermore, special software tools have been developed to perform the so-called _probabilistic model-checking_, therefore including sophisticated algorithms to calculate state probabilities and enabling higher-order evaluations. This work utilizes the model checker _Storm_ (Hensel et al., 2022).

Figure 5: Markov Chain modelling the Queueing System for the crossover segment.
Storm parses Markov Chain models, e.g. formulated in the _PRISM modelling language_ (Parker et al., 2000; Kwiatkowska et al., 2011), and gives tools to check specified _properties_ on them. Properties are formulas describing paths on sets of states of a Markov Chain and can be used to calculate probabilities or reachability statements of states. This process is described as probabilistic model-checking and Storm utilizes different _engines_ to perform this task, automatically detecting the most suitable engine for the specified input.
For this work, the introduced Continuous-Time Markov Chain model has been declared in the PRISM modelling language for multiple infrastructure layouts and read into Storm for the calculations of the expected length of the queue of each route. The formulated model can be found in the online repository (Emunds and Niessen, 2023).
Given the probabilities \(p:S\rightarrow[0,1]\) of all states \(s\in S\), the expected length of the queue \(E_{LW,i}\) of a route \(r_{i}\) can be calculated by summing the probabilities of all states containing elements in the queue
\[E_{LW,i}=\sum_{\Psi_{i}(s)>0}p(s)\cdot\Psi_{i}(s), \tag{16}\]

utilizing the function \(\Psi_{i}:S\rightarrow\mathbb{N}_{0}\), giving the number of elements in the queue of the route \(r_{i}\) for a state \(s\in S\).
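Given a mapping from states to their steady-state probabilities, (16) amounts to a short sum; the following sketch uses 0-based route indices:

```python
def expected_queue_length(p, i):
    """E_LW,i of equation (16); p maps states (q_1, b_1, ..., q_k, b_k)
    to probabilities, and Psi_i(s) = s[2 * i] is the queue of route i."""
    return sum(prob * s[2 * i] for s, prob in p.items() if s[2 * i] > 0)
```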
Notice that the calculation of the expected length of the queue \(E_{LW,i}\) on an arbitrary route \(r_{i}\) in (16) relies on the use of Markov Chains, which are solely capable of modelling queueing systems of type \(M/M/s/\infty\).
In order to analyse systems with general independent (\(GI\)) arrival or service processes, factors, utilizing the variation coefficient of the described process, can be introduced. According to Fischer and Hertel (1990), the expected length of a queue in a Queueing system of type \(GI/GI/s/\infty\) can be analysed by using a \(M/M/s/\infty\) system and modifying the results
\[E_{LW,r_{i}}(M/M/s/\infty)\cdot\frac{1}{\gamma}\approx E_{LW,r_{i}}(GI/GI/s/ \infty) \tag{17}\]
accordingly, using
\[\gamma=\frac{2}{c\cdot v_{B}^{2}+v_{A}^{2}} \tag{18}\]
and
\[c=\left(\frac{\rho}{s}\right)^{1-v_{A}^{2}}\cdot(1+v_{A}^{2})-v_{A}^{2}. \tag{19}\]
Here, \(v_{A}\) and \(v_{B}\) correspond to the coefficients of variation for the arrival and the service process respectively.
Following the equations (17 - 19), a greater coefficient of variation in either the arrival or the service process yields a larger expected length of the queue. In addition to the introduced coefficients of variation, factor \(\gamma\) depends on the occupancy rate \(\rho\) (see identity (3)) and the number of parallel service channels \(s\). Here, the length of the queue \(E_{LW,r_{i}}\) corresponds to arrivals on one route \(r_{i}\) only, \(s=1\) can hence be fixed.
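A small Python sketch of the correction factor (18)-(19); note that Markovian processes (\(v_{A}=v_{B}=1\)) yield \(\gamma=1\), i.e. no correction:

```python
def correction_factor(rho, v_a, v_b, s=1):
    """gamma of equations (18)-(19) for a GI/GI/s queueing system."""
    c = (rho / s) ** (1.0 - v_a ** 2) * (1.0 + v_a ** 2) - v_a ** 2
    return 2.0 / (c * v_b ** 2 + v_a ** 2)

print(correction_factor(rho=0.5, v_a=1.0, v_b=1.0))  # 1.0, no correction
```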
This modification can be implemented in the design of threshold values that have been developed for the capacity analysis of railway infrastructure.
Regarding the modelled railway infrastructure (see Section 3.4), \(E_{LW,i}\) has to be calculated for every route \(r_{i}\). Using the threshold values introduced in the next section, every \(E_{LW,i}\) can be verified regarding its sufficiency for acceptable operating quality on the infrastructure.
### Threshold values
Different threshold values for theoretical and operational capacity have been discussed in the literature. In the UIC Code 406 (UIC, 2013) maximum occupancy rates have been introduced to limit capacity utilization. Potthoff (1970) introduces limits for the loss probabilities in railway stations, which are still in practical use for the dimensioning of track numbers in a railway station, e.g. at the German infrastructure manager DB Netz AG (2022).
In this work, the threshold value \(L^{*}_{W,limit}\), introduced in Schwanhauser and Schultze (1982) and likewise still in practical use (DB Netz AG, 2022), is utilized. It corresponds to the maximum mean number of trains that may be waiting in the analysed section at any given time for a sufficient performance of the infrastructure. With the ratio of passenger trains in all considered trains \(p_{pt}\), the threshold value
\[L^{*}_{W,limit}=0.479\cdot\exp(-1.3\cdot p_{pt}) \tag{20}\]
can be specified. A threshold value for the approximating queueing system
\[L_{W,limit}\approx\gamma\cdot L^{*}_{W,limit}=\gamma\cdot 0.479\cdot\exp(-1.3 \cdot p_{pt}) \tag{21}\]
can be obtained by utilizing (17) and (20).
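A minimal sketch of the threshold computation (20)-(21), with \(\gamma\) taken from the previous sketch:

```python
import math

def waiting_limit(p_pt, gamma=1.0):
    """L_W,limit of equation (21) for a passenger-train ratio p_pt;
    gamma = 1 recovers the original threshold (20)."""
    return gamma * 0.479 * math.exp(-1.3 * p_pt)

# Pure passenger traffic (p_pt = 1) in a Markovian system:
print(waiting_limit(1.0))  # ~0.13 waiting trains allowed on average
```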
Hence, the performance of railway infrastructure is judged based on the arrival and service processes, including their rates and variation, as well as on the operating program during the analysed time horizon.
## 4 Model Performance
Using the formulation of Section 3.4, a Continuous-Time Markov Chain for a railway junction can be obtained. In this work, a maximum of \(m=5\) waiting slots per route has been utilized, as a trade-off between tractability (dependent on the model-size) and accuracy of the model (see Section 3.4.3). By calculating the estimated length of a queue and comparing it to the obtained threshold values (see Section 3.6), the capacity of modelled railway junctions may be assessed. To ensure the quality of the obtained model, the validity of the generated queue-length estimations has to be verified.
Aiming to survey solely the performance of the solution process using the introduced Continuous-Time Markov Chain, \(M/M/s/\infty\) systems have been considered. For this, simulations of a railway junction with multiple incoming railway lines have been built and run on sample data.
### Simulation Architecture
The simulations have been implemented in Python 3.10.9 (Van Rossum and Drake Jr, 1995; Python Software Foundation, 2022), utilizing SimPy (Team SimPy Revision, 2023). A model of the junction in Figure 1 has been built, including four different routes with route-specific arrival and service processes.
To estimate inter-arrival times for every route, a pseudo-random number generator yields the next inter-arrival time within a specified exponential distribution with an expected value \(ET_{A,r}\) equal to the reciprocal value of the mean arrival rate \(\lambda_{r}=\frac{1}{ET_{A,r}}\) of the modelled route. Trains are stored in a first-in-first-out queue for the route and serviced according to the service time acquired by a second pseudo-random number generator, utilizing another exponential distribution with an expected value \(ET_{S}\) equal to the reciprocal of the mean service rate \(\mu=\frac{1}{ET_{S}}\) of the system. During the service of a route \(r\) it is ensured that no conflicting route \(r^{\prime}\in R\) is able to start service, by utilizing shared resources for every pair of routes \((r,r^{\prime})\in R\times R\).
An implementation can be found in the online repository (Emunds and Niessen, 2023). To assess the performance of the simulated process, a snapshot of every route's queue-length is taken in every simulated minute. Hence, the mean length of a queue can be obtained easily for every simulation run.
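The following heavily simplified SimPy sketch illustrates this architecture for the single-track segment of Figure 3a only: two routes share one track resource, inter-arrival and service times are exponential, and the queue length is sampled every minute. The full four-route model with rejection at \(m=5\) is the one in the repository; the rates used here are illustrative.

```python
import random
import simpy

LAM, MU = 0.1, 0.5                 # illustrative rates, trains per minute

def route(env, track, lam, mu):
    while True:
        yield env.timeout(random.expovariate(lam))   # next arrival
        env.process(serve(env, track, mu))

def serve(env, track, mu):
    with track.request() as req:                     # wait for the track
        yield req
        yield env.timeout(random.expovariate(mu))    # service time

def monitor(env, track, samples):
    while True:
        samples.append(len(track.queue))             # waiting trains
        yield env.timeout(1.0)                       # snapshot per minute

env = simpy.Environment()
track = simpy.Resource(env, capacity=1)              # single-track segment
samples = []
for _ in range(2):                                   # routes r1 and r2
    env.process(route(env, track, LAM, MU))
env.process(monitor(env, track, samples))
env.run(until=22 * 60)                               # 22 simulated hours
print(sum(samples) / len(samples))                   # mean queue length
```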
### Validation
Both the simulation and the queue-length estimation with the formulated Continuous-Time Markov Chain have been run on a single core of an Intel Xeon Platinum 8160 Processor (2.1 GHz), utilizing a maximum of 3900 MB working memory.
Two different simulation setups have been considered:
1. A simulation with no limit to the number of trains being able to wait in the queues of their requested routes
2. A simulation with a maximum of 5 trains per route being able to wait at the same time for the service on their requested routes
In the second simulation and in the analytical setting, an arriving train is rejected, i.e. not inserted into the respective queue, if its arrival time lies within a time frame where 5 trains are already waiting for the release of the requested route. Here, all routes are capable of having 5 trains waiting for service, hence a total maximum of 20 trains may wait at the same time.
All three solution approaches, both simulation setups and the analytical method, have been set to compute the estimated length of the queue \(E_{LW,r}\) for the route \(r_{3}\) (see Table 2), conflicting with routes in direction of A and C (see Figure 1). Route \(r_{3}\) has been selected as it is one of the routes with the
most conflicts (with \(r_{2}\) and \(r_{4}\)) and it would directly benefit from an overpass construction.
Since those computations have been done to compare the results and running times of the approaches, only a small set of 10 different service rates, between 0.1 and 1.0 trains per minute, has been considered. An arrival rate of 0.1 trains per minute has been set for every route in all computed instances.
Each simulation has been run 100 times for every considered service time, simulating a total of 22 hours for every run. From those 22 hours, the first and last have not been considered, resulting in an evaluated time of 20 hours per simulation and 2000 hours in total for every service time investigated.
For both simulation setups the mean computing time per hour and the mean total run time per 22 hour simulation have been evaluated. Those can be found in Figure 6, which additionally includes the running time of the analytical solution for reference.
The computing time results in Figure 6 clearly indicate that the analytical approach is faster, even if compared to the run of a single simulation hour. Noting the logarithmic scale, for small service rates the analytical approach is faster by a factor of 5 to 10, depending on the considered service rate. Since simulations have to be conducted multiple times in order to receive sufficient results, a comparison with the total simulation times might be more realistic, yielding factors of \(10^{3}\) up to \(10^{4}\).
In general, the computing times of the simulation approach increase significantly with a decreasing service rate. This is probably due to a higher number of trains in the network at any given time, as the mean service time increases with a decreasing service rate, leading to more trains waiting for the release of their requested route. In contrast, simulation runs with a setup without limit required almost the same computing time as simulations subject to a maximum of 5 trains per queue.
Figure 6: Computation times needed for the analytical compared to the simulation approach

Additional observations can be made by evaluating the accuracy of the introduced analytical approach. This can be done by comparing the obtained results of the simulation methods with the results of the queueing-based analytical approach. In Figure 7 the obtained results of the \(E_{LW,r_{3}}\) computations are depicted, including the exact analytical results and the standard deviation area of the simulation results, i.e. the area \(\left[\overline{E}_{LW,r_{3}}-\sigma,\overline{E}_{LW,r_{3}}+\sigma\right]\) surrounding the mean \(\overline{E}_{LW,r_{3}}\) by the standard deviation \(\sigma\).
Noting the logarithmic scale in Figure 7, the results show significant differences between the simulation setups. For the setup without a limit on the number of trains in a queue (Figure 7a), serious discrepancies between the simulation and the analytical results can be found for service rates of less than \(0.3\). The results of the analytical solution are more similar to those of the simulation setup with a limit of \(m=5\) on the length of a queue (Figure 7b), matching the setting of the queueing-based analytical approach. Hence, it can be assumed that for railway junctions with the fixed predefinitions and an averaged service rate of less than \(0.3\), the number of trains in a queue will on average exceed the limit of \(5\). Consequently, arriving trains are likely to have to wait, resulting in a very poor service quality.
Taking the limit for \(M/M/s/\infty\) queueing systems (see Section 3.6), annotated as '\(L_{W,limit}\)' in Figure 7, into account, the accuracy of the introduced queueing-based analytical approach seems to be sufficient for the use in practical applications.
## 5 Computational Study
The introduced method for the calculation of queuing lengths can be used to guide infrastructure managers when dimensioning railway junctions. This work particularly focuses on choosing the right infrastructure layout for a given operating program, i.e. problem statement (II) in Section 3.2.
Figure 7: Accuracy of the analytical approach in comparison to the confidence interval of conducted simulations
### Setup
In detail, a case study deciding whether or not an overpass should be built for a junction with a given operating program has been conducted. An overpass is a way to reduce the number of route conflicts in a railway junction; an example of the track layout for the junction in Figure 1 is given in Figure 8.
To show the applicability of the described analytical approach, 23 different operating programs have been considered. Thus, all combinations of 23 different operating programs for both railway lines adjacent to the railway junction have been built and analysed for their peak traffic hour, resulting in a total number of \(23^{2}=529\) different examples. A detailed description of all considered operating program combinations can be found in Appendix B; here, we restrict ourselves to six exemplary railway lines.
The operating programs have been selected according to different types of the adjacent railway lines; we distinguish between _mixed traffic lines_, _local train lines_, _long-distance train lines_, _freight train lines_, and _urban railway lines_. Additionally, two different loads have been considered for the mixed traffic line. Table 4 lists the selected operating programs.
In Figure 9 the considered combinations of those operating programs are listed. In every entry, the top bar corresponds to the operating program on the _main line_, i.e. the routes from A to B (\(r_{1}\)) and from B to A (\(r_{3}\)), while the bottom bar corresponds to the operating program on the _branching line_, i.e. the routes from C (\(r_{4}\)) and to C (\(r_{2}\)).
| operating program | # regional trains | # high speed trains | # freight trains |
| --- | --- | --- | --- |
| low intensity mixed | 2 | 0 | 1 |
| long distance | 0 | 4 | 0 |
| local train | 4 | 1 | 0 |
| freight train | 0 | 0 | 5 |
| high intensity mixed | 4 | 2 | 2 |
| urban railway | 10 | 0 | 0 |

Table 4: Operating programs considered
Figure 8: Track-layout for a double-track junction with an overpass
The total number of trains \(n_{r}\) on a route \(r\) during the considered time horizon \(U\) is utilized to determine the arrival rate \(\lambda_{r}=\frac{n_{r}}{U}\) for this route, cf. equation (9). Next, Continuous-Time Markov Chains (see Section 3.4) can be formulated for any possible service rate \(\mu\in(0,1]\), using the defined arrival rates \(\lambda_{r}\) and the maximum number of waiting positions per queue \(m=5\), while also taking conflicting routes into consideration. To ensure a sufficient precision while also maintaining efficient computability, all service rates \(\mu\in[0.01,1]\) with a step size of \(0.01\) have been taken into consideration in this work.
Hence, 100 models have been analysed for both junction infrastructure layouts with/without an overpass for every considered combination of operating programs. The solving process has been automated using Python 3.10.9 (Van Rossum and Drake Jr, 1995; Python Software Foundation, 2022) and state-of-the-art model-checking software. For this work, the model-checker _Storm_ (Hensel et al., 2022) has been utilized. With its Python interface _Stormpy_ (Junges and Volk, 2023) it is easy to use while also accomplishing competitive results in qualitative benchmarks (Budde et al., 2021). The used models have been formulated according to Section 3.4, expressed in the PRISM modelling language (Parker et al., 2000; Kwiatkowska et al., 2011). Detailed information regarding the solving process and the used model files can be found in the online repository (Emunds and Niessen, 2023).
Figure 9: Operating program combinations considered
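The model-checking step can be sketched as follows. The file name `junction.prism`, the variable `q3` for the queue of route \(r_{3}\) and the use of the PRISM steady-state operator `S` are illustrative assumptions about the concrete model encoding; the actual model files are given in the online repository (Emunds and Niessen, 2023).

```python
import stormpy

# Parse the CTMC of Section 3.4 and sum k * P[queue of r3 has length k]
# over all levels k = 1..m, cf. equation (16), to obtain E_LW,r3.
program = stormpy.parse_prism_program("junction.prism")
e_lw = 0.0
for k in range(1, 6):                       # queue levels 1..m with m = 5
    formula = f"S=? [ q3 = {k} ]"           # steady-state probability
    props = stormpy.parse_properties_for_prism_program(formula, program)
    model = stormpy.build_model(program, props)
    result = stormpy.model_checking(model, props[0])
    e_lw += k * result.at(model.initial_states[0])
print(e_lw)                                 # expected length of the queue
```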
Aiming to gain insights about the performance benefit of the infrastructure layout including the overpass (see Figure 8), the expected length of the queue at route \(r_{3}\), from B to A, has been analysed. It has been chosen as it has two conflict points with other routes, one with the other route to A, \(r_{4}\) from C, the other with the route \(r_{2}\) from A to C, which is only conflicting in the junction layout without an overpass structure.
Resulting from the solving process is a grid of expected lengths of the queue for both infrastructure layouts for every considered service rate \(\mu\in[0.01,1]\). In Figure 10 the expected length of the queue at route \(r_{3}\) from B to A for both railway junction layouts of a local train main line and a long-distance train branching line is recorded. Both operating programs are only considering passenger transport, hence a ratio of passenger trains in all considered trains of \(p_{pt}=1\) can be used for the calculation of the threshold value \(L_{W,limit}\).
The combination is assumed to have to deal with an hourly load of 5 trains on the main line, from A to B and back, as well as 4 additional trains on the branching line, from A to C and back (cf. Table 4). This results in an arrival rate of \(\lambda_{r}=0.083\) for the routes \(r\in\{r_{1},r_{3}\}\) and of \(\lambda_{r^{\prime}}=0.067\) for the routes \(r^{\prime}\in\{r_{2},r_{4}\}\).
Furthermore, the threshold \(L_{W,limit}\) is depicted, modified according to Section 3.6, depending on the occupancy rate of \(\rho=\frac{\lambda_{r_{3}}}{\mu}\) for route \(r_{3}\). In this work, the coefficients of variation are assumed to be \(v_{A}=0.8\) for the arrival process and \(v_{B}=0.3\) for the service process, in accordance with standard values in the literature (see Wendler (1999)).
Utilizing the grid of resulting queue-length estimations as well as the calculated threshold, a minimum mean service rate \(\mu_{\min}\), needed for sufficient infrastructure quality, may be obtained. Hence, the maximum mean service time
Figure 10: Estimated lengths of the queue on route \(r_{3}\) for the combination of a local train main line and a long-distance train branching line
\(b_{\max}=\frac{1}{\mu_{\min}}\) for the given operating program on the main and branching lines can be derived. This maximum mean service time can be used to investigate the needed infrastructure by comparing it to the actual achieved service times on the analysed junction, which are subject to train and control system specific parameters.
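A sketch of this derivation over the result grid; the data layout (a mapping `e_lw` from service rate to queue-length estimate and a callable `limit` for the threshold at a given occupancy rate) is our own assumption:

```python
def max_mean_service_time(e_lw, limit, lam, rates):
    """Return b_max = 1 / mu_min, the largest mean service time whose
    queue-length estimate still satisfies the threshold L_W,limit."""
    for mu in sorted(rates):                 # e.g. 0.01, 0.02, ..., 1.0
        if e_lw[mu] <= limit(lam / mu):      # threshold depends on rho
            return 1.0 / mu
    return None                              # no sufficient service rate
```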
### Results
The introduced derivation of a maximum mean service time has been applied to all 529 considered operating program combinations for the infrastructure settings with and without an overpass.
In Figures 11 and 12 the results for the selected 36 combinations are shown in heat-maps for both considered railway junction layouts. While both infrastructure settings share the same global distribution of service time requirements, the results indicate that under the same load, as fixed by the operating programs of the main and branching line, a railway junction including an overpass structure allows for a higher mean service time, while achieving the same operational quality as a railway junction without the overpass structure.
Further investigating the difference between the two infrastructure settings, Figure 13 shows the maximum mean service times (Figure 13a) for all 529 computed combinations of main and branching lines (see also Appendix B) and a histogram representing the distribution of relative differences in the calculated maximum mean service times (Figure 13b).
Figure 11: Resulting maximum mean service times for the considered operating program combinations without an overpass
Taking the logarithmic scale in Figure 13a into account, a wide range of calculated maximum mean service times required for sufficient operational quality can be recognized. While some operating programs, e.g., examples with a low number of total trains in the considered peak hours, grant sufficient operational quality even for mean service times as high as 10 to 20 minutes, the majority of considered examples require a mean service time of under 5 minutes, with very densely operated main and branching lines demanding maximum mean service times of even under 2 minutes.
By including an overpass in the layout of a planned railway junction, the requirements on the maximum mean service time can be relaxed by a significant margin. For the majority of considered examples, the relative difference in the maximum mean service time lies between 5% and 10%, but the achieved effect is diverse: some examples show virtually no difference between the considered infrastructure layouts, while others exhibit differences of up to 10%.
Crucially, service times depend on various factors, some of which are not under the control of infrastructure managers. Hence, there are substantial limitations to the lower bound of achievable service times, which is affected by physical properties such as train length or the acceleration and braking performance of the rolling stock.
Concluding the computational experiment, differences between the considered infrastructure layouts have been analysed with the introduced method.
Figure 12: Resulting maximum mean service times for the considered operating program combinations with an overpass
For this, the calculation of many different operating program and service rate configurations has been conducted, resulting in substantiated estimations of infrastructure quality requirements. The achieved results indicate that railway junctions with a high total number of trains per hour can benefit from overpass structures to resolve some conflicts on requested routes.
## 6 Discussion and Outlook
This work introduced a novel method for analysing the timetable capacity of railway junctions based on queueing theory. It is applicable to both formulated problem statements (Section 3.2): timetable capacity determination of a given railway junction infrastructure (I) and dimensioning of junction infrastructure for a fixed operating program (II). By modelling railway junction routes as parts of a queueing system, while respecting their parallel service possibilities and taking resource conflicts into consideration, timetable-independent analyses of the examined infrastructure are enabled.
Utilizing classical queueing theory concepts, well established for railway line performance analysis (see Section 3.3), timetable capacity is determined by comparing queue-length estimations through a Continuous-Time Markov Chain representing the considered railway junction (see Section 3.4) with threshold values (see Section 3.6) depending on parameters of considered service and arrival process distributions along with operating program specifics. In this work, estimations of queue lengths have been carried out by model-checking software, enabling a fast and reliable computation for complex infrastructure dependencies.
The performance of the introduced approach has been studied by comparing, for exemplary cases, the computation times and results of the analytical solution with
Figure 13: Comparison of the obtained maximum mean service times
simulations (see Section 4). Those simulations indicate that the accuracy of the introduced method fulfills the requirements for sufficient analysis when utilizing threshold values, while the method is also significantly faster than the applied simulations.
Implementing the novel analytical model for operating programs in computational experiments (see Section 5), the selection of sufficient junction infrastructure has been tested. For this, 529 different operating program combinations have been considered, leading to substantive estimations regarding the effect of overpass structures on the timetable capacity of railway junctions. Hence, the introduced method has proven to be applicable to conceptual questions on both abstract and practical scales.
Even though the concept is already applicable, additional research could still yield substantial benefit. As such, the modelling of general independent stochastic processes with Markov processes and approximation factors might be improvable, e.g., by including phase-type distributions. Similarly, the utilized thresholds for the expected length of a queue have been introduced in the context of railway lines; updating these to innovative measures could improve the real-world applicability. Additionally, the introduced concept could be extended to implementations for railway stations and eventually railway networks. By including delay distributions and delay propagation in the model, similar measures for operational capacity could be enabled.
Utilizing the developed timetable-independent method, infrastructure managers are able to identify bottlenecks in early stages of the planning process. With the achieved computing-time benefits compared to simulation approaches, they are equipped to give junction designers substantiated indicators for required infrastructure. Hence, the introduced analytical, timetable-independent approach may prove to form a valuable addition to the capacity method landscape.
**CRediT authorship contribution statement**
**Tamme Emunds:** Conceptualization, Methodology, Software, Formal Analysis, Writing - original draft. **Nils Niessen:** Conceptualization, Supervision, Writing - review & editing.
**Declaration of competing interest**
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgements
The authors thank Mr. Alexander Bork for his invaluable insights regarding model-checking techniques in general and regarding the software Storm in
particular. Furthermore, the authors thank Mr. Tobias Müller and Dr. Andreas Pfeifer for their supervision and guidance regarding practical implementations as well as for providing application-relevant case examples. Additionally, the authors thank DB Netz AG for the opportunity of applying the described theory in a practical project.
This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 2236/2. Computational Experiments were performed with computing resources granted by RWTH Aachen University under project rwth1413.
| ```
Many infrastructure managers aim to increase the capacity of their railway infrastructure, citing growing demand. While methods for the performance calculation of railway line infrastructure are well established, determining the capacity of railway junctions remains a challenge. This work develops a method for the performance calculation of railway junctions using concepts from queueing theory, depending on their infrastructure layout as well as on arrival and service rates. The implementation of the introduced approach builds on advanced model-checking techniques. Using this method, it can be decided whether a junction requires an overpass. The developed method enables fast and reliable timetable-independent junction evaluation in the context of long-term railway capacity calculation.
``` |
2302.00056 | Absolute Ca II H & K and H-alpha flux measurements of low-mass stars: Extending $R'_\mathrm{HK}$ to M dwarfs | Context: With the recent surge of planetary surveys focusing on detecting Earth-mass planets around M dwarfs, it is becoming more important to understand chromospheric activity in M dwarfs. Stellar chromospheric calcium emission is typically measured using the $R'_\mathrm{HK}$ calibrations of Noyes et al. (1984), which are only valid for $0.44 \le B-V \le 0.82$. Measurements of calcium emission for cooler dwarfs $B-V \ge 0.82$ are difficult because of their intrinsic dimness in the blue end of the visible spectrum. Aims: We measure the absolute Ca II HK and H$\alpha$ flux of a sample of 110 HARPS M dwarfs and also extend the calibration of $R'_\mathrm{HK}$ to the M dwarf regime using PHOENIX stellar atmosphere models. Methods: We normalized a template spectrum with a high signal-to-noise ratio that was obtained by coadding multiple spectra of the same star to a PHOENIX stellar atmosphere model to measure the chromospheric Ca II HK and H$\alpha$ flux in physical units. We used three different $T_\mathrm{eff}$ calibrations and investigated their effect on Ca II HK and H$\alpha$ activity measurements. We performed conversions of the Mount Wilson S index to $R'_\mathrm{HK}$ as a function of effective temperature for the range 2300 K $\le T_\mathrm{eff} \le$ 7200 K. Last, we calculated continuum luminosity $\chi$ values for Ca II HK and H$\alpha$ in the same manner as West & Hawley (2008) for $-1.0 \le \mathrm{[Fe/H]} \le +1.0$ in steps of $\Delta \mathrm{[Fe/H]} = 0.5$. Results: We compare different $T_\mathrm{eff}$ calibrations and find $\Delta T_\mathrm{eff} \sim$ several 100 K for mid- to late-M dwarfs. Using these different $T_\mathrm{eff}$ calibrations, we establish a catalog of $\log R'_\mathrm{HK}$ and $\mathcal{F}'_\mathrm{H\alpha}/\mathcal{F}_\mathrm{bol}$ measurements for 110 HARPS M dwarfs. [abridged] | C. J. Marvin, A. Reiners, G. Anglada-Escudé, S. V. Jeffers, S. Boro Saikia | 2023-01-31T19:44:17 | http://arxiv.org/abs/2302.00056v1 | Absolute Ca II H & K and H-alpha flux measurements of low-mass stars: Extending \(R^{\prime}_{\rm HK}\) to M dwarfs
###### Abstract
Context:With the recent surge of planetary surveys focusing on detecting Earth-mass planets around M dwarfs, it is becoming more important to understand chromospheric activity in M dwarfs. Stellar chromospheric calcium emission is typically measured using the \(R^{\prime}_{\rm HK}\) calibrations of Noyes et al. (1984), which are only valid for \(0.44\leq B-V\leq 0.82\). Measurements of calcium emission for cooler dwarfs \(B-V\geq 0.82\) are difficult because of their intrinsic dimness in the blue end of the visible spectrum.
Aims:We measure the absolute Ca II H & K and H\(\alpha\) flux of a sample of 110 HARPS M dwarfs and also extend the calibration of \(R^{\prime}_{\rm HK}\) to the M dwarf regime using PHOENIX stellar atmosphere models.
Methods:We normalized a template spectrum with a high signal-to-noise ratio that was obtained by coadding multiple spectra of the same star to a PHOENIX stellar atmosphere model to measure the chromospheric Ca II H & K and H\(\alpha\) flux in physical units. We used three different \(T_{\rm eff}\) calibrations and investigated their effect on Ca II H & K and H\(\alpha\) activity measurements. We performed conversions of the Mount Wilson S index to \(R^{\prime}_{\rm HK}\) as a function of effective temperature for the range \(2300\ \mathrm{K}\leq T_{\rm eff}\leq 7200\ \mathrm{K}\). Last, we calculated continuum luminosity \(\chi\) values for Ca II H & K and H\(\alpha\) in the same manner as West & Hawley (2008) for \(-1.0\leq\mathrm{[Fe/H]}\leq+1.0\) in steps of \(\Delta\mathrm{[Fe/H]}=0.5\).
Results:We compare different \(T_{\rm eff}\) calibrations and find \(\Delta T_{\rm eff}\sim\) several 100 K for mid- to late-M dwarfs. Using these different \(T_{\rm eff}\) calibrations, we establish a catalog of \(\log R^{\prime}_{\rm HK}\) and \(\mathcal{F}^{\prime}_{\rm H\alpha}/\mathcal{F}_{\rm bol}\) measurements for 110 HARPS M dwarfs. The difference between our results and the calibrations of Noyes et al. (1984) is \(\Delta\log R^{\prime}_{\rm HK}=0.01\) dex for a Sun-like star. Our \(\chi\) values agree well with those of West & Hawley (2008). We confirm that the lower boundary of chromospheric Ca II H and K activity does not increase toward later-M dwarfs: it either stays constant or decreases, depending on the \(T_{\rm eff}\) calibration used. We also confirm that for H\(\alpha\), the lower boundary of chromospheric flux is in absorption for early-M dwarfs and fills in toward the continuum for later-M dwarfs.
Conclusions:We confirm that we can effectively measure \(R^{\prime}_{\rm HK}\) in M dwarfs using template spectra with a high signal-to-noise ratio. We also conclude that our calibrations are a reliable extension of previous \(R^{\prime}_{\rm HK}\) calibrations, and effective temperature calibration is the main source of error in our activity measurements.
## 1 Introduction
M dwarfs comprise the majority of the stellar population, but their fundamental properties present challenges in measuring some of the most common activity indicators in the optical wavelength region, in particular, Ca II H and K, which lie in the bluer part of the visible spectrum. The cooler temperatures of M dwarfs imply that the bulk of their radiation lies toward longer wavelengths than for their FGK counterparts. M dwarfs are also intrinsically dimmer, which either decreases the signal-to-noise ratio (S/N) of observations or requires longer exposure times. Figure 1 demonstrates the differences in brightness and spectral energy distribution between a Sun-like G2 dwarf and a typical M4 dwarf. Additionally, telescope transmission and detector sensitivity are often higher in the redder wavelengths, which further exacerbates the problem of comparing their calcium flux with FGK stars in a consistent manner.
In what is known as the de facto standard of stellar activity surveys, Baliunas et al. (1995) monitored Ca ii H and K lines of 111 main-sequence FGKM stars for several decades using the dimensionless measure called the Mount Wilson S index, or \(S_{\rm MWO}\). This S index is a ratio of the Ca ii H and K line core fluxes normalized to nearby continuum bands. However, the fluxes of the nearby continuum bands are not constant across spectral types (Vaughan & Preston 1980; Hartmann et al. 1984), which makes the comparison of stellar activity of different spectral types difficult with \(S_{\rm MWO}\). To mitigate the color dependence, \(S_{\rm MWO}\) is usually transformed into a physical quantity known as \(R_{\rm HK}\), which is the ratio of the Ca ii H and K surface flux to bolometric flux. A more desirable measure, known as \(R^{\prime}_{\rm HK}\), subtracts the photospheric contribution \(R_{\rm HK,phot}\), leaving only the chromospheric flux excess. Here, the prime denotes that the flux measurement is solely of chromospheric origin. The most common method of calculating \(R^{\prime}_{\rm HK}\) is the prescription derived by Noyes et al. (1984), which requires only \(S_{\rm MWO}\) and \(B-V\) to obtain \(R^{\prime}_{\rm HK}\). However, this method is only valid for \(0.44\leq B-V\leq 0.82\),
which means that M dwarfs lie outside this \(R^{\prime}_{\rm HK}\) calibration range.
Despite these difficulties, measurements of Ca ii H and K and H\(\alpha\) flux in M dwarfs have been performed before. Walkowicz & Hawley (2009) measured Ca ii H and K equivalent widths for a sample of M3 dwarfs with the spectral subtraction method, using a stellar atmosphere model to correct for the photospheric flux contribution. The same technique was used by Montes et al. (1995b), who coined the term "spectral subtraction technique", although the approach has been performed as far back as Barden (1985) and Herbig (1985). In these studies, a synthetic spectral line profile is used as the photospheric contribution of a given star and is subtracted from an observed spectrum, resulting in a measurement of the chromospheric flux excess. Montes et al. (1995a) used the spectral subtraction technique to measure the chromospheric Ca ii H and K flux excess in 28 FGK stars. Instead of using a photospheric spectrum, Cincunegui et al. (2007) measured the surface flux \(F_{\rm HK}\) of main-sequence stars from early-F down to M5 spectral types, and extrapolated the Noyes et al. (1984) photospheric contribution for the M dwarfs. To measure the fractional surface flux to bolometric flux of Ca ii H and K, West & Hawley (2008) used the \(\chi\) method, where multiplying an equivalent width by a factor \(\chi\), a continuum measurement near the calcium lines, results in \(L_{\rm Ca\,II\,HK}/L_{\rm bol}\). This study provided \(\chi\) factors of Ca ii H and K for the spectral range M0 - M8. However, it did not provide a correction for the photospheric contribution. To extend the photospheric flux relation down to \(B-V=1.6\) with the spectral subtraction technique, Martinez-Arnaiz et al. (2011) used a synthetic template photospheric spectrum obtained by adding together spectra of nonactive stars of similar spectral type, and measured excess surface fluxes of 298 main-sequence stars ranging from F to M. Mittag et al. (2013) used PHOENIX model atmospheres to update the relations of Noyes et al. (1984) and measured \(R^{\prime}_{\rm HK}\) for 2133 main-sequence stars. Instead of using stellar models, Suárez Mascareño et al. (2015) used HARPS spectra of main-sequence FGKM dwarfs to derive their own \(R^{\prime}_{\rm HK}\) relations down to \(B-V\sim 1.9\), and measured \(R^{\prime}_{\rm HK}\) for 48 late-F to mid-M type stars. Scandariato et al. (2017) used the spectral subtraction technique with BT-Settl models as photospheric spectra for 71 early-M dwarfs and measured Ca ii H, K, and H\(\alpha\). Astudillo-Defru et al. (2017) formulated their own S-index calibration using HARPS spectra and used their own conversion from S-index to \(R^{\prime}_{\rm HK}\) for 403 M dwarfs. Newton et al. (2017) found \(L_{\rm H\alpha}/L_{\rm bol}\) for 270 nearby M dwarfs using recomputed \(\chi_{\rm H\alpha}\) values of West & Hawley (2008).
The relation between Ca ii H, K, and H\(\alpha\) emission (or absorption) is also of much interest. Measuring the line profiles of 147 K-M5 main-sequence stars, Rauscher & Marcy (2006) showed that Ca ii H and K lines form at slightly different heights in the chromosphere, and that the equivalent width of H\(\alpha\) only correlates with Ca ii H and K line widths. They also reported a possible threshold above which the lower and upper chromospheres become thermally coupled. Cincunegui et al. (2007) found a clear correlation between averaged Ca ii H, K, and H\(\alpha\), with the strongest correlation for stars with the strongest emission. Conversely, studying stars individually and at different time intervals, Cincunegui et al. (2007) found no clear indication of how H\(\alpha\) varies with Ca ii, with stars showing correlation, anticorrelation, or no correlation. Also observing individual time measurements for a sample of 30 M dwarfs, Gomes da Silva et al. (2011) found a positive correlation for the most active stars, and a tendency for a low or negative correlation in the least active stars. Walkowicz & Hawley (2009) found an initial deepening of H\(\alpha\) absorption for the stars that are least active in Ca ii H and K before line filling and going into emission. Scandariato et al. (2017) found this same nonlinear relation between Ca ii H, K, and H\(\alpha\) in 71 early-M dwarfs. Maldonado et al. (2017) separated older stars from younger and more active stars using the distinction of two branches identified by Martinez-Arnaiz et al. (2011). They found that the log-fluxes of Ca ii H, K, and H\(\alpha\) follow approximately the same linear relation for stars of spectral types F to M, which they identify as the inactive branch, and found that stars deviating from this tend to be more active and younger, and thus lie on the active branch. More recently, Reiners et al. (2022) reported relations between chromospheric Ca ii H, K, and magnetic flux, and also H\(\alpha\) emission and magnetic flux. Combining these relations, a relation between Ca ii H, K, and H\(\alpha\) might be derived.
Many calibrations for \(R^{\prime}_{\rm HK}\) exist for the main sequence from early-F to late-M spectral type. Very few studies have used high S/N coadded spectra with the spectral subtraction technique (e.g., Boro Saikia et al. 2018; Perdelwitz et al. 2021). In fact, our work in this paper provided the M18 template-model method and measurements of Boro Saikia et al. (2018) (see Secs. 2 and 3.2.2 of the aforementioned work). In this work, we measure Ca ii H, K, and H\(\alpha\) activity with the spectral subtraction technique in a sample of 110 M dwarfs using high S/N template spectra that are flux-calibrated to PHOENIX stellar atmosphere models. The main difference of this study is that instead of taking the mean value of Ca ii H and K flux measurements, we combine all available spectra and coadd them before the flux measurement. This allows us not just to scale the Ca ii H and K measurement to an absolute flux unit, but to fit the spectral energy distribution (SED) of the calcium line to a PHOENIX stellar atmosphere, and similarly for H\(\alpha\). We compare three different effective temperature calibrations and investigate their effect on Ca ii H, K, and H\(\alpha\) activity measurements. We extend the \(R^{\prime}_{\rm HK}\) calibrations to 2300 K \(\leq T_{\rm eff}\leq\) 7200 K using PHOENIX stellar atmosphere models. We also provide a table of \(R^{\prime}_{\rm HK}\) calibrations in this effective temperature range for different metallicities and surface gravities of main-sequence stars. Last, we compute the \(\chi\) values of West & Hawley (2008) for Ca ii H, K, and H\(\alpha\) for different metallicities from the PHOENIX model atmospheres.
This paper is organized as follows: In Sec. 2 we briefly review the definition of Ca ii H and K and H\(\alpha\) activity. In Sec. 3 we discuss the sample of stars, and we calibrate \(T_{\rm eff}\) using three different methods. We discuss the technique of measuring Ca ii H, K, and H\(\alpha\) in M dwarfs with the subtraction method, using coadded template spectra and model photospheres. In Sec. 4 we discuss our Ca ii H, K, and H\(\alpha\) measurements, provide extended \(R^{\prime}_{\rm HK}\) calibrations, and compare our calibrations with previous works. In Sec. 6 we summarize our work.
## 2 Overview of the measurement equations
### Mount Wilson S-index
The HKP-2 spectrophotometer installed at the Mount Wilson Observatory measures the Ca ii H and K line cores with a triangular 1.09 Å full width at half maximum (FWHM) bandpass while simultaneously measuring two 20 Å wide bands: \(R\), centered on 4001.07 Å, and \(V\), centered on 3901.07 Å. To mimic the response of this instrument, Duncan et al. (1991) prescribed the following S-index formula:
\[S=8\alpha\frac{N_{\rm H}+N_{\rm K}}{N_{\rm R}+N_{\rm V}}, \tag{1}\]
where \(N_{\rm H}\), \(N_{\rm K}\), \(N_{\rm R}\), and \(N_{\rm V}\) are the counts in their respective bands, and \(\alpha\) is a proportionality constant equating measurements made by the HKP-2 spectrophotometer to those made with HKP-1; Duncan et al. (1991) adopted the value \(\alpha=2.4\). The factor of 8 arises from the 8:1 duty cycle between the line core and continuum bandpasses. Since its inception, the S-index has been the most widely used activity indicator for FGK stars.
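As a toy illustration of Eq. (1), the conversion from band counts to an S-index is a one-line computation; the counts below are arbitrary placeholder numbers, not real measurements.

```python
# Minimal sketch of Eq. (1): Mount Wilson S-index from band counts.
# N_h, N_k: counts in the triangular H and K core bands; N_r, N_v: counts in
# the 20 A continuum bands; alpha = 2.4 (Duncan et al. 1991).
def s_index(N_h, N_k, N_r, N_v, alpha=2.4):
    return 8.0 * alpha * (N_h + N_k) / (N_r + N_v)

print(s_index(N_h=120.0, N_k=110.0, N_r=13000.0, N_v=14000.0))  # toy counts
```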
### Chromospheric Ca II H and K ratio
To convert the dimensionless \(S_{\rm MWO}\) into arbitrary surface flux \(F_{\rm HK}\), Middelkoop (1982) and Rutten (1984) derived a continuum conversion factor \(C_{\rm cf}\). The arbitrary surface flux is defined as
\[F_{\rm HK}=S_{\rm MWO}C_{\rm cf}T_{\rm eff}^{4}10^{-14}, \tag{2}\]
and its conversion into absolute units is given by
\[{\cal F}_{\rm HK}=KF_{\rm HK}, \tag{3}\]
where \({\cal F}_{\rm HK}\) and \(K\) are in units of erg cm\({}^{-2}\) s\({}^{-1}\).
The chromospheric flux ratio \(R^{\prime}_{\rm HK}\) is then defined as
\[R^{\prime}_{\rm HK}=\frac{{\cal F}^{\prime}_{\rm H}+{\cal F}^{\prime}_{\rm K} }{\sigma T_{\rm eff}^{4}}=\frac{{\cal F}^{\prime}_{\rm HK}}{{\cal F}_{\rm bol}}, \tag{4}\]
where \({\cal F}^{\prime}_{\rm H}={\cal F}_{\rm H}-{\cal F}_{\rm H,phot}\) and \({\cal F}^{\prime}_{\rm K}={\cal F}_{\rm K}-{\cal F}_{\rm K,phot}\). Here, \({\cal F}^{\prime}_{\rm H}\) and \({\cal F}^{\prime}_{\rm K}\) are the chromospheric fluxes, \({\cal F}_{\rm H}\) and \({\cal F}_{\rm K}\) are the surface fluxes, and \({\cal F}_{\rm H,phot}\) and \({\cal F}_{\rm K,phot}\) are the photospheric fluxes of the Ca ii H and K lines, respectively. From a slight rearranging, Eq. 4 can be written as
\[R^{\prime}_{\rm HK}=\frac{{\cal F}_{\rm HK}-{\cal F}_{\rm HK,phot}}{\sigma T_ {\rm eff}^{4}}=R_{\rm HK}-R_{\rm HK,phot}, \tag{5}\]
with the surface flux ratio given by
\[R_{\rm HK}=\frac{{\cal F}_{\rm HK}}{\sigma T_{\rm eff}^{4}} \tag{6}\]
and the photospheric flux ratio given by
\[R_{\rm HK,phot}=\frac{{\cal F}_{\rm HK,phot}}{\sigma T_{\rm eff}^{4}}. \tag{7}\]
Typically, \(R^{\prime}_{\rm HK}\) is measured through a conversion from the Mount Wilson S-index. The method pioneered by Noyes et al. (1984) calculates \(R_{\rm HK}\) using the equation
\[R_{\rm HK}=1.34\times 10^{-4}C_{\rm cf}S_{\rm MWO}. \tag{8}\]
The quantity \(C_{\rm cf}\) is a color-dependent conversion factor to remove the color-dependence of the \(S_{\rm MWO}\), and Noyes et al. (1984) used the Middelkoop (1982) relation,
\[\log C_{\rm cf}=1.13(B-V)^{3}-3.91(B-V)^{2}+2.84(B-V)-0.47, \tag{9}\]
for \(0.45\leq(B-V)\leq 1.50\). To calculate \(R_{\rm phot}\), Noyes et al. (1984) used the following relation:
\[\log R_{\rm phot,N84}=-4.898+1.918(B-V)^{2}-2.893(B-V)^{3} \tag{10}\]
for \(0.44\leq(B-V)\leq 0.82\). Equation 8 and Eq. 10 are then combined to obtain the chromospheric flux excess,
\[R^{\prime}_{\rm HK}=R_{\rm HK}-R_{\rm phot,N84}. \tag{11}\]
### H-alpha
H\(\alpha\) can be measured in a similar way to \(R_{\rm HK}\) in Eq. 4, but substituting H\(\alpha\) for Ca ii H and K. The chromospheric flux ratio of H\(\alpha\) can then be defined as the surface flux subtracted by the photospheric flux, divided by the bolometric flux, so that
\[\frac{{\cal F}^{\prime}_{\rm H\alpha}}{{\cal F}_{\rm bol}}=\frac{{\cal F}_{\rm H \alpha}-{\cal F}_{\rm H\alpha,phot}}{\sigma T_{\rm eff}^{4}}. \tag{12}\]
In brief, we measure the surface flux of each line, \({\cal F}_{\rm line}\), by integrating the flux of the template spectrum, normalized to a stellar atmosphere model, inside an integration window centered on the line core. We measure the photospheric flux, \({\cal F}_{\rm line,phot}\), by integrating the flux of the stellar atmosphere model with the same integration window centered on the line core. The bolometric flux \({\cal F}_{\rm bol}\) is determined from \(T_{\rm eff}\), which is needed to obtain a proper stellar atmosphere model for a given star. We provide more detail in Sec. 3.
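Once the integrated fluxes are in hand, the measurement chain of Eqs. (4) and (12) reduces to a few arithmetic steps. A minimal sketch, with placeholder flux values rather than measured ones:

```python
# Sketch of Eqs. (4) and (12): chromospheric flux ratio from the integrated
# surface flux (template spectrum) and photospheric flux (PHOENIX model).
SIGMA = 5.6704e-5  # Stefan-Boltzmann constant in erg cm^-2 s^-1 K^-4

def chromospheric_ratio(f_surface, f_phot, t_eff):
    """(F_line - F_line,phot) / (sigma T_eff^4); negative => absorption."""
    return (f_surface - f_phot) / (SIGMA * t_eff ** 4)

# Placeholder fluxes in erg cm^-2 s^-1 for a 3300 K star.
print(chromospheric_ratio(f_surface=3.0e5, f_phot=1.0e5, t_eff=3300.0))
```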
## 3 Case study: HARPS M dwarf sample
We used high-resolution archival spectra obtained with the HARPS spectrograph (Pepe et al. 2002) installed on ESO's 3.6m telescope in La Silla, Chile. The sample mainly consists of 102 targets from the HARPS GTO M dwarf sample (Bonfils et al. 2013)1. We also used data obtained for the Cool Tiny Beats survey (Anglada-Escudé et al. 2014; Berdiñas et al. 2015)2, which adds the following four stars to the sample: GJ 160.2, GJ 180, GJ 570B, and GJ 317. Last, the following six M dwarfs with published planetary systems that are available in the ESO HARPS public archive were added: GJ 676A, GJ 1214, HIP 12961, GJ 163, GJ 3634, and GJ 3740. Photometry values, mean radial velocities, proper motions, and parallaxes were acquired from SIMBAD (Wenger et al. 2000) (see Table D.1).
Footnote 1: ESO IDs 082.C-0718 and 183.C-0437
Footnote 2: ESO ID 191.C-0505
### High S/N template spectra
We used the HARPS-TERRA software (Anglada-Escude & Butler 2012) for all available spectra on all stars in the sample. HARPS-TERRA is a sophisticated tool that matches individual
Figure 1: Model G2V spectra (yellow) and model M4V spectra (red). The flux scale of the G2V is displayed on the left, and the flux scale of the M4V is given on the right. The blue shaded area indicates the region of the Ca ii H and K lines.
spectra to a high S/N "template spectrum" using a least-squares fit to compute high-precision radial velocities. The high S/N template spectrum is obtained by coadding all individual spectra of a given star using the highest S/N spectrum as a basis spectrum. Pixel weighting, telluric masking, and outlier filtering are all implemented in the algorithm to assess and reduce systematic biases (see Sec. 2 in Anglada-Escude & Butler (2012) for a much more detailed explanation). Because of this technique, the template spectrum for a star is essentially an averaged spectrum with median clipping. For each star, we obtained a high S/N template spectrum of each spectral order. We then used the corresponding spectral order that contains the chromospheric line of interest as the stellar observation spectrum.
After the initial run of HARPS-TERRA on all spectra of each target, we ran HARPS-TERRA a second time, excluding spectra that matched any of the following criteria: 1) program ID 60.A-9036(A), where spectra were acquired under nonoptimal conditions (an engineering run), 2) spectra reduced with cross-correlation function (CCF) masks earlier than M2 (the HARPS DRS pipeline uses an M2 mask for all M stars), and 3) spectra with radial velocity (RV) outliers determined by inspection. In general, looking at the RV time series, spurious observation differences on the order of 1-10 km/s with respect to most observations are considered outliers. All three of the above criteria could negatively influence the resulting template spectra. After the second run of HARPS-TERRA, we corrected for the blaze function by running HARPS-TERRA a third time with the -useblaze option and the blaze file given by the ESO DRS BLAZE FILE keyword in the template spectrum file header. Finally, we obtained a blaze-corrected high S/N template spectrum for each star.
### Photospheric flux from stellar atmosphere models
To calculate photospheric fluxes, we used a grid of PHOENIX-ACES stellar atmosphere models from the Gottingen Spectral Library3 (Husser et al., 2013). The parameters and step sizes of the grid are listed in Table 1. We calculated \(\mathcal{F}_{\mathrm{HK,phot}}\) by setting a 1.09 Å FWHM triangular bandpass centered on the Ca ii H and K lines and summing the flux. The triangular bandpass mimics the response of the \(H\) and \(K\) bands of the HKP-2 Mount Wilson spectrograph (Duncan et al., 1991) (see Sec. 2.1). To measure the fractional chromospheric flux \(\mathcal{F}_{\mathrm{H\alpha}}^{\prime}/\mathcal{F}_{\mathrm{bol}}\), we used a 5.0 Å wide rectangular bandpass centered on H\(\alpha\).
Footnote 3: [http://phoenix.astro.physik.uni-goettingen.de/](http://phoenix.astro.physik.uni-goettingen.de/)
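For reference, the triangular bandpass integration described above can be sketched in a few lines. The wavelength grid and flux array are assumed inputs on a common grid, and the trapezoid rule is written out explicitly.

```python
# Sketch of integrating a model (or template) spectrum through a triangular
# bandpass of 1.09 A FWHM centred on a line core, mimicking the HKP-2
# response. `wave` (A) and `flux` are assumed NumPy arrays on a common grid.
import numpy as np

def triangular_band_flux(wave, flux, center, fwhm=1.09):
    # A triangle of FWHM w reaches zero at |dlambda| = w (half-weight at w/2).
    weights = np.clip(1.0 - np.abs(wave - center) / fwhm, 0.0, None)
    y = weights * flux
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wave)))  # trapezoid
```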
### Surface flux from a high S/N template spectrum
For a given star, the surface flux is measured from the high S/N template spectrum computed in Sec. 3.1. In the left panel of Fig. 2, we plot a single HARPS observation of the Ca ii K line of an M1.5 star. Because HARPS has a high resolution of \(R\sim 110,000\), the flux is dispersed over many pixels and the S/N in the region near the calcium line is low, as shown in the figure. The right panel of Fig. 2 shows the same line of the same star, but this time, with 47 observations coadded together. This demonstrates the considerable S/N improvement obtained by coadding spectra.
For a single chromospheric line of a given star, the entire template order spectrum is normalized to a PHOENIX model spectrum via a first-degree polynomial least-squares fit, namely \(\mathcal{F}(\lambda)=af(\lambda)+b\). The PHOENIX spectra were bilinearly interpolated to a precision of \(\Delta T_{\mathrm{eff}}=1\) K and \(\Delta\mathrm{[Fe/H]}=0.01\) dex using the stellar parameters chosen by the methods outlined in Sec. 3.5 and listed in Table D.2. For the Ca ii K line, we used order 6, for the Ca ii H line, we used order 8, and for H\(\alpha\), we used order 68. Before normalizing the template spectrum to the PHOENIX spectrum, we converted counts into energy, shifted the spectrum to rest wavelengths, and then transformed wavelengths to vacuum wavelengths. We masked the active lines, as well as H\(\epsilon\) at 3971.2 A. To reduce the influence of low S/N at the spectral order edges from the blaze function, we also masked the outer 10% of each order. We took the sum of the line flux in the same manner as in Sec. 3.2.
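A minimal sketch of this normalization step, assuming the template and the interpolated PHOENIX spectrum are already on a common wavelength grid and that `mask` flags the pixels kept for the fit (active lines and order edges excluded):

```python
# Sketch of the first-degree least-squares normalisation F(lambda) = a f + b,
# with masked pixels excluded from the fit via a boolean `mask` array.
import numpy as np

def normalise_template(template_flux, phoenix_flux, mask):
    A = np.vstack([template_flux[mask], np.ones(mask.sum())]).T
    (a, b), *_ = np.linalg.lstsq(A, phoenix_flux[mask], rcond=None)
    return a * template_flux + b     # template in physical flux units
```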
Figure 3 demonstrates this normalization of a high S/N template spectrum of an early-M dwarf normalized to a PHOENIX atmosphere model around the Ca ii K line. The high S/N template spectrum consists of all available spectra coadded and flux-calibrated to the PHOENIX model atmosphere. Similarly, Fig. 4 shows this normalization of an M dwarf with H\(\alpha\) in absorption, while Fig. 5 shows another M dwarf with H\(\alpha\) in emission.
### S-index comparison
As a sanity check, we compared the \(S\)-index values of our sample (for their values and the treatment of how they were calculated, we refer to Boro Saikia et al. (2018)) with those of Astudillo-Defru et al. (2017) in Fig. 6. The two agree overall, although the linear fit indicates that the values of Astudillo-Defru et al. (2017) are slightly higher than those of Boro Saikia et al. (2018) for \(S\) values below 2,
\[S_{\mathrm{AD17}}=1.0487S_{\mathrm{BS18}}+0.008. \tag{13}\]
### Stellar parameters
For an accurate stellar atmosphere model to normalize a template spectrum to, we must first determine a set of stellar parameters for each star in a self-consistent manner. Here we describe the
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & Min & Max & Step size \\ \hline
\(T_{\mathrm{eff}}\) [K] & 2300 & 7200 & 100 \\
\(\mathrm{[Fe/H]}\) & \(-1.0\) & \(+1.0\) & 0.5 \\
\(\log(g)\) & 4.0 & 5.0 & 0.5 \\ \hline
\end{tabular}
\end{table}
Table 1: PHOENIX grid parameter space
Figure 2: Ca ii K line of an M1.5 star. _Left_: Spectrum of a single HARPS observation. _Right_: Template spectrum consisting of 47 coadded spectra of the same star.
different calibrations we used to determine effective temperature and how we determined metallicity.
_Effective temperature._ For each star in the sample, we estimated \(T_{\rm eff}\) using three different calibrations. The first and second calibrations used the combined relation of spectral type, effective temperature, mass, and radius from Kenyon & Hartmann (1995) and Golimowski et al. (2004). This same combination of calibrations was used by Reiners & Basri (2007), Shulyak et al. (2011), and Reiners & Mohanty (2012). The first method simply converts spectral type into effective temperature. We denote effective temperatures obtained from this method with \(T_{\rm eff,SpT}\). The second method uses the \(M_{Ks}\)-\(\mathcal{M_{\star}}\) calibration of Delfosse et al. (2000) to first obtain a mass \(\mathcal{M_{\star}}\) from the absolute \(M_{Ks}\) magnitude, and then uses the former relation to convert \(\mathcal{M_{\star}}\) into \(T_{\rm eff}\). We denote effective temperatures obtained from this method with \(T_{\rm eff,M_{\star}}\). The third method adopts \(T_{\rm eff}\) values obtained by Maldonado et al. (2015) of 53 stars in the HARPS M dwarf GTO sample. We denote effective temperatures from this study with \(T_{\rm eff,M15}\).
_Metallicity._ Maldonado et al. (2015) also calculated metallicities of the same 53 targets using the pseudo-equivalent width (PEW) technique of Neves et al. (2014), who also calculated metallicities for the entire HARPS GTO M dwarf sample. Maldonado et al. (2015) reported that their results agreed well overall with those of Neves et al. (2014), and we therefore adopted the metallicities of Neves et al. (2014) for the sample completeness. Although effective temperatures were also calculated, the authors noted that their \(T_{\rm eff}\) values are systematically underestimated compared with other works; therefore, we did not adopt the \(T_{\rm eff}\) values of Neves et al. (2014). For two stars not listed in Neves et al. (2014) (GJ 570B and GJ 180), we used the conversion of \(M_{Ks}\) and \(V-K\) to [Fe/H] in Neves et al. (2012). We note that the difference in metallicities between Neves et al. (2012) and Maldonado et al. (2015) can be up to \(\Delta\)[Fe/H] \(\pm\) 0.2. However, this difference is smaller than the resolution of our grid of models, \(\Delta\)[Fe/H] \(\pm\) 0.5. For stars with no \(M_{Ks}\) or \(V-K_{S}\) measurements, we assumed solar metallicity. For all stars in the sample,
Figure 4: High S/N template spectrum of an M dwarf with H\(\alpha\) in absorption normalized to a PHOENIX atmosphere model. The solid red line is the high S/N template spectrum, while the dashed black line is the PHOENIX model atmosphere. The vertical dotted lines indicate a 5.0 Å integration region.
Figure 5: High S/N template spectrum of an active M dwarf with H\(\alpha\) in emission normalized to a PHOENIX atmosphere model. The solid red line is the high S/N template spectrum, while the dashed black line is the PHOENIX model atmosphere. The vertical dotted lines indicate the 5.0 Å integration region of H\(\alpha\).
Figure 3: High S/N template spectrum of an early-M dwarf normalized to a PHOENIX atmosphere model. The solid red line is the high S/N template spectrum, while the dashed black line is the PHOENIX model atmosphere. The blue shaded region shows the chromospheric emission. The dotted lines indicate the 1.09 Å FWHM triangular bandpass used to integrate the Ca ii H and K lines.
Figure 6: \(S_{\rm AD17}\) of Astudillo-Defru et al. (2017) vs. our work \(S_{\rm BS18}\) (Boro Saikia et al. 2018). _Top:_ Entire range of the sample. _Bottom:_ Same sample with zoomed-in axes. The dotted line shows the 1:1 relation.
we constrained the surface gravity to log(\(g\)) = 5.0, which is a typical value for M dwarfs. The stellar parameters we used are listed in Table D.2.
The top panel of Fig. 7 shows \(T_{\rm eff}\) of the three methods we used as a function of \(M_{K_{S}}\), with their metallicity represented by color. For \(T_{\rm eff,SpT}\), we were able to calibrate 110 stars. With \(T_{\rm eff,\,M_{\star}}\), we were able to calibrate 99 stars, and for \(T_{\rm eff,\,M15}\), there are 49 stars. Temperatures of the same star are connected with a solid vertical line. In the lower panel of Fig. 7, we show the residuals of the different sources of \(T_{\rm eff}\). As is evident from both panels, the \(T_{\rm eff}\) determined by different methods disagrees to some extent, and this becomes much more pronounced for stars with \(M_{K_{S}}>8\). The mean of all \(\Delta T_{\rm eff}\) values is 176 K. For \(\Delta T_{\rm eff}\) values with \(M_{K_{S}}<8\), the mean difference drops to 122 K. The mean of \(\Delta T_{\rm eff}\) values for \(M_{K_{S}}>8\) is 363 K. Most of these values with \(M_{K_{S}}>8\) belong to \(T_{\rm eff,\,M_{\star}}-T_{\rm eff,\,SpT}\).
We note that metallicity has an effect on the \(T_{\rm eff}\) determination. The largest scatter in \(\Delta T_{\rm eff}\) is between \(T_{\rm eff,\,SpT}\) and \(T_{\rm eff,\,M15}\). The determination of the spectral type depends on line ratios that are sensitive to both \(T_{\rm eff}\) and \({\rm[Fe/H]}\). Moreover, Maldonado et al. (2015) determined \(T_{\rm eff}\) simultaneously with metallicity. However, metallicity does not have much impact when determining \(T_{\rm eff}\) from \(M_{K_{S}}\)-\(\mathcal{M_{\star}}\), as infrared absolute magnitudes are less sensitive to metallicity (Delfosse et al., 2000). For \(T_{\rm eff,\,M_{\star}}\), since \(\mathcal{M_{\star}}\) is determined from a polynomial as a function of \(M_{K_{S}}\), we expect to see a clear relation between \(T_{\rm eff}\) and \(M_{K_{S}}\). Whether \(M_{K_{S}}\) actually is such a precise indicator of \(T_{\rm eff}\) is beyond the scope of this study. Regardless, accurate effective temperatures of M dwarfs remain elusive; see Mann et al. (2015) and Passegger et al. (2016) for more thorough analyses of the current state of M dwarf stellar parameter determination.
## 4 Results
### Chromospheric Ca ii H and K flux
We plot the chromospheric Ca ii H and K flux normalized to bolometric flux, \(\log R^{\prime}_{\rm HK}\), as a function of absolute magnitude \(M_{K_{S}}\) in Fig. 8. \(\log R^{\prime}_{\rm HK}\) was calculated using the spectral subtraction technique outlined in Sec. 3, following Eq. 4. We estimated \(T_{\rm eff}\) using the three different methods described in Sec. 3.5 and, for each \(T_{\rm eff}\), calculated \(\log R^{\prime}_{\rm HK}\). The plot legend contains the colors of each \(T_{\rm eff}\) calibration. We connect \(\log R^{\prime}_{\rm HK}\) measurements of the same star but different effective temperatures with a solid vertical line. Additionally, we plot stars with known planetary systems with open symbols, and stars without known planetary systems with closed symbols. Stars without H\(\alpha\) in emission are plotted with a circle and stars exhibiting H\(\alpha\) emission with a triangle.
#### 4.1.1 \(T_{\rm eff,\,SpT}\) calibration
For 110 stars we have spectral type information and are able to measure \(\log R^{\prime}_{\rm HK}\) using the spectral type to \(T_{\rm eff}\) conversion, \(T_{\rm eff,\,SpT}\). Of these stars, 13 exhibit H\(\alpha\) emission, and 19 have known planetary systems. The earliest-M dwarfs with \(M_{K_{S}}<5.8\) have higher values of \(\log R^{\prime}_{\rm HK}\), between -4.8 and -4.5. There is a drop in the lower boundary of activity levels near \(M_{K_{S}}=5.7\), where values range from \(-5.3<\log R^{\prime}_{\rm HK}<-4.7\). After this drop, the sequence of lower-activity stars has no apparent further drop, and \(\log R^{\prime}_{\rm HK}\) stays between -5.5 and -4.9. Stars exhibiting H\(\alpha\) emission tend to have much higher \(\log R^{\prime}_{\rm HK}\) values than those that do not exhibit H\(\alpha\) emission. Their measured activity levels are \(\log R^{\prime}_{\rm HK}>-4.7\) and can be as high as \(\log R^{\prime}_{\rm HK}=-3.8\).
#### 4.1.2 \(T_{\rm eff,\,M_{\star}}\) calibration
For 99 stars, we have \(M_{K_{S}}\) measurements and can measure \(\log R^{\prime}_{\rm HK}\) using the \(M_{K_{S}}\) to mass to \(T_{\rm eff}\) calibration \(T_{\rm eff,\,M_{\star}}\). Of these stars, 13 exhibit H\(\alpha\) emission, and 16 have known planetary systems. Unlike \(\log R^{\prime}_{\rm HK}\) measured with \(T_{\rm eff,\,SpT}\), the lower boundary of activity for \(T_{\rm eff,\,M_{\star}}\) decreases with \(M_{K_{S}}\). This relation is expected because in this case, \(T_{\rm eff,\,M_{\star}}\) is calibrated using \(M_{K_{S}}\). This does not have a dramatic effect on \(\log R^{\prime}_{\rm HK}\) values until \(M_{K_{S}}=8\). At higher \(M_{K_{S}}\), the difference in \(\log R^{\prime}_{\rm HK}\) can be as high as 1.3 dex, and this arises from the large differences in \(T_{\rm eff}\) seen in Fig. 7. However, even though the measured values of \(\log R^{\prime}_{\rm HK}\) using \(T_{\rm eff,\,M_{\star}}\) are much lower, stars with H\(\alpha\) in emission still have significantly higher \(\log R^{\prime}_{\rm HK}\) values than their counterparts.
#### 4.1.3 \(T_{\rm eff,\,M15}\) calibration
For 49 stars with adopted values from Maldonado et al. (2015), we are able to measure \(\log R^{\prime}_{\rm HK}\) using \(T_{\rm eff,\,M15}\). Of these stars, 4 exhibit H\(\alpha\) emission, and 9 have known planetary systems. Similar to the other \(T_{\rm eff}\) calibrations, the lower level of \(\log R^{\prime}_{\rm HK}\) decreases from -4.5 to -5.4 between \(5<M_{K_{S}}<6.4\). There are only three stars with \(M_{K_{S}}\gtrsim 8\), and they tend to agree more with the \(T_{\rm eff,\,SpT}\) values than with \(T_{\rm eff,\,M_{\star}}\).
Figure 7: Effective temperature \(T_{\rm eff}\) of the stellar sample using different \(T_{\rm eff}\) determination methods, which are shown by different plot symbols. _Top_: Effective temperature \(T_{\rm eff}\) as a function of absolute magnitude \(M_{K_{S}}\) with metallicity [Fe/H] shown according to the color scale. Temperatures of the same star are connected with a solid vertical line. _Bottom_: Difference of the methods used to obtain \(T_{\rm eff}\) in K.
We list the individual \(\log R^{\prime}_{\rm HK}\) measurements in Table D.3. The largest differences of \(\log R^{\prime}_{\rm HK}\) values occur at the latest spectral types, where it is difficult to determine a consistent \(T_{\rm eff}\) using a color, magnitude, or spectral type relation. For the entire sequence of stars, the mean of \(\Delta\log R^{\prime}_{\rm HK}\) is 0.17 dex. At \(M_{K_{\rm s}}<8\), the mean of \(\Delta\log R^{\prime}_{\rm HK}\) is only 0.10 dex; however, for \(M_{K_{\rm s}}>8\), the mean is 0.56 dex. The star with the highest difference in \(\log R^{\prime}_{\rm HK}\) is GJ 1002, which has \(\Delta\log R^{\prime}_{\rm HK}=1.31\) dex. This is because of its wide range of \(T_{\rm eff}\) calibrations, with \(\Delta T_{\rm eff}=534\) K.
### Chromospheric H-alpha flux
In Fig. 9, we correct for the photospheric contribution and plot the chromospheric flux ratio of H\(\alpha\), \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\). This is not on a logarithmic scale, unlike how we plot log \(R^{\prime}_{\rm HK}\). This is because the sign of \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\) indicates whether the line is in emission or absorption. Values of \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}~{}\sim~{}0\) indicate that H\(\alpha\) is filled to the continuum. Values \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}~{}>~{}0\) indicate that H\(\alpha\) is in emission, as in Fig. 5, while values \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}~{}<~{}0\) indicate that H\(\alpha\) is in absorption as in Fig. 4. The plotting colors and symbols are the same as used in Fig. 8.
For stars with H\(\alpha\) in absorption, \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\) increases toward a filling-in of the continuum toward larger \(M_{K_{\rm s}}\). The stars in which H\(\alpha\) is in emission tend to have significantly higher \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\) than stars for which H\(\alpha\) is near the continuum level. We note that the absorption of H\(\alpha\) stays in a relatively narrow range, with values \(-0.6\cdot 10^{-4}<{\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\leq 0\). However, for stars in which H\(\alpha\) is in emission, the range of \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\) is on the order of several \(10^{-4}\), with larger \(\Delta{\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\) arising from the different \(T_{\rm eff}\) calibrations. Stars for which H\(\alpha\) is in emission also have significantly higher values of \(R^{\prime}_{\rm HK}\) than those in which H\(\alpha\) is not in emission.
Figure 8: Fractional chromospheric Ca ii H and K flux normalized to bolometric flux on a logarithmic scale, \(\log R^{\prime}_{\rm HK}\), as a function of absolute magnitude \(M_{K_{\rm s}}\). For each star, \(\log R^{\prime}_{\rm HK}\) is calculated with different \(T_{\rm eff}\) calibrations, each plotted with a different color. Vertical gray lines connect different measurements of the same star. Triangles indicate stars exhibiting H\(\alpha\) emission. Filled symbols indicate stars without known planetary systems, and open symbols indicate stars with known planetary systems.
Figure 9: Fractional chromospheric H\(\alpha\) flux, \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\), as a function of absolute magnitude \(M_{K_{\rm s}}\). Vertical gray lines connect different measurements of the same star. Triangles indicate stars exhibiting H\(\alpha\) emission. Filled symbols indicate stars without known planetary systems, and open symbols indicate stars with known planetary systems. Values \(>0\) indicate that H\(\alpha\) is in emission, and values \(<0\) indicate that H\(\alpha\) is in absorption. Values near 0 indicate filling-in of the line to the continuum.
### H-alpha versus calcium II H and K flux
We plot chromospheric flux values of H\(\alpha\) against Ca ii H and K in Figs. 10 and 11. Figure 10 contains flux in absolute flux units, and therefore contains negative values of H\(\alpha\) that correspond to absorption. Figure 11 shows flux on a log-scale and excludes inactive stars with H\(\alpha\) in absorption. Symbols and markers are the same as in Figures 8 and 9.
Fig. 10 shows that for stars with H\(\alpha\) in absorption (\(\mathcal{F}^{\prime}_{\rm H\alpha}\lesssim 0\)), a decreasing trend is apparent, where H\(\alpha\) goes deeper into absorption with increasing \(\mathcal{F}^{\prime}_{\rm HK}\). The trend appears to be linear from \(0.0\leq\mathcal{F}^{\prime}_{\rm HK}\leq 4.0\cdot 10^{-5}\) erg cm\({}^{-2}\) s\({}^{-1}\). When H\(\alpha\) is in emission, the trend is more sensitive for H\(\alpha\) than for Ca ii H and K. This deepening of the H\(\alpha\) line before filling in and going into emission was reported by Walkowicz & Hawley (2009) and Scandariato et al. (2017). Scandariato et al. (2017) only observed a decreasing trend from \(0.0\leq\mathcal{F}^{\prime}_{\rm HK}\leq 1.0\). In Fig. 11, we overplot as a dashed red line a linear fit that we find to be
\[\log\mathcal{F}^{{}^{\prime}}_{\rm H\alpha}=0.7571\log\mathcal{F}^{{}^{\prime}} _{\rm HK}+1.6695, \tag{14}\]
with \(R^{2}=0.706\).
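The fit itself is a standard least squares in log-log space; a sketch with hypothetical flux arrays for the emission-line stars:

```python
# Sketch of the linear fit of Eq. (14) in log-log space, including R^2.
# `f_hk` and `f_ha` are hypothetical arrays of chromospheric fluxes.
import numpy as np

def fit_loglog(f_hk, f_ha):
    x, y = np.log10(f_hk), np.log10(f_ha)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2
```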
### Ca II H and K surface flux calibrations
We compared calibrations of the continuum conversion factor \(C_{\rm cf}\) (used for measuring \(F_{\rm HK}\)) from other studies with \(C_{\rm cf}\) calibrations using the PHOENIX model grid. To derive our \(T_{\rm eff}\)-dependent conversion factor \(C_{\rm cf}\), we combined Eq. 2, Eq. 3, and \(S=8\alpha\,(\mathcal{F}_{\rm HK}/\mathcal{F}_{\rm RV})\), to arrive at
\[C_{\rm cf}=\frac{10^{14}}{8\alpha K}\frac{\mathcal{F}_{\rm RV}}{T^{4}_{\rm eff }}. \tag{15}\]
Using values of \(\alpha=2.4\) and \(K=1.07\cdot 10^{6}\) erg cm\({}^{-2}\) s\({}^{-1}\) (Hall et al. 2007), we then obtained
\[C_{\rm cf}=4.8676\cdot 10^{6}\cdot\frac{\mathcal{F}_{\rm RV}}{T^{4}_{\rm eff }}. \tag{16}\]
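The numerical prefactor follows directly from the quoted constants, as this one-line check shows:

```python
# Consistency check of the prefactor in Eq. (16): 1e14 / (8 alpha K).
alpha, K = 2.4, 1.07e6               # K in erg cm^-2 s^-1 (Hall et al. 2007)
print(1e14 / (8 * alpha * K))        # ~4.8676e6, as quoted in Eq. (16)
```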
The top panel of Fig. 12 shows our computed \(\log C_{\rm cf}\) as a function of \(T_{\rm eff}\). Different values of [Fe/H] are indicated by color, and different values of \(\log(g)\) are plotted with different symbols. For \(T_{\rm eff}>5000\) K, we used \(\log(g)\) values of 4.0 and 4.5, and for \(T_{\rm eff}\leq 5000\) K, we used \(\log(g)\) values of 4.5 and 5.0. A fifth-order polynomial,
\[\log C_{\rm cf}=a+bT_{\rm eff}+cT^{2}_{\rm eff}+dT^{3}_{\rm eff}+eT^{4}_{\rm eff }+fT^{5}_{\rm eff}, \tag{17}\]
was fit to all of the points and overplotted as a solid red line. The coefficients of this polynomial are listed in Table 2. We provide the computed PHOENIX stellar atmosphere \(\log C_{\rm cf}\) values in Table D.4 for the entire model grid.
Even for the wide range of metallicity values, the spread of \(\log C_{\rm cf}\) remains small for \(T_{\rm eff}>3400\) K. For these higher temperatures, \(\log C_{\rm cf}\) varies by 0.25 dex at most. At \(T_{\rm eff}<3100\) K, metallicity begins to contribute to a larger spread of \(\log C_{\rm cf}\) values, around 0.5 dex. This spread of \(\log C_{\rm cf}\) increases to 1.4 dex as \(T_{\rm eff}\) decreases to 2300 K.
The bottom panel of Fig. 12 compares the polynomial fit in the upper panel with previously published \(\log C_{\rm cf}\)-\(T_{\rm eff}\) relations (see Sec. A for details about the relations). For the relations of the other authors, we plot the range for which the relations are calibrated. Our \(\log C_{\rm cf}\) values agree with those from other studies to within \(\sim 0.1\) dex for \(T_{\rm eff}\geq 4100\) K. At temperatures cooler than 4100 K, our \(\log C_{\rm cf}\) values start to deviate from those of Middelkoop (1982), Rutten (1984), Cincunegui et al. (2007), Suárez Mascareño et al. (2015), and Astudillo-Defru et al. (2017); our values are higher for cooler temperatures. For temperatures cooler than \(T_{\rm eff}=4100\) K, \(\log C_{\rm cf}\) diverges by more than 0.2 dex. The difference between our relation and Cincunegui et al. (2007) increases to 0.4 dex at \(T_{\rm eff}=3100\) K, while the difference between our relation and both Suárez
\begin{table}
\begin{tabular}{c c} \hline \hline
Parameter & Value \\ \hline
a & -2.9679E+01 \\
b & 2.6864E-02 \\
c & -1.0268E-05 \\
d & 1.9866E-09 \\
e & -1.9017E-13 \\
f & 7.1548E-18 \\ \hline
\end{tabular}
\end{table}
Table 2: \(\log C_{\rm cf}\) vs. \(T_{\rm eff}\) fifth-order polynomial fit coefficients
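Evaluating the fit is a single polynomial evaluation; note that the terms of Eq. (17) largely cancel, so the printed value is sensitive to the rounding of the Table 2 coefficients.

```python
# Sketch: evaluate the fifth-order fit of Eq. (17) with the Table 2
# coefficients (highest power first for np.polyval). Beware of the strong
# cancellation between terms when reproducing quoted values.
import numpy as np

coeffs = [7.1548e-18, -1.9017e-13, 1.9866e-9, -1.0268e-5, 2.6864e-2, -2.9679e1]
print(np.polyval(coeffs, 5777.0))    # log C_cf at a solar-like T_eff
```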
Figure 11: Fractional chromospheric H\(\alpha\) flux, \(\mathcal{F}^{\prime}_{\rm H\alpha}\), as a function of fractional chromospheric Ca ii H and K flux, \(\mathcal{F}^{\prime}_{\rm HK}\), both on a logarithmic scale. Filled symbols indicate stars without known planetary systems, and open symbols indicate stars with known planetary systems. In this plot, only stars with H\(\alpha\) in emission (\(>0\)) are shown. A linear fit is shown with a dashed red line, and a 1:1 relation is plotted as a dotted gray line.
Figure 10: Fractional chromospheric H\(\alpha\) flux, \(\mathcal{F}^{\prime}_{\rm H\alpha}\), as a function of fractional chromospheric Ca ii H and K flux, \(\mathcal{F}^{\prime}_{\rm HK}\). Filled symbols indicate stars without known planetary systems, and open symbols indicate stars with known planetary systems. Values \(>0\) indicate that H\(\alpha\) is in emission, and values \(<0\) indicate that H\(\alpha\) is in absorption. Values near 0 indicate filling-in of the line to the continuum.
Mascareño et al. (2015) and Astudillo-Defru et al. (2017) increases to 0.8 dex at \(T_{\rm eff}=3000\) K. This discrepancy in \(\log C_{\rm cf}\) might be attributed to the use of empirical calibrations in these studies, as opposed to the stellar models used in this work (see Sec. 5).
As an example, when we take for the Sun \(B-V=0.642\) and \(S_{\rm MWO}=0.164\), the Noyes et al. (1984) calibration of \(R_{\rm HK}\) gives us \(\log R_{\rm HK}=-4.62\). Our calibration using Eq. 17 results in \(\log R_{\rm HK}=-4.60\). This means that for a Sun-like star, \(\Delta\log R_{\rm HK}=0.02\).
### Ca II H and K photospheric flux calibrations
In the top panel of Fig. 13, we plot \(\log R_{\rm HK,\,phot}\) as a function of \(T_{\rm eff}\), computed as described in Sec. 3.2. The colors and symbols are assigned in the same manner as in Fig. 12. A fifth-order polynomial fit
\[\log R_{\rm HK,\,phot}=a+bT_{\rm eff}+cT_{\rm eff}^{2}+dT_{\rm eff}^{3}+eT_{ \rm eff}^{4}+fT_{\rm eff}^{5}, \tag{18}\]
is overplotted as a solid red line, and the coefficients of this polynomial are listed in Table 3. Similarly to \(\log C_{\rm cf}\), metallicity has an effect on the spread of \(\log R_{\rm HK,\,phot}\) values. The spread increases at \(4000\leq T_{\rm eff}\leq 5100\) K and \(T_{\rm eff}\leq 3800\) K. At \(T_{\rm eff}>5100\) K, the spread of \(\log R_{\rm HK,\,phot}\) values is lower than 0.2 dex. The spread increases from \(T_{\rm eff}=5100\) K to \(T_{\rm eff}=4200\) K, where it is about 0.4 dex. At \(T_{\rm eff}=3100\) K, the spread of \(\log R_{\rm HK,\,phot}\) is 0.6 dex. Here, a change in metallicity of \(\pm 0.5\) dex can result in \(\Delta\log R_{\rm HK,\,phot}\sim 0.1\) dex. At \(T_{\rm eff}<3300\) K, the spread continually increases with lower temperatures to almost 1.6 dex at \(T_{\rm eff}=2400\) K. At \(T_{\rm eff}=2400\) K, a change in metallicity of \(\pm 0.5\) can result in \(\Delta\log R_{\rm HK,\,phot}\sim 0.5\) dex. We provide the computed PHOENIX stellar atmosphere \(\log R_{\rm HK,\,phot}\) values in Table D.4 for the entire model grid.
In the bottom panel of Fig. 13, we compare our \(\log R_{\rm HK,\,phot}\)-\(T_{\rm eff}\) polynomial fit with previous literature relations (see Sec. B for details about the relations). The higher \(\log R_{\rm HK,\,phot}\) values of both Mittag et al. (2013) and our work in comparison with Noyes et al. (1984) and Suárez Mascareño et al. (2015) are clear. The difference between Mittag et al. (2013) and Noyes et al. (1984) is \(\sim 0.3\) dex, and the difference between our work and Noyes et al. (1984) ranges from 0.4 to 0.5 dex. The difference between our work and Suárez Mascareño et al. (2015) can be as large as 1.3 dex at \(T_{\rm eff}=3000\) K. Our values of \(\log R_{\rm HK,\,phot}\) agree very well with those of Astudillo-Defru et al. (2017), with the largest difference being 0.12 dex at \(T_{\rm eff}=4800\) K, and other differences on the order of 0.1 dex at \(T_{\rm eff}=3500\) K, 3400 K, and 3100 K.
Our \(\log R_{\rm HK,phot}\) calibration has much higher values than the calibration used by Noyes et al. (1984). This systematic offset was also observed by Astudillo-Defru et al. (2017), who also used synthetic spectra to obtain an \(R_{\rm HK,phot}\) calibration. While the exact reason for this offset is not known (see the discussion in Astudillo-Defru et al. (2017)), an offset correction can be
Figure 12: Different \(\log C_{\rm cf}\) calibrations as a function of \(T_{\rm eff}\). The approximate spectral type is shown on the top axis. _Top_: PHOENIX stellar atmosphere models with different [Fe/H] and log(\(g\)), where [Fe/H] is indicated by color and log(\(g\)) indicated by symbol. The solid red line indicates a fifth-order polynomial fit. _Bottom_: Calibrations of \(\log C_{\rm cf}\) from other works where only the valid calibration region is plotted. Overplotted in red is the same fifth-order polynomial fit from the top panel.
Figure 13: Different \(\log R_{\rm HK,\,phot}\) as a function of \(T_{\rm eff}\). The approximate spectral type is shown on the top axis. _Top_: PHOENIX stellar atmosphere models with different [Fe/H] and log(\(g\)), where [Fe/H] is indicated by color and log(\(g\)) indicated by symbol. The solid red line indicates a fifth-order polynomial fit. _Bottom_: Calibrations of \(\log R_{\rm HK,\,phot}\) from other works where only the valid calibration region is plotted. Overplotted in red is the same fifth-order polynomial fit from the top panel.
plied to scale our \(\log R_{\rm HK,phot}\) calibration to Noyes et al. (1984),
\[\log R_{\rm phot,N84}=\log R_{\rm phot,\,ours}-0.4612, \tag{19}\]
where 0.4612 is the offset correction. This simple offset correction scales our \(\log R_{\rm HK,phot}\) values to Noyes et al. (1984) values in their valid calibration region, so that our calibration can be used to obtain comparable results to Noyes et al. (1984), and also to extend the calibration to later-type stars. We note that although they are widely used, the Noyes et al. (1984) calibrations are only calibrated in the range 5300 K \(\lesssim T_{\rm eff}\lesssim 6300\) K.
If we take the same \(B-V\) and \(S_{\rm MWO}\) values as in Sec. 4.4, the Noyes et al. (1984) calibration of \(R_{\rm HK,phot}\) gives \(\log R_{\rm HK,\,phot}=-4.92\), resulting in an activity measurement of \(\log R^{\prime}_{\rm HK}=-4.92\). Our calibration using Eq. 19 results in \(\log R_{\rm HK,phot}=-4.89\), which gives \(\log R^{\prime}_{\rm HK}=-4.91\). Then, for a Sun-like star, \(\Delta\log R_{\rm HK,\,phot}=0.03\) and \(\Delta\log R^{\prime}_{\rm HK}=0.01\).
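To make Eqs. 18 and 19 concrete, the short Python sketch below evaluates the fifth-order polynomial with the Table 3 coefficients and applies the offset correction. The solar-like \(T_{\rm eff}\) of 5770 K used here is an assumed illustration value, not taken from this work.

```python
import numpy as np

# Coefficients a--f of the fifth-order polynomial fit (Eq. 18, Table 3)
COEFFS = [-3.7550e+01, 3.2131e-02, -1.3177e-05,
          2.7133e-09, -2.7466e-13, 1.0887e-17]

def log_r_hk_phot(t_eff):
    """Evaluate log R_HK,phot as a fifth-order polynomial in T_eff (Eq. 18)."""
    # np.polyval expects the highest-degree coefficient first, hence the reversal
    return np.polyval(COEFFS[::-1], t_eff)

def log_r_hk_phot_n84(t_eff):
    """Scale to the Noyes et al. (1984) system with the 0.4612 dex offset (Eq. 19)."""
    return log_r_hk_phot(t_eff) - 0.4612

print(log_r_hk_phot(5770.0))      # ~ -4.43
print(log_r_hk_phot_n84(5770.0))  # ~ -4.9, close to the solar value quoted above
```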
### \(\chi\)-factor
The \(\chi\)-factor provides a method to convert the equivalent width of Ca ii H and K into \(R_{\rm HK}\) and H\(\alpha\) into \({\cal F}_{\rm H\alpha}/{\cal F}_{\rm bol}\) in M dwarfs. Walkowicz et al. (2004) define the \(\chi\)-factor as the ratio of the H\(\alpha\) line continuum luminosity to the bolometric luminosity, namely
\[\chi=L_{\rm H\alpha,\,cont}/L_{\rm bol}, \tag{20}\]
where \(L_{\rm H\alpha,\,cont}\) is the luminosity of a selected continuum region near H\(\alpha\). West & Hawley (2008) extend the selection of \(\chi\) values to higher-order Balmer lines and Ca ii H and K. When multiplied by the equivalent width of the respective line, we can obtain the ratio of the line luminosity to the bolometric luminosity,
\[L_{\rm line}/L_{\rm bol}=\chi_{\rm line}\cdot{\rm EW}_{\rm line}. \tag{21}\]
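As a minimal illustration of Eq. 21 (the \(\chi_{\rm line}\) values themselves come from the grid described below or from West & Hawley 2008; the numbers used here are purely hypothetical):

```python
def line_to_bol(chi_line, ew_line):
    """L_line / L_bol = chi_line * EW_line (Eq. 21); EW in Angstrom."""
    return chi_line * ew_line

# Hypothetical mid-M dwarf values, for illustration only
print(line_to_bol(chi_line=1.0e-5, ew_line=2.0))  # -> 2.0e-05
```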
For both Ca ii H and K and H\(\alpha\), we calculated the \(\chi\) values of West & Hawley (2008) on the PHOENIX model grid for M dwarf \(T_{\rm eff}\) values and different metallicities, fixing \(\log(g)=5.0\). For Ca ii H and K, we used the continuum regions given by Walkowicz & Hawley (2009), and for H\(\alpha\), we used the continuum regions given by West & Hawley (2008). We plot these together with the values listed in West & Hawley (2008) in Fig. 14, with Ca ii H and K in the top panel and H\(\alpha\) in the lower panel. We provide the computed \(\chi_{\rm line}\) values in Table 5.
Similar to Fig. 12, the continuum values diverge as \(T_{\rm eff}\) decreases for different metallicities. For \(\chi_{\rm CaHK}\), a metallicity of \({\rm[Fe/H]}=-0.5\) agrees best with West & Hawley (2008). We note that all of our \(\chi_{\rm CaHK}\) values are higher for M3 and M4 spectral types. However, this difference is very small (on the order of \(10^{-7}\)), and the error bars of West & Hawley (2008) are much smaller than the spread of \(\chi_{\rm CaHK}\) by metallicity. Our \(\chi_{\rm H\alpha}\) values also agree well with West & Hawley (2008), especially for \({\rm[Fe/H]}=0.0\). Our \(\chi_{\rm H\alpha}\) values deviate from West & Hawley (2008) at spectral types M4 and earlier. However, as with \(\chi_{\rm CaHK}\), the difference is small, on the order of \(10^{-6}\).
### Proxima Centauri
Proxima Centauri, or GJ 551, is the only star in this study that exhibits H\(\alpha\) emission and is a known planet host (Anglada-Escude et al. 2016). This indicator of high activity is consistent with the findings of Ribas et al. (2016), who reported that Proxima b receives ten times more far-UV flux than the current Earth.
Except for the \(T_{\rm eff,\,M15}\) calibration of Proxima Centauri (\(\log R_{\rm HK}^{\prime}=-3.92\), \(T_{\rm eff,\,M15}=3555\) K), we did not measure \(\log R_{\rm HK}^{\prime}>-4.5\) for any planet hosts. However, this particular value may be overestimated. For Proxima Centauri, Boyajian et al. (2012) reported \(T_{\rm eff}\sim 3050\) K, while the \(T_{\rm eff,\,SpT}\) calibration gives 2900 K, which yields \(\log R_{\rm HK}^{\prime}=-4.59\). The similarity between these two temperatures means that \(\log R_{\rm HK}^{\prime}\) of Proxima Centauri probably sits closer to \(\sim-4.5\) than \(\sim-3.9\).
## 5 Discussion
With the exception of Proxima Centauri, which exhibits H\(\alpha\) in emission, the remaining known planet hosts exhibit low activity. This is expected, as activity can mimic planetary signals and cause incorrect planet detections, so that stars with lower activity are preferred for planet searches.
Figure 14: _Top_: Continuum flux normalized to bolometric flux of Ca ii H and K, \(\chi_{\rm CaHK}\), as a function of \(T_{\rm eff}\) using PHOENIX stellar atmospheres with \(\log(g)=5.0\) and different metallicities. Values from West & Hawley (2008) are plotted in gray. _Bottom_: Continuum flux normalized to bolometric flux of H\(\alpha\), \(\chi_{\rm H\alpha}\), as a function of \(T_{\rm eff}\) using PHOENIX stellar atmospheres with \(\log(g)=5.0\) and different metallicities. Values from West & Hawley (2008) are plotted in gray.
\begin{table}
\begin{tabular}{c r} \hline \hline Parameter & Value \\ \hline a & -3.7550E+01 \\ b & 3.2131E-02 \\ c & -1.3177E-05 \\ d & 2.7133E-09 \\ e & -2.7466E-13 \\ f & 1.0887E-17 \\ \hline \end{tabular}
\end{table}
Table 3: \(\log R_{\rm HK,phot}\) vs. \(T_{\rm eff}\) fifth-order polynomial fit coefficients
There is a divergence of \(\log C_{\rm cf}\) values between our work and other works that begins at the start of the M dwarf sequence near 4000 K. Middelkoop (1982), Rutten (1984), Cincunegui et al. (2007), Suarez Mascareno et al. (2015), and Astudillo-Defru et al. (2017) used observational data to calibrate \(\log C_{\rm cf}\) as a function of \(B-V\), while we used stellar models as a function of \(T_{\rm eff}\). The discrepancy might be due to an insufficient \((B-V)-T_{\rm eff}\) relation in this region. However, our \(\chi_{\rm CaHK}\) values agree with those of West & Hawley (2008), whose relations were also derived from observational data. This gives us confidence that the PHOENIX models are accurate in the continuum near the Ca ii H and K lines, although the continuum region of \(\chi_{\rm CaHK}\) differs from the region used in \(\log C_{\rm cf}\).
Our \(\log R_{\rm HK,\,phot}\) values remain higher than \(\log R_{\rm HK,\,phot}\) values obtained using observational data (Noyes et al. 1984; Suarez Mascareno et al. 2016). As noted by Mittag et al. (2013), this difference arises from the integration region of the calcium line. Our work and Mittag et al. (2013) both integrated the entire photospheric line. The \(\log R_{\rm HK,\,phot}\) relation used by Noyes et al. (1984) is that of Hartmann et al. (1984), who only estimated the flux outside of the line core exterior to the H1 and K1 points: they assumed the flux of the line core to be zero. The \(\log R_{\rm HK,\,phot}\) relation derived by Suarez Mascareno et al. (2015) also masks the line core: 0.7 A for FGK stars, and 0.4 A for M stars. For this reason, we provide the correction factor in Eq. 19 to keep the calculated \(R^{\prime}_{\rm HK}\) values consistent with historical published measurements based on Noyes et al. (1984). Moreover, although both use PHOENIX models, our \(\log R_{\rm HK,\,phot}\) values are still higher than those of Mittag et al. (2013). One reason for this difference might be the models that were used; Mittag et al. (2013) used models computed in non-local thermodynamic equilibrium, while we used models computed in local thermodynamic equilibrium.
Astudillo-Defru et al. (2017) measured \(R^{\prime}_{\rm HK}\) of 403 stars of the HARPS M dwarf sample. They extended the technique of Noyes et al. (1984) by calibrating their own \(\log C_{\rm cf}\) using 14 G or K and M dwarf pairs and used BT-Settl models to arrive at \(R_{\rm HK,\,phot}\) through an S-index conversion. Although we used different methods to arrive at the measurement of \(R^{\prime}_{\rm HK}\) in M dwarfs, we find our results to be consistent with Astudillo-Defru et al. (2017). Namely, our Fig. 8 exhibits a very similar lower envelope of \(R^{\prime}_{\rm HK}\) values to their Fig. 10. For brighter \(M_{K_{S}}\) values (earlier-M dwarf types), we find a relatively constant lower envelope of \(R^{\prime}_{\rm HK}\) values (see Fig. 8), while Astudillo-Defru et al. (2017) also reported a constant lower envelope of \(R^{\prime}_{\rm HK}\) for the higher M dwarf masses (see the lower panel of their Fig. 10). As \(M_{K_{S}}\) increases and M dwarf types move to later types, the lower envelope of \(R^{\prime}_{\rm HK}\) values begins to decrease. Finally, this lower envelope flattens out again for lower masses, or later spectral types. Mittag et al. (2013) similarly reported that the initially constant lower envelope was followed by a decreasing lower envelope. A key difference in our findings is that the technique used to derive \(T_{\rm eff}\) influences the extent of the decreasing lower envelope and then the level of the constant envelope for the lowest masses. However, each method individually still exhibits this behavior.
Surveys that focus on determining stellar parameters (e.g., Maldonado et al. 2015) are the more reliable source of such parameters when a given star is included in that survey. Moreover, when the S-index of a given star is the sole measurement and no spectra are accessible, we recommend the use of Eq. 17 and Eq. 18 to calculate \(R^{\prime}_{\rm HK}\).
## 6 Summary and conclusions
In this work, we have measured Ca ii H and K and H\(\alpha\) activity in a large sample of HARPS M dwarf spectra using high S/N template spectra and PHOENIX model atmospheres. We compared three different \(T_{\rm eff}\) calibrations and found \(\Delta T_{\rm eff}\sim\) several 100 K for mid- to late-M dwarfs. This uncertainty in \(T_{\rm eff}\) contributes up to \(\Delta\log R^{\prime}_{\rm HK}=1.31\) dex and \(\Delta\log{\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}=2.93\) dex. We have extended \(R^{\prime}_{\rm HK}\) calibrations to the M dwarf regime using PHOENIX model spectra and compared these new \(R^{\prime}_{\rm HK}\) calibrations with previous calibrations. Our \(\log C_{\rm cf}\) calibration agrees very well with previous calibrations within 0.2 dex, and extends the calibration from 3100 K \(\leq T_{\rm eff}\leq\) 6800 K to 2300 K \(\leq T_{\rm eff}\leq\) 7200 K. Our \(\log R_{\rm HK,\,phot}\) calibration overestimates the Noyes et al. (1984) calibration by 0.46 dex. However, our calibration extends \(\log R_{\rm HK,phot}\) to 2300 K \(\leq T_{\rm eff}\leq\) 7200 K, and a simple offset correction can be applied to scale our \(\log R_{\rm HK,\,phot}\) to that of Noyes et al. (1984). We have provided a grid of \(\log C_{\rm cf}\) and \(\log R_{\rm HK,\,phot}\) values as functions of \(T_{\rm eff}\), [Fe/H], and \(\log(g)\). This grid can be used to compute \(R^{\prime}_{\rm HK}\) from S-index values using either polynomial fits to or an interpolation of the grid, and can be further beneficial when constraints on the stellar parameters of the targets are established. We have calculated \(\chi_{\rm CaHK}\) and \(\chi_{\rm H\alpha}\) for \(-1.0\leq\) [Fe/H] \(\leq+1.0\) in steps of \(\Delta\)[Fe/H] \(=0.5\) for the entire M dwarf \(T_{\rm eff}\) range. We find that our \(\chi\) values from PHOENIX models agree well with the \(\chi\) values of West & Hawley (2008).
We find that the lower boundary of \(\log R^{\prime}_{\rm HK}\) either stays constant or decreases with later-M dwarfs depending on the \(T_{\rm eff}\) calibration used. Because of conflicting \(T_{\rm eff}\) measurements toward later-M dwarfs, an accurate determination of \(R^{\prime}_{\rm HK}\) cannot be made beyond \(M_{K_{S}}>8\). For \({\cal F}^{\prime}_{\rm H\alpha}/{\cal F}_{\rm bol}\), the lower boundary of inactive stars begins with early-M dwarfs in deeper absorption, and fills in to the continuum towards later-M dwarfs. Stars with known planetary systems do not exhibit unexpected or peculiar levels of Ca ii H, K, and H\(\alpha\) activity in relation to stars of similar spectral type or absolute magnitude.
Our surface flux calibration of \(\log C_{\rm cf}\) agrees very well with that of Middelkoop (1982) for FGK stars, and our surface flux calibrations of \(\chi_{\rm CaHK}\) and \(\chi_{\rm H\alpha}\) also agree well with those of West & Hawley (2008) for M stars. Our \(R^{\prime}_{\rm HK}\) calibrations agree very well with those of Noyes et al. (1984) to within \(\Delta\log R^{\prime}_{\rm HK}=0.01\) dex for the Sun. We conclude that our calibrations are a reliable extension to previous \(R^{\prime}_{\rm HK}\) calibrations, provide a consistent way to measure \(R^{\prime}_{\rm HK}\) across spectral types early F to late M, and allow the comparison of activity of Sun-like stars to M dwarfs.
###### Acknowledgements.
We thank Tim-Oliver Husser for fruitful discussions and providing us with recalculated PHOENIX models. The authors acknowledge research funding by the Deutsche Forschungsgemeinschaft (DFG) under the grant SFB 963, project A04. SVJ acknowledges the support of the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" (JE 701/5-1). SBS acknowledges the support of the Austrian Science Fund (FWF) Lise Meitner project M 2829-N. This work is based on data products from observations made with ESO Telescopes at the La Silla Observatory (Chile) under the program IDs G0-A9036, 072.C-0488, 074.C-0364, 075.C-0202, 075.D-0614, 076.C-0155, 077.C-0364, 078.C-0041, 082.C-0718, 085.C-0019, 086.C-0284, 087.C-0831, 089.C-0050, 089.C-0732, 090.C-00359, 090.C-0421, 091.C-0034, 180.C-0886, 183.C-0437, 183.C-0972, 185.D-0056, 191.C-0505, 192.C-0224, and 283.C-5022. We acknowledge the effort of all the observers of the aforementioned ESO projects whose data we have used.
| ```
Understanding the chromospheric activity of M dwarfs has become increasingly important.
``` |
2310.00247 | Bridging the Gap Between Foundation Models and Heterogeneous Federated
Learning | Federated learning (FL) offers privacy-preserving decentralized machine
learning, optimizing models at edge clients without sharing private data.
Simultaneously, foundation models (FMs) have gained traction in the artificial
intelligence (AI) community due to their exceptional performance across various
tasks. However, integrating FMs into FL presents challenges, primarily due to
their substantial size and intensive resource requirements. This is especially
true when considering the resource heterogeneity in edge FL systems. We present
an adaptive framework for Resource-aware Federated Foundation Models (RaFFM) to
address these challenges. RaFFM introduces specialized model compression
algorithms tailored for FL scenarios, such as salient parameter prioritization
and high-performance subnetwork extraction. These algorithms enable dynamic
scaling of given transformer-based FMs to fit heterogeneous resource
constraints at the network edge during both FL's optimization and deployment
stages. Experimental results demonstrate that RaFFM shows significant
superiority in resource utilization efficiency and uses fewer resources to
deploy FMs to FL. Despite the lower resource consumption, target models
optimized by RaFFM achieve performance on par with traditional FL methods
applied to full-sized FMs. This is evident across tasks in both natural
language processing and computer vision domains. | Sixing Yu, J. Pablo Muñoz, Ali Jannesari | 2023-09-30T04:31:53 | http://arxiv.org/abs/2310.00247v2 | # Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning
###### Abstract
Federated learning (FL) offers privacy-preserving decentralized machine learning, optimizing models at edge clients without sharing private data. Simultaneously, foundation models (FMs) have gained traction in the artificial intelligence (AI) community due to their exceptional performance across various tasks. However, integrating FMs into FL presents challenges, primarily due to their substantial size and intensive resource requirements. This is especially true when considering the resource heterogeneity in edge FL systems. We present an adaptive framework for Resource-aware Federated Foundation Models (RaFFM) to address these challenges. RaFFM introduces specialized model compression algorithms tailored for FL scenarios, such as salient parameter prioritization and high-performance subnetwork extraction. These algorithms enable dynamic scaling of given transformer-based FMs to fit heterogeneous resource constraints at the network edge during both FL's optimization and deployment stages. Experimental results demonstrate that RaFFM shows significant superiority in resource utilization efficiency and uses fewer resources to deploy FMs to FL. Despite the lower resource consumption, target models optimized by RaFFM achieve performance on par with traditional FL methods applied to full-sized FMs. This is evident across tasks in both natural language processing and computer vision domains.
## 1 Introduction
Federated learning (FL) (McMahan et al., 2017) represents a significant advancement in machine learning, emphasizing decentralized training while preserving data privacy. FL enhances data privacy and collaboration compared to traditional machine learning by enabling model training across multitudes of decentralized devices without direct data sharing. However, challenges such as non-independent and identically distributed (non-IID) data and heterogeneous computational resources among devices can lead to training failures.
Concurrently, transformer-based foundation models (FMs) (Bommasani et al., 2021), typified by GPT (Radford et al., 2019; Brown et al., 2020; OpenAI, 2023), BERT (Devlin et al., 2018), and ViT (Dosovitskiy et al., 2020), pre-trained on large-scale datasets, have revolutionized AI research. FMs leverage their inherent pre-trained knowledge, achieving exceptional performance across multiple domains in downstream tasks even with limited fine-tuning data.
Given the superior strengths of FMs in few-shot transfer learning, they appear well-suited for non-IID FL environments (Yu et al., 2023; Zhuang et al., 2023). However, seamlessly integrating FMs into FL presents significant challenges. The substantial size and intensive resource demands of FMs make their deployment on resource-constrained FL edge devices problematic. Furthermore, the uneven distribution of computational resources within FL compounds these challenges. A resource-limited device must first satisfy the resource requirements for FM optimization, despite the presence of more capable devices within the same FL network, leading to high system requirements overall. Additionally, fine-tuning FMs typically requires approximately seven times the resources of inference. This disparity means that FL is often resource-starved during model training while leaving resources underutilized during inference.
We propose a framework, Resource-aware Federated Foundation Models (RaFFM), to address the resource utilization challenges in FL. RaFFM uses specialized transformer-based FM compression algorithms tailored for FL-edge environments, and dynamically deploys resource-aware scaled FMs to local clients. Since FMs are over-parameterized, a subset of salient parameters has a disproportionate impact on model performance. RaFFM first identifies and prioritizes the salient parameters in the given FM before FL communication starts. This salient parameter prioritization eases resource-aware sub-model extraction from FMs and model fusion in FL global updates via parallel matrix operations. RaFFM then generates resource-aware sub-models with salient weights so that clients can proceed with the FL fine-tuning cycle. Post-training, given that model inference is more resource-friendly than training, RaFFM can deploy larger models, ensuring optimized performance and consistent resource allocation based on each client's capabilities.
In essence, RaFFM brings forth the following key contributions:
* Designed specialized FM compression algorithms tailored for edge FL.
* Proposed specialized salient parameter prioritization strategy for transformer-based FMs.
* Enhanced resource utilization throughout the training and deployment stages of FL.
* Significant reduction in communication overhead between edge devices and central servers.
## 2 Background
### Federated Learning
With growing concerns over data privacy, Federated Learning (FL) has emerged as a decentralized, privacy-preserving machine learning paradigm. It allows model training on private user data without compromising its confidentiality (McMahan et al., 2017). In FL, private data remains on local clients, and the target model is optimized locally, ensuring data privacy and security. Clients only share model updates, such as weights and gradients, asynchronously, minimizing the risk of data leaks. A representative FL algorithm is FedAvg (McMahan et al., 2017). The innate privacy features of FL have made it a preferred choice in various applications, especially in sectors with stringent privacy requirements like healthcare.
However, data and resource heterogeneity in FL often lead to training failures. Unbalanced training across clients leads to poor model convergence and performance. Recent work in FL has focused on improving gradient descent to stabilize training (Liu et al., 2020; Karimireddy et al., 2020; Li et al., 2020); personalizing model weights to enhance performance on downstream tasks (Deng et al., 2020; Tan et al., 2022; Yu et al., 2022c); and employing model compression techniques like knowledge distillation (Yu et al., 2022d), dynamic dropout (Yu et al., 2021b), and adaptive pruning to reduce overfitting on non-IID datasets and improve communication efficiency (Jiang et al., 2022; Yu et al., 2021b; Lin et al., 2020; Yu et al., 2022b). Despite these advances, there remains a gap between traditional model training and FL, particularly in heterogeneous FL-edge environments.
### Foundation Models
Foundation models (FMs) (Bommasani et al., 2021), such as the GPT family (Brown et al., 2020; Radford et al., 2019), LLaMA (Touvron et al., 2023), ViT (Dosovitskiy et al., 2020), CLIP (Radford et al., 2021), and BERT (Devlin et al., 2018), stand at the forefront of AI advancements. FMs pre-trained on vast datasets exhibit remarkable performance across multiple tasks. The typical life-cycle of an FM encompasses pre-training, fine-tuning, and application. During pre-training, models undergo unsupervised or self-supervised learning on large datasets. The fine-tuning phase tailors them for specific tasks. As an illustration, GPT models (Brown et al., 2020; Radford et al., 2019; OpenAI, 2023) acquire grammar, syntax, and semantics during pre-training, making subsequent fine-tuning for tasks like text classification or sentiment analysis more effective. FMs excel in few-shot transfer learning (Brown et al., 2020), making them particularly suited for data-heterogeneous FL environments where limited and imbalanced local data are present. However, the inherently large size and resource demands of FMs pose significant challenges to their seamless application in FL settings.
## 3 Methodology
This section offers an in-depth overview of the primary components of the proposed Resource-Aware Federated Foundation Model (RaFFM): foundation model scaling and resource-aware federated learning.
### Foundation Model Scaling
RaFFM employs a foundation model scaling technique, inspired by (Munoz et al., 2022), primarily designed to compress pre-trained FMs while ensuring adherence to the heterogeneous resource constraints in edge-FL systems. The overarching objective is to enhance resource utilization both during the training and inference phases in FL. As highlighted in Figure 1, foundation model scaling incorporates two key components: salient parameter prioritization and high-performance sub-model extraction.
#### 3.1.1 Salient Parameter Prioritization
Recent advancements in model compression, such as network pruning (He et al., 2018; Blalock et al., 2020; Yu et al., 2022; Yu et al., 2022; 2021a) and neural architecture search (White et al., 2023; Munoz et al., 2022), underscore that deep neural networks, particularly pre-trained FMs, often exhibit over-parameterization. Only a subset of parameters critically influences the model's performance. Identifying these impactful parameters is of paramount importance in resource-limited FL settings. Within RaFFM, we leverage salient parameter prioritization to identify salient parameters in FMs and extract high-performance sub-models with salient parameters that are uniquely tailored to individual clients' resources. Additionally, salient parameter prioritization eases resource-aware sub-model extraction and model fusion in FL global updates using parallel matrix slicing.
**Parameter Salience Score.** Inspired by magnitude-based pruning techniques, salient parameters are recognized by ranking model weights using a salience evaluation metric (examples include the L1 and L2 norms (Li et al., 2016; Kumar et al., 2021)). Our experimental analysis favored the L1 norm, which we therefore adopt here. The L1 norm salience for a channel \(c\) within a weight matrix \(W\) is given by:
\[\text{Salience}(c)=\sum_{i=1}^{n}|W_{c,i}| \tag{1}\]
Equation 1 captures the L1 norm by aggregating the absolute weight values in a channel, reflecting the channel's cumulative significance.
**Salient Parameter Prioritization.** We rank the original model's weight channels based on their salience scores. Prioritization ensures that channels with the highest salience are placed at the front of the weight matrix.
\[c_{ranked}=\text{argsort}(\text{Salience}(c),\text{descending}) \tag{2}\]
\[W_{ranked}=W[c_{ranked}:] \tag{3}\]
Equation 2 and Equation 3 delineate the procedure for ranking channels based on salience scores.
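The following PyTorch sketch illustrates Eqs. 1-3. It assumes the channel dimension corresponds to the rows of the weight matrix (the exact axis depends on the layer type) and is an illustration rather than the authors' implementation:

```python
import torch

def prioritize_channels(W: torch.Tensor):
    """Rank channels (rows here) by L1-norm salience and reorder the matrix.

    Implements Eq. 1 (per-channel salience), Eq. 2 (descending argsort),
    and Eq. 3 (reordering of the weight matrix).
    """
    salience = W.abs().sum(dim=1)                         # Eq. 1
    c_ranked = torch.argsort(salience, descending=True)   # Eq. 2
    return W[c_ranked, :], c_ranked                       # Eq. 3

W = torch.randn(12, 64)                 # toy weight matrix with 12 channels
W_ranked, perm = prioritize_channels(W)
```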
#### 3.1.2 Parameter Prioritization in Transformer
While existing magnitude-based compression methods predominantly target convolutional neural networks (CNNs), applying salient parameter prioritization to multi-head attention transformer architectures (Vaswani et al., 2017) requires additional deliberation. We introduce a specialized salient parameter prioritization strategy tailored for transformers, which ensures the preservation of the inherent attention information present in the original FM.

Figure 1: Resource-aware Federated Foundation Model (RaFFM) Overview.
\[\text{Attention}(W^{q},W^{k},W^{v},x)=\text{softmax}\left(\frac{xW^{q}(xW^{k})^ {T}}{\sqrt{d_{k}}}\right)xW^{v} \tag{4}\]
Equation 4 characterizes the attention-head. Here, the input sequence \(x\in\mathbb{R}^{l\times d}\) has a sequence length of \(l\) and an embedding size of \(d\). The matrices \(W^{q}\in\mathbb{R}^{d\times d_{k}}\) and \(W^{k}\in\mathbb{R}^{d\times d_{k}}\) represent the query and key weights, respectively, while \(W^{v}\in\mathbb{R}^{d\times d_{v}}\) stands for the value weights.
**Theorem 1**: Given matrices \(W^{q}\) and \(W^{k}\) of dimensions \(d\times d_{k}\) and an input \(x\) of size \(l\times d\), if we uniformly apply a permutation \(\pi\) to the columns of both \(W^{q}\) and \(W^{k}\) to obtain \(W^{q^{\prime}}\) and \(W^{k^{\prime}}\) respectively, the subsequent dot-product attention scores determined using \(W^{q^{\prime}}\) and \(W^{k^{\prime}}\) will match those derived using the original \(W^{q}\) and \(W^{k}\). (A detailed proof is provided in Appendix A.1).
\[\text{Salience}(c)=\frac{\text{Salience}(w_{c}^{q})+\text{Salience}(w_{c}^{k })}{2} \tag{5}\]
To retain the inherent attention characteristics of FMs, we prioritize \(W^{q}\) and \(W^{k}\) within an attention head by employing the same ranked permutation. The salience score, defined in Equation 5, calculates the average norm of the query and key matrices for each channel index \(c\). A consistent salience-based rank is concurrently imposed on both \(W^{q}\) and \(W^{k}\). As validated by **Theorem 1**, this ensures that, post-prioritization, the derived dot-product attention scores remain identical to those of the original attention head of the FM.
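Theorem 1 is straightforward to check numerically. The sketch below, using random toy weights, verifies that one shared column permutation (standing in for the salience rank of Eq. 5) applied to \(W^{q}\) and \(W^{k}\) leaves the attention scores of Eq. 4 unchanged:

```python
import torch

l, d, d_k = 5, 16, 8
x = torch.randn(l, d)
W_q, W_k = torch.randn(d, d_k), torch.randn(d, d_k)

def attn_scores(W_q, W_k):
    # softmax(x W_q (x W_k)^T / sqrt(d_k)), the score matrix of Eq. 4
    return torch.softmax((x @ W_q) @ (x @ W_k).T / d_k ** 0.5, dim=-1)

perm = torch.randperm(d_k)  # shared permutation applied to both weight matrices
assert torch.allclose(attn_scores(W_q, W_k),
                      attn_scores(W_q[:, perm], W_k[:, perm]), atol=1e-6)
```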
#### 3.1.3 High-performance Sub-Model extraction
Given a resource constraint, denoted as \(\tau\), and an FM \(\mathcal{F}(\mathcal{W})\) with weights \(\mathcal{W}\), the objective of high-performance sub-model extraction is to derive a sub-model from \(\mathcal{F}(\mathcal{W})\) that comprises salient parameters and adheres to the constraint \(\tau\).
Leveraging the salient parameter prioritization, we can easily extract high-performance sub-networks using weight matrix slicing. We represent a sub-model, \(\mathcal{F}(\mathcal{W}_{c_{\tau}})\), as a network configuration \(c_{\tau}\in\mathbb{R}^{n}\) satisfying constraint \(\tau\). Here, \(c_{\tau}\) is a list specifying the layer width of each hidden layer of the FM. To sample the target \(c_{\tau}\), the sampling space is denoted by \(\mathcal{S}=[s_{1},...,s_{n}]\), where \(s_{i}\) denotes the width (number of channels) of the \(i^{th}\) hidden layer.
For a sampled network configuration \(c_{\tau}\), we evaluate the size of the sub-model \(\mathcal{F}(\mathcal{W}_{c_{\tau}})\) in terms of measurable metrics (e.g., number of parameters). If it satisfies the constraint \(\tau\), we extract \(\mathcal{F}(\mathcal{W}_{c_{\tau}})\) using Equation 6:
\[\mathcal{W}_{c_{\tau}}=\mathcal{W}[\text{: }c_{\tau}]=\{w_{i}[\text{: }c_{i}]|w_{i}\in\mathcal{W},c_{i}\in c_{\tau},\text{and }i=1,...,n\} \tag{6}\]
In the above equation, \(w_{i}\) represents the weights of the \(i^{th}\) hidden layer, and \(c_{i}\) stands for the number of channels sampled from the \(i^{th}\) hidden layer. The salient parameter prioritization component ensures the extracted sub-model, \(\mathcal{F}(\mathcal{W}_{c_{\tau}})\), retains the most salient weights from the original FM.
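A simplified sketch of the slicing in Eq. 6 is shown below. It keeps only the leading, already-prioritized channels of each layer and deliberately ignores the bookkeeping a real transformer needs to keep consecutive layers' input and output dimensions consistent:

```python
import torch

def extract_submodel(weights, c_tau):
    """Keep the first c_i (salience-ranked) channels of each hidden layer (Eq. 6)."""
    return [w_i[:c_i, :] for w_i, c_i in zip(weights, c_tau)]

full_model = [torch.randn(768, 768) for _ in range(4)]   # toy 4-layer FM
sub_model = extract_submodel(full_model, c_tau=[384, 512, 512, 768])
```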
### Resource-aware Federated Foundation Models
RaFFM can seamlessly integrate with mainstream FL model fusion algorithms, such as FedAvg (McMahan et al., 2017) and FedProx (Li et al., 2020). This is because all resource-aware local models in RaFFM are sub-networks derived from the forefront channels of the given FM. As a result, heterogeneous federated learning model fusion can be easily executed using matrix slicing, as demonstrated in Equations 6 and 7. In this paper, we use FedAvg as our backend algorithm.
\[\mathcal{W}^{t}=\sum_{c_{\tau}\in C}(\mathcal{W}^{t-1}[\text{: }c_{\tau}]+\eta_{c_{\tau}}\nabla\mathcal{W}^{t}_{c_{\tau}}) \tag{7}\]
Equation 7 aggregates resource-aware local models, where \(c_{\tau}\) represents the local model configuration satisfying constraint \(\tau\), and \(\eta_{c_{\tau}}\) signifies the learning step for the client.
The RaFFM procedure in an FL communication round can be described as follows:
1. In the \(t^{th}\) communication round, RaFFM first performs salient parameter prioritization for the global FM \(\mathcal{F}(\mathcal{W}^{t})\).
2. RaFFM samples a set of sub-network configurations, \(C=\{c_{\tau_{1}},\ldots,c_{\tau_{k}}\}\), in accordance with the resource constraints of participating clients, and then generates the corresponding sub-models.
3. These sampled sub-models, represented as \(\{\mathcal{W}^{t}_{c_{\tau}}|c_{\tau}\in C\}\), are dispatched to the local participating clients for further training.
4. Once local training is finished, the clients relay their model updates back to the server.
5. The server then undertakes model fusion using Equation 7.
The entire process is iteratively executed until federated learning is complete (see Appendix 1).
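The sketch below illustrates one plausible reading of the slice-wise aggregation in Eq. 7, with random tensors standing in for real FM weights and client updates; it is not the authors' implementation:

```python
import torch

def aggregate(global_w, client_updates):
    """Fuse heterogeneous sub-model updates into the global weights (Eq. 7).

    client_updates: list of (delta, c) pairs, where delta covers the leading
    c salience-ranked channels trained by that client (learning steps folded in).
    """
    acc = torch.zeros_like(global_w)
    cnt = torch.zeros(global_w.shape[0], 1)
    for delta, c in client_updates:
        acc[:c, :] += delta
        cnt[:c] += 1
    # average overlapping slices; channels no client trained stay unchanged
    return global_w + acc / cnt.clamp(min=1)

global_w = torch.randn(768, 768)
updates = [(0.01 * torch.randn(c, 768), c) for c in (384, 512, 768)]
global_w = aggregate(global_w, updates)
```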
## 4 Experiments
### Experiment setup
**Federated Learning Settings.** RaFFM is designed to address the challenges posed by resource heterogeneity in Federated Learning (FL) scenarios, especially when deploying resource-hungry FMs. Our experiments were conducted in a standard cross-silo FL setup. This environment comprises 100 local clients coordinated by a central server. In each communication round, 10% of the clients are randomly selected to participate in local updates. We employ FedAvg (McMahan et al., 2017) as the underlying FL algorithm for model aggregation.
**Model and Datasets.** Our evaluations span a variety of pre-trained models:
* NLP models: DistilBert (Sanh et al., 2019), RoBERTa (Liu et al., 2019), BERT (Devlin et al., 2018), and FLAN-T5 (Chung et al., 2022).
* Large language model: LLaMA2 (Touvron et al., 2023).
* Computer Vision model: ViT (Dosovitskiy et al., 2020).
For our experiments, we employ diverse datasets, including the GLUE benchmark (Wang et al., 2018), question-answering benchmarks like SQuAD and SQuAD-v2 (Rajpurkar et al., 2016), SVAMP (Patel et al., 2021), CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton, 2009), and the Flower-102 dataset (Nilsback and Zisserman, 2008).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**\#Param**} & \multirow{2}{*}{**QQP**} & \multirow{2}{*}{**QNLI**} & \multirow{2}{*}{**SST-2**} & \multirow{2}{*}{**CoLA**} & \multirow{2}{*}{**STS-B**} & \multirow{2}{*}{**MRPC**} & \multirow{2}{*}{**RTE**} & \multirow{2}{*}{**MNLI**} & \multirow{2}{*}{**AVG**} & \multicolumn{2}{c}{
\begin{tabular}{c} **Training** \\ **Accel.** \\ \end{tabular} } \\ \hline \multirow{3}{*}{BERT-Large} & FL & 335M & 89.01 & 91.46 & 93.81 & 56.08 & 89.61 & 75.24 & 68.59 & 85.71 & 81.19 & 1.00 \(\times\) \\ & RaFFM & **100M** & 88.40 & 91.18 & 94.15 & 54.72 & 88.81 & 75.49 & 68.23 & 85.40 & 80.80 & **6.13**\(\times\) \\ \hline \multirow{3}{*}{BERT-Base} & FL & 109M & 88.34 & 86.43 & 91.74 & 48.03 & 86.34 & 77.94 & 64.26 & 80.83 & 77.99 & 1.00 \(\times\) \\ & RaFFM & **60M** & 88.00 & 88.41 & 91.63 & 47.76 & 86.00 & 77.45 & 63.90 & 80.33 & 77.94 & **2.45**\(\times\) \\ \hline \multirow{3}{*}{RoBERTa} & FL & 125M & 89.19 & 91.19 & 93.92 & 49.68 & 87.11 & 84.80 & 71.48 & 86.20 & 81.70 & 1.00 \(\times\) \\ & RaFFM & **53M** & 89.14 & 91.25 & 93.92 & 52.20 & 87.67 & 85.29 & 65.34 & 86.22 & 81.38 & **3.62**\(\times\) \\ \hline \multirow{3}{*}{DistilBERT} & FL & 67M & 86.73 & 84.71 & 89.91 & 45.36 & 83.76 & 79.17 & 55.60 & 77.90 & 75.39 & 1.00 \(\times\) \\ & RaFFM & **50M** & 86.80 & 85.08 & 90.48 & 41.69 & 82.85 & 78.19 & 57.40 & 78.34 & 75.10 & **1.55**\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on GLUE benchmark.
### Learning Efficiency
Our primary objective with RaFFM is to achieve efficient FL by leveraging fewer computational resources without significantly compromising model performance. To this end, we compare the performance of resource-aware sub-models deployed through RaFFM against the conventional full-size FL model implementations.
#### 4.2.1 Results on GLUE Benchmark
We assessed RaFFM's efficacy using the NLP General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018), which comprises eight language datasets: QQP, QNLI, SST-2, CoLA, STS-B, MRPC, RTE, and MNLI.
In experiments, baseline FL optimizes full-size FMs at local clients. In contrast, RaFFM deployed resource-aware sub-models tailored to individual clients. Table 1 summarizes the results. Performance metrics were computed for the global model post-training on validation datasets. The column titled #Param represents the average count of local model parameters across all clients for the eight datasets. Training acceleration estimates the relative GPU hours. We also present an average score (AVG) as an aggregate performance indicator.
Notably, RaFFM required fewer computational resources (evidenced by the reduced parameter count) and demonstrated faster training times than the conventional full-model FL. Impressively, despite the reduced model size at the client side, RaFFM-optimized models were not only competitive with but occasionally surpassed the performance of baseline full-size FL models. This superiority was particularly pronounced in datasets such as SST-2, MRPC, and MNLI.
#### 4.2.2 Results on Question Answering Benchmark
Further, we evaluate RaFFM on the Stanford Question Answering Dataset (SQuAD and SQuAD-V2) (Rajpurkar et al., 2016).
Table 2 presents the results across various models. Impressively, RaFFM consistently speeds up the training process on both SQuAD tasks. Taking BERT-Large as an example, RaFFM accelerates FL training (measured in GPU hours) by a factor of 6.59. Remarkably, despite deploying substantially trimmed models at edge clients, RaFFM's efficiency advantage does not come at the cost of model performance. Indeed, in cases like FLAN-T5 base and BERT-Large, RaFFM's performance even eclipses that of full-size FL deployments.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**SQuAD**} & \multicolumn{5}{c}{**SQuAD-V2**} \\ & & **\#Param** & **Exact Match** & **F1** & **Training Accel.** & **\#Param** & **Exact Match** & **F1** & **Training Accel.** \\ \hline \multirow{3}{*}{BERT-Large} & FL & 335M & 83.29 & 89.25 & 1.00 \(\times\) & 335M & 82.05 & 90.24 & 1.00 \(\times\) \\ & RaFFM & **95M** & 83.34 & 90.87 & **6.59 \(\times\)** & **103M** & 82.27 & 90.28 & **6.59 \(\times\)** \\ \hline \multirow{3}{*}{BERT-Base} & FL & 109M & 73.68 & 82.99 & 1.00 \(\times\) & 109M & 72.85 & 82.69 & 1.00 \(\times\) \\ & RaFFM & **70M** & 72.37 & 82.00 & **1.94 \(\times\)** & **73M** & 71.23 & 81.36 & **1.82 \(\times\)** \\ \hline \multirow{3}{*}{DistilBERT} & FL & 67M & 71.70 & 81.30 & 1.00 \(\times\) & 67M & 69.55 & 79.55 & 1.00 \(\times\) \\ & RaFFM & **34M** & 70.59 & 80.15 & **2.77 \(\times\)** & **34M** & _69.60_ & 79.75 & **2.77 \(\times\)** \\ \hline \multirow{3}{*}{RoBERTa} & FL & 125M & 82.64 & 89.75 & 1.00 \(\times\) & 125M & 81.05 & 89.08 & 1.00 \(\times\) \\ & RaFFM & **90M** & 82.42 & 89.71 & **1.61\(\times\)** & **89M** & 80.77 & 88.83 & **1.64\(\times\)** \\ \hline \multirow{3}{*}{FLAN-T5} & FL & 77M & 48.65 & 64.43 & 1.00 \(\times\) & 77M & 49.80 & 61.49 & 1.00 \(\times\) \\ & RaFFM & **61M** & 48.83 & 64.60 & **1.42\(\times\)** & **61M** & 50.00 & 61.74 & **1.42\(\times\)** \\ \hline \multirow{3}{*}{FLAN-T5 Base} & FL & 248M & 60.70 & 77.75 & 1.00 \(\times\) & 248M & 67.22 & 80.24 & 1.00 \(\times\) \\ & RaFFM & **163M** & 60.92 & 78.01 & **1.88\(\times\)** & **164M** & 67.67 & 80.33 & **1.86\(\times\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiments on question answering tasks
We further evaluate the parameter efficiency--especially vital in resource-tight FL settings. Figure 2 (a) and (b) visually depict RaFFM's supremacy in parameter efficiency (higher values are preferable). As an illustrative point, in BERT-Large, RaFFM, with an average model size of 95M parameters, posts an Exact Match score of 83.34 and an F1 score of 90.87 on SQuAD, which stands in stark contrast to full-size FM deployment in FL, and the efficiency translates to a speed-up of 3.52 times.
### Results on Computer Vision Tasks
In addition to our evaluations on NLP benchmarks, we extended our assessment to ascertain the efficacy of RaFFM on computer vision (CV) tasks, specifically leveraging the Vision Transformer (ViT) (Dosovitskiy et al., 2020). As shown in Table 3, we fine-tune ViT with 12-shot learning. The results corroborate RaFFM's consistent performance metrics on CV tasks. Specifically, in line with our findings from NLP benchmarks, RaFFM demonstrated a marked superiority in the training acceleration. Additionally, the reduced communication overhead and negligible compromise in model performance further underline its efficiency and robustness.
In conclusion, RaFFM offers an excellent balance between performance and computational efficiency. With fewer parameters and higher speed-up factors, it provides a viable alternative to more computationally intensive models without perceptible degradation in performance.
### Communication Efficiency
A key challenge in FL is the substantial communication cost, often due to frequent model weights or gradients sharing between edge devices and the central server. Figure 2 (c) illustrates the average communication footprint of a client in each round under various FMs. RaFFM substantially reduces communication costs at the client level. The rationale is straightforward: its resource-aware model deployment leads to more compact models at the edge. Consequently, there is less information to relay between the edge devices and the server, reducing communication burdens.
Additionally, we optimized FMs to meet specific performance benchmarks (set to the average median convergence accuracy). We monitored the network traffic induced by the training process during communication. As highlighted in Table 4, the results consistently underline RaFFM's superiority in minimizing communication costs across all experimental setups. Furthermore, Figure 2 (d) showcases the network traffic needed to achieve the average median convergence performance on the GLUE benchmark. RaFFM also consistently outperforms full-size model deployment, evidencing significantly decreased average communication cost across different FMs.
### Resource Efficiency
#### 4.5.1 System Resource Efficiency
Traditional FL approaches often suffer from inefficient system resource utilization. When identical models are dispatched to both high-end and resource-constrained devices, the latter often dictates the
constraints, compelling the entire system to conform to its limitations. This results in inflated system requirements and often leaves high-end devices underutilized. Figure 3 (a) elucidates this challenge, showcasing the minimum system resource prerequisites for various FMs within a 100-client FL system to achieve the median F1 scores as detailed in Table 2. The baseline FL has lofty uniform resource demands, whereas RaFFM, illustrated through box plots, showcases a range of resource allocations wherein it attains the target performance. Evidently, RaFFM exhibits enhanced system resource efficiency, paving the way for a more cost-effective FL setup in terms of hardware requirements.
Further clarity is provided in Figure 3 (b), which shows the system requirements of RoBERTa deployments for different performance levels on the SST-2 dataset. The red dashed line represents full-size FM deployment in traditional FL, which indicates high baseline system requirements, irrespective of the performance target. Conversely, RaFFM, depicted by the box plot, offers a flexible range of system requirements to attain similar performance outcomes. This adaptability of RaFFM enables dynamic system budget adjustments based on performance goals, maximizing resource efficiency and preventing unnecessary expenditures.
In Figure 3 (c) and (d), we further assess RaFFM's robustness amidst a variety of system resource constraints, spanning Xlarge to XSmall budget categories. The findings underscore RaFFM's consistency: even as resources swell or dwindle, its performance remains steadfast, signifying its adaptability in heterogeneous resource environments.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c}{**Target Performance**} & \multicolumn{2}{c}{**Communication Cost**} \\ & & & **(P1 Score)** & **Traffic/Client** & **Total** & **A** & **C**nt \\ \hline \multirow{4}{*}{BERT-Large} & FL & SQUADv1 & 8\% & 1401MB & 109.45GB & **-62.81GB** \\ & RaFFM & & **396MB** & **46.46GB** & **-62.81GB** \\ & RaFFM & & **390MB** & 1401MB & 71.38 GB & **-99.41 GB** \\ & RaFFM & & **42.32GB** & **24.47 GB** & **-49.41 GB** \\ \hline \multirow{4}{*}{DistillBERT} & RAFT & SQUADv1 & 80.15\% & **21.41MB** & 25.63 GB & **-2.34 GB** \\ & RaFFM & & **14.27GB** & **57.31 GB** & **-23.41 GB** \\ & RaFFM & & **251.41MB** & **50.52 GB** & **-24.85 GB** \\ & RaFFM & & **142.67MB** & **25.64 GB** & **-24.85 GB** \\ \hline \multirow{4}{*}{FLAN-TS} & FL & SQUADv1 & 77.28\% & **67.81MB** & **47.27GB** & **-75.34GB** \\ & RaFFM & & **103.60MB** & **54.70GB** & **-27.06GB** \\ & RaFFM & & **688.40MB** & **332.66GB** & **-225.02GB** \\ \hline \multirow{4}{*}{FLAN-TS} & RAFT & SQUADv1 & 61.45\% & **255.32MB** & **-54.85GB** \\ & RaFFM & & **325.99MB** & **-75.35GB** & **-57.06GB** \\ & RaFFM & & **285.48MB** & **69.85GB** & **-57.06GB** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Communication cost for achieving target performance on QA benchmark
Figure 4: (a) Resource distribution heat map of two distinct FL systems. (b) Client local model performance on SST-2 in distinct FL systems. (c) Heat map of local model performance under different model sizes. (d) Resource utilization efficiency in the training stage and inference stage.
Figure 3: (a) Lowest system resource requirements for deploying various FMs in FL. (b) System requirements for achieving target performance. (c) Question answering performance under various resource budgets. (d) BLEU score under various resource budgets.
#### 4.5.2 Edge Resource Efficiency
In FL, not only do system-wide resource requirements matter, but individual client resources also play a pivotal role. Given the varied resource capacities of individual clients, ensuring stable performance of resource-aware models on these clients becomes crucial.
Figure 4 (a) and (b) depict two FL systems with distinct resource budgets. The distribution heat map of clients' resources for these two setups is showcased in (a). Figure 4 (b) depicts local clients' performances, demonstrating the stability of local clients' performance: even with notable variations in local resources, every client's performance aligns closely with its peers. This inherent stability is further emphasized when comparing the two different FL systems; despite the system disparities, the local model performance across the two systems remains stable. This consistency is underscored by Figure 4 (c), which plots the relationship between local model size and its achieved performance. Interestingly, even as the model sizes differ, each maintains commendable local performance. To sum up, RaFFM showcases its prowess in adeptly navigating resource heterogeneity, ensuring that performance remains stable at the client level.
#### 4.5.3 Edge Inference Efficiency
The resource consumption during the training phase is usually at least seven times that of the inference phase. This disparity means that FL is often resource-starved during model training while leaving resources underutilized during inference.
This observation is depicted in Figure 4 (d), which delineates the resource utilization efficiency during both training and inference phases for edge clients. RaFFM, with its inherent FM scaling components, enables the post-training deployment of relatively larger models at the edge during the inference stage, thereby increasing model performance and resource utilization.
### Enhancing RaFFM with Efficient Fine-tuning: A Case with LoRA
Incorporating parameter efficient fine-tuning (PEFT) methods like LoRA (Hu et al., 2021) into RaFFM holds potential for optimized performance, especially when dealing with large language models (LLMs) in an FL setting. Though PEFTs such as LoRA can markedly trim the trainable parameters of LLMs--often to less than 1% of the total parameters--it's essential to note that the full-size weights and activations are still retained during training. This results in substantial memory overheads.
To investigate the benefits of this synergy, we paired RaFFM with LoRA and fine-tuned the LLaMA2 model (Touvron et al., 2023) using the preprocessed instruction math question answering dataset, SVAMP (Hu et al., 2023; Patel et al., 2021). This dataset was partitioned among 10 FL clients for the experiments. Table 5 highlights the strengths of the RaFFM-LoRA combination. Specifically, compared to a full-size model paired with LoRA in an FL context, RaFFM coupled with LoRA demonstrates enhanced communication efficiency and a marked acceleration in inference.
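To make the memory argument above concrete, the following is a minimal, self-contained LoRA-style linear layer (a simplified sketch in the spirit of Hu et al. (2021), not the exact setup behind Table 5). Only the low-rank factors A and B are trainable, yet the frozen full-size base weight still resides in memory during training:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: y = base(x) + (alpha/r) * x A^T B^T, with base frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False   # full-size weight stays resident, but frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
n_trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
# ~12k trainable parameters vs ~590k frozen ones in the 768x768 base layer
```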
## 5 Conclusion
We proposed RaFFM to address the challenges of deploying FMs to resource-heterogeneous FL systems. RaFFM introduces specialized FM compression algorithms for edge-FL systems that allow scaling down a given FM to edge constraints. The experiments demonstrate RaFFM's capability to optimize resource utilization throughout FL's life cycle, showing its potential for resource-efficient FL. Moreover, the flexibility of RaFFM allows for accelerated LLM fine-tuning in FL with PEFT. Nevertheless, it is essential to recognize the limitations of our approach. Notably, certain foundation models, even post-compression, such as Llama-7B, remain unsuitable for deployment on resource-constrained edge devices in FL settings. Addressing this limitation necessitates advancements in both hardware technology and algorithmic strategies, marking a promising avenue for our future research endeavors.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Adapter**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**\#Clients**} & \multicolumn{2}{c}{**Avg.**} & \multicolumn{2}{c}{**\#Trainable**} & \multirow{2}{*}{**Accuracy**} \\ & & & & & & **Memory**\(\downarrow\) & **Param** & \\ \hline \multirow{2}{*}{LLaMA-2} & FL & \multirow{2}{*}{LoRA} & \multirow{2}{*}{SVAMP} & \multirow{2}{*}{10} & 1.00 \(\times\) & 262.41M & 53.44\% \\ & RaFFM & & & & **1.63**\(\times\) & **189.72M** & 52.80\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Efficient instruction tuning on LLM | Federated learning (FL) offers privacy-preserving decentralized machine learning, optimizing models at edge clients while avoiding the sharing of private data. At the same time, foundation models (FMs) have been attracting attention in the artificial intelligence (AI) community for their excellent performance across a variety of tasks. However, integrating FMs into FL raises challenges, mainly stemming from their enormous size and heavy resource requirements. These challenges become even more pronounced when the resource heterogeneity of edge FL systems is taken into account. This paper therefore proposes an adaptive framework called Resource-aware Federated Foundation Models (RaFFM). RaFFM introduces specialized model compression algorithms adapted to FL scenarios, such as salient parameter prioritization and high-performance subnetwork extraction. |
2309.12720 | Towards a Near-real-time Protocol Tunneling Detector based on Machine
Learning Techniques | In the last few years, cybersecurity attacks have increased at an
unprecedented pace, becoming ever more sophisticated and costly. Their impact
has involved both private/public companies and critical infrastructures. At the
same time, due to the COVID-19 pandemic, the security perimeters of many
organizations expanded, causing an increase of the attack surface exploitable
by threat actors through malware and phishing attacks. Given these factors, it
is of primary importance to monitor the security perimeter and the events
occurring in the monitored network, according to a tested security strategy of
detection and response. In this paper, we present a protocol tunneling detector
prototype which inspects, in near real time, a company's network traffic using
machine learning techniques. Indeed, tunneling attacks allow malicious actors
to maximize the time in which their activity remains undetected. The detector
monitors unencrypted network flows and extracts features to detect possible
occurring attacks and anomalies, by combining machine learning and deep
learning. The proposed module can be embedded in any network security
monitoring platform able to provide network flow information along with its
metadata. The detection capabilities of the implemented prototype have been
tested both on benign and malicious datasets. Results show 97.1% overall
accuracy and an F1-score equal to 95.6%. | Filippo Sobrero, Beatrice Clavarezza, Daniele Ucci, Federica Bisio | 2023-09-22T09:08:43 | http://arxiv.org/abs/2309.12720v1 | # Towards a Near-real-time Protocol Tunneling Detector based on Machine Learning Techniques
###### Abstract
In the last few years, cybersecurity attacks have increased at an unprecedented pace, becoming ever more sophisticated and costly. Their impact has involved both private/public companies and critical infrastructures. At the same time, due to the COVID-19 pandemic, the security perimeters of many organizations expanded, causing an increase of the attack surface exploitable by threat actors through malware and phishing attacks. Given these factors, it is of primary importance to monitor the security perimeter and the events occurring in the monitored network, according to a tested security strategy of detection and response. In this paper, we present a protocol tunneling detector prototype which inspects, in near real time, a company's network traffic using machine learning techniques. Indeed, tunneling attacks allow malicious actors to maximize the time in which their activity remains undetected. The detector monitors unencrypted network flows and extracts features to detect possible occurring attacks and anomalies, by combining machine learning and deep learning. The proposed module can be embedded in any network security monitoring platform able to provide network flow information along with its metadata. The detection capabilities of the implemented prototype have been tested both on benign and malicious datasets. Results show 97.1% overall accuracy and an F1-score equal to 95.6%.
passive network analysis; dns tunneling; anomaly detection; machine learning; deep learning
## 1 Introduction
Cybersecurity attacks keep increasing year over year at an unprecedented pace, becoming ever more sophisticated and costly [1; 2]. The growth between 2021 and 2022 has resulted in a rise in attack volume and impact on both private/public companies and critical infrastructures. Companies comprise digital service providers, public administrations and governments, and include businesses operating in the finance and health sectors. In particular, service providers have experienced a more than 15% rise in intrusions (infamous was the case of SolarWinds [3]) compared to 2021 [1], a trend destined to grow in the coming years [4]. At the same time, due to the COVID-19 pandemic, the security perimeters of many organizations expanded to cope with the new needs of remote working, causing an increase of the attack surface exploitable by attackers [4]. The European Union Agency for Cybersecurity estimates that more than 10 terabytes of data are stolen monthly from target assets that are made unavailable until a ransom is paid [1], while IBM calculates that the average cost of these attacks is $4.54M, reaching up to $5.12M [2].
Given these factors, it is of primary importance to monitor the security perimeter and the events occurring in the network, according to a tested security strategy of detection and response. According to Gartner [4], newly proposed solutions should be automated as much as possible, since human errors continue to play a crucial role in most security
breaches.

The proposed approach is based on features extracted from the monitored network flows, such as their sizes in bytes (as detailed in Sections 4.1 and 5). Its detection performance has been evaluated on different datasets that either contain legitimate traffic or simulate DNS tunneling attacks, which are the most common [6]. The obtained overall accuracy of the proposed prototype is 97.1%, along with an F1-score equal to 95.6%.
Keywords: DNN, Tunneling attacks
Protocol tunneling allows an attacker to bypass traditional network security controls and potentially exfiltrate sensitive information. Therefore, as discussed in Section 1, cleartext network protocols may pose a significant risk when they are abused by malicious actors.
In this context, DNS tunneling represents one of the most common techniques employed for covertly exfiltrating data from a network, by encoding the data in DNS queries and responses. Since this method is becoming increasingly prevalent, a growing body of research aims at detecting and mitigating DNS tunneling attacks. In [22], the authors review detection technologies from a perspective of rule-based and model-based methods with descriptions and analyses of DNS-based tools and their corresponding features, covering detection approaches developed from 2006 to 2020 by means of a comparative analysis.
Latest works in the area of DNS tunneling detection mainly cover three categories, i.e., detection approaches via machine learning, real-time detection approaches, and detection of DNS tunneling variants (e.g., fast flux [8] and domain generation algorithms (DGAs) [7]).
Regarding the first group, researchers have recently proposed deep learning algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for detecting DNS tunneling traffic. In [23], the authors develop a novel DNS tunneling detection method employing a CNN to analyze DNS queries and responses and identify DNS tunneling activities. The proposed approach is evaluated using a dataset of real-world DNS traffic and shows promising results in detecting DNS tunneling attacks with high accuracy. The work of [24] applies both CNNs and RNNs to detecting DNS tunneling traffic. The authors show that these algorithms can effectively spot and identify malicious patterns.
The second group of studies has focused on developing real-time detection systems for DNS tunneling. These systems use a combination of several detection techniques to quickly identify malicious DNS traffic [25]. In [26], the authors present an overview of principal countermeasures for DNS tunneling attacks.
The state of the art of approaches that analyze encrypted communications has already been presented in [10]. The approach we present and evaluate in the next sections passively extracts both sequential and statistical features from network flows to detect tunneling attacks in cleartext protocols. As sequential features, we refer to those characteristics obtained from raw flow sequences. Differently from the works previously described in this section, we directly examine, for each packet, a specific sequence of bytes for tunneling detection by using artificial neural networks, which are simpler deep learning models.
## 3 Background
### DNS Tunneling
Protocol tunneling is an attack technique commonly used to maximize the time in which the infection remains undetected in a targeted network. In this context, the DNS protocol is usually abused in order to bypass security gateways and, then, to tunnel malware and other data through a client-server model [6]. Figure 1 depicts a typical DNS tunneling scenario: firstly, an attacker registers a malicious domain (e.g., attacker.com) on a C&C center managed by her; at that point, assuming that the attacker has already taken control over a machine inside the targeted network and violated its security perimeter, the infected computer sends a query to the malicious domain. Since DNS requests are typically allowed to move in and out of the network, the query through the DNS resolver reaches the attacker's C&C center, where the tunneling program is installed. This established tunnel can be used either to exfiltrate data and sensitive information or for other malicious purposes.
### Support Vector Machines
The original formulation of Support Vector Machines [27] (SVMs) addresses supervised tasks with the objective of finding a maximum margin hyperplane that separates two or more classes of observations. In recent years, one-class SVMs have also been shown to represent a suitable choice in the context of anomaly detection [28]. The one-class SVM is a boundary-based anomaly detection method, which modifies the original SVM approach by extending it to deal with unlabeled data. Like traditional SVMs, one-class SVMs can benefit from the so-called kernel trick when extended to non-linearly transformed spaces, by defining an appropriate scalar product in the feature space.
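As a concrete illustration of the boundary-based use of one-class SVMs just described, the following sketch fits a model on legitimate-only samples and flags outliers. It assumes scikit-learn and uses placeholder feature matrices; it is not the prototype's actual code.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Placeholder features: one row per packet of legitimate traffic.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(10, 8)),   # normal-looking
                    rng.normal(5.0, 1.0, size=(3, 8))])   # anomalous

# RBF kernel; nu upper-bounds the fraction of training outliers,
# mirroring the (kernel, gamma, nu) hyperparameters of Table 2.
clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_train)
print(clf.predict(X_test))  # +1 = inlier, -1 = outlier
```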
### Artificial Neural Networks
Artificial Neural Networks (ANNs) are deep learning models that have been successfully applied to a vast number of knowledge fields, ranging from computing science to the arts [29]. They are internally constituted by groups of multiple neurons, which can be thought of as mathematical functions that take one or more inputs. In ANNs, inputs are processed only in the forward direction: within each neuron they are multiplied by weights and summed up, then passed to an activation function to produce the neuron's output. In general, artificial neural networks consist of three different types of layers: input, hidden and output; the first layer accepts inputs, while the hidden layers process them to learn the optimum weights. Finally, the output layer produces the result.
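The feed-forward computation just described can be made concrete in a few lines of NumPy; the weights below are random placeholders, purely for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    # Within each neuron: weighted sum of inputs plus bias,
    # followed by an activation function, as described above.
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(16, 8)), rng.normal(size=16)),   # hidden layer
          (rng.normal(size=(4, 16)), rng.normal(size=4))]    # output layer
print(forward(rng.normal(size=8), layers))
```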
## 4 Protocol tunneling detector
The proposed architecture splits the burden of processing the traffic of a monitored network into two different sub-modules: the first mainly deals with secure connections, while the second inspects unencrypted traffic. As previously discussed, the former analytics has been detailed in [10]. At a glance, it detects possible anomalies occurring during a SSL/TLS handshake between a client, located inside the network monitored by the software platform outlined in Section 1, and an external server. The SSL/TLS detection analytics examines information contained in X.509, SSL, and TLS exchanged protocol messages. Instead, the second module looks for anomalies in unencrypted traffic, regarding the abuse of specific protocols (i.e., tunneling attack techniques). To provide these detection capabilities, this prototype collects a sequence of bytes from each network packet and inspects its content. The content, along with its features, is fed to a testing module, which detects possible anomalies that are signaled to security analysts.
### General approach
Figure 2 reports the general structure of the proposed anomaly detection methodology: for each packet observed in the network, the prototype collects a sequence of \(N\) bytes belonging to the highest network protocol used in the communication. As an example, in a secure connection which relies on HTTPS, the bytes returned by the extraction process are the ones related to HTTPS, and not to the other lower-layer protocols (e.g., TCP). From the obtained bitstream, we extract the following sequential features (i.e., those features obtained from raw flow sequences):
Figure 1: A DNS tunneling example
* binary representation of collected bytes
* bitstream entropy and \(p\)-values obtained from statistical tests for random and pseudo-random number generators for cryptographic applications [30]
* statistical properties of the bitstream hexadecimal representation
and we keep the protocol label associated to the bitstream itself. While the binary representation of the \(N\) bytes is meant to identify the protocol of each packet under analysis, the remaining sequential features allow us to understand whether the packet content is compressed or encrypted.
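As a minimal sketch (with illustrative function and variable names, not taken from the prototype), the binary-representation feature and the associated label could be derived from a packet as follows:

```python
def packet_features(payload: bytes, label: str, n: int = 52):
    """Return (binary feature vector, protocol label) for the first n bytes."""
    chunk = payload[:n]
    bits = [int(b) for byte in chunk for b in format(byte, "08b")]
    return bits, label

bits, label = packet_features(b"\x13\x37" * 32, "DNS")
print(len(bits), label)  # 416 bits for n = 52, the value used in Section 5
```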
After feature extraction, the raw dataset constituted by streams of bits and their corresponding labels is properly sanitized. Indeed, it is easy to lightly label the network packets belonging to a connection by simply looking either at the ports or at the connection metadata. However, this labeling may be prone to errors, since it does not take into account potential custom configurations of services (e.g., the SMB protocol operating on a port different from 445) or intentional misuse of specific protocols by attackers (as in the case of tunneling). Moreover, cleartext protocols may transfer packets containing compressed data, whose presence could compromise the identification of the correct network protocol. Hence, it is paramount to have a refined and clean dataset to let models perform at their best. During our experimental evaluations, we have found that the accuracy of the trained models, after refining the raw dataset, has significantly increased: by 7% for the ANN and by 20% for the compression/encryption detector. To achieve this performance boost, we have implemented an input sanitization module, shown in Figure 3. In this module, we combine unsupervised and supervised support vector machines (SVMs) to clean the raw dataset: first, for each network protocol, we train a one-class SVM both on cleartext and encrypted protocols, in order to filter out outliers from the raw dataset. As an example, in protocols like HTTP and SMB, requests and responses may contain either the content of (compressed) files or other types of information that are not strictly correlated with the specific protocol communication patterns. Thus, in order to exclude these outliers, we build one-class SVMs, one for each different protocol, whose hyperparameters are properly tuned on the raw labeled dataset. Trained models are then applied to identify outliers and remove them from the raw dataset. This refined dataset is then used to train a SVM by applying a one-vs-all classification for detecting packets which are either compressed or encrypted. This single classifier is applied to remove both compressed and encrypted packets from cleartext protocols. It is worth mentioning that, in proxied environments, encrypted packets may be present in connections labeled as HTTP:
Figure 3: Input sanitization module.
Figure 2: Protocol tunneling detector prototype overview.
indeed, in these scenarios, also secure communications pass through the proxy, even if these connections are erroneously labeled as HTTP.
This sanitized dataset is then split into training and validation sets to build two different models: (_i_) an artificial neural network (ANN) able to classify cleartext protocols (e.g., DNS) and (_ii_) a SVM acting as a compression/encryption detector for identifying, respectively, compressed and encrypted packets. As later shown in Section 5, after construction, the training set is considerably unbalanced towards secure protocols. For this reason, we apply the SMOTE data augmentation technique [31] to increase the samples of those protocols belonging to minority classes. During the test phase, the light labeling based on the connection's destination port is not taken into account and the resulting bitstreams are grouped by connection. Each packet is given as input to the trained ANN (whose training process is detailed in Section 4.3) and the analytics verifies both whether some packets in the connection have been classified with low confidence and whether more than one protocol is present. While in the latter case the co-presence of multiple protocols might signal a possible tunneling attack, when the ANN classifies packets with low confidence the connection could contain either compressed/encrypted packets or packets whose byte sequences differ from the ones usually observed in the network. To distinguish between these two cases, a more in-depth verification is carried out: if the connection is not entirely encrypted, meaning that it is not a secure communication, the prototype checks whether the packets signaled as anomalous (i.e., with low confidence) by the ANN are either encrypted or belong to another protocol. If either encryption or compression is detected, the anomaly is notified to security analysts. On the other hand, if the entire connection is encrypted, it is collected and stored in a database, periodically accessed in order to retrieve data and metadata about X.509, SSL, and TLS exchanged protocol messages to be analyzed by the analytics described in [10].
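The rebalancing step could look as follows with the imbalanced-learn implementation of SMOTE [31]; the class sizes are placeholders and the library choice is an assumption, since the paper does not name its SMOTE implementation.

```python
from collections import Counter

import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(2)
# Placeholder: 1000 secure-protocol samples vs. 60 cleartext ones.
X = rng.normal(size=(1060, 416))
y = np.array([0] * 1000 + [1] * 60)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # minority class oversampled to parity
```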
### Feature extraction
As discussed in Section 4.1, sequential features allow us to understand if the content of a network packet is either compressed or encrypted. We rely on a statistical package developed by the Information Technology Laboratory at the National Institute of Standards and Technology, containing a set of 15 tests that measure the randomness of a binary sequence [30]. These tests have been designed to provide a first step towards deciding whether or not a generated binary sequence can be used in cryptographic applications, namely whether the sequence appears to be randomly generated. In other words, each new bit of the sequence should be unpredictable. From a statistical point of view, each test verifies if the sequence under analysis is random. This null hypothesis is either rejected or accepted depending on whether the test statistic computed on the data exceeds a specific value - called the critical value - that typically lies far in the tails of a reference distribution. Reference distributions used in the NIST tests are the standard normal and the \(\chi^{2}\) distributions. Although the statistical package contains 15 tests, we use only 5 of them, because the length \(N\) of the binary sequences we test does not meet the corresponding input size recommendations in [30]. To each sequence we apply the following tests: frequency within a block, longest-run-of-ones in a block, serial test, approximate entropy and cumulative sums. In addition, in our experimental evaluations, we extract some statistical properties and compute the Shannon entropy [32] which, combined with the previously mentioned tests, have been shown to improve the overall accuracy of the classification. As statistical properties, the following features are extracted from the corresponding hexadecimal representation \(h\) of a bitstream of \(N\) bytes:
* number of different alphanumeric characters in \(h\) normalized over \(h\) length
* number of different letters in \(h\) normalized over \(h\) length
* longest consecutive sequence of the same character in \(h\) normalized over \(h\) length
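The Shannon entropy and the three hexadecimal statistics listed above can be computed directly from the byte sequence; a sketch (not the prototype's code):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def hex_stats(data: bytes) -> dict:
    h = data.hex()
    longest = cur = 1
    for a, b in zip(h, h[1:]):
        cur = cur + 1 if a == b else 1
        longest = max(longest, cur)
    return {
        # hex digits are all alphanumeric, so distinct chars = distinct alnum
        "distinct_alnum": len(set(h)) / len(h),
        "distinct_letters": len({c for c in h if c.isalpha()}) / len(h),
        "longest_run": longest / len(h),
    }

sample = bytes(range(52))
print(round(shannon_entropy(sample), 3), hex_stats(sample))
```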
### Input sanitization
For accurately training machine learning models, the training set should be as "clean" as possible. In Section 4.1, we have already discussed how labeling based on connection metadata could be error prone, either due to potential custom configurations of services, intentional misuse of specific protocols by attackers, or network protocols encapsulating compressed data. In addition, during our experimental evaluations, we have observed that in some cases the employed traffic analyzer can assign an empty label or multiple labels to a single network packet. While in the first case bitstreams with empty labels can be easily discarded for the training phase, in the presence of multiple labels it is possible to assign a unique correct label if, among the labels, there exists a protocol that is monitored by the prototype itself. As an example, if the assigned labels are NTLM, GSSAPI, SMB, and DCE_RPC, the resulting label is SMB. For these reasons, the very first step of the sanitization module is to correct the multi-labels associated to bitstreams and discard the empty ones. Then, we train an ensemble of one-class SVMs, one for each protocol (see Figure 3): each different classifier is properly tuned to filter out outliers from the raw dataset. As stated in Section 4.1, HTTP and SMB requests or responses may contain either the content of (compressed) files or other types of information that are not strictly correlated with the specific protocol communication patterns. Trained models are then applied to identify these kinds of network packets and remove them from the raw dataset. This preprocessed dataset is used to train a supervised support vector machine, called the compression/encryption detector, by applying a one-vs-all classification for detecting packets which are either compressed or encrypted. It is worth noting that all these models are still inaccurate because they are trained on a "dirty" dataset. Hence, to further increase the quality of the labels and obtain the final training set, the compression/encryption detector is fed with cleartext bitstreams to remove possible compressed/encrypted packets from cleartext protocols, as in the case of proxied environments. The result of this sanitization process is a dataset which allows to train and validate two accurate models: an artificial neural network for cleartext protocols and a SVM for compressed and encrypted traffic.
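The multi-label correction step can be sketched as follows; the set of monitored protocols is taken from Section 5 and the helper name is illustrative.

```python
MONITORED = {"DHCP", "DNS", "NTP", "HTTP", "SMB", "KRB"}

def resolve_label(labels):
    """Collapse a multi-label annotation to a single monitored protocol,
    or return None to discard the sample (e.g., empty labels)."""
    if not labels:
        return None                      # empty label: discard
    hits = [l for l in labels if l in MONITORED]
    return hits[0] if len(hits) == 1 else None

print(resolve_label(["NTLM", "GSSAPI", "SMB", "DCE_RPC"]))  # -> SMB
print(resolve_label([]))                                    # -> None
```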
### Anomaly detection
During the test phase (see Figure 2), bitstreams are analyzed by the trained ANN. In turn, the ANN flags three different cases as potential tunneling attacks and alerts security analysts when these cases occur: (_i_) the high confidence detection of more than one protocol in the same connection, (_ii_) the low confidence detection of one protocol for all the packets in the same connection, and (_iii_) the labeling, both with high and low confidence, of one or more protocols for the packets belonging to the same connection (as in the case of secure protocols over DNS). As later specified in Section 5, in the ANN, the high/low confidence threshold \(c\) can be dynamically set. In any case, the detection of encrypted packets in a cleartext connection generates alert notifications enriched with the information about the presence of encrypted protocol messages. Possibly, notified alerts can be filtered by whitelisting source and/or destination IPs to reduce the false positives caused by well-known machines.
Hence, if some packets of the connection are classified with low confidence, the corresponding bitstream's sequential features (refer to Section 4.2) are given as input to the compression/encryption detector. If all the packets contained in the connection are encrypted, then the connection and its corresponding metadata are given as input to the SSL/TLS analytics for further scrutiny [10]. On the contrary, if the connection contains some compressed/encrypted packets or none of them, depending on the protocol, the connection is considered anomalous. Indeed, it is worth noting that the combination of two different protocols is not in all cases a signal of an occurring attack: as already discussed, SMB and HTTP connections can contain protocol-specific messages along with compressed data; however, DNS messages interleaved with other protocols are highly suspicious.
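Putting the alert conditions of this section together, the per-connection decision logic could be organized as in the following simplified sketch; the helper names and the encryption check are placeholders for the ANN confidence scores and the compression/encryption detector.

```python
def classify_connection(packets, c=0.999999, is_encrypted=lambda p: False):
    """packets: list of (predicted_protocol, ANN_confidence) tuples."""
    protocols = {proto for proto, _ in packets}
    low_conf = [p for p in packets if p[1] < c]

    if len(protocols) > 1:
        return "alert: multiple protocols in one connection"
    if low_conf:
        if all(is_encrypted(p) for p in packets):
            return "forward to SSL/TLS analytics"  # handled as in [10]
        if any(is_encrypted(p) for p in low_conf):
            return "alert: encrypted packets in cleartext connection"
        return "alert: anomalous byte sequences"
    return "benign"

print(classify_connection([("DNS", 0.9999999), ("DNS", 0.42)]))
```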
## 5 Experimental evaluation
The proposed prototype and the experimental evaluations have been, respectively, implemented and performed in Python. The size \(N\) we have chosen for the byte sequences, extracted from network packets, is 52 bytes. More in detail, we retrieve the first 64 bytes of the payload of each TCP/UDP packet, from which we remove the first 12B: indeed, a preliminary evaluation has shown that these first bytes had a very low variance in their binary representation among different packets of the same protocol. The specific selection of the byte sequence to extract has improved the accuracy of the trained neural network, increasing its anomaly detection capabilities.
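The byte-selection rule just described (first 64 payload bytes, minus the low-variance first 12) amounts to a simple slice:

```python
def select_bytes(payload: bytes) -> bytes:
    # First 64B of the TCP/UDP payload, with the first 12B removed.
    return payload[12:64]  # 52 bytes -> the 416-bit ANN input

print(len(select_bytes(bytes(100))))  # 52
```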
For the experimental evaluation of the proposed prototype, we collected both benign and malicious datasets. The benign communication dataset contains a subset of legitimate traffic observed in a real corporate network during a period of about 2 days. From this initial dataset, we sample connections to start building the models' training sets and the dataset that will be used for testing. Figure 4 summarizes general statistics about the collected training set in terms of packets, before and after sanitization. Next to it, Table 1 reports how the test set of legitimate network traffic is characterized. The sanitization process balances the training set, which is initially heavily unbalanced towards encrypted protocols: indeed, after sanitization, the numbers of packets belonging to cleartext and secure protocols are almost even. It is worth noting that the balanced training set for the ANN, containing DHCP, DNS, NTP, HTTP and SMB packets, also comprises data belonging to the KRB network protocol (i.e., encrypted): our experimental evaluations have shown that, during the test phase, the neural network performs better when it is also trained with encrypted byte sequences. As the ANN, we use a Keras\({}^{1}\) sequential model with 3 hidden layers.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline _Model_ & _Kernel_ & \(\gamma\) & \(\nu\) & \(t\) & \(C\) \\ \hline \hline DHCP one-class SVM & RBF & 0.7 & 0.03 & 0.77 & \(-\) \\ \hline DNS one-class SVM & RBF & 0.7 & 0.03 & 0.77 & \(-\) \\ \hline NTP one-class SVM & RBF & 0.03 & 0.1 & 0.92 & \(-\) \\ \hline HTTP one-class SVM & RBF & 0.08 & 0.07 & 0.91 & \(-\) \\ \hline SMB one-class SVM & RBF & 0.06 & 0.08 & 0.77 & \(-\) \\ \hline KRB one-class SVM & RBF & 0.04 & 0.05 & 0.97 & \(-\) \\ \hline SFTP one-class SVM & RBF & 0.7 & 0.05 & 0.97 & \(-\) \\ \hline SSH one-class SVM & RBF & 0.7 & 0.05 & 0.97 & \(-\) \\ \hline SSL one-class SVM & RBF & 0.0001 & 0.0028 & 0.97 & \(-\) \\ \hline Compression/encryption detector & RBF & 0.01 & \(-\) & \(-\) & 100 \\ \hline \end{tabular}
\end{table}
Table 2: Support vector machine hyperparameter settings.
Figure 4: Packet distribution for each network protocol, before and after balancing.
\begin{table}
\begin{tabular}{|l|c|} \hline _Statistics_ & _Count (\%)_ \\ \hline \hline DNS packets & \(30,669\) (1.10\%) \\ \hline SMB packets & \(65,944\) (2.35\%) \\ \hline HTTP packets & \(262\) (0.01\%) \\ \hline NTP packets & \(46\) (0.002\%) \\ \hline DHCP packets & \(20\) (0.001\%) \\ \hline KRB packets & \(741\) (0.03\%) \\ \hline SFTP packets & \(69,158\) (2.46\%) \\ \hline No labeled packets & \(61,552\) (2.20\%) \\ \hline SSL packets & \(2,571,608\) (91.84\%) \\ \hline Distinct connections & \(51,459\) \\ \hline Distinct source machines & \(758\) \\ \hline Distinct dest. machines & \(1,566\) \\ \hline \end{tabular}
\end{table}
Table 1: Benign test set summary.
The input layer accepts 416 bits (i.e., 52B) and the output layer consists of 6 neurons, one for each cleartext protocol and KRB. Regarding SVMs, we rely on the open-source library scikit-learn2. For completeness, we report in Table 2 the hyperparameters we have used to train the different SVMs in the sanitization module and, in addition, the hyperparameters we obtained by tuning the compression/encryption detector in the validation phase. It is worth mentioning that the parameter \(t\), in Table 2, is used for each protocol one-class SVM as a threshold to filter only those outliers which have a Shannon entropy greater than \(t\). The intuition behind this filtering is that byte sequences having high entropy do not specifically belong to cleartext protocol communications and, thus, they have to be discarded from the training set.
Footnote 2: Scikit-learn library: [https://scikit-learn.org/stable/index.html](https://scikit-learn.org/stable/index.html)
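A Keras model matching the stated shape (416-bit input, 3 hidden layers, 6 output neurons) might be defined as below; the hidden-layer widths, activations, and training settings are assumptions, since the paper does not report them.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(416,)),                    # 52 bytes as bits
    keras.layers.Dense(256, activation="relu"),   # hidden layer 1 (assumed width)
    keras.layers.Dense(128, activation="relu"),   # hidden layer 2 (assumed width)
    keras.layers.Dense(64, activation="relu"),    # hidden layer 3 (assumed width)
    keras.layers.Dense(6, activation="softmax"),  # 5 cleartext protocols + KRB
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```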
On the other hand, malicious datasets are constituted by packet captures (PCAPs) shared by [33], [34], and [35]. The first dataset contains 3 different types of DNS tunnels generated in a controlled environment, whose sizes are approximately 750MB each. Tunneled data contains respectively SFTP, SSH, and Telnet malicious protocol messages. Each sample is made up of one single connection containing millions of DNS packets. It is reasonable to note that such connections would either easily stand out to security analysts or be simply detectable through well-known statistical approaches (e.g., outlier detection). Moreover, as stated in Section 4.1, our approach groups data by connection and, therefore, a single malicious packet is enough to flag the entire connection as anomalous. For the above reasons, we have decided to split each sample into \(n\) different connections, composed of approximately \(5,000\) DNS packets each. The size of the split, reported in Table 3, has been chosen according to the size of the connections monitored in the controlled environment. The second malicious dataset, instead, originated from the collaboration between the Bell Canada company's Cyber Threat Intelligence group and the Canadian Institute for Cybersecurity. In this dataset, we only take into account DNS packets that, in their payloads, contain exfiltrations of various types of files, and we discard legitimate traffic. Moreover, it is worth mentioning that all the packets contained in [34] have been truncated at capture time to 96B; this has required a slightly different approach to test these samples, which will be discussed later in this section. Finally, [35] is a single packet capture to test the detection and alerting capabilities of Packetbeat3, Elastic's network packet analyzer. Malicious packet captures have been injected into the network security platform in order to be processed and analyzed as ordinary traffic. Table 3 reports a summary of the assembled malicious datasets: for each PCAP, we list the number of packets in the capture and which of these packets have been successfully processed by the platform's network analyzer (i.e., those packets whose size is greater than or equal to 64B); in addition, Table 3 reports the number of connections in the PCAP and how many of them have been identified as protocol tunneling attacks (i.e., true positives \(TP\)). Finally, the true positive rate \(TPR\) of the proposed detector is reported for each packet capture. Analogously, Table 4 reports the same information contained in Table 3, but with reference to the test set described in Table 1. Being legitimate traffic, the last two columns report the connections mistakenly
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{2}{*}{_Tunnel type_} & _No. of PCAP_ & _No. of processed_ & \multirow{2}{*}{_No. of connections_} & \multirow{2}{*}{_TP_} & \multirow{2}{*}{_TPR_ (\%)} \\ & _packets_ & _PCAP packets_ & & & \\ \hline \hline Telnet over DNS tunnel [33] & 2.4M & 2.2M & 457 & 457 & 100\% \\ \hline SFTP over DNS tunnel [33] & 2M & 1M & 209 & 209 & 100\% \\ \hline SSH over DNS tunnel [33] & 2.8M & 2.7M & 545 & 545 & 100\% \\ \hline \hline Light file exfiltration [34] & \(187,500\) & \(102,000\) & \(7,617\) & \(7,361\) & \(96.6\%\) \\ \hline Heavy file exfiltration [34] & 1.34M & \(765,000\) & \(43,964\) & \(42,441\) & \(96.5\%\) \\ \hline \hline Data exfiltration over Iodine & \multirow{2}{*}{438} & \multirow{2}{*}{247} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{100\%} \\ DNS Tunnel [35] & & & & & \\ \hline \end{tabular} \({}^{1}\) Keras library: [https://keras.io/](https://keras.io/)
\({}^{2}\) Scikit-learn library: [https://scikit-learn.org/stable/index.html](https://scikit-learn.org/stable/index.html)
\({}^{3}\) Elastic Packetbeat: [https://www.elastic.co/beats/packetbeat](https://www.elastic.co/beats/packetbeat)
\end{table}
Table 3: Malicious test set summary.
classified as tunnels (i.e., false positives \(FP\)) and the false positive rate \(FPR\). The results of the evaluation, reported in Tables 3 and 4, show a false positive rate and a true positive rate equal to 5.8% and 96.6%, respectively. The overall accuracy of the proposed prototype is 97.1%, while the resulting F1-score is 95.6%.
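For reference, the reported rates follow from the standard confusion-matrix definitions; how exactly the per-capture counts of Tables 3 and 4 are aggregated into the overall accuracy is not spelled out, so the call below uses illustrative numbers.

```python
def rates(tp, fp, tn, fn):
    return {
        "TPR": tp / (tp + fn),                       # true positive rate
        "FPR": fp / (fp + tn),                       # false positive rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "F1": 2 * tp / (2 * tp + fp + fn),
    }

print(rates(tp=966, fp=58, tn=942, fn=34))  # illustrative counts only
```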
We conclude this section by discussing how we slightly modified the proposed approach, used in the other datasets, to be compliant with [34]. Indeed, the DNS packets contained in this dataset have been truncated during traffic acquisition, resulting in byte sequences that do not have the same length. In order to solve this dataset generation problem, we reduced all the DNS packets to a common length of 44B, discarding the shorter byte sequences and trimming the longer ones. The result of the filtering operation is clearly shown in Table 3, where the number of processed PCAP packets is more than 54% less than the ones received in input by the traffic analyzer.
Since the bitstream lengths differ from those of the datasets [33] and [35], we have retrained our ANN to be fed with 44B sequences. On the contrary, for this evaluation we have maintained the same hyperparameters for the different SVMs, reported in Table 2, and the same threshold \(c\) used in the other experiments. In particular, for all our experimental evaluations, we set \(c\) to 0.999999 in order to maximize the algorithm sensitivity and to compensate for the lesser information provided by the processing of [34]. This explains why, in the experimental evaluations, we were not able to achieve a very low false positive rate, as shown in Table 4. However, in contexts where a high number of false positives could be detrimental, \(c\) can be tuned to obtain a 0.5% false positive rate or less without losing accuracy on protocol tunneling attacks.
## 6 Conclusion
In this paper, we proposed a software prototype for detecting protocol tunneling attacks in a monitored network. Relying on a combination of machine learning and deep learning techniques, the proposed solution identifies anomalous connections that deviate from the ones usually established in the network. Since machine learning models are built based only on legitimate traffic, the proposed solution is therefore able to deal with zero-day attacks, because malicious traffic is not required for the learning phase. The prototype has been evaluated both on malicious and benign datasets: results show a very high accuracy in detecting malicious samples and a low false positive rate on legitimate traffic.
As future work, we plan to optimize the algorithm through a deeper analysis of how the choice of the bytestream length affects the computational time, in order to find a value which guarantees the best trade-off between efficiency and accuracy. Indeed, in this work, we mainly focused on accuracy. Secondly, we envision that the engineered prototype will be integrated into a streaming architecture, where new data are analyzed by the proposed prototype as soon as they are collected to provide the fastest possible response. In parallel, the models of the protocol tunneling detector are periodically retrained to keep them up to date with possible deviations from the usual behaviour of the monitored network. Finally, in Section 4.4 we outlined the usage of an IP whitelisting filter. Once in production, the prototype can be easily extended with other security-analyst-defined whitelists as, for example, domain or autonomous system whitelists. This will allow the analysts to apply domain-specific knowledge of the monitored network to the protocol tunneling detector, further reducing potential false positives and improving overall performance.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{_Dataset_} & _No. of PCAP_ & _No. of processed_ & \multirow{2}{*}{_No. of connections_} & \multirow{2}{*}{\(FP\)} & \multirow{2}{*}{_FPR_ (\%)} \\ & _packets_ & _PCAP packets_ & & & \\ \hline \hline Legitimate traffic & 5.4M & 2.8M & 51,459 & 2,966 & 5.8\% \\ \hline \end{tabular}
\end{table}
Table 4: Benign test set summary. | 最近の最後年の間に、サイバーセキュリティ攻撃は、前例のないスピードで増加し、その技術的 sophistication が高まってコストも大きくなっています。その影響は、民間企業と公的企業、そして重要なインフラに及んでいます。同時に、COVID-19パンデミックにより、多くの組織のセキュリティPerimeterは拡大し、マルウェアとフィッシング攻撃によって攻撃表面が拡大しているのです。これらの要因を踏まえ、セキュリティPerimeterの監視と監視されたネットワークにおける発生するイベントの監視が、検出と対応のためのテスト済みセキュリティ戦略に基づいています。この論文では、テスト済みセキュリティ戦略に基づいて、ネットワークトラフィックをリアルタイムで検査するプロトコルtunneling検出器プロトタイプを提案します。Tunneling攻撃は、悪意のある攻撃者が活動が検出されない期間を最大限に利用できるという特性を持っています。この検出器は、未暗号化のネットワークフローを監視 |
2309.11201 | Noise-induced transition from superfluid to vortex state in
two-dimensional nonequilibrium polariton condensates -- semi-analytical
treatment | We develop a semi-analytical description for the
Berezinskii-Kosterlitz-Thouless (BKT) like phase transition in nonequilibrium
Bose-Einstein condensates. Our theoretical analysis is based on a noisy
generalized Gross-Pitaevskii equation. Above a critical strength of the noise,
spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical
determination of the transition point based on a linearized Bogoliubov
analysis, to which some nonlinear corrections are added. We present two
different approaches that are in agreement with our numerical calculations in a
wide range of system parameters. We find that for small losses and not too
small energy relaxation, the critical point approaches that of the equilibrium
BKT transition. Furthermore, we find that losses tend to stabilize the ordered
phase: keeping the other parameters constant and increasing the losses leads to
a higher critical noise strength for the spontaneous generation of
vortex-antivortex pairs. Our theoretical analysis is relevant for experiments
on microcavity polaritons. | Vladimir N. Gladilin, Michiel Wouters | 2023-09-20T10:38:03 | http://arxiv.org/abs/2309.11201v1 | Noise-induced transition from superfluid to vortex state in two-dimensional nonequilibrium polariton condensates - semi-analytical treatment
###### Abstract
We develop a semi-analytical description for the Berezinskii-Kosterlitz-Thouless (BKT) like phase transition in nonequilibrium Bose-Einstein condensates. Our theoretical analysis is based on a noisy generalized Gross-Pitaevskii equation. Above a critical strength of the noise, spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical determination of the transition point based on a linearized Bogoliubov analysis, to which some nonlinear corrections are added. We present two different approaches that are in agreement with our numerical calculations in a wide range of system parameters. We find that for small losses and not too small energy relaxation, the critical point approaches that of the equilibrium BKT transition. Furthermore, we find that losses tend to stabilize the ordered phase: keeping the other parameters constant and increasing the losses leads to a higher critical noise strength for the spontaneous generation of vortex-antivortex pairs. Our theoretical analysis is relevant for experiments on microcavity polaritons.
## I Introduction
The interest in nonequilibrium phase transitions of quantum many body systems has witnessed a rapid growth over the last decade thanks to the developments in Bose-Einstein condensation in optical systems (micro-cavity polaritons and photons in dye filled cavities) [1], circuit QED [2] and ultracold atomic gases [3]. One of the most elementary phase transitions in these systems is the onset of Bose-Einstein condensation, defined as the emergence of spontaneous long range phase coherence. Whereas at thermal equilibrium long range phase coherence appears when the temperature is lowered below a density-dependent critical temperature, in nonequilibrium systems the phase coherence is determined by the interplay between the hamiltonian and dissipative parts of the dynamics or even between competing dissipative mechanisms [4; 5].
Since quantum fluids of light are only available in one or two dimensions, true long range order is actually absent. In one-dimensional bose gases, both at thermal equilibrium and out of equilibrium, the spatial decay of the first order coherence function is always exponential [6; 7]. In two dimensions and at equilibrium there is the celebrated Berezinskii-Kosterlitz-Thouless phase transition [8; 9] that separates the normal and the superfluid state, with exponential and algebraic decay of the spatial coherence respectively. In equilibrium, the phase dynamics is in the XY universality class and the corresponding universal jump in the superfluid stiffness has been experimentally observed in \({}^{4}\)He [10]. More recently, the flexibility of the platform of ultracold atoms allowed a direct observation of the spontaneous formation of vortex-antivortex pairs above the BKT transition [11]. The ultracold atomic gases are in the weakly interacting regime, for which the transition temperature was computed by Prokof'ev and Svistunov by a clever combination of the linear Bogoliubov approximation and numerical Monte Carlo simulations [12].
For photonic systems out of equilibrium, the phase dynamics is actually in the Kardar-Parisi-Zhang universality class where a nonlinear term in the phase evolution is essential [13; 14]. For one-dimensional polariton systems, the spatial decay of the correlations remains qualitatively unaffected by the nonlinearity in the phase dynamics [15], but a specific spatiotemporal scaling emerges, that was recently observed experimentally [16].
In two dimensions, the KPZ phase dynamics was predicted to make long range phase coherence impossible in isotropic systems [13; 17]. Numerical studies on the other hand have shown a transition toward a state with algebraic decay of the coherence [18] and an associated disappearance of vortex-antivortex pairs [18; 19; 20; 21], without the formation of topological defects even when the spatiotemporal correlations feature KPZ scaling [22; 23]. Since computational resources limit the system sizes for numerical studies, the discrepancy with the renormalisation group studies could be due to finite size effects, but at present it does not seem that the issue is fully settled. Even when the numerically observed BKT transition is due to a limited system size, experimentally available systems necessarily also work with relatively small sizes, so that there is a clear interest in the nonequilibrium BKT transition. Compared to the equilibrium case, the current understanding of the dependence of the BKT critical point on the system parameters is much less mature. The reason here is twofold. First, out of equilibrium the standard Boltzmann-Gibbs ensemble can no longer be used and the steady state has to be characterized by a more involved simulation of the system dynamics. Second, the nonequilibrium dynamics is governed by more parameters: in addition to the system Hamiltonian and environment temperature, also the details of the coupling to the environment come into play in the non-equilibrium situation.
In our previous work on photon condensation [24], we have pinpointed the nonequilibrium BKT critical point with numerical simulations and developed a semi-analytical approach in order to get a better understanding of the location of the critical point. In our numerical simulations, the transition was approached from the ordered side with no vortices present in the initial state. Above a critical value of the noise strength in the stochastic classical field description of the dynamics, vortex-antivortex pairs spontaneously appear, signalling the BKT like transition to the disordered state. Our work involved both numerical simulations and analytical approximations that capture the dependences of the transition point on all the system parameters. The analytical approximation for photon condensates was based on the Bogoliubov approximation, combined with an infrared cutoff set by the inverse vortex core size [25]. In our previous study on the BKT transition for (interacting) polaritons [20], no such analytical estimate was given.
In the present article, we wish to fill this gap. Moreover, we extend our previous results to the regime of vanishing interactions, so that we can elucidate the effect of both the nonequilibrium condition and of interactions on the BKT transition point. When the interactions become small compared to the gain saturation nonlinearity, the vortex core size can significantly deviate from the usual healing length defined as \(\xi=\hbar/\sqrt{mg\bar{n}}\), where \(m\) is the mass, \(g\) the interaction constant and \(\bar{n}\) the density of polaritons in the condensate. The vortex core size appears in our treatment as a good proxy for the inverse of the infrared cutoff that we have to introduce to avoid the divergence of a momentum integral. We therefore carried out a systematic analysis of the vortex size and structure as a function of the strength of the interactions and of the driving and dissipation.
The structure of this paper is as follows. In Sec. II, we introduce our model for polariton condensates and derive the density and phase fluctuations within the linear (Bogoliubov) approximation. In Sec. III, we construct some approximate formulae for the BKT critical point with a few fitting parameters that are able to capture our numerical simulations. We start with a simple approach that is able to capture the main dependencies of the critical point on the system parameters and then present a more refined approach that allows for a very good fitting of the numerical results. Conclusions are drawn in Sec. IV and the vortex structure is discussed in appendix A.
## II Model and linearization
We consider nonresonantly excited two-dimensional polariton condensates. In the case of sufficiently fast relaxation in the exciton reservoir, this reservoir can be adiabatically eliminated and the condensate is described by the noisy generalized Gross-Pitaevskii equation [26; 27; 28; 29]
\[(\mathrm{i}-\kappa)\hbar\frac{\partial\psi}{\partial t}= \left[-\frac{\hbar^{2}\nabla^{2}}{2m}+g|\psi|^{2}\right. \tag{1}\] \[\left.+\frac{\mathrm{i}}{2}\left(\frac{P}{1+|\psi|^{2}/n_{s}}- \gamma\right)\right]\psi+\sqrt{D}\xi.\]
Here \(m\) is the effective mass and the contact interaction between polaritons is characterized by the strength \(g\). The imaginary term in the square brackets on the right hand side describes the saturable pumping (with strength \(P\) and saturation density \(n_{s}\)) that compensates for the losses (\(\gamma\)). We take into account the energy relaxation \(\kappa\) in the condensate [30]. The complex stochastic increments have the correlation function \(\langle\xi^{*}(\mathbf{r},t)\xi(\mathbf{r}^{\prime},t^{\prime})\rangle=2\delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(t-t^{\prime})\). Eq. (1) is a classical stochastic field model that describes all the fluctuations in the system as classical. This model is therefore only valid in the weakly interacting regime \(gm/\hbar^{2}\ll 1\), where quantum fluctuations are small.
For \(\kappa=0\), the zero momentum steady state of Eq. (1) is under homogeneous pumping \(\psi_{0}(\mathbf{x},t)=\sqrt{n_{0}}e^{-ign_{0}t}\), with \(n_{0}=n_{s}(P/\gamma-1)\). By expressing the particle density \(|\psi|^{2}\) in units of \(n_{0}\), dividing time by \(\hbar(1+\kappa^{2})/n_{0}\), length by \(\hbar/\sqrt{2mn_{0}}\), and noise intensity by \(\hbar^{3}n_{0}/(2m)\), Eq. (1) takes the form:
\[\frac{\partial\psi}{\partial t}= (i+\kappa)\left[\nabla^{2}-g|\psi|^{2}-\frac{i\gamma}{2n_{s}} \frac{1-|\psi|^{2}}{1+\nu|\psi|^{2}}\right]\psi\] \[+\sqrt{D}\xi, \tag{2}\]
where \(\nu=n_{0}/n_{s}\). The steady state density is then in the absence of noise given by [20]
\[\bar{n}=\sqrt{\left(\frac{\kappa+c}{2\kappa\nu}\right)^{2}+\frac{c}{\kappa\nu }}-\left(\frac{\kappa+c}{2\kappa\nu}\right) \tag{3}\]
with \(c\equiv\gamma/(2gn_{s})\).
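To make the above concrete, the following minimal sketch (Python with NumPy; not the code used for the paper) evaluates the steady-state density of Eq. (3) and performs one explicit Euler-Maruyama step of the dimensionless Eq. (2) on a periodic square grid. The grid spacing `h`, the time step and all parameter values are illustrative placeholders.

```python
import numpy as np

def steady_state_density(kappa, c, nu):
    """Noise-free steady-state density of Eq. (3), with c = gamma/(2*g*n_s)."""
    a = (kappa + c) / (2.0 * kappa * nu)
    return np.sqrt(a**2 + c / (kappa * nu)) - a

def gpe_step(psi, dt, h, kappa, g, gamma, ns, nu, D, rng):
    """One explicit Euler-Maruyama step of the dimensionless Eq. (2)
    on a square grid with spacing h and periodic boundary conditions."""
    lap = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
           np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4.0 * psi) / h**2
    dens = np.abs(psi)**2
    rhs = (1j + kappa) * (lap - g * dens * psi
                          - 0.5j * (gamma / ns) * (1.0 - dens) / (1.0 + nu * dens) * psi)
    # complex white noise discretized so that <xi* xi'> = 2 delta(r-r') delta(t-t')
    xi = rng.standard_normal(psi.shape) + 1j * rng.standard_normal(psi.shape)
    return psi + rhs * dt + np.sqrt(D * dt) / h * xi

rng = np.random.default_rng(0)
psi = np.sqrt(steady_state_density(kappa=0.1, c=0.5, nu=1.0)) * np.ones((64, 64), complex)
psi = gpe_step(psi, dt=5e-3, h=0.2, kappa=0.1, g=1.0, gamma=1.0, ns=1.0, nu=1.0, D=0.05, rng=rng)
```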
In order to gain some insight in the physics of the fluctuations induced by the noise in Eq. (2), one can consider in first approximation the linearized equations for the density and phase fluctuations around the steady state:
\[\psi(\mathbf{x},t)=\sqrt{\bar{n}+\delta n(\mathbf{x},t)}e^{-ig\bar{n}t+i \delta\theta(\mathbf{x},t)} \tag{4}\]
After a spatial Fourier transform, these obey the linearized equations of motion
\[\frac{\partial}{\partial t}\delta\theta_{\mathbf{k}} =-\kappa\epsilon_{\mathbf{k}}\delta\theta_{\mathbf{k}}-\frac{ \epsilon_{\mathbf{k}}}{2\bar{n}}\delta n_{\mathbf{k}}-(g-\kappa\tilde{\gamma} )\delta n_{\mathbf{k}}\] \[+\sqrt{\frac{D}{\bar{n}}}\xi_{\mathbf{k}}^{(\theta)}, \tag{5}\]
\[\frac{1}{\bar{n}}\frac{\partial}{\partial t}\delta n_{\mathbf{k}} =-\kappa\epsilon_{\mathbf{k}}\frac{\delta n_{\mathbf{k}}}{\bar{n }}+2\epsilon_{\mathbf{k}}\delta\theta_{\mathbf{k}}-2(\kappa g+\tilde{\gamma} )\delta n_{\mathbf{k}}\] \[+2\sqrt{\frac{D}{\bar{n}}}\xi_{\mathbf{k}}^{(n)}, \tag{6}\]
where
\[\tilde{\gamma}=\frac{\gamma(1+\nu)}{2n_{s}(1+\nu\bar{n})^{2}}. \tag{7}\]
Using the Ito formula [31], one can obtain from Eqs. (5) and (6) a set of three equations:
\[\frac{D}{\bar{n}\epsilon_{\mathbf{k}}} =2\kappa\left\langle\left|\delta\theta_{\mathbf{k}}\right|^{2} \right\rangle+\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}} }{\bar{n}}\right\rangle\] \[+\frac{2(g-\kappa\tilde{\gamma})\bar{n}}{\epsilon_{\mathbf{k}}} \left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}}{\bar{n}} \right\rangle, \tag{8}\]
\[\frac{D}{\bar{n}\epsilon_{\mathbf{k}}} =\left[\frac{\kappa}{2}+\frac{(\kappa g+\tilde{\gamma})\bar{n}}{ \epsilon_{\mathbf{k}}}\right]\left\langle\left|\frac{\delta n_{\mathbf{k}}}{ \bar{n}}\right|^{2}\right\rangle\] \[-\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{ k}}}{\bar{n}}\right\rangle, \tag{9}\]
\[\left[\epsilon_{\mathbf{k}}+2(g-\kappa\tilde{\gamma})\bar{n} \right]\left\langle\left|\frac{\delta n_{\mathbf{k}}}{\bar{n}}\right|^{2} \right\rangle=4\epsilon_{\mathbf{k}}\left\langle\left|\delta\theta_{\mathbf{ k}}\right|^{2}\right\rangle\] \[-4\left[\kappa\epsilon_{\mathbf{k}}+(\kappa g+\tilde{\gamma}) \bar{n}\right]\left\langle\frac{\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k} }}{\bar{n}}\right\rangle, \tag{10}\]
where
\[\epsilon_{\mathbf{k}}=k^{2}. \tag{11}\]
Eqs. (8)-(10) can be solved for the density and phase fluctuations and are accurate when they are small. Close to the BKT transition, this condition however breaks down. In the following, we will outline how these equations can still be used in order to obtain an estimate for the critical point, in analogy with our study of the BKT transition in photon condensates [24].
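Since for fixed \(\epsilon_{\mathbf{k}}\) Eqs. (8)-(10) are linear in the three correlators, they can be solved directly; a minimal sketch (Python/NumPy, with `gt` standing for \(\tilde{\gamma}\)):

```python
import numpy as np

def bogoliubov_correlators(eps, nbar, g, gt, kappa, D):
    """Solve the linear system Eqs. (8)-(10) at energy eps = k**2 for
    x = (<|dtheta_k|^2>, <dtheta_{-k} dn_k>/nbar, <|dn_k/nbar|^2>)."""
    b = D / (nbar * eps)
    A = np.array([
        # Eq. (8)
        [2.0 * kappa, 1.0 + 2.0 * (g - kappa * gt) * nbar / eps, 0.0],
        # Eq. (9)
        [0.0, -1.0, 0.5 * kappa + (kappa * g + gt) * nbar / eps],
        # Eq. (10), rearranged so that the right-hand side vanishes
        [4.0 * eps, -4.0 * (kappa * eps + (kappa * g + gt) * nbar),
         -(eps + 2.0 * (g - kappa * gt) * nbar)],
    ])
    return np.linalg.solve(A, np.array([b, b, 0.0]))
```

The first component of the returned vector can be checked against the closed form \(Df(\epsilon_{\mathbf{k}})/\bar{n}\) of Eqs. (19) and (20) derived below.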
## III Approximations for the BKT critical point
### Heuristic estimate of density-phase correlator
In order to obtain our estimate of the critical point, we start by integrating Eq. (8) over all momenta. In the right hand side, we then use that for a homogeneous system
\[\int d^{2}\mathbf{k}\langle\left|\delta\theta_{\mathbf{k}}\right| ^{2}\rangle =\left\langle\delta\theta(\mathbf{x})\,\delta\theta(\mathbf{x}) \right\rangle\equiv\left\langle\delta\theta^{2}\right\rangle \tag{12}\] \[\int d^{2}\mathbf{k}\langle\delta\theta_{-\mathbf{k}}\delta n_{ \mathbf{k}}\rangle =\left\langle\delta\theta(\mathbf{x})\,\delta n(\mathbf{x})\right\rangle \equiv\left\langle\delta\theta\delta n\right\rangle \tag{13}\]
When integrating the left-hand side of Eq. (8) over \(\mathbf{k}\), we assume the presence of a finite UV momentum (energy) cutoff \(k_{+}\) (\(\epsilon_{+}=k_{+}^{2}\)). Our numerical simulations are performed for a lattice with grid size \(h\), for which our UV cutoff equals \(k_{+}=\pi/h\) [i.e., \(\epsilon_{+}=(\pi/h)^{2}\)]. Furthermore, one has to take into account that for the systems, described by nonlinear equations similar to Eq. (2), the use of the linear approximation given by Eq. (11) is physically meaningful [24; 12] only for \(k\) above a certain IR momentum (energy) cutoff \(k_{-}\) (\(\epsilon_{-}=k_{-}^{2}\)). Then the integral of the left-hand side of Eq. (8) can be represented as \(D[C_{1}+\ln(\epsilon_{+}/\epsilon_{-})]/(4\pi\bar{n})\), where the fitting constant \(C_{1}\) approximates the contribution of momenta smaller than \(k_{-}\).
Physically, the correlator \(\left\langle\delta\theta\delta n\right\rangle\) expresses correlations between the density and current fluctuations (since the velocity is the spatial derivative of the phase). In nonequilibrium condensates, density and velocity fluctuations are correlated because of the particle balance equation: a local suppression of the density leads to a local reduction of particle losses, which is compensated by an outward flow of particles. In the context of the BKT transition, this physics plays an important role, because the density in a vortex core is reduced so that vortices are accompanied by outgoing radial currents. The magnitude of the density-phase correlator was estimated in Ref. [24] for nonequilibrium photon condensates. Following this approach, for the system under consideration here, we obtain
\[\left\langle\delta\theta\,\delta n\right\rangle=\frac{\tilde{\gamma}}{\bar{n} }\langle\delta N^{2}\rangle, \tag{14}\]
where \(\delta N=\int_{0}^{x}\delta n(x^{\prime})dx^{\prime}\). In the case of a plane density wave \(n=\bar{n}(1-a\cos kx)\) one has
\[\left\langle\delta N^{2}\right\rangle=\frac{a^{2}\bar{n}^{2}}{2k^{2}}. \tag{15}\]
At the BKT transition, vortices have to nucleate, which requires in a continuum model strong density fluctuations with amplitude \(\bar{n}\) (i.e. \(a=1\)) [24]. Those strong fluctuations have appreciable probability only for relatively large momenta \(k\sim k_{+}\) as seen from the fact that the best fitting in Ref. [24] corresponds to the effective momentum value \(k\approx 0.3k_{+}\) in Eq. (15). Therefore, we approximate the correlator \(\left\langle\delta\theta\delta n\right\rangle\) by \(C_{2}\bar{n}\tilde{\gamma}/\epsilon_{+}\), where \(C_{2}\sim 1\) is a fitting parameter.
Analogously, the Fourier transform of \(\left\langle\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}\right\rangle/ \epsilon_{\mathbf{k}}\) in the last term of Eq. (8) is approximated by \(C_{3}\bar{n}\tilde{\gamma}/\epsilon_{+}^{2}\) with a fitting constant \(C_{3}\). As a result, we obtain the following approximate expression for the critical noise
\[d_{\mathrm{BKT}} =\left\{2\kappa\langle\delta\theta^{2}\rangle_{\mathrm{BKT}}+ \left[C_{2}+\frac{2C_{3}(g-\kappa\tilde{\gamma})}{\epsilon_{+}}\right]\frac{ \tilde{\gamma}}{\epsilon_{+}}\right\}\] \[\times\frac{4\pi}{C_{1}+\ln(\epsilon_{+}/\epsilon_{-})}, \tag{16}\]
where \(d_{\mathrm{BKT}}\equiv\left.(D/\bar{n})\right|_{\mathrm{BKT}}\).
In line with Refs. [24; 12], we will assume that at the transition \(\left\langle\delta\theta^{2}\right\rangle_{\mathrm{BKT}}=1/2\). In the equilibrium case (and at \(\kappa^{2}\ll 1\)) the IR momentum cutoff is inversely proportional to the healing length, so that the corresponding energy cutoff is \(\sim g\bar{n}\). Since the healing length corresponds at equilibrium to the vortex core size, a natural generalization to the nonequilibrium situation is to take a cutoff based on an estimate of the vortex core size. Our estimation of the vortex core size, detailed in appendix A, leads to
\[\epsilon_{-}=\bar{n}\left[g+B_{0}\tilde{\gamma}\left(\frac{B_{0}\tilde{\gamma}}{g+B _{0}\tilde{\gamma}}\right)^{3}\right], \tag{17}\]
where \(B_{0}=0.524\). The average density \(\bar{n}\) in Eq. (17) will be approximated by its steady-state value in the absence of noise (3).
The results of fitting the numerical data for \(d_{\rm BKT}\) with Eq. (16) are represented by the dashed lines in Figs. 1 and 2, where the determined fitting parameters are \(C_{1}=8.87\), \(C_{2}=1.64\), and \(C_{3}=5.92\times 10^{-5}\). The small numerical value of \(C_{3}\) implies it can actually be set to zero without affecting the quality of the fits. The numerical data in Figs. 1(a) and 2(a) and the main panels in Figs. 1(b) and 2(b) are taken from Ref. [20]. To numerically solve Eq. (2), a finite-difference scheme was used. Specifically, we use periodic boundary conditions for a square of size \(L_{x}=L_{y}=40\) with grid step equal to \(0.2\). The location of the critical point is determined in the following way: after a long time evolution in the presence of noise, the system was evolved without noise for a short time (a few of our units of time) before checking for the presence of vortices. This noiseless evolution gives the advantage of cleaning up the density and phase fluctuations while it is too short for the unbound vortex-antivortex pairs to recombine. The propensity for their recombination is reduced [20] with respect to the equilibrium case thanks to outgoing radial currents that provide an effective repulsion between vortices and antivortices. To determine the critical noise for the BKT transition, \(D_{\rm BKT}\), we use the following criterion. If for a noise intensity \(D\) unbound vortex pairs are present after a noise exposure time \(t_{D}\) (and hence \(D>D_{\rm BKT}\)), while for a certain noise intensity \(D^{\prime}<D\) no vortex pairs appear even at noise exposures a few times longer than \(t_{D}\), then \(D^{\prime}\) lies either below \(D_{\rm BKT}\) or above \(D_{\rm BKT}\) and closer to \(D_{\rm BKT}\) than to \(D\). Therefore, the critical noise intensity can be estimated as \(D_{\rm BKT}=D^{\prime}\pm(D-D^{\prime})\).
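Putting Eqs. (3), (7), (16) and (17) together with the fitted constants quoted above, the heuristic estimate can be evaluated as in the following sketch (Python/NumPy; not the fitting code itself):

```python
import numpy as np

B0 = 0.524                         # constant entering Eq. (17)
C1, C2, C3 = 8.87, 1.64, 5.92e-5   # fitted constants quoted above

def gamma_tilde(gamma, ns, nu, nbar):
    """Effective damping of Eq. (7)."""
    return gamma * (1.0 + nu) / (2.0 * ns * (1.0 + nu * nbar)**2)

def eps_minus(g, gt, nbar):
    """IR cutoff set by the vortex core size, Eq. (17)."""
    return nbar * (g + B0 * gt * (B0 * gt / (g + B0 * gt))**3)

def d_bkt_heuristic(kappa, g, gamma, ns, nu, h, nbar):
    """Heuristic critical noise of Eq. (16), with <dtheta^2>_BKT = 1/2."""
    gt = gamma_tilde(gamma, ns, nu, nbar)
    ep = (np.pi / h)**2                     # UV cutoff for grid step h
    em = eps_minus(g, gt, nbar)
    bracket = 2.0 * kappa * 0.5 + (C2 + 2.0 * C3 * (g - kappa * gt) / ep) * gt / ep
    return bracket * 4.0 * np.pi / (C1 + np.log(ep / em))
```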
As seen from the comparison between the dashed lines and the symbols in Figs. 1 and 2, Eq. (16) qualitatively reproduces the main trends in the behavior of the numerically determined \(d_{\rm BKT}(c,\kappa,\nu,h)\) at relatively small grid steps \(h\), when \(\epsilon_{+}\) is considerably larger than \(\epsilon_{-}\). This qualitative agreement is ensured, in particular, by taking into account the contributions related to density-phase correlation, which are zero in equilibrium systems but play a crucial role for the BKT transition out of equilibrium. At the same time, this simple and transparent heuristic estimate of these contributions does not appear sufficient for a good quantitative description of the numerical results.
### Bogoliubov theory with nonlinear correction
Figure 1: Numerically (symbols) and semi-analytically (lines) determined renormalized critical noise \(d_{\rm BKT}=D_{\rm BKT}/n_{\rm BKT}\) as a function of \(c=\gamma/(2n_{s}g)\) (a), \(\kappa\) (b), and \(\nu\) (c). The insets in panels (b) and (c) show the dependence of \(d_{\rm BKT}\) on \(\kappa\) and \(\nu\), respectively, in the case of \(g=0\). The solid and dashed lines correspond to Eqs. (26) and (16), respectively.

In order to obtain a better quantitative description of the numerics for the nonequilibrium BKT transition, we develop below a different approach that leads to a slightly more involved expression. To this purpose, we start from the linear approximation for the phase fluctuations in the steady state, obtained by solving Eqs. (8)-(10). Inserting \(D/\bar{n}\) from Eq. (8) and \(\left<\left|\delta n_{\mathbf{k}}/\bar{n}\right|^{2}\right>\) from Eq. (10) into Eq. (9), we obtain the relation
\[\left[\epsilon_{\mathbf{k}}+3g\bar{n}+2\left(g^{2}+\tilde{\gamma }^{2}\right)\frac{\bar{n}^{2}}{\epsilon_{\mathbf{k}}}\right]\left<\frac{ \delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}}{\bar{n}}\right>\] \[=2\tilde{\gamma}\bar{n}\left<\left|\delta\theta_{\mathbf{k}} \right|^{2}\right>. \tag{18}\]
Using Eq. (18), we express \(\left<\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}/\bar{n}\right>\) through \(\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>\) and insert the result into Eq. (8). For the phase fluctuations, this leads to the equation
\[\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>=\frac{D}{\bar{n}}f( \epsilon_{\mathbf{k}}), \tag{19}\]
where
\[f(\epsilon)=\frac{1}{2\kappa}\frac{\epsilon+3\bar{n}g+2\left(g^{2}+\tilde{ \gamma}^{2}\right)\bar{n}^{2}/\epsilon}{(\epsilon+\epsilon_{1})(\epsilon+ \epsilon_{2})}. \tag{20}\]
with
\[\epsilon_{1}=\bar{n}\left(g+\frac{\tilde{\gamma}}{\kappa}\right),\quad \epsilon_{2}=2\bar{n}g. \tag{21}\]
From Eqs. (19) and (20), one sees that the phase fluctuations are, as expected, proportional to the noise strength \(D\) and decrease as a function of the density \(\bar{n}\) and energy relaxation \(\kappa\). For what concerns their energy dependence, Eq. (20) shows a \(1/\epsilon\) behavior both at small and large energies. As a consequence, the Fourier transform of phase fluctuations, needed to obtain their real space correlations requires the introduction of an infrared cutoff \(\epsilon_{-}\), analogous to the treatment in Sec. III.1. As a result of Fourier transformation, the local phase variance becomes
\[\left<\delta\theta^{2}\right>=\frac{D}{4\pi\bar{n}}(F+F_{-}) \tag{22}\]
where
\[F= \int\limits_{\epsilon_{-}}^{\epsilon_{+}}f(\epsilon)d\epsilon= \frac{1}{2}\frac{g^{2}+\tilde{\gamma}^{2}}{g(\kappa g+\tilde{\gamma})}\ln \left(\frac{\epsilon_{+}}{\epsilon_{-}}\right)\] \[+\frac{\tilde{\gamma}}{\tilde{\gamma}+\kappa g}\left(\frac{1}{2 \kappa}+\frac{\kappa\tilde{\gamma}}{\tilde{\gamma}-\kappa g}\right)\ln\left( \frac{\epsilon_{+}+\epsilon_{1}}{\epsilon_{-}+\epsilon_{1}}\right)\] \[-\frac{\tilde{\gamma}^{2}}{2g(\tilde{\gamma}-\kappa g)}\ln\left( \frac{\epsilon_{+}+\epsilon_{2}}{\epsilon_{-}+\epsilon_{2}}\right), \tag{23}\]
where the logarithmic dependence on the lower and upper energy cutoffs is a consequence of the \(1/\epsilon\) behavior of \(f(\epsilon)\) at low and high energies. The term
\[F_{-}=C_{-}\epsilon_{-}f(\epsilon_{-}) \tag{24}\]
in Eq. (22) approximates the contribution of the integral over \(\epsilon\) from \(0\) to \(\epsilon_{-}\), where \(C_{-}\) is a fitting parameter.
Expression (22), derived with the use of linearized equations for the phase and density fluctuations, is expected to be applicable when these fluctuations are small. As discussed above, at the BKT transition, where both phase and density fluctuations are large, the real-space correlator \(\left<\delta\theta\delta n\right>\) is mainly determined by the contributions of \(k\sim k_{+}\). According to Eq. (18), the quantity \(\left<\left|\delta\theta_{\mathbf{k}}\right|^{2}\right>\) contains a term that is exactly proportional to \(\left<\delta\theta_{-\mathbf{k}}\delta n_{\mathbf{k}}\right>\). This implies that at the BKT transition the expression for the phase fluctuations \(\left<\delta\theta^{2}\right>\), derived above, needs an additional "nonlinear correction", which would describe an enhanced contribution of large momenta \(k\sim k_{+}\) (large energies \(\epsilon\sim\epsilon_{+}\)). Here, we approximate this correction by adding to \(F\) the term
\[F_{+}=C_{+}\epsilon_{+}\,f(\epsilon_{+}), \tag{25}\]
Figure 2: Numerically (symbols) and semi-analytically (lines) determined renormalized critical noise \(d_{\text{BKT}}\) as a function of the grid step at \(\kappa\geq 0.1\) (a) and \(\kappa=0\) (b) for nonzero \(g\). Inset in panel (b): \(d_{\text{BKT}}\) as a function of the grid step at \(g=0\). The solid and dashed lines correspond to Eqs. (26) and (16), respectively.
where \(C_{+}\) is a fitting parameter. Then at the BKT point we have
\[d_{\rm BKT}=\langle\delta\theta^{2}\rangle_{\rm BKT}\ \frac{4\pi}{F+F_{-}+F_{+}}, \tag{26}\]
where again we take \(\langle\delta\theta^{2}\rangle_{\rm BKT}=1/2\).
Applying Eq. (26) to fit the numerical data for \(d_{\rm BKT}\), we obtain for the two fitting parameters: \(C_{-}=2.24\) and \(C_{+}=7.33\). As compared to the results of the heuristic approach described in the previous subsection (dashed lines in Figs. 1 and 2), the results corresponding to the more involved and accurate Eq. (26), which are shown by the solid lines in Figs. 1 and 2, demonstrate a much better quantitative agreement with the numerically determined \(d_{\rm BKT}\).
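For reference, Eq. (26) can be evaluated numerically as in the sketch below (Python with NumPy/SciPy). Instead of transcribing the closed form Eq. (23), \(F\) is obtained here by quadrature of \(f(\epsilon)\) from Eq. (20), which should reproduce Eq. (23):

```python
import numpy as np
from scipy.integrate import quad

C_MINUS, C_PLUS = 2.24, 7.33  # fitted constants quoted above

def f_eps(eps, nbar, g, gt, kappa):
    """Phase-fluctuation spectrum f(eps) of Eq. (20), with Eq. (21) energies."""
    e1 = nbar * (g + gt / kappa)
    e2 = 2.0 * nbar * g
    num = eps + 3.0 * nbar * g + 2.0 * (g**2 + gt**2) * nbar**2 / eps
    return num / (2.0 * kappa * (eps + e1) * (eps + e2))

def d_bkt_refined(nbar, g, gt, kappa, eps_m, eps_p):
    """Critical noise of Eq. (26) with <dtheta^2>_BKT = 1/2."""
    F, _ = quad(f_eps, eps_m, eps_p, args=(nbar, g, gt, kappa))
    F_m = C_MINUS * eps_m * f_eps(eps_m, nbar, g, gt, kappa)  # Eq. (24)
    F_p = C_PLUS * eps_p * f_eps(eps_p, nbar, g, gt, kappa)   # Eq. (25)
    return 0.5 * 4.0 * np.pi / (F + F_m + F_p)
```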
The semi-analytical expression for \(d_{\rm BKT}\), given by Eq. (26) together with Eqs. (17), (20), (21), and (23)-(25), can be considered as a function of three independent parameters: \(\tilde{\gamma}/g\), \(\kappa\) and \(\epsilon_{+}/\epsilon_{-}\). In Fig. 3, the renormalized critical noise \(d_{\rm BKT}/\kappa\), corresponding to Eq. (26), is plotted for a wide range of the parameters \(\tilde{\gamma}/g\) and \(\kappa\) at three different values of the ratio \(\epsilon_{+}/\epsilon_{-}\).
For small losses and not too small \(\kappa\), the ratio \(d_{\rm BKT}/\kappa\) is of order one, in line with the equilibrium BKT transition where, according to the fluctuation-dissipation relation, \(D=\kappa T\) [32], and where the critical temperature scales in first approximation as \(T_{BKT}\sim n\). In line with our previous studies for polariton condensates [20] and photon condensates [24], we see that the losses stabilize the ordered phase: when \(\tilde{\gamma}\) is increased at fixed \(\kappa\), the noise required to make the transition to the state with free vortex-antivortex pairs increases. We explained this trend by the reduction of the density fluctuations for increased driving and dissipation [20], which manifests itself through density-phase correlations [24] [see discussions preceding Eq. (16) and Eq. (25)].
In the limit without losses (\(\tilde{\gamma}=0\)), our estimate for the critical point reduces to
\[n_{\rm BKT}=\frac{mT_{\rm BKT}}{2\pi}\left[\log\left(\frac{1}{mh^{2}gn_{\rm BKT}}\right)+A_{1}\right]. \tag{27}\]
Here, we have used that \(T_{\rm BKT}=D_{\rm BKT}/\kappa\), defined \(A_{1}=C_{+}+C_{-}+\log(\pi^{2}/2)\approx 11.2\) and restored physical units. We can compare this expression with the equilibrium BKT transition for the weakly interacting lattice Bose gas (Eq. (12) in [12])
\[n_{\rm BKT}=\frac{mT_{\rm BKT}}{2\pi}\log\frac{A}{mh^{2}gT_{\rm BKT}}, \tag{28}\]
with \(A=6080\). This expression can be written as
\[n_{\rm BKT}=\frac{mT_{\rm BKT}}{2\pi}\left[\log\left(\frac{1}{mh^{2}gn_{\rm BKT }}\right)+A_{2}\right], \tag{29}\]
with
\[A_{2}=\log\left[\frac{A}{2\pi}\log\left(\frac{A}{m^{2}h^{2}gT_{\rm BKT}} \right)\right]. \tag{30}\]
Assuming here \(m^{2}h^{2}gT_{\rm BKT}\approx 1\), one obtains \(A_{2}\approx 9.1\), which is reasonably close to our \(A_{1}\approx 11.2\) given the simplicity of our approach and considering that the equilibrium case is actually a somewhat singular limiting case of our model where the gain and losses simultaneously tend to zero.
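This comparison is easy to verify numerically (a quick check in Python; the approximation \(m^{2}h^{2}gT_{\rm BKT}\approx 1\) is the one made in the text):

```python
import numpy as np

A = 6080.0                                    # constant of Eq. (28), Ref. [12]
A2 = np.log(A / (2.0 * np.pi) * np.log(A))    # Eq. (30) with m^2 h^2 g T_BKT ~ 1
A1 = 7.33 + 2.24 + np.log(np.pi**2 / 2.0)     # A1 = C_+ + C_- + log(pi^2/2)
print(A2, A1)                                 # A2 ~ 9.0-9.1, A1 ~ 11.2
```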
## IV Conclusions
In this paper, we have developed a semi-analytical approach to describe the BKT transition point for driven-dissipative weakly interacting Bose gases. We start from the linearized equations of motion for the density and phase fluctuations and subsequently correct phenomenologically for nonlinearities that are important close to the BKT transition. Our resulting analytical formulae contain some fitting parameters that are fitted to a series of numerical simulations in a wide parameter range. The good fitting of our numerical results indicates the validity of the physical intuition underlying our semi-analytical approach and promotes our formulae to a concise summary of the numerical results.
Of course, our numerical results were obtained for a finite size system and we can therefore not settle what will happen for much larger system sizes, where it remains possible that the KPZ nonlinearity may destabilize the algebraically ordered phase [13; 17], even though recent numerical work has shown that KPZ scaling can be witnessed in 2D nonequilibrium condensates without the phase coherence being destabilized by the formation of vortex antivortex pairs [22; 23].
## Acknowledgements
We thank Iacopo Carusotto for continuous stimulating discussions. VG was financially supported by the FWO-Vlaanderen through grant nr. G061820N.
| We are developing a semi-analytical description for the BKT-like phase transition in nonequilibrium Bose-Einstein condensates. Our theoretical analysis is based on a noisy generalized Gross-Pitaevskii equation. Above a critical strength of the noise, spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical determination of the transition point based on a linearized Bogoliubov analysis, to which some nonlinear corrections are added. We present two different approaches that are in agreement with our numerical calculations in a wide range of system parameters. We find that for small losses and not too small energy relaxation, the critical point approaches that of the equilibrium BKT transition. Furthermore, we find that losses tend to stabilize the ordered phase: keeping the other parameters constant and increasing the losses leads to a higher critical noise strength for the spontaneous generation of vortex-antivortex pairs. Our theoretical analysis is relevant for experiments on microcavity polaritons. |
2309.08988 | Multi-objective tuning for torque PD controllers of cobots | Collaborative robotics is a new and challenging field in the realm of motion
control and human-robot interaction. The safety measures needed for a reliable
interaction between the robot and its environment hinder the use of classical
control methods, pushing researchers to try new techniques such as machine
learning (ML). In this context, reinforcement learning has been adopted as the
primary way to create intelligent controllers for collaborative robots, however
supervised learning shows great promise in the hope of developing data-driven
model based ML controllers in a faster and safer way. In this work we study
several aspects of the methodology needed to create a dataset to be used to
learn the dynamics of a robot. For this we tune several PD controllers to
several trajectories, using a multi-objective genetic algorithm (GA) which
takes into account not only their accuracy, but also their safety. We
demonstrate the need to tune the controllers individually to each trajectory
and empirically explore the best population size for the GA and how the speed
of the trajectory affects the tuning and the dynamics of the robot. | Diego Navarro-Cabrera, Niceto R. Luque, Eduardo Ros | 2023-09-16T13:06:36 | http://arxiv.org/abs/2309.08988v1 | # Multi-objective tuning for torque PD controllers of cobots
###### Abstract
Collaborative robotics is a new and challenging field in the realm of motion control and human-robot interaction. The safety measures needed for a reliable interaction between the robot and its environment hinder the use of classical control methods, pushing researchers to try new techniques such as machine learning (ML). In this context, reinforcement learning has been adopted as the primary way to create intelligent controllers for collaborative robots, however supervised learning shows great promise in the hope of developing data-driven model based ML controllers in a faster and safer way. In this work we study several aspects of the methodology needed to create a dataset to be used to learn the dynamics of a robot. For this we tune several PD controllers to several trajectories, using a multi-objective genetic algorithm (GA) which takes into account not only their accuracy, but also their safety. We demonstrate the need to tune the controllers individually to each trajectory and empirically explore the best population size for the GA and how the speed of the trajectory affects the tuning and the dynamics of the robot.
torque control, genetic algorithms, PD control
## I Introduction
Collaborative robotics is an emerging field that studies the creation and development of robots designed for a safe human-machine interaction i.e. human-robot collaboration. The motion control of these cobotic systems is a complex problem since it incorporates both active safety measures, such as torque control that aims to minimize the force applied by the joints, and passive measures, like the integration of elastic elements that provide a higher level of compliance in case of an impact with humans or objects in the environment. These measures hinder the calculation of the analytical dynamic model of the cobot, which prevents the use of classical torque-based control algorithms that rely on widely used rigid simple models. Furthermore, position-based control is not well suited for human-robot interaction (HRI) as the commanded motion can carry significant levels of inertia, posing a risk to human safety.
To overcome the reliance on an analytical definition of system dynamics in traditional control theory, machine learning (ML) is being widely used [1]. ML offers promising control solutions for operating model-free dynamic systems, enabling accurate and safe task performance. Among various learning types, reinforcement learning emerges as the most prevalent due to its capability for generalization and data capture through practice [4]. However, this learning approach does come with certain drawbacks for real systems, including a lengthy learning period and an exploration stage that can pose risks to both the robot and its environment [4].
As a result, in this work, we focus on studying the methodology required to create a database that enables the data-driven learning of a cobot's dynamic model, rather than calculating it analytically [2]. Building upon the previous discussion, our main goal is to generate a dataset that facilitates the study and development of supervised learning models, so that they can be used for avoiding risks during the learning stages with reinforcement learning or other adaptive control alternatives. This approach takes advantage of optimized position control scheme for gathering data.
The database we propose captures the relationship between the reached position and velocity of the cobot and the corresponding applied torque values. Depending on the direction of this relationship (reached position to applied torque values or vice versa), the database can serve as either an inverse dynamic model or a forward dynamic model of the cobot. This database is obtained by executing a representative set of trajectories with the cobot operating in torque control, guided by a proportional-derivative (PD) controller. The PD controller is adjusted using a multi-objective GA that optimizes movement precision and torque values to ensure safety. The extracted data from this process will be used to train the subsequent ML controller, providing optimal torque sequences for the cobot to accurately perform the desired trajectories, akin to accurate position control, while minimizing torque requirements.
The PD torque control requires precise adjustment of the PD parameters for each target trajectory. Each data sequence of torque value-reached position, obtained from individual PD adjustments, is generated specifically to train a subsequent ML controller. This ML controller will be able to generalize the control action and adapt it to various types of trajectories [11].
## II Related work
The PD control architecture is widely used in robotic manipulators due to its simplicity [5]. This technique involves adjusting only two parameters per robotic joint and provides accurate control for simple tasks within a limited range of motion.
PD adjustment using GA is widely used in the industry, leading to a wide range of proposed GA techniques [3]. While most of these works focus on single-objective GA techniques, in our collaborative robot approach the goal is not only to maximize controller accuracy but also to ensure HRI safety by minimizing torque values, so PD adjustment requires the use of a multi-objective GA. An example of such a GA is the NSGA-II [6], which enables the optimization of multiple control goals simultaneously.
The adjustment of PD controllers using multi-objective GA has been previously addressed by [7], where a PID (proportional-integral-derivative) controller was tuned using NSGA-II. In [8], the multi-objective cuckoo search algorithm (MOCSA) is used for the same problem, but no comparison between algorithms is provided, so we cannot say whether MOCSA is more appropriate than NSGA-II for this problem. [9] also uses NSGA-II and compares it with a variation of the same algorithm which uses decision maker preference information to reduce the decision parameters' search space. All of these results, while promising, were only tested in simulation with a relatively simplistic planar two-degree-of-freedom (d.o.f.) robot arm model. In this work, our aim is to validate the effectiveness of the NSGA-II solution using the more complex Kuka iiwa LBR robot arm, equipped with 7 d.o.f. and flexible joints [10].
Despite the proven usefulness of learned dynamic cobot models [11], to the best of our knowledge, there is currently no publicly available dataset that captures the relationship between torque values and motion of a cobot in a manner suitable for learning its dynamic model. Therefore, the objective of this work is to present and discuss the methodology used to collect the necessary data required for learning a dynamic cobot model.
## III Proposed solutions
To ensure a balance between optimal torque utilization and accuracy in the collected data, we will utilize a custom-tuned PD controller. As mentioned earlier, PD adjustment can result in highly accurate torque-based control for specific trajectories. However, as we will demonstrate later, the accuracy diminishes significantly when performing dissimilar trajectories located far from the PD working point.
For the PD adjustment, we propose the use of a multi-objective GA to jointly optimize accuracy and safety, specifically maximizing accuracy while minimizing the torque values involved. To achieve this, we incorporate two objectives within the objective functions. The first objective assigns weight to the accuracy error, measured as the mean Euclidean distance between the end effector and the desired Cartesian coordinates. Meanwhile, the second objective assigns weight to the torque values applied throughout the trajectory. The torque values are calculated using Eq. (1), where \(U\) represents the vector of commanded torques, \(T\) denotes the number of steps in a trajectory, and \(u_{i}\) corresponds to the torque applied at time \(i\).
\[f_{t}(U)=\frac{1}{T}\sum_{i=1}^{T}(u_{i}-u_{i-1})^{2} \tag{1}\]
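A minimal sketch of the two objectives follows (Python/NumPy; the array shapes, and the extension of Eq. (1) to several joints through a sum over joint components, are our assumptions):

```python
import numpy as np

def accuracy_objective(xyz_actual, xyz_desired):
    """First objective: mean Euclidean distance between the end effector
    and the desired Cartesian coordinates; arrays of shape (T, 3)."""
    return np.mean(np.linalg.norm(xyz_actual - xyz_desired, axis=1))

def torque_objective(U):
    """Second objective: torque cost f_t(U) of Eq. (1); U has shape
    (T + 1, n_joints), row i being the torque vector applied at step i."""
    dU = np.diff(U, axis=0)            # u_i - u_{i-1}, for i = 1..T
    return np.sum(dU**2) / dU.shape[0]
```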
Our methodology divides the data collection process into four main layers, as shown in Figure 1:
* Sensor/Actuator layer: This layer comprises the sensors and actuators used by the cobot. It receives instructions from the controller and provides data on the joint states.
* Control layer: The PD controller is located at this layer and receives information regarding the next desired setpoint as well as the parameters (Kp and Kd) to be used. It sends corresponding torque commands.
* System layer: This layer sends data about the desired trajectory to the control layer.
* Analytic layer: The GA in this layer is used to adjust the PD controller gains based on the system performance.
System, control and actuator/sensor layers work on a real-time loop at 500 Hz. In each cycle, the system layer sends the trajectory to the control layer, which sends the torque command to the cobot and receives the updated sensor data. The torque, position and velocity of each joint are registered in an array and written to a file once the trajectory is finished. Once the data file is created, the analytic layer reads it to evaluate performance and communicates asynchronously with the control layer to update the PD gains.
This division facilitates the scalability of our methodology by separating the analytic and system layers from the control and sensor layers. Furthermore, it allows for the parallel utilization of multiple cobots with the same trajectory.
Regarding the trajectories included in the dataset, and following the findings in [11], we incorporate spiral and random trajectories. These trajectory types generate meaningful data sets while avoiding excessive data size, making them suitable for effective training of the ML controller. Additionally, we introduce pyramid-like trajectories that combine linear movements with sharp turns. These trajectories aim to better teach an ML controller how to function when working with high acceleration and velocity gradients, resulting in larger inertia values. Fig. 2 depicts some examples of the trajectory dataset.
Finally, to compare the GA solutions for the PD parameters, we utilize the hypervolume indicator metric [15]. This metric takes a reference point (e.g. the maximum values among all the controllers tested) and calculates the area between the Pareto front and this reference point. A visual example of this metric can be seen in Figure 3.
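In two dimensions the hypervolume indicator reduces to a sum of rectangle areas, as in the sketch below (Python; it assumes both objectives are minimized and that the front is mutually non-dominated):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective Pareto front with respect to the
    reference point `ref`; `front` is an iterable of (f1, f2) pairs."""
    pts = sorted(map(tuple, front))            # ascending f1 -> descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # one rectangular slice
        prev_f2 = f2
    return hv

# e.g. hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)) -> 6.0
```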
## IV Progress to date
The results presented in this work are obtained from a simulated environment (and the application and validation in a non-simulated environment are left for future work). For our robotic simulation platform, we use ROS2 (Robot Operating System) [14], and Gazebo [13] as our dynamic simulator. The close integration of Gazebo with ROS makes it a suitable choice for the performed study.
Each experiment in this section is repeated 5 times to account for the stochastic nature of the GA. This number of trials balances computation time (over a couple of weeks) and the reliability of the data and conclusions. Box plots are used to represent the locality and spread of the results. These experiments are conducted to demonstrate the feasibility of the proposed methodology.
In one set of experiments, various population sizes are compared to find the optimal balance between accuracy and computation time. Fig. 4 depicts that the algorithm (NSGA-II) achieves the best results with a population size of around 30 individuals. Increasing the population size yields similar results, but with significantly longer execution times.
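As an illustration of how such a tuning loop can be assembled, the sketch below uses the pymoo library's NSGA-II implementation with the population size of 30 suggested by Fig. 4. Here `run_trajectory` is a hypothetical stand-in for executing one trajectory in simulation and returning the two objectives of Sec. III, and the gain bounds are placeholders:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class PDTuning(ElementwiseProblem):
    """Decision vector: Kp and Kd gains for each of the 7 joints."""

    def __init__(self, n_joints=7, kp_max=200.0, kd_max=20.0):
        super().__init__(n_var=2 * n_joints, n_obj=2,
                         xl=np.zeros(2 * n_joints),
                         xu=np.r_[np.full(n_joints, kp_max),
                                  np.full(n_joints, kd_max)])

    def _evaluate(self, x, out, *args, **kwargs):
        kp, kd = np.split(x, 2)
        # hypothetical helper: runs the trajectory with these gains and
        # returns (mean Cartesian error, torque cost of Eq. (1))
        err, torque_cost = run_trajectory(kp, kd)
        out["F"] = [err, torque_cost]

res = minimize(PDTuning(), NSGA2(pop_size=30), ("n_gen", 60), seed=1, verbose=False)
pareto_front = res.F   # accuracy/torque trade-off of the tuned controllers
```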
Once the GA is configured, we compare the accuracy achieved by a generic track controller and a specific track controller. The generic controller is adjusted to perform globally on all trajectories in the dataset, while the specific controller is tuned for a single trajectory. Fig. 5 demonstrates that the specific controller outperforms the generic controller, not only in terms of precision but also in minimizing the applied torque values. This indicates that while it is possible to achieve high accuracy (at least in simulation) by overloading the joint motors, achieving smooth and safe movements requires a well-tuned specific PD controller. Since the goal of the PD optimization is to be able to perform different movements with different optimal accuracy/torque profiles, it is key to use a specific optimized controller for each trajectory. Then, all the data gathered from the different trajectories (and specific controllers) will be added to the database. This specific optimization stage is required because, during the trajectory execution stage (gathering the dataset), both the robot dynamics and the properties of the controller used are captured. Thus optimizing specific controllers leads to a richer database in terms of accuracy and torque trade-off (Figure 5).

Fig. 1: Architecture of the proposed system. General framework adapted from IMOCO4.E [12]. First the system layer sends the desired setpoint to the controller, then the control layer sends torque commands to the cobot and finally the sensor layer returns the position and velocity of each joint. The extracted data is saved into a file used asynchronously by the analytic layer to update the PD controller gains.

Fig. 4: Comparison of the number of evaluations needed for convergence (a) and the hypervolume of the obtained pareto front (b) based on population size.

Fig. 3: Diagram depicting the calculation of the hypervolume indicator. This metric is obtained by measuring the area between the pareto front and a reference point.

Fig. 2: Examples of useful trajectories for data gathering.
Finally, the speed of the trajectory is one of the key factors that significantly impacts the dynamics of a cobot. Thus, we investigate the extent to which the speed of the trajectory influences the PD adjustment and determine the optimal speed at which the trajectories in our dataset shall be executed.
To accomplish this, we create multiple variations of the same target trajectory (a spiral), each with a different duration ranging from 3 to 6 seconds. This range was selected based on the consideration that faster trajectories are not achievable, and slower trajectories would exhibit negligible differences in their dynamics. As the duration of the trajectory increases, the motion commands required for the cobot to track it become slower. Next, we adjust a set of PD controllers for each individual trajectory and assess their performance on the other trajectories. The results of this study are illustrated in Figure 6, where "\(X\) controller" represents a set of controllers that were specifically adjusted using a trajectory of \(X\) seconds.
From these results, two conclusions can be drawn. Firstly, the speed of the trajectory has a notable impact on the accuracy of the control, with accuracy rapidly decreasing at higher speeds. The accuracy stabilizes at around 5 seconds, making it the optimal duration for this trajectory as it strikes the best balance between execution time and controller accuracy.
Secondly, although there is a slight drop in performance when transferring a controller from one trajectory to another, the differences between sets of controllers are relatively small. This suggests that at this regime, the speed of the trajectory does not significantly affect the PD adjustment but rather the data gathered.
## V Conclusions and future research
The work presented here was focused on defining a methodology to create a dataset from which most ML solutions are able to capture the dynamic model of a cobot. To collect optimal tuples of torque-position/velocity data, we applied multi-objective GAs to finely adjust PDs that controlled the torque of a cobot throughout its working space, maximizing accuracy and minimizing torque values.
In future work, we aim to apply this methodology to a non-simulated cobot platform, covering the sim2real gap and demonstrating how the main concepts indicated in this work also apply to real robots. Although the specific trajectories and optimized controllers may differ when addressing the GA, the presented work and results provide valuable insights. It is important to note that the intensive optimization effort presented in this work cannot be directly performed on a robotic platform due to the potential risk it poses to the robot's integrity.
## Acknowledgment
This study was supported by the EU with the IMOCOe4.0 [EU H2020RIA-101007311] project and by Spanish national funding [PCI2021-121925]. This study was also supported by SPIKEGEG [PID2020-113422GA-I00] by the Spanish Ministry of Science and Innovation MCIN/AEI/10.13039/501100011033, awarded to NRL; DLROB [TED2021-131294B-I00] funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, awarded to NRL; MUSCLEBOT [CNS2022-135243] funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, awarded to NRL.
| **Collaborative robotics is a new and challenging field in the realm of motion control and human-robot interaction. The safety measures needed for a reliable interaction between the robot and its environment hinder the use of classical control methods, and researchers are turning to new techniques such as machine learning (ML). In this context, reinforcement learning has been adopted as the primary way to create intelligent controllers for collaborative robots. However, supervised learning shows promise for developing data-driven, model-based ML controllers in a faster and safer way. In this work we study several aspects of the methodology needed to create a dataset used to learn the dynamics of a robot. For this we tuned several PD controllers to several trajectories using a multi-objective genetic algorithm (GA). This GA takes into account not only accuracy but also safety, and these
2308.16647 | On size Ramsey numbers for a pair of cycles | We show that there exists an absolute constant $A$ such that the size Ramsey
number of a pair of cycles $(C_n$, $C_{2d})$, where $4\le 2d\le n$, is bounded
from above by $An$. We also study the restricted size Ramsey number for such a
pair. | Małgorzata Bednarska-Bzdęga, Tomasz Łuczak | 2023-08-31T11:42:29 | http://arxiv.org/abs/2308.16647v1 | # On size Ramsey numbers for a pair of cycles
###### Abstract.
We show that there exists an absolute constant \(A\) such that the size Ramsey number of a pair of cycles \((C_{n},\,C_{2d})\), where \(4\leq 2d\leq n\), is bounded from above by \(An\). We also study the restricted size Ramsey number for such a pair.
Key words and phrases: Ramsey number, cycles, restricted size Ramsey number. 2010 Mathematics Subject Classification: Primary 05C55; secondary 05C38. The second author was supported in part by National Science Centre, Poland, grant 2022/47/B/ST1/01517.
## 1. Introduction
## 2. Proof of Theorem 3

**Lemma 5**.: _Let \(t\) and \(d\) be integers with \(t\geq 2\cdot 10^{49}d\) and \(d\geq 8\). Let \(G\) be a graph such that \(K_{d,d}\not\subseteq G^{c}\) and \(|S\cup N(S)|\geq t\) for every \(S\subseteq V(G)\) with \(|S|\geq d\). If \(x\) and \(y\) are two endpoints of a path on at least \(8d\) vertices in \(G\), then there is a path on \(t\) vertices in \(G\) with endpoints \(x\) and \(y\)._
Let us rewrite the above lemma in the colored graph setting.
**Corollary 6**.: _Let \(t\) and \(d\) be integers with \(t\geq 2\cdot 10^{49}d\) and \(d\geq 8\). Let \(G\) denote a complete graph on at least \(r(C_{t},K_{d,d})=t+d-1\) vertices and suppose every edge of \(G\) is colored either red or blue in such a way that there exists no red copy of \(K_{d,d}\) in \(G\). Moreover, let \(x\) and \(y\) be two endpoints of a blue path on at least \(8d\) vertices in \(G\). Then for every \(\ell\) such that \(t\leq\ell\leq|V(G)|-d+1\), there exists a blue path on \(\ell\) vertices in \(G\) with endpoints \(x\) and \(y\)._
Proof of Theorem 3.: Let \(A^{\prime}\geq 20\) be a constant such that \(r(C_{k},K_{d,d})=k+d-1\) for every \(d\geq 8\) and \(k\geq A^{\prime}d\). From Theorem 4 we know that such a constant exists.
Let \(d\geq 8\), \(\eta\leq 1/A^{\prime}\) and \(n\geq 8d/\eta\). We put \(s=\lfloor\eta n/4d\rfloor\) and consider a 'blow-up' \(G\) of the cycle \(C_{2s}=v_{1}v_{2}\cdots v_{2s}\) in which we replace each vertex \(v_{i}\) by a clique \(S_{i}\), such that \(12d/\eta\geq|S_{i}|\geq d/\eta\) and \(\sum_{i=1}^{2s}|S_{i}|=n+2sd\), and moreover each edge of \(C_{2s}\) is replaced by a complete bipartite graph between corresponding sets. Observe that such a family of sets exists since \(2s\lceil d/\eta\rceil\leq n+2sd\leq 2s\lfloor 12d/\eta\rfloor\). Then, clearly, \(G\) has \(n+2sd\leq(1+\eta)n\) vertices, while the number of edges of \(G\) is bounded from above by
\[\frac{1}{2}(n+2sd)3\cdot\frac{12d}{\eta}\leq\frac{18}{\eta}(1+\eta)dn<\frac{ 20}{\eta}dn\,.\]
Now let us suppose that we color edges of \(G\) with red and blue in such a way that there are no red copy of \(K_{d,d}\). Then, since \(|S_{2i-1}\cup S_{2i}|\geq 2d/\eta\geq(A^{\prime}+1)d\), by Theorem 4 we infer that for \(i=1,2,\ldots,s\) the subgraph \(H_{i}=G[S_{2i-1}\cup S_{2i}]\) contains a blue cycle \(C^{i}\) on \(|S_{2i-1}|+|S_{2i}|-d\) vertices. Moreover, since subgraphs induced by \(S_{2i}\cup S_{2i+1}\) contain no red \(K_{d,d}\), from each such subgraph we can choose one blue edge \(\{x^{i},y^{i+1}\}\) which connects \(C^{i}\) to \(C^{i+1}\) for \(i=1,2,\ldots,s\), and a blue edge \(\{x^{s},y^{1}\}\) joining cycles \(C^{s}\) and \(C^{1}\). Note that \(x^{i}\in S_{2i}\) and \(y^{i}\in S_{2i-1}\) are two different vertices of \(C^{i}\) and since the cycle \(C^{i}\) is much longer than \(16d\), by Corollary 6, for every \(i\) we can connect \(x^{i}\) and \(y^{i}\) by a blue path of length \(|S_{2i-1}|+|S_{2i}|-2d\) to build a blue cycle of length \(n\) in \(G\). Hence the first part of the assertion follows.
In order to verify the second part, put \(A=80A^{\prime}\), \(\eta=1/A^{\prime}=80/A\) and notice that for \(d\geq 8\) and \(n\geq Ad/4>8d/\eta\) the above graph \(G\) has less than \(20A^{\prime}dn=Adn/4\) edges. Now to complete the proof it is enough to observe that \(K_{d,d}\subseteq K_{8,8}\) for every \(d\leq 8\), so the upper bound in the second part of the theorem holds for \(2\leq d\leq 8\) as well. To see the lower bound notice that every minimal graph \(H\) such that \(H\to(C_{n},K_{d,d})\) has minimum degree greater than \(d\).
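For illustration, the blow-up of \(C_{2s}\) used in the proof can be generated as in the following sketch (Python with networkx; the list of clique sizes is assumed to satisfy the constraints above):

```python
import networkx as nx

def cycle_blowup(sizes):
    """Blow-up of a cycle: vertex v_i is replaced by a clique S_i of size
    sizes[i], and consecutive cliques are joined by complete bipartite graphs."""
    G, parts, start = nx.Graph(), [], 0
    for n_i in sizes:
        parts.append(list(range(start, start + n_i)))
        start += n_i
    s = len(parts)
    for i, part in enumerate(parts):
        G.add_edges_from((u, v) for u in part for v in part if u < v)   # clique S_i
        G.add_edges_from((u, w) for u in part for w in parts[(i + 1) % s])
    return G
```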
## 3. Proof of Theorem 1
The argument we use to show Theorem 1 is somewhat similar to the one we apply in the proof of Theorem 3. We split the set of \((1+\eta)n\) vertices into small sets, arrange them in the circle, and on each pair of consecutive ones we embed the same graph \(G\). Now, however, \(G\) is not the complete graph but a sparse graph \(G\) which has the property that each coloring of the edges of \(G\) with red and blue either leads to a red cycle \(C_{2d}\), or results in a large set \(S\) such that every pair of vertices of \(S\) is connected by a blue path of any length chosen from some fairly large interval. Then the argument goes as in the proof of Theorem 3 - we generate such sets \(S\) in every second pair of sets, connect them by edges, and select blue paths in \(S\)'s in such a way that the resulting blue cycle has length \(n\). Thus, the main challenge here is to find the graph \(G\) we use in this construction.
In order to accomplish that we employ a sparse version of the Regularity Lemma, so let us start with some notions necessary to state it correctly. Let \(G=(V,E)\) be a graph and \(V_{1},V_{2}\subseteq V\) be two disjoint subsets of its vertices. By \(e_{G}(V_{1},V_{2})\) we mean the number of edges between \(V_{1}\) and \(V_{2}\), and for \(p>0\) we define the scaled density \(d_{p,G}(V_{1},V_{2})\) as
\[d_{p,G}(V_{1},V_{2})=\frac{e(V_{1},V_{2})}{p|V_{1}||V_{2}|}\,.\]
Here and below we shall often omit the index \(G\) in \(e_{G}(V_{1},V_{2})\) and \(d_{p,G}(V_{1},V_{2})\) when it does not lead to misunderstandings. A pair \((V_{1},V_{2})\) is called \((p,\varepsilon)\)**-regular** if for every \(U_{i}\subseteq V_{i}\), such that \(|U_{i}|\geq\varepsilon|V_{i}|\), for \(i=1,2\), we have
\[\big{|}d_{p}(V_{1},V_{2})-d_{p}(U_{1},U_{2})\big{|}\leq\varepsilon\,.\]
We call such a pair \((V_{1},V_{2})\)**good** if for every \(W_{i}\subseteq V_{i}\), we have
\[|N(W_{i})|\geq\min\{9|W_{i}|,(1-2\varepsilon)|V_{3-i}|\},\quad\text{for}\quad i =1,2,\]
where by \(N_{G}(W)=N(W)\) we always denote the neigborhood of the set \(W\) in a graph \(G\). It turns out that, as was proved by Balogh, Csaba and Samotij [2], every \((p,\varepsilon)\)-regular pair contains a large good \((p,\varepsilon)\)-regular pair. The following lemma is a special case of Lemma 19 from [2].
**Lemma 7**.: _Let \((V_{1},V_{2})\) be an \((p,\varepsilon)\)-regular pair for some \(0<\varepsilon<0.1\) such that \((1-2\varepsilon)k\leq|V_{1}|,|V_{2}|\leq k\) and \(d_{p}(V_{1},V_{2})\geq\varepsilon\). Moreover, let \(V_{i}^{\prime}\subseteq V_{i}\), \(|V_{i}^{\prime}|\geq 40\varepsilon k\), for \(i=1,2\). Then there exist sets \(V_{i}^{\prime\prime}\subseteq V_{i}^{\prime}\) such that \(|V_{i}^{\prime\prime}|\geq(1-\varepsilon)|V_{i}^{\prime}|\) and \((V_{1}^{\prime\prime},V_{2}^{\prime\prime})\) is a good \((p,2\varepsilon|V_{1}|/|V_{1}^{\prime\prime}|)\)-regular pair._
We also define an \(\alpha\)**-expanding tree** as a rooted tree \(T\) of height \(r\) such that the set \(S_{i}\) of vertices at distance \(i\) from the root satisfies \(|S_{i}|=\lceil\alpha|S_{i-1}|\rceil\) for \(i=1,\ldots,r-1\). The following fact is crucial for our argument.
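The greedy breadth-first construction of such a tree, used implicitly in the proofs below, can be sketched as follows (Python, assuming a networkx-style graph; it returns the levels \(S_{0},\ldots,S_{r}\), or `None` when the required expansion is not available):

```python
from math import ceil

def expanding_tree_levels(G, root, alpha, height):
    """Grow an alpha-expanding tree greedily: level S_i keeps exactly
    ceil(alpha * |S_{i-1}|) neighbours of S_{i-1} not used before."""
    used, levels = {root}, [[root]]
    for _ in range(height):
        fresh = []
        for v in levels[-1]:
            fresh.extend(u for u in G.neighbors(v) if u not in used)
        fresh = list(dict.fromkeys(fresh))            # deduplicate, keep order
        target = ceil(alpha * len(levels[-1]))
        if len(fresh) < target:
            return None
        levels.append(fresh[:target])
        used.update(fresh[:target])
    return levels
```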
**Lemma 8**.: _Let \((V_{1},V_{2})\) be a good \((p,\varepsilon)\)-regular pair for some \(0<\varepsilon\leq 10^{-4}\) such that \((1-2\varepsilon)k\leq|V_{1}|,|V_{2}|\leq k\) and \(d_{p}(V_{1},V_{2})\geq 3\varepsilon\). Moreover, let \(h=\lceil\log_{8}(\varepsilon k)\rceil\), \(h\leq\ell\leq(2-10\sqrt{\varepsilon})k\), and \(x\in V_{1}\cup V_{2}\). Then there exists a set \(Y\subseteq V_{1}\cup V_{2}\) of at least \(\varepsilon k\) vertices such that for each \(y\in Y\) there is a path of length \(\ell\) joining \(x\) and \(y\)._
Proof.: We shall show, using an induction on \(\ell\), a slightly stronger statement, namely, that for every \(\ell\), \(h\leq\ell\leq(2-10\sqrt{\varepsilon})k\), and \(x\in V_{1}\cup V_{2}\), there exist sets \(Z\) and \(Y\subseteq Z\) such that \(|Z|\leq\ell+4\varepsilon k\), \(|Y|\geq\varepsilon k\), and for each \(y\in Y\) there exists a path \(P_{x,y}\) of length \(\ell\) joining \(x\) and \(y\) such that all vertices of \(P_{x,y}\) are contained in \(Z\).
Note first that since the pair \((V_{1},V_{2})\) is good, for every \(2\leq\alpha\leq 8\) and every \(x\in V_{1}\cup V_{2}\), the vertex \(x\) is a root of an \(\alpha\)-expanding tree \(T(x,\alpha)\) such that the number of leaves at the highest level of \(T(x,\alpha)\) is \(\lceil 2\varepsilon k\rceil\). In particular, the assertion holds for all \(\ell\) such that \(h\leq\ell\leq 3h\).
Now let us suppose that the assertion holds for some \(x\) and \(\ell_{0}\), and by \(Z_{0}\) and \(Y_{0}\) we denote the sets which certify that it is true. We shall show that it holds also for \(x\) and \(\ell_{1}=\ell_{0}+h\). In order to see it consider the pair \((V_{1}^{\prime},V_{2}^{\prime})\), where \(V_{i}^{\prime}=V_{i}\setminus Z_{0}\) for \(i=1,2\). From Lemma 7 it follows that there exist sets \(V_{i}^{\prime\prime}\subseteq V_{i}^{\prime}\), \(i=1,2\), such that \(|V_{1}^{\prime\prime}|,|V_{2}^{\prime\prime}|\geq\sqrt{\varepsilon}k\geq\varepsilon k\) and the pair \((V_{1}^{\prime\prime},V_{2}^{\prime\prime})\) is a good \((p,2\sqrt{\varepsilon})\)-regular pair. Since \((V_{1},V_{2})\) was \((p,\varepsilon)\)-regular, there exists at least one edge \(e=\{y,z\}\) between the set \(Y_{0}\) and \(V_{1}^{\prime\prime}\cup V_{2}^{\prime\prime}\); let \(y=e\cap Y_{0}\) and \(z=e\cap(V_{1}^{\prime\prime}\cup V_{2}^{\prime\prime})\). However, since \((V_{1}^{\prime\prime},V_{2}^{\prime\prime})\) is good, there exists a \(4\)-expanding tree \(T(z,4)\) rooted at \(z\) which has at least
\[4\sqrt{\varepsilon}\min\{|V_{1}^{\prime\prime}|,|V_{2}^{\prime\prime}|\}\geq\varepsilon k\]
leaves. Thus, we may take for \(Z\) the vertices of a path of length \(\ell_{0}\) joining \(x\) and \(y\) and all vertices of \(T(z,4)\), and for \(Y\) the set of vertices in distance \(\ell_{1}\) from \(z\) in \(T(z,4)\).
Let us note the following consequence of the above lemma.
**Lemma 9**.: _Let \((V_{1},V_{2})\) be a good \((p,\varepsilon)\)-regular pair for some \(0<\varepsilon<10^{-4}\) such that \((1-\varepsilon)k\leq|V_{1}|,|V_{2}|\leq k\) and \(d_{p}(V_{1},V_{2})\geq 3\varepsilon\). Moreover, let \(h=\lceil\log_{4}(\varepsilon k)\rceil\), \(\ell\) be an odd natural number such that \(4h\leq\ell\leq(2-30\sqrt{\varepsilon})k\), and \(x_{i}\in V_{i}\) for \(i=1,2\). Then there exists a path of length \(\ell\) joining \(x_{1}\) and \(x_{2}\)._
_In particular, \((V_{1},V_{2})\) contains a cycle of length \(\ell+1\)._
Proof.: Using the expanding property we build two vertex disjoint \(4\)-expanding trees \(T_{1}(x_{1},4)\) and \(T_{2}(x_{2},4)\) of height \(h\) rooted at \(x_{1}\) and \(x_{2}\) respectively. Let \(V_{1}^{\prime}=V_{1}\setminus(V(T_{1})\cup V(T_{2}))\) and \(V_{2}^{\prime}=V_{2}\setminus(V(T_{1})\cup V(T_{2}))\). From Lemma 7 it follows that there exist sets \(V_{i}^{\prime\prime}\subseteq V_{i}^{\prime}\), \(i=1,2\), such that \(|V_{i}^{\prime\prime}|\geq(1-8\sqrt{\varepsilon})k\) and \((V_{1}^{\prime\prime},V_{2}^{\prime\prime})\) is a good \((p,2\sqrt{\varepsilon})\)-regular pair. Let \(x\in V_{1}^{\prime\prime}\cup V_{2}^{\prime\prime}\) be a neighbor of one of the at least \(\varepsilon k\) leaves of \(T_{1}(x_{1},4)\) - since the pair \((V_{1},V_{2})\) is \((p,\varepsilon)\)-regular such a neighbor always exists. Using Lemma 8 we generate a set \(Y\) of at least \(\varepsilon k\) vertices each of which is connected to \(x\) by a path of length \(\ell-2h-2\). Then, using again the \((p,\varepsilon)\)-regularity of \((V_{1},V_{2})\), we find an edge connecting one of the vertices of \(Y\) with one of the leaves of \(T_{2}(x_{2},4)\), closing a path of length \(\ell\) between \(x_{1}\) and \(x_{2}\).
To see the last part of the lemma it is enough to take any edge and join its ends by a path of length \(\ell\).
Let us recall that a sparse version of the Regularity Lemma, discovered independently by Kohayakawa and Rödl in the early nineties, states that for every \(\varepsilon>0\) the vertex set of a graph \(G=(V,E)\) whose edges are, in a way, 'uniformly distributed', can be partitioned into a few sets of equal size so that almost all pairs of sets are \((p,\varepsilon)\)-regular, where \(p\) is the density \(|E|/\binom{|V|}{2}\) of \(G\). The condition that edges are 'uniformly distributed' holds for random graphs and all of their dense subgraphs, so the Regularity Lemma can be used. Since this application of the Regularity Lemma is now routine (see, for instance, [1, 2, 8, 10]) we only state here its consequence, when it is applied to a random graph \(G(N,c/N)\).
**Lemma 10**.: _For every \(\varepsilon>0\) there exist constants \(N_{0}\), \(T\), and \(c\), such that the following holds. For every \(N\geq N_{0}\) there exists a graph \(G=(V,E)\), \(|V|=N\), \(|E|\leq cN/2\), with the property that for every coloring of edges of \(G\) with red and blue there exists a partition of \(V\) into sets \(V_{1},\ldots,V_{t}\), such that_
1. \(1/\varepsilon\leq t\leq T\)_;_
2. \(\big{|}|V_{i}|-|V_{j}|\big{|}\leq 1\) _for_ \(1\leq i<j\leq t\)_;_
3. _all but at most_ \(\varepsilon t^{2}\) _pairs_ \((V_{i},V_{j})\) _are_ \((c/N,\varepsilon)\)_-regular in both the red graph_ \(R\) _and the blue graph_ \(B\)_;_
4. _for every_ \(1\leq i<j\leq t\) _and_ \(p=c/N\) _either_ \(d_{p,B}(V_{i},V_{j})\geq 1/3\) _or_ \(d_{p,R}(V_{i},V_{j})\geq 1/3\)_._
We denote the graph whose existence is guaranteed by the above lemma by \(\hat{G}_{N}(c,\varepsilon)\). For any coloring of \(\hat{G}_{N}(c,\varepsilon)\) and a partition for which conditions (i)-(iv) hold, by \(\mathbf{G}_{t}(\varepsilon)\) we denote the **reduced graph** of the partition, defined as the graph with vertices \(\mathbf{v}_{1},\ldots,\mathbf{v}_{t}\), where two vertices \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) are connected by a red [blue] edge if the pair \((V_{i},V_{j})\) is \((p,\varepsilon)\)-regular in the red graph \(R\) [the blue graph \(B\)] with \(p=c/N\) and its scaled density in \(R\) [\(B\)] is larger than \(1/3\). Note that \(\mathbf{G}_{t}(\varepsilon)\) is the complete graph on \(t\) vertices from which we have removed at most \(\varepsilon t^{2}\) edges, and the remaining edges are colored with red and blue.
The following result is, in some way, analogous to Lemma 5.
**Lemma 11**.: _For every positive \(\eta<1\) there exist constants \(n_{1}\), \(a_{1}\), \(A_{1}\) and \(c_{1}\) such that for every \(n\geq n_{1}\) there exists a graph \(G_{n}(\eta)\) with \(n(1+\eta)\) vertices and fewer than \(c_{1}n/2\) edges for which the following property holds._
_For every \(d\) and \(\ell\) such that \(A_{1}\log n\leq d\leq a_{1}n\), \(A_{1}\log n\leq\ell\leq n(1+\eta/3)\), and every coloring of edges of \(G_{n}(\eta)\) with red and blue which does not lead to a red \(C_{2d}\):_
1. _each two subsets of vertices of_ \(G_{n}(\eta)\) _of size_ \(n/3\) _each are connected by at least one blue edge;_
2. _there exists a vertex set_ \(S\) _such that_ \(|S|\geq(1+\eta/2)n\) _and any two vertices_ \(x,y\in S\) _are connected by a blue path of length_ \(\ell\)_._
_In particular, for each such coloring, the graph \(G_{n}(\eta)\) contains a blue cycle of length \(\ell+1\)._
Proof.: Put \(\varepsilon=\eta^{2}/10^{4}\), \(N=(1+\eta)n\), and let \(\hat{G}_{N}(c,\varepsilon)\) be a graph whose existence is assured by Lemma 10. Color its edges with red and blue so that it contains no red copy of \(C_{2d}\). First we argue that the reduced graph \(\mathbf{G}_{t}(\varepsilon)\) contains no red edges. Indeed, such an edge means that the red graph contains a \((p,\varepsilon)\)-regular pair \((V_{i},V_{j})\) with scaled density at least \(1/3\) and \(\lfloor n/t\rfloor\leq|V_{i}|,|V_{j}|\leq\lceil N/t\rceil\) and so, by Lemma 9, it contains a (red) cycle of every length between \(\log_{4}(N/t)\) and \(n/(2t)\); in particular a red copy of \(C_{2d}\), a contradiction. Hence, \(\mathbf{G}_{t}(\varepsilon)\) is a graph on \(t\) vertices with at least \((1-2\varepsilon)\binom{t}{2}\) blue edges. Note that such a graph can have at most \(3\varepsilon t\) vertices of degree smaller than \(t/3\). This proves (i) and, by Dirac's theorem, it implies that the reduced graph contains an odd blue cycle on at least \(2r+1\geq(1-4\varepsilon)t\) vertices. Without loss of generality let us assume that this is the cycle \(\mathbf{v}_{1}\mathbf{v}_{2}\cdots\mathbf{v}_{2r+1}\mathbf{v}_{1}\). Now let \(W^{j}_{i}\subseteq V_{i}\), \(|W^{j}_{i}|=\lfloor\sqrt{\varepsilon}N/t\rfloor\), for \(i=1,2,\ldots,2r\) and \(j=1,2,3,4\). Using Lemma 7 we infer that for every \(i=1,2,\ldots,2r\), there exists \(\bar{V}_{i}\subseteq V_{i}\setminus(W^{1}_{i}\cup W^{2}_{i}\cup W^{3}_{i}\cup W^{4}_{i})\) such that \(|\bar{V}_{i}|\geq(1-5\sqrt{\varepsilon})N/t\) and for \(s=1,2,\ldots,r\) the pair \((\bar{V}_{2s-1},\bar{V}_{2s})\) is a good \((p,\varepsilon)\)-regular pair, with \(p\)-density at least \(1/3\) for \(p=c/N\). Moreover, since for every \(s=1,2,\ldots,r\), and \(j=1,2,3,4\), the pair \((W^{j}_{2s-1},W^{j}_{2s})\) is \((p,\sqrt{\varepsilon})\)-regular, there exist sets \(\bar{W}^{j}_{2s-1}\subseteq W^{j}_{2s-1}\) with \(|\bar{W}^{j}_{2s-1}|\geq(1-2\sqrt{\varepsilon})|W^{j}_{2s-1}|\), and \(\bar{W}^{j}_{2s}\subseteq W^{j}_{2s}\) with \(|\bar{W}^{j}_{2s}|\geq(1-2\sqrt{\varepsilon})|W^{j}_{2s}|\), such that the pairs \((\bar{W}^{j}_{2s-1},\bar{W}^{j}_{2s})\) are good \((p,2\sqrt{\varepsilon})\)-regular pairs. Note also that since the pair \((V_{2s-1},V_{2s})\) is \((p,\varepsilon)\)-regular with scaled density at least \(1/3\), there exist sets \(\hat{V}_{2s-1}\subseteq\bar{V}_{2s-1}\), and \(\hat{V}_{2s}\subseteq\bar{V}_{2s}\), such that \(|\hat{V}_{2s-1}|\geq(1-6\varepsilon)|\bar{V}_{2s-1}|\), \(|\hat{V}_{2s}|\geq(1-6\varepsilon)|\bar{V}_{2s}|\), each vertex of \(\hat{V}_{2s-1}\) has at least one neighbor in \(\hat{V}_{2s}\) and in each of the sets \(\bar{W}^{j}_{2s}\), and each vertex of \(\hat{V}_{2s}\) has at least one neighbor in \(\hat{V}_{2s-1}\) and in each of the sets \(\bar{W}^{j}_{2s-1}\), for \(s=1,2,\ldots,r\) and for \(j=1,2,3,4\).
Let us set
\[S=\bigcup_{i=1}^{2r}\hat{V}_{i}\,.\]
Note first that
\[|S|\geq 2r(1-6\varepsilon)(1-5\sqrt{\varepsilon})N/t\geq\big{(}(1-4\varepsilon )t-1\big{)}(1-6\sqrt{\varepsilon})N/t\geq(1-7\sqrt{\varepsilon})N\geq(1+\eta/2 )n\,.\]
Now let \(x,y\in S\). We will argue that in \(\hat{G}_{N}(c,\varepsilon)\) there are even and odd blue paths joining \(x\) and \(y\), both of length at most \(13r\log_{4}(N/t)\), which contain precisely one edge from each pair \((\bar{V}_{2s-1},\bar{V}_{2s})\) for \(s=1,2,\ldots,r\). In order to construct such paths we use the fact that \(\mathbf{v}_{1}\mathbf{v}_{2}\cdots\mathbf{v}_{2r+1}\mathbf{v}_{1}\) is a blue cycle in the reduced graph \(\mathbf{G}_{t}(\varepsilon)\). Let us suppose that \(x\in\hat{V}_{1}\) and \(y\in\hat{V}_{i_{0}}\) for some \(i_{0}=1,2,\ldots,2r\). First we connect \(x\) to a vertex \(v_{2}\) from \(\hat{V}_{2}\). For the next vertex of the path we choose a neighbor \(w_{1}\in\bar{W}^{1}_{1}\) of \(v_{2}\). Then we use the fact that the pair \((\bar{W}^{1}_{1},\bar{W}^{1}_{2})\) is good and build a tree \(T_{1}\) rooted at \(w_{1}\) with \(\varepsilon|V_{2}|\) leaves in \(\bar{W}^{1}_{2}\). Since the pair \((V_{2},V_{3})\) is \((p,\varepsilon)\)-regular, at least one of these leaves has a neighbor \(w_{3}\) in \(\bar{W}^{1}_{3}\) - we select it as the next vertex of the path. Then we use edges of the pair \((\bar{W}^{1}_{3},\bar{W}^{1}_{4})\) to build
a tree \(T_{2}\) rooted at \(w_{3}\) which has at least \(\varepsilon|V_{3}|\) leaves in, say, \(\bar{W}_{4}^{1}\). At least one of these leaves has a neighbor \(v_{3}\) in \(\hat{V}_{3}\); we select it as the next vertex of our path and connect it with one of its neighbors \(v_{4}\in\hat{V}_{4}\). Further, we choose a vertex \(w_{3}^{\prime}\) in \(\bar{W}_{3}^{2}\) adjacent to \(v_{4}\) and build a tree rooted at \(w_{3}^{\prime}\) with a lot of leaves, using edges between \((\bar{W}_{3}^{2},\bar{W}_{4}^{2})\). In this way we go around the cycle using 'buffer sets' \(\bar{W}_{i}^{j}\) and \(V_{2r+1}\), picking one edge from each pair \((\hat{V}_{2s-1},\hat{V}_{2s})\) on the way, but never using any buffer more than once. Moreover, we omit the pair \((\hat{V}_{2s-1},\hat{V}_{2s})\) which contains \(y\) until the very end. To close the path we start with \(y\), choose its neighbor \(z\) in \(\hat{V}_{i_{0}+1}\) or \(\hat{V}_{i_{0}-1}\), choose a neighbor \(z_{i_{0}}\) of \(z\) in a buffer set \(\bar{W}_{i_{0}}^{j}\) which has not been used so far, for some \(j=1,2,3,4\), and build a tree with a lot of leaves rooted at \(z_{i_{0}}\), so that we can connect it to the rest of the path. Note also that, if necessary, we can adjust the parity of the path by running over the cycle, using buffer sets, one extra time. Thus, one can create a path joining \(x\) and \(y\) of the required parity and length at most \(4(2r+1)\log_{4}N\leq 13r\log_{4}n\), which contains one edge from every pair \((\bar{V}_{2s-1},\bar{V}_{2s})\) and only one vertex from each of the sets \(\bar{V}_{i}\), \(i=1,2,\ldots,2r\). Now we can apply Lemma 9 to adjust the length of the path by replacing each blue edge contained in a good pair by a path of any odd length up to
\[(2-30\sqrt{\varepsilon})(1-4\sqrt{\varepsilon})N/t\geq(2-40\sqrt{\varepsilon} )N/t\,.\]
Thus \(x\) and \(y\) can be joined by a path of any length \(\ell\), provided \(\ell\geq 13r\log_{4}(N/t)\) and
\[\ell\leq r(2-40\sqrt{\varepsilon})\frac{N}{t}\,.\]
Since
\[r(2-40\sqrt{\varepsilon})\frac{N}{t}\geq\frac{(1-4\varepsilon)t}{2}(2-40\sqrt {\varepsilon})\frac{N}{t}\geq(1-21\sqrt{\varepsilon})(1+\eta)n\geq(1+\eta/3)n\,,\]
this completes the proof of (ii).
Proof of Theorem 1.: If \(d\) is smaller than \(n/C\) and larger than \(C\log n\) for some large constant \(C\), the assertion follows from Lemma 11. If \(d\) is of the order \(n\), then \(r(C_{n},C_{2d})\) is larger than \(n(1+\eta)\), but an argument similar to the one we used to prove Lemma 11 still works - one should find in the reduced graph either a large enough blue or a large enough red cycle and apply Lemma 9 to adjust its length. Since it is a rather standard procedure (see Letzter [11] for recent developments in this area) and the proof follows the very same lines as in the diagonal case studied in [1, 10], we omit the details and concentrate on the most interesting case, when \(d=O(\log n)\).
We show Theorem 1 for \(d\) which is large enough, i.e. larger than \(d_{1}=\sqrt{n_{1}}\), where \(n_{1}\) is a constant chosen in such a way that the assertion of Lemma 11 holds. Our argument will be somewhat similar to the construction we used in the proof of Theorem 3. For \(0<\eta<1/4\), \(n\), and \(d\) such that \(d_{1}\leq d\leq n^{1/3}\), let us consider a graph \(G_{d^{2}}(\eta)\) on \(N=\lfloor(1+\eta)d^{2}\rfloor\) vertices and average degree smaller than some constant \(c_{1}\), so that for \(G_{d^{2}}(\eta)\) the assertion of Lemma 11 holds. Now let us build a new graph \(H=(V,E)\) whose vertex set consists of disjoint sets \(W_{1},\ldots,W_{2r}\), where \(|W_{2i}|=\lfloor N/2\rfloor\), \(|W_{2i-1}|=\lceil N/2\rceil\), for \(i=1,2,\ldots,r\) and \(r=\lfloor n/d^{2}\rfloor\). In every pair \(W_{i}\cup W_{i+1}\), \(i=1,2,\ldots,2r-1\), we embed a copy of the graph \(G_{d^{2}}(\eta)\), and we also embed such a copy in the pair \(W_{1}\cup W_{2r}\). Note that \(H\) has
\[r(1+\eta)d^{2}\leq(1+\eta)n\]
vertices and the average degree of \(H\) is bounded from above by \(2c_{1}\). Now let us suppose that we color the edges of \(H\) with red and blue in such a way that there are no red copies of \(C_{2d}\). Then in view of Lemma 11 each copy of \(G_{d^{2}}(\eta)\) embedded in \(W_{2i-1}\cup W_{2i}\) for \(i=1,\ldots,r\), contains a set \(S_{i}\) such that \(|S_{i}|\geq(1+\eta/2)d^{2}\) and every two vertices of \(S_{i}\) are connected by a path of any length \(\ell\) between, say, \(d^{2}/2\) and \((1+\eta/3)d^{2}\). Moreover, \(S_{i}\) and \(S_{i+1}\) are connected by at least one blue edge for \(i=1,2,\ldots,r-1\), and so are the sets \(S_{1}\) and \(S_{r}\). Consequently, the graph \(H\) contains blue cycles of any length \(\ell\) such that
\[rd^{2}/2\leq\ell\leq r(1+\eta/3)d^{2}\,.\]
Since
\[rd^{2}/2\leq n(1+\eta)/2\leq n\]
and
\[r(1+\eta/3)d^{2}\geq(1+\eta/4)n\geq n,\]
\(H\) contains a blue copy of \(C_{n}\). This completes the proof for \(d\geq d_{1}\). However, from Theorem 3 we know that for \(d\leq d_{1}\)
\[\hat{r}(C_{n},C_{2d})\leq\hat{r}(C_{n},K_{d,d})\leq Adn\,,\]
so the assertion of Theorem 1 holds for small \(d\) as well.
## 4. Lower bounds for \(r^{*}(C_{n},C_{2d})\)
Since for \(n\geq 3d\) we have \(r(C_{n},C_{2d})=n+d-1\), we start with the following observation.
**Fact 12**.: _If \(G\to(C_{n},C_{2d})\) and \(G\) has \(n+d-1\) vertices, then \(\delta(G)\geq d+1\)._
_In particular, if \(n\geq 3d\), then \(r^{*}(C_{n},C_{2d})\geq\lceil n(d+1)/2\rceil\)._
Proof.: Suppose that some vertex \(v\) of \(G\) has degree at most \(d\), with neighbors \(w_{1},w_{2},\ldots,w_{d}\) (if the degree of \(v\) is smaller than \(d\), just add some edges to \(G\)). Now color all edges incident to one of the vertices \(w_{1},w_{2},\ldots,w_{d-1}\), as well as the edge \(\{v,w_{d}\}\), with red, and all the remaining edges with blue. In this coloring there is no blue copy of \(C_{n}\) and no red copy of \(C_{2d}\).
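The coloring in this proof is simple enough to write down explicitly. The following is a minimal sketch (assuming networkx; the function name is ours) of the red/blue coloring used above, for a vertex \(v\) of degree at most \(d\).

```python
# Certifying coloring from Fact 12: all edges at w_1, ..., w_{d-1} and the
# edge {v, w_d} are red; everything else is blue. The red graph is a union of
# d-1 stars plus one extra edge, so it contains no C_{2d}; the d vertices
# v, w_1, ..., w_{d-1} are blue-isolated, so no blue C_n exists either.
import networkx as nx

def fact12_coloring(G: nx.Graph, v) -> dict:
    w = list(G.neighbors(v))                  # assumes deg(v) = d
    red_centers, w_d = set(w[:-1]), w[-1]
    return {
        frozenset(e): "red" if (red_centers & set(e)) or set(e) == {v, w_d} else "blue"
        for e in G.edges
    }
```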
When \(d\) is small, this bound can be substantially improved. Here and below \(P_{n}\) denotes the path on \(n\) vertices.
**Theorem 13**.: _Let \(n\geq 64\) and \(1\leq b\leq n/64\). Suppose that \(H=(V,E)\) is a graph on \(n+b-1\) vertices such that \(H\to(P_{4},P_{n})\). Then_
\[|E|\geq\frac{n\log_{2}(n/b)}{8\log_{2}\log_{2}(n/b)}\,.\]
Proof.: Let \(s=4|E|/|V|\). We define recursively two graph sequences \(H_{0}\supseteq H_{1}\supseteq\ldots\supseteq H_{t}\) and \(G_{0}\supseteq G_{1}\supseteq\ldots\supseteq G_{t}\), a non-decreasing sequence \(X_{0}\subseteq X_{1}\subseteq\ldots\subseteq X_{t}\) of subsets of \(V\) and one more sequence \((S_{1},S_{2},\ldots,S_{t})\) of subsets of \(V\) in the following way. Put \(X_{0}=\emptyset\), \(H_{0}=H\), and let \(G_{0}\) be the graph induced in \(H\) on the set of all vertices of degree at most \(s\) in \(H\). Let \(S_{j}\) be the largest set of vertices of \(G_{j-1}\) such that their pairwise distances in \(H_{j-1}\) are greater than \(2\). As long as \(N_{H_{j-1}}(S_{j})\) is not empty, we define the set \(X_{j}=X_{j-1}\cup N_{H_{j-1}}(S_{j})\) and the graphs \(H_{j}=H_{j-1}\setminus N_{H_{j-1}}(S_{j})\), \(G_{j}=G_{j-1}\setminus N_{H_{j-1}}(S_{j})\). By \(t\) we mean the largest index \(j\geq 1\) such that \(N_{H_{j-1}}(S_{j})\) is not empty.
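The recursion above is easy to mis-read from the prose alone, so here is a minimal sketch (assuming networkx; for illustration \(S_{j+1}\) is built greedily as a maximal set of pairwise \(H_{j}\)-distance greater than \(2\), maximality being what Claim 14(iii) relies on).

```python
# Peeling recursion: H_j and G_j shrink as the neighborhoods N_{H_j}(S_{j+1})
# are removed and accumulated into X_j.
import networkx as nx

def peel(H: nx.Graph, s: float):
    G = H.subgraph([v for v in H if H.degree(v) <= s]).copy()  # G_0
    Hj, X = H.copy(), set()                                    # H_0 and X_0
    while True:
        S, taboo = [], set()
        for v in G:
            if v not in taboo:
                S.append(v)
                # forbid everything within H_j-distance 2 of v
                taboo |= set(nx.single_source_shortest_path_length(Hj, v, cutoff=2))
        N = set().union(*(Hj.neighbors(v) for v in S)) - set(S)  # N_{H_j}(S_{j+1})
        if not N:
            return X, S          # X_t and S_{t+1} = V(G_t)
        X |= N                   # X_{j+1} = X_j united with N_{H_j}(S_{j+1})
        Hj.remove_nodes_from(N)
        G.remove_nodes_from([v for v in N if v in G])
```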
**Claim 14**.: _For every \(0\leq j\leq t\) the following holds._
1. \(|S_{j+1}|\leq|X_{j}|+b\)_,_
2. \(|N_{H_{j}}(S_{j+1})|\leq(|X_{j}|+b)s\)_,_
3. \(V(G_{j})\subseteq S_{j+1}\cup N_{H_{j}}(S_{j+1})\cup N_{G_{j}}(N_{H_{j}}(S_{j+ 1}))\)_._
Proof.: Suppose that \(|S_{j+1}|>|X_{j}|+b\), for some \(0\leq j\leq t\). Observe that if we color the edges of \(H\) so that all edges incident to \(S_{j+1}\) in \(H_{j}\) are red and all remaining edges of \(H\) are blue, then we have no red copy of \(P_{4}\), and no blue path longer than \(|V(H_{j})|-|S_{j+1}|+2|X_{j}|+1\leq|V(H)|-b=n-1\). This contradicts the assumption that \(H\to(P_{4},P_{n})\) and hence (i) follows.
The second part of the claim is the consequence of the first one and the fact that every vertex in \(S_{j+1}\subseteq V(G_{j})\) has degree at most \(s\).
To see (iii) note that if there exists a vertex \(u\in V(G_{j})\setminus S_{j+1}\) such that \(u\not\in N_{H_{j}}(S_{j+1})\cup N_{G_{j}}(N_{H_{j}}(S_{j+1}))\), then \(u\) is at distance at least \(3\) from every vertex of \(S_{j+1}\), which contradicts the maximality of \(S_{j+1}\).
Based on the above claim and the definition of \(X_{i}\), one can easily verify inductively that for every \(0\leq j\leq t\) we have
\[|N_{H_{j}}(S_{j+1})|\leq(s+1)^{j}bs\text{ and }|X_{j}|\leq\big{(}(s+1)^{j}-1 \big{)}b. \tag{4.1}\]
Furthermore, since \(V(G_{j})\subseteq S_{j+1}\cup N_{H_{j}}(S_{j+1})\cup N_{G_{j}}(N_{H_{j}}(S_{j+1}))\), every vertex of \(V(G_{j})\setminus N_{H_{j}}(S_{j+1})\) either is isolated in \(H_{j}\) or has at least one neighbor in \(N_{H_{j}}(S_{j+1})\). Therefore \(\deg_{H_{j+1}}(u)\leq\deg_{H_{j}}(u)-1\) for every vertex \(u\in V(G_{j+1})\) with positive degree in \(H_{j}\). Thus \(t\leq s\), since the degree in \(H\) of every vertex \(u\in V(G_{0})\) is at most \(s\).
Let us recall that we assumed that \(N_{H_{t}}(S_{t+1})\) is empty. It means that \(S_{t+1}=V(G_{t})\). The construction of the sets \(X_{j}\) implies that \(|V(G_{0})|\leq|V(G_{t})|+|X_{t}|\), so \(|V(G_{0})|\leq|S_{t+1}|+|X_{t}|\) and by Claim 14 we have \(|V(G_{0})|\leq 2|X_{t}|+b\). Hence by (4.1) we obtain
\[|V(G_{0})|\leq 2\big{(}(s+1)^{t}-1\big{)}b+b\leq 2\big{(}(s+1)^{s}-1\big{)}b+b <2(s+1)^{s}b.\]
In view of the definition of \(s\) and \(G_{0}\), we have \(|V(G_{0})|\geq|V|/2\), so the above inequality implies that \((s+1)^{s}>|V|/(4b)\) and hence \(s\geq\log_{2}(|V|/(4b))/\log_{2}\log_{2}(|V|/(4b))\) for \(b\leq|V|/64\). Therefore
\[|E|=\frac{1}{4}|V|s\geq\frac{|V|\log_{2}(|V|/(4b))}{4\log_{2}\log_{2}(|V|/(4b ))}\geq\frac{n\log_{2}(n/b)}{8\log_{2}\log_{2}(n/b)}\,.\]
The above result immediately implies the following lower bound for \(r^{*}(C_{n},C_{2d})\), which for \(d\ll\log n/\log\log n\) is better than the simple bound given in Fact 12, but, unfortunately, it decreases as a function of \(d\).
**Corollary 15**.: _If \(n\geq 64d\), then_
\[r^{*}(C_{n},C_{2d})\geq\frac{n\log_{2}(n/d)}{8\log_{2}\log_{2}(n/d)}.\]
We also remark that it follows from Theorem 13 that \(r^{*}(C_{n},P_{4})=\Omega(n\log n/\log\log n)\), while it is known that \(r^{*}(C_{n},P_{3})=O(n)\) (see for instance Ben-Shimon, Krivelevich, Sudakov [3]).
## 5. Upper bounds for \(r^{*}(C_{n},C_{2d})\)
In order to estimate \(r^{*}(C_{n},C_{2d})\) from above we need to find a sparse graph \(G\) on \(n+d-1\) vertices such that \(G\to(C_{n},C_{2d})\). We start with a technical lemma which says the following. Let \(F\) be a (sparse) graph such that for every coloring of its edges with red and blue which does not lead to a red copy of \(C_{2d}\), we find at most \(s\) blue paths which cover all vertices of \(F\). Then, if we add \(O(s+d)\) vertices to \(F\) and connect them with every vertex of \(F\), the resulting graph \(G\) on \(|V(G)|=n+d-1\) vertices satisfies \(G\to(C_{n},C_{2d})\), precisely as we want. In order to state this result we introduce an additional notion. We call a partially colored graph \(\hat{G}\) on \(n\) vertices an \((n,s,t)\)**-system**, if \(\hat{G}\) consists of:
* the 'central clique' \(K\) on \(t\) vertices,
* \(s\geq 0\) vertex disjoint 'satellite paths' \(L_{1},\ldots,L_{s}\), whose edges are colored with blue. All vertices of each of the paths \(L_{i}\), \(i=1,2,\ldots,s\), are connected by (uncolored) edges to the central clique.
Note that some of the satellite paths may be trivial, i.e. they may be just vertices.
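For concreteness, here is a minimal sketch (assuming networkx; names are ours) that assembles an \((n,s,t)\)-system from a list of satellite path lengths.

```python
# Build an (n, s, t)-system: a central clique K on t vertices, s disjoint
# blue satellite paths, and uncolored edges joining every path vertex to
# every clique vertex. Here n = t + sum of the path lengths.
import networkx as nx

def nst_system(path_lengths: list[int], t: int) -> nx.Graph:
    G = nx.complete_graph(t)                  # central clique K on vertices 0..t-1
    nxt = t
    for L in path_lengths:                    # one satellite path per entry
        path = list(range(nxt, nxt + L))
        nxt += L
        G.add_nodes_from(path)                # L = 1 gives a trivial (single-vertex) path
        for u, v in zip(path, path[1:]):
            G.add_edge(u, v, color="blue")    # pre-colored path edges
        for u in path:
            for w in range(t):
                G.add_edge(u, w)              # uncolored clique-path edges
    return G
```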
**Lemma 16**.: _Let \(G\) be an \((n,s,t)\)-system such that \(t\geq 10d+4s\) for some \(d\geq 2\). Then each coloring of the uncolored edges of \(G\) with two colors, red and blue, leads to either a red copy of \(C_{2d}\), or a blue copy of \(C_{n-d+1}\)._
Proof.: Let \(K\) be the central clique of \(G\) and \(L_{1},L_{2},\ldots,L_{s}\) be the family of the satellite paths. We denote the fully colored \(G\) by \(\bar{G}\) and assume that it contains no red copy of \(C_{2d}\). By \(X\) we denote the set of \(d-1\) vertices of \(\bar{G}\) which have the smallest blue neighborhood in the central clique and write \(G^{\prime}\) for the subgraph induced in \(\bar{G}\) by \(V(G)\setminus X\).
**Claim 17**.: _Every vertex of \(V(G^{\prime})\) has at least \(4d+2s\) blue neighbors in the central clique._
Proof.: Suppose that a vertex \(u\in V(G^{\prime})\) has fewer than \(4d+2s\) blue neighbors in the central clique. Let \(A=X\cup\{u\}\) and \(B=V(K)\setminus(X\cup\{u\})\). Then \(|A|=d\), \(|B|\geq|V(K)|-|X|-1\geq t-d\geq 9d+4s\), and so every two vertices in \(A\) have more than \(|B|-8d-4s\geq d\) common red neighbors in \(B\). Hence in a greedy way we can find a red \(C_{2d}\) in \(\bar{G}[A\cup B]\).
For every \(i=1,2,\ldots,s\), we define a path \(L^{\prime}_{i}\) as the longest path contained in \(L_{i}\) such that its ends do not belong to \(X\) (if all vertices of \(L_{i}\) are in \(X\) we say that the path is empty so we can ignore it). Note that \(V(L^{\prime}_{i})\supseteq V(L_{i})\setminus X\) but \(V(L^{\prime}_{i})\) can also contain some vertices from \(X\). Let \(d^{\prime}\) stand for the number of such vertices, i.e.
\[d^{\prime}=\left|X\cap\bigcup_{i=1}^{s}V(L^{\prime}_{i})\right|.\]
We 'compensate' for these vertices by selecting in \(V(K)\setminus X\) any \(d^{\prime}\) vertices and denote this set as \(X^{\prime}\). Note that
\[|V(K)\cap(X\cup X^{\prime})|=|V(K)\cap X|+d^{\prime}\leq d-1\,.\]
We will show that the blue subgraph spanned in \(\bar{G}\) by \((V(K)\setminus(X\cup X^{\prime}))\cup\bigcup_{i=1}^{s}V(L^{\prime}_{i})\) is Hamiltonian.
Let us first study the graph \(H\), which is the blue subgraph induced in \(\bar{G}\) by the vertices \(V(K)\setminus(X\cup X^{\prime})\).
**Claim 18**.: \(H\) _is \(2(d+s)\)-vertex-connected._
Proof.: By the previous claim, each vertex of \(V(H)\) has at least \(4d+2s\) blue neighbors in \(K\) and hence it has at least \(4d+2s-|V(K)\cap(X\cup X^{\prime})|>3d+2s\) blue neighbors in \(H\). Moreover, since \(K\) contains no red copy of \(C_{2d}\), any pair of disjoint subsets \(U_{1},U_{2}\subseteq V(H)\) of \(d\) vertices each is connected by at least one blue edge. Thus, every vertex cut of \(H\) has at least \(2d+2s\) vertices.
We also remark that, in view of Claim 17 and the definition of \(H\), every vertex of \(V(G^{\prime})\) has at least \(4d+2s-|V(K)\cap(X\cup X^{\prime})|>3d+2s\) blue neighbors in \(H\).
For every \(i=1,2,\ldots,s\) such that \(L^{\prime}_{i}\) is not empty we select a set \(E_{i}\) of blue edges in the following way. By the definition of the path \(L^{\prime}_{i}\), its ends \(x,y\) (possibly equal) have more than \(3d+2s\) blue neighbors in \(V(H)\). We choose blue edges \(xa_{i},yb_{i}\in E(\bar{G})\) such that \(a_{i},b_{i}\in V(H)\), \(a_{i}\neq b_{i}\) and we define the set \(E_{i}=E(L^{\prime}_{i})\cup\{xa_{i},yb_{i}\}\). Note that \(E_{i}\) is the edge set of a blue path. Moreover, since we choose blue neighbors of at most \(2s\) vertices, and each such vertex has more than \(3d+2s\) blue neighbors in \(V(H)\), we can select \(a_{i}\)'s and \(b_{i}\)'s in such a way that they are all distinct.
Now we add at most \(2s\) auxiliary edges \(\{a_{i},b_{i}\}\) to \(H\) and color them azure (if \(a_{i}b_{i}\in E(H)\), then we repaint it). In this way we obtain the blue-azure graph \(H^{\prime}\) consisting of all non-repainted blue edges of \(H\) and a matching \(B\) which consists of at most \(s\) azure edges. Clearly, for the connectivity \(\kappa(H^{\prime})\) of \(H^{\prime}\) we get
\[\kappa(H^{\prime})\geq\kappa(H)\geq 2(d+s)\,.\]
Furthermore, from the assumption that \(\bar{G}\) has no red \(C_{2d}\), we infer that the independence number \(\alpha(H^{\prime})\) of \(H^{\prime}\) is smaller than \(2d\). Hence, \(\kappa(H^{\prime})>\alpha(H^{\prime})+|B|\). Now we use the following result of Häggkvist and Thomassen [6], which is a generalization of the well known Chvátal-Erdős criterion for hamiltonicity.
**Lemma 19**.: _If \(H^{\prime}\) is a graph with \(\kappa(H^{\prime})\geq\alpha(H^{\prime})+m\), then for every set of vertex disjoint paths in \(H^{\prime}\) with \(m\) edges in total, \(H^{\prime}\) has a Hamilton cycle containing all these paths._
Thus, the graph \(H^{\prime}\) contains a Hamilton cycle \(C^{\prime}\) containing all azure edges. It is not hard to see that the edge set \((E(C^{\prime})\setminus B)\cup\bigcup_{i=1}^{s}E_{i}\) forms a blue cycle \(C\) in \(\bar{G}\) such that \(V(G)\setminus V(C)\) consists of all elements of \(X\setminus\bigcup_{i=1}^{s}V(L^{\prime}_{i})\) and all elements of \(V(K)\cap(X\cup X^{\prime})\). Thus, in view of the definition of \(X^{\prime}\), exactly \(d-1\) vertices of \(G\) do not belong to \(C\). Thereby we obtain a blue cycle on \(n-d+1\) vertices.
We use the above result to estimate \(r^{*}(C_{n},C_{2d})\) for \(d=\Omega(\sqrt{n\ln n})\).
**Lemma 20**.: _If \(10^{13}\sqrt{n\ln n}\leq 2d\leq n/10\) and \(d\) is large enough, then_
\[r^{*}(C_{n},C_{2d})\leq 20dn\,. \tag{5.1}\]
Proof.: The following observation is crucial for our argument.
**Claim 21**.: _For every large enough \(d\) such that \(10^{13}\sqrt{n\ln n}\leq 2d\leq n\) there exists a graph \(G(n,d)\) with \(N=n-17d-1\) vertices and fewer than \(dn\) edges such that each subset of vertices of \(G(n,d)\) on at least \(4d\) vertices contains a copy of \(C_{2d}\)._
Proof.: Let us consider a random graph \(G(N,p)\), where \(p=d/(3n)\). Its edges are binomially distributed with parameters \(\binom{N}{2}<n^{2}/2\) and \(p\), so with probability at least \(1/3\) it has fewer than \(dn\) edges. Now let \(\varepsilon=10^{-4}\) and let the random variable \(X\) count pairs of disjoint sets of vertices \(S,T\) of \(G(N,p)\) such that \(|S|,|T|\geq 2\varepsilon d\) and the number of edges between them deviates from its expected value \(p|S||T|\) by more than \(\varepsilon p|S||T|\). Then, using Chernoff bounds (see, for instance, Corollary 2.3 in [7])
\[\mathrm{E}X \leq\sum_{i=\lceil 2\varepsilon d\rceil}^{N}\sum_{j=\lceil 2 \varepsilon d\rceil}^{N}\binom{N}{i}\binom{N}{j}\exp\Big{(}-\frac{ \varepsilon^{2}}{3}ijp\Big{)}\leq N^{2}\binom{N}{\lceil 2\varepsilon d \rceil}^{2}\exp\Big{(}-\frac{\varepsilon^{2}(2\varepsilon d)^{2}d}{9N} \Big{)}\] \[\leq N^{2}\Big{(}\frac{e^{2}N^{2}}{4\varepsilon^{2}d^{2}}\exp \Big{(}-\frac{2\varepsilon^{3}d^{2}}{3N}\Big{)}\Big{)}^{2\varepsilon d},\]
and, since \(d\geq 10^{13}\sqrt{n\ln n}\), \(N\leq n\) and \(2\cdot 10^{13}\varepsilon^{3}/9\geq 2\), for \(d\) large enough we get
\[\Pr(X>0)\leq\mathrm{E}X\leq n^{2}\big{(}e^{2}\varepsilon^{-2}/d\big{)}^{2 \varepsilon d}\leq 1/3\,.\]
Thus, with probability at least \(2/3\) each pair of disjoint subsets \((U,W)\) with \(|U|,|W|\geq 2d\) is a \((p,\varepsilon)\)-regular pair. However, due to Lemmata 7 and 8, such a pair contains cycles of each length from, say, \(d/4\) to \(2d(2-2\varepsilon-10\sqrt{\varepsilon})>2d\). Hence, with positive probability a graph \(G(n,d)\) with the required property exists.
To build a graph \(H(n,d)\) with \(n+d-1\) vertices and fewer than \(20dn\) edges such that \(H(n,d)\to(C_{n},C_{2d})\), take a graph \(G(n,d)\) whose existence is assured by the claim above, add to it a clique on \(18d\) new vertices, and connect every vertex of the clique to all vertices of \(G(n,d)\). Now let us color the edges of the graph \(H(n,d)\) obtained in this way with red and blue so that there are no red copies of \(C_{2d}\). Consider the largest family \(\mathcal{P}\) of vertex disjoint blue paths contained in \(G(n,d)\), and let \(S\) denote the set obtained by taking one end of each such path. From the maximality of the family \(\mathcal{P}\) we infer that \(S\) contains no blue edges; since every set of at least \(4d\) vertices of \(G(n,d)\) spans a copy of \(C_{2d}\), which would then be red, \(\mathcal{P}\) contains fewer than \(4d\) paths. Now we can use Lemma 16 with \(s=4d\) to deduce that in this coloring of \(H(n,d)\) there exists a blue copy of \(C_{n}\).
In order to present a construction which gives a general upper bound for \(r^{*}(C_{n},C_{2d})\), we introduce one more definition. By \(T_{N}\) we denote a **binary tree** on \(N\) vertices which is obtained from the perfect rooted binary tree of height \(h=\lceil\log_{2}(N-1)\rceil\) by removing some leaves on the highest level. Let \(\hat{T}_{N}\) be the **closure** of \(T_{N}\), which is obtained from \(T_{N}\) by joining each vertex of \(T_{N}\) with each of its descendants. By leaves of \(\hat{T}_{N}\) we mean the vertices which had degree \(1\) in \(T_{N}\) (and have degree \(h-1\) or \(h-2\) in \(\hat{T}_{N}\)). Note also that if \(N=2^{k}-1\), then the vertices of each level of \(\hat{T}_{N}\) send at most \(N\) edges to their descendants, so the number of edges in \(\hat{T}_{N}\) can be crudely estimated from above by \(N\log_{2}N\).
Now for \(n\geq 14d\geq 28\) let \(U(n,d)\) be a 'blow-up' of the closure of a binary tree, which is constructed in the following way. Take \(N=\lfloor(n+d-1)/(14d)\rfloor\) and replace each vertex of \(\hat{T}_{N}\) by a clique of \(14d\) elements, except, perhaps, one leaf which we replace by a clique of
\[(n+d-1)-14d(N-1)\leq 28d\]
elements, so that the resulting graph has precisely \(n+d-1\) vertices. Moreover, we replace each edge by the complete bipartite graph between the corresponding sets. Thus, \(U(n,d)\) has at most
\[(14d)^{2}N\log_{2}N+N\binom{14d}{2}+(14d)(\log_{2}n+14d)\leq 20dn\log_{2}(n/d)\]
edges, provided \(n\geq 14d\).
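The construction of \(U(n,d)\) is purely mechanical, so we include a minimal sketch (assuming networkx; function names are ours, and for simplicity we ignore the adjustment of the one smaller leaf clique).

```python
# Sketch of the construction: heap-indexed binary tree T_N, its closure
# (each vertex joined to all of its ancestors/descendants), and the blow-up
# in which vertices become cliques and edges become complete bipartite graphs.
import networkx as nx

def closure_of_binary_tree(N: int) -> nx.Graph:
    C = nx.Graph()
    C.add_nodes_from(range(N))
    for v in range(1, N):
        a = (v - 1) // 2            # parent in the heap order
        while True:
            C.add_edge(v, a)        # join v to every ancestor
            if a == 0:
                break
            a = (a - 1) // 2
    return C

def blow_up(H: nx.Graph, size: int) -> nx.Graph:
    parts = {v: range(v * size, (v + 1) * size) for v in H}
    G = nx.Graph()
    for v in H:                     # each tree vertex becomes a clique
        G.add_nodes_from(parts[v])
        G.add_edges_from(nx.complete_graph(parts[v]).edges)
    for u, v in H.edges:            # each tree edge becomes K_{size,size}
        G.add_edges_from((a, b) for a in parts[u] for b in parts[v])
    return G

# U(n, d): blow_up(closure_of_binary_tree((n + d - 1) // (14 * d)), 14 * d),
# up to shrinking one leaf clique so the total is exactly n + d - 1 vertices.
```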
The following observation is a consequence of Lemma 16.
**Lemma 22**.: _If \(n\geq 14d\geq 28\), then \(U(n,d)\to(C_{n},C_{2d})\)._
_In particular, for \(n\geq 3d\geq 6\) we have \(r^{*}(C_{n},C_{2d})\leq 20nd\log_{2}(n/d)\)._
Proof.: We show that \(U(n,d)\to(C_{n},C_{2d})\) using induction on \(n\). If \(n<28d\), then \(U(n,d)\) is a clique on \(n+d-1=r(C_{n},C_{2d})\) vertices, so the assertion holds. Let us assume that \(n\geq 28d\) and let \(K\) denote the set of \(14d\) vertices which replaced the root of \(\hat{T}_{N}\). Then, if we remove \(K\) from \(U(n,d)\), the resulting graph \(\bar{U}(n,d)\) either can be identified with \(U(n-14d,d)\) or consists of two components which are graphs \(U(n_{1},d)\) and \(U(n_{2},d)\), where \(n_{1}+n_{2}+d-1=n-14d\). Now suppose that we color the edges of \(U(n,d)\) with two colors, red and blue, in such a way that there are no red copies of \(C_{2d}\). Then, by the induction hypothesis, in the obtained graph \(\bar{U}(n,d)\) there exists a blue path which contains all but at most \(d-1\) vertices of the graph, or there exist two blue paths which contain all but at most \(2(d-1)\) vertices combined. Since we treat isolated vertices as trivial paths, it means that the vertex set of \(\bar{U}(n,d)\) can be covered by at most \(2d\) blue paths. Since the clique \(K\) has \(14d\) vertices, we infer from Lemma 16 that one can use the edges of \(K\) and the edges between \(K\) and \(\bar{U}(n,d)\) to create a blue cycle on precisely \(n\) vertices.
To see the second part of Lemma 22 it is enough to observe that if \(2d\leq n\leq 14d\), then the complete graph on \(n+d-1\) vertices has fewer than \(20dn\) edges.
Notice that one can easily construct graphs \(G\) on \(n+d-1\) vertices such that \(G\to(C_{n},C_{2d})\) which have density slightly smaller than \(U(n,d)\). For instance, in the closure of an appropriately chosen binary tree we may replace all leaves not by cliques of size \(14d\), but by independent sets of size \(d\). However, this improvement only modifies the constant next to \(dn\) which we, quite crudely, estimated by 20. However, when \(d\) is close to \(\sqrt{n}\) one can use Lemma 20 and get a substantially better estimate by replacing each leaf by a copy of a graph \(G\) on \(k+d-1=10^{-26}d^{2}/\log_{2}d\) vertices and at most \(20dk\) edges which is such that \(G\to(C_{k},C_{2d})\). Since \(k\gg d\), the height of the tree we need is \(O(\log_{2}((n+d-1)/k))\), so it results in \(O(dn\log(n/k))\) as the upper bound for \(r^{*}(C_{n},C_{2d})\) (Theorem 2 gives bounds with specific constants).
## 6. Final remarks
Clearly, in this paper we do not resolve all problems concerning the size Ramsey and the restricted size Ramsey numbers for pairs of cycles. The first question which naturally comes to mind is whether \(\hat{r}(C_{n},C_{\ell})=O(n)\) also in the case when \(\ell\leq n\) and \(\ell\) is odd. We are convinced that this is the case and believe that we can prove it; however, since the argument is much more complicated than in the case of even \(\ell\), we decided not to include it here, especially as this work is dedicated to studying Ramsey numbers of pairs of cycles in which the shorter cycle is even.
Theorem 2 raises many more questions. Let us state at least some of them. Here by \(o_{d}(1)\) we denote a function which tends to \(0\) as \(d\to\infty\).
**Open Problems**.: _Find smallest possible constants \(0\leq a_{1}\leq a_{2}\leq a_{3}\) such that for every \(0<\eta<1-a_{3}\) and \(3d\leq n\) large enough the following holds._
1. _For_ \(2d\geq n^{a_{3}+\eta}\) _we have_ \[r^{*}(C_{n},C_{2d})=\lceil(n+d-1)(d+1)/2\rceil\,.\] (6.1)
2. _For_ \(2d\geq n^{a_{2}+\eta}\) _we have_ \[r^{*}(C_{n},C_{2d})=(1/2+o_{d}(1))d(n+d)\,.\] (6.2)
3. _For_ \(2d\geq n^{a_{1}+\eta}\) _we have_ \[r^{*}(C_{n},C_{2d})\leq 20dn\,.\] (6.3)
Theorem 2 implies that \(a_{1}\leq 1/2\). On the other hand, it is easy to see that \(a_{3}\geq 1/2\). Indeed, every graph \(G\) on \(n+d-1\) vertices with \(\Delta(G)\leq\sqrt{n}/2\) contains a vertex \(v_{1}\) and vertices \(v_{2},\ldots,v_{d}\notin N(N(v_{1}))\). Then we can color all edges incident to one of the vertices \(v_{1},v_{2},\ldots,v_{d}\) red and the rest of the edges blue, creating neither a red copy of \(C_{2d}\) nor a blue copy of \(C_{n}\). Hence \(G\not\to(C_{n},C_{2d})\).
Beyond the above information we can only speculate on the values of \(a_{1},a_{2},a_{3}\). Our guess, which is not based on any solid evidence, is that \(a_{3}=2/3\), \(a_{2}=1/2\), and the value of \(a_{1}\) is either \(0\) or \(1/2\).
Finally, another interesting problem is to study the asymptotic behavior of \(r^{*}(C_{n},C_{4})\) and \(r^{*}(P_{n},P_{4})\) which we determined only up to a factor of \(\log\log n\).
| There exists an absolute constant $A$ such that the size Ramsey number of the pair of cycles $(C_n, C_{2d})$ is bounded above by $An$. We also study the restricted size Ramsey numbers of such pairs. |
2305.19588 | Active causal structure learning with advice | We introduce the problem of active causal structure learning with advice. In
the typical well-studied setting, the learning algorithm is given the essential
graph for the observational distribution and is asked to recover the underlying
causal directed acyclic graph (DAG) $G^*$ while minimizing the number of
interventions made. In our setting, we are additionally given side information
about $G^*$ as advice, e.g. a DAG $G$ purported to be $G^*$. We ask whether the
learning algorithm can benefit from the advice when it is close to being
correct, while still having worst-case guarantees even when the advice is
arbitrarily bad. Our work is in the same space as the growing body of research
on algorithms with predictions. When the advice is a DAG $G$, we design an
adaptive search algorithm to recover $G^*$ whose intervention cost is at most
$O(\max\{1, \log \psi\})$ times the cost for verifying $G^*$; here, $\psi$ is a
distance measure between $G$ and $G^*$ that is upper bounded by the number of
variables $n$, and is exactly 0 when $G=G^*$. Our approximation factor matches
the state-of-the-art for the advice-less setting. | Davin Choo, Themis Gouleakis, Arnab Bhattacharyya | 2023-05-31T06:15:50 | http://arxiv.org/abs/2305.19588v1 | # Active causal structure learning with advice
###### Abstract
We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) \(G^{*}\) while minimizing the number of interventions made. In our setting, we are additionally given side information about \(G^{*}\) as advice, e.g. a DAG \(G\) purported to be \(G^{*}\). We ask whether the learning algorithm can benefit from the advice when it is close to being correct, while still having worst-case guarantees even when the advice is arbitrarily bad. Our work is in the same space as the growing body of research on _algorithms with predictions_. When the advice is a DAG \(G\), we design an adaptive search algorithm to recover \(G^{*}\) whose intervention cost is at most \(\mathcal{O}(\max\{1,\log\psi\})\) times the cost for verifying \(G^{*}\); here, \(\psi\) is a distance measure between \(G\) and \(G^{*}\) that is upper bounded by the number of variables \(n\), and is exactly \(0\) when \(G=G^{*}\). Our approximation factor matches the state-of-the-art for the advice-less setting.
## 1 Introduction
A _causal directed acyclic graph_ on a set \(V\) of \(n\) variables is a Bayesian network in which the edges model direct causal effects. A causal DAG can be used to infer not only the observational distribution of \(V\) but also the result of any intervention on any subset of variables \(V^{\prime}\subseteq V\). In this work, we restrict ourselves to the _causally sufficient_ setting where there are no latent confounders, no selection bias, and no missingness in data.
The goal of _causal structure learning_ is to recover the underlying DAG from data. This is an important problem with applications in multiple fields including philosophy, medicine, biology, genetics, and econometrics [12, 13, 14, 15, 16, 17, 18]. Unfortunately, in general, it is known that observational data can only recover the causal DAG up to an equivalence class [12, 1]. Hence, if one wants to avoid making parametric assumptions about the causal mechanisms, the only recourse is to obtain experimental data from interventions [1, 1].
Such considerations motivate the problem of _interventional design_ where the task is to find a set of interventions of optimal cost which is sufficient to recover the causal DAG. There has been a series of recent works studying this problem [13, 1, 1, 1, 1, 14, 15, 16, 17, 18, 19, 20, 21, 22] under various assumptions. In particular, assuming causal sufficiency, [10] gave an adaptive algorithm that actively generates a sequence of interventions of bounded size, so that the total number of interventions is at most \(\mathcal{O}(\log n)\) times the optimal.
Typically though, in most applications of causal structure learning, there are domain experts and practitioners who can provide additional "advice" about the causal relations. Indeed, there has been a long line of work studying how to incorporate expert advice into the causal graph discovery process; e.g. see [12, 1, 13, 14, 15, 16, 17, 18, 19, 20]. In this work, we study in a principled way how using purported expert advice can lead to improved algorithms for interventional design.
Before discussing our specific contributions, let us ground the above discussion with a concrete problem of practical importance. In modern virtualized infrastructure, it is increasingly common for applications to be modularized into a large number of interdependent microservices. These microservices communicate with each other in ways that depend on the application code and on the triggering userflow. Crucially, the communication graph between microservices is often unknown to the platform provider as the application code may be private and belong to different entities. However, knowing the graph is useful for various critical platform-level tasks,
such as fault localization [27], active probing [19], testing [18], and taint analysis [15]. Recently, [14] and [16] suggested viewing the microservices communication graph as a sparse causal DAG. In particular, [14] show that arbitrary interventions can be implemented as fault injections in a staging environment, so that a causal structure learning algorithm can be deployed to generate a sequence of interventions sufficient to learn the underlying communication graph. In such a setting, it is natural to assume that the platform provider already has an approximate guess about the graph, e.g. the graph discovered in a previous run of the algorithm or the graph suggested by public metadata tagging microservice code. The research program we put forth is to design causal structure learning algorithms that can take advantage of such potentially imperfect advice1.
Footnote 1: Note however that the system in [14] is not causally sufficient due to confounding user behavior and [16] does not actively perform interventions. So, the algorithm proposed in this work cannot be used directly for the microservices graph learning problem.
### Our contributions
In this work, we study _adaptive intervention design_ for recovering _non-parametric_ causal graphs _with expert advice_. Specifically, our contributions are as follows.
* **Problem Formulation**. Our work connects the causal structure learning problem with the burgeoning research area of _algorithms with predictions_ or _learning-augmented algorithms_ [15] where the goal is to design algorithms that bypass worst-case behavior by taking advantage of (possibly erroneous) advice or predictions about the problem instance. Most work in this area has been restricted to online algorithms, data structure design, or optimization, as described later in Section 2.5. However, as we motivated above, expert advice is highly relevant for causal discovery, and to the best of our knowledge, ours is the first attempt to formally address the issue of _imperfect_ advice in this context.
* **Adaptive Search Algorithm**. We consider the setting where the advice is a DAG \(G\) purported to be the orientations of all the edges in the graph. We define a distance measure which is always bounded by \(n\), the number of variables, and equals \(0\) when \(G=G^{*}\). For any integer \(k\geq 1\), we propose an adaptive algorithm to generate a sequence of interventions of size at most \(k\) that recovers the true DAG \(G^{*}\), such that the total number of interventions is \(\mathcal{O}(\log\psi(G,G^{*})\cdot\log k)\) times the optimal number of interventions of size \(k\). Thus, our approximation factor is never worse than the factor for the advice-less setting in [13]. Our search algorithm also runs in polynomial time.
* **Verification Cost Approximation**. For a given upper bound \(k\geq 1\), a verifying intervention set for a DAG \(G^{*}\) is a set of interventions of size at most \(k\) that, together with knowledge of the Markov equivalence class of \(G^{*}\), determines the orientations of all edges in \(G^{*}\). The minimum size of a verifying intervention set for \(G^{*}\), denoted \(\nu_{k}(G^{*})\), is clearly a lower bound for the number of interventions required to learn \(G^{*}\) (regardless of the advice graph \(G\)). One of our key technical results is a structural result about \(\nu_{1}\). We prove that for any two DAGs \(G\) and \(G^{\prime}\) within the same Markov equivalence class, we always have \(\nu_{1}(G)\leq 2\cdot\nu_{1}(G^{\prime})\) and that this is tight in the worst case. Beyond an improved structural understanding of minimum verifying intervention sets, which we believe is of independent interest, this enables us to "blindly trust" the information provided by imperfect advice to some extent.
Similar to prior works (e.g. [12, 13, 14]), we assume causal sufficiency and faithfulness while using ideal interventions. Under these assumptions, running standard causal discovery algorithms (e.g. PC [17], GES [15]) will always successfully recover the correct essential graph from data. We also assume that the given expert advice is consistent with observational essential graph. See Appendix A for a discussion about our assumptions.
### Paper organization
In Section 2, we intersperse preliminary notions with related work. Our main results are presented in Section 3 with the high-level technical ideas and intuition given in Section 4. Section 5 provides some empirical validation. See the appendices for full proofs, source code, and experimental details.
Preliminaries and Related Work
Basic notions about graphs and causal models are defined in Appendix B. To be _very_ brief, if \(G=(V,E)\) is a graph on \(|V|=n\) nodes/vertices where \(V(G)\), \(E(G)\), and \(A(G)\subseteq E(G)\) denote nodes, edges, and arcs of \(G\) respectively, we write \(u\sim v\) to denote that two nodes \(u,v\in V\) are connected in \(G\), and write \(u\to v\) or \(u\gets v\) when specifying a certain direction. The _skeleton_ \(\operatorname{skel}(G)\) refers to the underlying graph where all edges are made undirected. A _v-structure_ in \(G\) refers to a collection of three distinct vertices \(u,v,w\in V\) such that \(u\to v\gets w\) and \(u\not\sim w\). Let \(G=(V,E)\) be fully unoriented. For vertices \(u,v\in V\), a subset of vertices \(V^{\prime}\subseteq V\) and an integer \(r\geq 0\), we define \(\operatorname{\mathtt{dist}}_{G}(u,v)\) as the shortest path length between \(u\) and \(v\), and \(N^{r}_{G}(V^{\prime})=\{v\in V:\min_{u\in V^{\prime}}\operatorname{\mathtt{dist}}_{G}(u,v)\leq r\}\subseteq V\) as the set of vertices that are \(r\)-hops away from \(V^{\prime}\) in \(G\). A directed acyclic graph (DAG) is a fully oriented graph without directed cycles. For any DAG \(G\), we denote its Markov equivalence class (MEC) by \([G]\) and essential graph by \(\mathcal{E}(G)\). DAGs in the same MEC have the same skeleton and the essential graph is a partially directed graph such that an arc \(u\to v\) is directed if \(u\to v\) in _every_ DAG in MEC \([G]\), and an edge \(u\sim v\) is undirected if there exist two DAGs \(G_{1},G_{2}\in[G]\) such that \(u\to v\) in \(G_{1}\) and \(v\to u\) in \(G_{2}\). It is known that two graphs are Markov equivalent if and only if they have the same skeleton and v-structures [20, 2] and the essential graph \(\mathcal{E}(G)\) can be computed from \(G\) by orienting v-structures in \(\operatorname{skel}(G)\) and applying Meek rules (see Appendix D). In a DAG \(G\), an edge \(u\to v\) is a _covered edge_ if \(\operatorname{\mathtt{Pa}}(u)=\operatorname{\mathtt{Pa}}(v)\setminus\{u\}\). We use \(\mathcal{C}(G)\subseteq E(G)\) to denote the set of covered edges of \(G\).
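As a quick illustration of the covered-edge condition \(\operatorname{\mathtt{Pa}}(u)=\operatorname{\mathtt{Pa}}(v)\setminus\{u\}\), here is a minimal sketch assuming networkx DiGraphs (the function name is ours).

```python
import networkx as nx

def covered_edges(G: nx.DiGraph) -> set:
    """Arcs u -> v whose endpoints satisfy Pa(u) = Pa(v) minus {u}."""
    return {
        (u, v)
        for u, v in G.edges
        if set(G.predecessors(u)) == set(G.predecessors(v)) - {u}
    }

# Example: for the triangle a -> b, b -> c, a -> c, the covered edges are
# a -> b (both endpoints have no other parents) and b -> c (both have
# parent set {a} once u is removed); a -> c is not covered.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c")])
print(covered_edges(G))  # {('a', 'b'), ('b', 'c')}
```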
### Ideal interventions
An _intervention_ \(S\subseteq V\) is an experiment where every variable \(s\in S\) is forcefully set to some value, independent of the underlying causal structure. An intervention is _atomic_ if \(|S|=1\) and _bounded size_ if \(|S|\leq k\) for some \(k\geq 1\); observational data is a special case where \(S=\emptyset\). The effect of interventions is formally captured by Pearl's do-calculus [10]. We call any \(\mathcal{I}\subseteq 2^{V}\) an _intervention set_: an intervention set is a set of interventions where each intervention corresponds to a subset of variables. An _ideal intervention_ on \(S\subseteq V\) in \(G\) induces an interventional graph \(G_{S}\) where all incoming arcs to vertices \(v\in S\) are removed [1]. It is known that intervening on \(S\) allows us to infer the edge orientation of any edge cut by \(S\) and \(V\setminus S\) [1, 1, 13, 14, 15].
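A minimal sketch of an ideal intervention (assuming networkx; names are ours): \(G_{S}\) drops all arcs into \(S\), and the orientations recovered directly are exactly the edges with one endpoint in \(S\).

```python
import networkx as nx

def interventional_graph(G: nx.DiGraph, S: set) -> nx.DiGraph:
    """G_S: remove every arc pointing into the intervened set S."""
    G_S = G.copy()
    G_S.remove_edges_from([(u, v) for u, v in G.edges if v in S])
    return G_S

def oriented_by_intervention(G: nx.DiGraph, S: set) -> set:
    """Edges cut by S, whose orientations the intervention reveals."""
    return {(u, v) for u, v in G.edges if (u in S) != (v in S)}
```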
We now give a definition and result for graph separators.
**Definition 1** (\(\alpha\)-separator and \(\alpha\)-clique separator, Definition 19 from [11]).: Let \(A,B,C\) be a partition of the vertices \(V\) of a graph \(G=(V,E)\). We say that \(C\) is an _\(\alpha\)-separator_ if no edge joins a vertex in \(A\) with a vertex in \(B\) and \(|A|,|B|\leq\alpha\cdot|V|\). We call \(C\) an _\(\alpha\)-clique separator_ if it is an \(\alpha\)-separator and a clique.
**Theorem 2** ([1], instantiated for unweighted graphs).: _Let \(G=(V,E)\) be a chordal graph with \(|V|\geq 2\) and \(p\) vertices in its largest clique. There exists a \(1/2\)-clique-separator \(C\) involving at most \(p-1\) vertices. The clique \(C\) can be computed in \(\mathcal{O}(|E|)\) time._
For ideal interventions, an \(\mathcal{I}\)-essential graph \(\mathcal{E}_{\mathcal{I}}(G)\) of \(G\) is the essential graph representing the Markov equivalence class of graphs whose interventional graphs for each intervention are Markov equivalent to \(G_{S}\) for any intervention \(S\in\mathcal{I}\). There are several known properties of \(\mathcal{I}\)-essential graphs [1, 1]: Every \(\mathcal{I}\)-essential graph is a chain graph2 with chordal3 chain components. This includes the case of \(\mathcal{I}=\emptyset\). Orientations in one chain component do not affect orientations in other components. In other words, to fully orient any essential graph \(\mathcal{E}(G^{*})\), it is necessary and sufficient to orient every chain component in \(\mathcal{E}(G^{*})\).
Footnote 2: A partially directed graph is a _chain graph_ if it does _not_ contain any partially directed cycles where all directed arcs point in the same direction along the cycle.
Footnote 3: A chordal graph is a graph where every cycle of length at least \(4\) has an edge that is not part of the cycle but connects two vertices of the cycle; see [1] for an introduction.
For any intervention set \(\mathcal{I}\subseteq 2^{V}\), we write \(R(G,\mathcal{I})=A(\mathcal{E}_{\mathcal{I}}(G))\subseteq E\) to mean the set of oriented arcs in the \(\mathcal{I}\)-essential graph of a DAG \(G\). For cleaner notation, we write \(R(G,I)\) for single interventions \(\mathcal{I}=\{I\}\) for some \(I\subseteq V\), and \(R(G,v)\) for single atomic interventions \(\mathcal{I}=\{\{v\}\}\) for some \(v\in V\). For any interventional set \(\mathcal{I}\subseteq 2^{V}\), define \(G^{\mathcal{I}}=G[E\setminus R(G,\mathcal{I})]\) as the _fully directed_ subgraph DAG induced by the _unoriented arcs_ in \(\mathcal{E}_{\mathcal{I}}(G)\), where \(G^{\emptyset}\) is the graph obtained after removing all the oriented arcs in the observational essential graph due to v-structures. See Fig. 1 for an example. In the notation of \(R(\cdot,\cdot)\), the following result justifies studying verification and adaptive search via ideal interventions only on DAGs without v-structures, i.e. moral DAGs (Definition 4): since \(R(G,\mathcal{I})=R(G^{\emptyset},\mathcal{I})\mathbin{\dot{\cup}}R(G,\emptyset)\), any oriented arcs in the observational graph can be removed _before performing any interventions_ as the optimality of the solution is unaffected.4
**Theorem 3** ([13]).: _For any DAG \(G=(V,E)\) and intervention sets \(\mathcal{A},\mathcal{B}\subseteq 2^{V}\),_
\[R(G,\mathcal{A}\cup\mathcal{B})=R(G^{\mathcal{A}},\mathcal{B})\mathbin{\dot{\cup}}\,R(G^{\mathcal{B}},\mathcal{A})\mathbin{\dot{\cup}}(R(G,\mathcal{A})\cap R(G,\mathcal{B}))\]
**Definition 4** (Moral DAG).: A DAG \(G\) is called a _moral DAG_ if it has no v-structures. So, \(\mathcal{E}(G)=\operatorname{skel}(G)\).
### Verifying sets
A _verifying set_\(\mathcal{I}\) for a DAG \(G\in[G^{*}]\) is an intervention set that fully orients \(G\) from \(\mathcal{E}(G^{*})\), possibly with repeated applications of Meek rules (see Appendix D), i.e. \(\mathcal{E}_{\mathcal{I}}(G^{*})=G^{*}\). Furthermore, if \(\mathcal{I}\) is a verifying set for \(G^{*}\), then so is \(\mathcal{I}\cup S\) for any additional intervention \(S\subseteq V\). While there may be multiple verifying sets in general, we are often interested in finding one with a minimum size.
**Definition 5** (Minimum size verifying set).: An intervention set \(\mathcal{I}\subseteq 2^{V}\) is called a verifying set for a DAG \(G^{*}\) if \(\mathcal{E}_{\mathcal{I}}(G^{*})=G^{*}\). \(\mathcal{I}\) is a _minimum size verifying set_ if \(\mathcal{E}_{\mathcal{I}^{\prime}}(G^{*})\neq G^{*}\) for any \(|\mathcal{I}^{\prime}|<|\mathcal{I}|\).
For bounded size interventions, the _minimum verification number_ \(\nu_{k}(G)\) denotes the size of the minimum size verifying set for any DAG \(G\in[G^{*}]\); we write \(\nu_{1}(G)\) for atomic interventions. That is, any arc directions revealed when performing interventions on \(\mathcal{E}(G^{*})\) respect \(G\). [13] tells us that it is necessary and sufficient to intervene on a minimum vertex cover of the covered edges \(\mathcal{C}(G)\) in order to verify a DAG \(G\), and that \(\nu_{1}(G)\) is efficiently computable given \(G\) since \(\mathcal{C}(G)\) induces a forest.
**Theorem 6** ([13]).: _Fix an essential graph \(\mathcal{E}(G^{*})\) and \(G\in[G^{*}]\). An atomic intervention set \(\mathcal{I}\) is a minimal sized verifying set for \(G\) if and only if \(\mathcal{I}\) is a minimum vertex cover of covered edges \(\mathcal{C}(G)\) of \(G\). A minimal sized atomic verifying set can be computed in polynomial time since the edge-induced subgraph on \(\mathcal{C}(G)\) is a forest._
For any DAG \(G\), we use \(\mathcal{V}(G)\subseteq 2^{V}\) to denote the set of all _atomic_ verifying sets for \(G\). That is, each _atomic_ intervention set in \(\mathcal{V}(G)\) is a minimum vertex cover of \(\mathcal{C}(G)\).
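Since \(\mathcal{C}(G)\) induces a forest, \(\nu_{1}(G)\) can be computed exactly with a standard leaf-peeling rule for minimum vertex cover on forests. A minimal sketch follows (assuming networkx and reusing the covered_edges sketch above; the exactness of the leaf rule on forests is a classical fact, not a claim of the paper).

```python
import networkx as nx

def min_vertex_cover_forest(F: nx.Graph) -> set:
    """Exact minimum vertex cover of a forest: repeatedly cover a leaf's neighbor."""
    F, cover = F.copy(), set()
    while F.number_of_edges() > 0:
        leaf = next(v for v in F if F.degree(v) == 1)  # a forest with edges has a leaf
        parent = next(iter(F.neighbors(leaf)))
        cover.add(parent)        # some optimal cover always contains the leaf's neighbor
        F.remove_node(parent)
    return cover

def nu_1(G: nx.DiGraph) -> int:
    """Minimum verification number via Theorem 6."""
    F = nx.Graph()
    F.add_edges_from(covered_edges(G))   # covered_edges from the sketch above
    return len(min_vertex_cover_forest(F))
```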
### Adaptive search using ideal interventions
Adaptive search algorithms have been studied in earnest [1, 1, 2, 3] as they can use significantly less interventions than non-adaptive counterparts.5
Footnote 5: If the essential graph \(\mathcal{E}(G^{*})\) is a path of \(n\) nodes, then non-adaptive algorithms need \(\Omega(n)\) atomic interventions to recover \(G^{*}\) while \(\mathcal{O}(\log n)\) atomic interventions suffices for adaptive search.
Most recently, [13] gave an efficient algorithm for computing adaptive interventions with provable approximation guarantees on general graphs.
**Theorem 7** ([13]).: _Fix an unknown underlying DAG \(G^{*}\). Given an essential graph \(\mathcal{E}(G^{*})\) and an intervention set bound \(k\geq 1\), there is a deterministic polynomial time algorithm that computes an intervention set \(\mathcal{I}\) adaptively such that \(\mathcal{E}_{\mathcal{I}}(G^{*})=G^{*}\), and \(\mathcal{I}\) has size 1. \(\mathcal{O}(\log(n)\cdot\nu_{1}(G^{*}))\) when \(k=1\) 2. \(\mathcal{O}(\log(n)\cdot\log(k)\cdot\nu_{k}(G^{*}))\) when \(k>1\)._
Meanwhile, in the context of local causal graph discovery where one is interested in only learning a _subset_ of causal relationships, the SubsetSearch algorithm of [13] incurs a multiplicative overhead that scales logarithmically with the number of relevant nodes when orienting edges within a node-induced subgraph.
**Definition 8** (Relevant nodes).: Fix a DAG \(G^{*}=(V,E)\) and arbitrary subset \(V^{\prime}\subseteq V\). For any intervention set \(\mathcal{I}\subseteq 2^{V}\) and resulting interventional essential graph \(\mathcal{E}_{\mathcal{I}}(G^{*})\), we define the _relevant nodes_\(\rho(\mathcal{I},V^{\prime})\subseteq V^{\prime}\) as the set of nodes within \(V^{\prime}\) that is adjacent to some unoriented arc within the node-induced subgraph \(\mathcal{E}_{\mathcal{I}}(G^{*})[V^{\prime}]\).
For an example of relevant nodes, see Fig. 1: For the subset \(V^{\prime}=\{A,C,D,E,F\}\) in (II), only \(\{A,C,D\}\) are relevant since incident edges to \(E\) and \(F\) are all oriented.
**Theorem 9** ([13]).: _Fix an unknown underlying DAG \(G^{*}\). Given an interventional essential graph \(\mathcal{E}_{\mathcal{I}}(G^{*})\), a node-induced subgraph \(H\) with relevant nodes \(\rho(\mathcal{I},V(H))\) and intervention set bound \(k\geq 1\), there is a deterministic polynomial time algorithm that computes an intervention set \(\mathcal{I}^{\prime}\) adaptively such that \(\mathcal{E}_{\mathcal{I}\cup\mathcal{I}^{\prime}}(G^{*})[V(H)]=G^{*}[V(H)]\), and \(|\mathcal{I}^{\prime}|\) has size_
1. \(\mathcal{O}(\log(|\rho(\mathcal{I},V(H))|)\cdot\nu_{1}(G^{*}))\) when \(k=1\)
2. \(\mathcal{O}(\log(|\rho(\mathcal{I},V(H))|)\cdot\log(k)\cdot\nu_{k}(G^{*}))\) when \(k>1\).
Note that \(k=1\) refers to the setting of atomic interventions and we always have \(0\leq|\rho(\mathcal{I},V(H))|\leq n\).
### Expert advice in causal graph discovery
There are three main types of information that a domain expert may provide (e.g. see the references given in Section 1):
1. Required parental arcs: \(X\to Y\)
2. Forbidden parental arcs: \(X\not\to Y\)
3. Partial order or tiered knowledge: A partition of the \(n\) variables into \(1\leq t\leq n\) sets \(S_{1},\ldots,S_{t}\) such that variables in \(S_{i}\)_cannot come after_\(S_{j}\), for all \(i<j\).
In the context of orienting unoriented \(X\sim Y\) edges in an essential graph, it suffices to consider only information of type 1: \(X\not\to Y\) implies \(Y\to X\), and a partial order can be converted to a collection of required parental arcs;6 a short conversion sketch is given after the footnote below.
Footnote 6: For every edge \(X\sim Y\) with \(X\in S_{i}\) and \(Y\in S_{j}\), enforce the required parental arc \(X\to Y\) if and only if \(i<j\).
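The conversion in the footnote is mechanical; the following sketch (with hypothetical helper names of our own) turns tiered knowledge into required parental arcs over a given skeleton, reproducing the advice of Fig. 1(IV).

```python
def tiers_to_required_arcs(tiers, skeleton_edges):
    """For every skeleton edge X ~ Y with X in S_i, Y in S_j and i < j,
    emit the required parental arc X -> Y (same-tier edges stay free)."""
    tier_of = {v: i for i, tier in enumerate(tiers) for v in tier}
    arcs = []
    for x, y in skeleton_edges:
        if tier_of[x] < tier_of[y]:
            arcs.append((x, y))
        elif tier_of[y] < tier_of[x]:
            arcs.append((y, x))
    return arcs

# Fig. 1(IV): S1 = {B}, S2 = {A, D}, S3 = {C, E, F}.
print(tiers_to_required_arcs([{"B"}, {"A", "D"}, {"C", "E", "F"}],
                             [("B", "A"), ("B", "D"), ("A", "D")]))
# -> [('B', 'A'), ('B', 'D')]; A ~ D is within one tier, so it stays unoriented.
```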
Maximally oriented partially directed acyclic graphs (MPDAGs), a refinement of essential graphs under additional causal information, are often used to model such expert advice and there has been a recent growing interest in understanding them better [17, 16, 15]. MPDAGs are obtained by orienting additional arc directions in the essential graph due to background knowledge, and then applying Meek rules. See Fig. 1 for an example.
### Other related work
**Causal Structure Learning.** Algorithms for causal structure learning can be grouped into three broad categories: constraint-based, score-based, and Bayesian. Previous works on the first two approaches are described in Appendix C. In Bayesian methods, a prior distribution is assumed on the space of all structures, and the posterior is updated as more data come in. [14] was one of the first works on learning from interventional data in this context, which spurred a series of papers (e.g. [14, 15, 16, 17]). Research on active experimental design for causal structure learning with Bayesian updates was initiated by [18, 19] and [20]. [20] considered a combination of Bayesian and constraint-based approaches. [1] and [1] have used active learning and Bayesian updates to help recover biological networks. While possibly imperfect expert advice may be used to guide the prior in the Bayesian approach, the works mentioned above do not provide rigorous guarantees about the number of interventions performed or about optimality, and so they are not directly comparable to our results here.
Figure 1: **(I)** Ground truth DAG \(G^{*}\); **(II)** Observational essential graph \(\mathcal{E}(G^{*})\) where \(C\to E\gets D\) is a v-structure and Meek rules orient arcs \(D\to F\) and \(E\to F\); **(III)**\(G^{\emptyset}=G[E\setminus R(G,\emptyset)]\) where oriented arcs in \(\mathcal{E}(G^{*})\) are removed from \(G^{*}\); **(IV)** MPDAG \(\tilde{G}\in[G^{*}]\) incorporating the following partial order advice (\(S_{1}=\{B\},S_{2}=\{A,D\},S_{3}=\{C,E,F\}\)), which can be converted to required arcs \(B\to A\) and \(B\to D\). Observe that \(A\to C\) is oriented by Meek R1 via \(B\to A\sim C\), the arc \(A\sim D\) is still unoriented, the arc \(B\to A\) disagrees with \(G^{*}\), and there are two possible DAGs consistent with the resulting MPDAG.
**Algorithms with predictions.** Learning-augmented algorithms have received significant attention since the seminal work of [10], where they investigated the online caching problem with predictions. Based on that model, [14] proposed algorithms for the ski-rental problem as well as non-clairvoyant scheduling. Subsequently, [13], [12], and [1] improved the initial results for the ski-rental problem. Several works, including [15, 16, 1], improved the initial results regarding the caching problem. Scheduling problems with machine-learned advice have been extensively studied in the literature [1, 10, 11]. There are also results for augmenting classical data structures with predictions (e.g. indexing [12] and Bloom filters [13]), online selection and matching problems [1, 1], online TSP [1, 10], and a more general framework of online primal-dual algorithms [1].
In the above line of work, the extent to which the predictions are helpful in the design of the corresponding online algorithms is quantified by the following two properties. The algorithm is called (i) _\(\alpha\)-consistent_ if it is _\(\alpha\)-competitive_ with no prediction error and (ii) _\(\beta\)-robust_ if it is _\(\beta\)-competitive_ with any prediction error. In the language of learning-augmented algorithms or algorithms with predictions, our causal graph discovery algorithm is \(1\)-consistent and \(\mathcal{O}(\log n)\)-robust when competing against the verification number \(\nu_{1}(G^{*})\), the minimum number of interventions necessary to recover \(G^{*}\). Note that even with arbitrarily bad advice, our algorithm uses asymptotically the same number of interventions as the best-known advice-free adaptive search algorithm [10].
## 3 Results
Our exposition here focuses on interpreting and contextualizing our main results while deferring technicalities to Section 4. We first focus on the setting where the advice is a fully oriented DAG \(\widetilde{G}\in[G^{*}]\) within the Markov equivalence class \([G^{*}]\) of the true underlying causal graph \(G^{*}\), and explain in Appendix E how to handle the case of partial advice. Full proofs are provided in the appendix.
### Structural property of verification numbers
We begin by stating a structural result about verification numbers of DAGs within the same Markov equivalence class (MEC); it motivates the definition of a quality measure between DAGs in the same MEC upon which our algorithmic guarantees (Theorem 14) are based.
**Theorem 10**.: For any DAG \(G^{*}\) with MEC \([G^{*}]\), we have that \(\max_{G\in[G^{*}]}\nu_{1}(G)\leq 2\cdot\min_{G\in[G^{*}]}\nu_{1}(G)\).
Theorem 10 is the first known result relating the minimum and maximum verification numbers of DAGs given a fixed MEC. The next result tells us that the ratio of two is tight.
**Lemma 11** (Tightness of Theorem 10).: There exist DAGs \(G_{1}\) and \(G_{2}\) from the same MEC with \(\nu_{1}(G_{1})=2\cdot\nu_{1}(G_{2})\).
Theorem 10 tells us that we can blindly intervene on any minimum verifying set \(\widetilde{V}\in\mathcal{V}(\widetilde{G})\) of any given advice DAG \(\widetilde{G}\) while incurring at most a constant factor of \(2\) more interventions than the minimum verification number \(\nu_{1}(G^{*})\) of the unknown ground truth DAG \(G^{*}\).
### Adaptive search with imperfect DAG advice
Recall the definition of \(r\)-hop from Section 2. To define the quality of the advice DAG \(\widetilde{G}\), we first define the notion of _min-hop-coverage_ which measures how "far" a given verifying set of \(\widetilde{G}\) is from the set of covered edges of \(G^{*}\).
**Definition 12** (Min-hop-coverage).: Fix a DAG \(G^{*}\) with MEC \([G^{*}]\) and consider any DAG \(\widetilde{G}\in[G^{*}]\). For any minimum verifying set \(\widetilde{V}\in\mathcal{V}(\widetilde{G})\), we define the _min-hop-coverage_\(h(G^{*},\widetilde{V})\in\{0,1,2,\ldots,n\}\) as the minimum number of hops such that _both_ endpoints of covered edges \(\mathcal{C}(G^{*})\) of \(G^{*}\) belong in \(N_{\operatorname{skel}(\mathcal{E}(G^{*}))}^{h(G^{*},\widetilde{V})}(\widetilde {V})\).
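Operationally, \(h(G^{*},\widetilde{V})\) is a multi-source BFS distance. A minimal sketch, assuming the skeleton is supplied as an adjacency dictionary and the covered edges of \(G^{*}\) as vertex pairs (both of which require knowing \(G^{*}\), so this is a tool for analysis and experiments, not for the search algorithm itself):

```python
from collections import deque

def min_hop_coverage(skeleton_adj, V_tilde, covered_edges):
    """h(G*, V~): smallest h such that every endpoint of a covered edge
    of G* lies in the h-hop neighborhood of V~ in the skeleton.
    Assumes all endpoints are reachable from V~."""
    dist, queue = {v: 0 for v in V_tilde}, deque(V_tilde)
    while queue:                       # multi-source BFS from V~
        u = queue.popleft()
        for w in skeleton_adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    endpoints = {v for e in covered_edges for v in e}
    return max((dist[v] for v in endpoints), default=0)
```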
Using min-hop-coverage, we now define a quality measure \(\psi(G^{*},\widetilde{G})\) for DAG \(\widetilde{G}\in[G^{*}]\) as an advice for DAG \(G^{*}\).
**Definition 13** (Quality measure).: Fix a DAG \(G^{*}\) with MEC \([G^{*}]\) and consider any DAG \(\widetilde{G}\in[G^{*}]\). We define \(\psi(G^{*},\widetilde{G})\) as follows:
\[\psi(G^{*},\widetilde{G})=\max_{\widetilde{V}\in\mathcal{V}(\widetilde{G})} \Big{|}\rho\left(\widetilde{V},N^{h(G^{*},\widetilde{V})}_{\text{skel}( \mathcal{E}(G^{*}))}(\widetilde{V})\right)\Big{|}\]
By definition, \(\psi(G^{*},G^{*})=0\) and \(\max_{G\in[G^{*}]}\psi(G^{*},G)\leq n\). In words, \(\psi(G^{*},\widetilde{G})\) only counts the relevant nodes within the min-hop-coverage neighborhood after intervening on the _worst_ possible verifying set \(\widetilde{V}\) of \(\widetilde{G}\). We define \(\psi\) via the worst set because any search algorithm _cannot_ evaluate \(h(G^{*},\widetilde{V})\), since \(G^{*}\) is unknown, and can only consider an _arbitrary_\(\widetilde{V}\in\mathcal{V}(\widetilde{G})\). See Fig. 2 for an example.
Our main result is that it is possible to design an algorithm that leverages an advice DAG \(\widetilde{G}\in[G^{*}]\) and performs interventions to fully recover an unknown underlying DAG \(G^{*}\), whose performance depends on the advice quality \(\psi(G^{*},\widetilde{G})\). Our search algorithm only knows \(\mathcal{E}(G^{*})\) and \(\widetilde{G}\in[G^{*}]\) but knows neither \(\psi(G^{*},\widetilde{G})\) nor \(\nu(G^{*})\).
**Theorem 14**.: Fix an essential graph \(\mathcal{E}(G^{*})\) with an unknown underlying ground truth DAG \(G^{*}\). Given an advice graph \(\widetilde{G}\in[G^{*}]\) and intervention set bound \(k\geq 1\), there exists a deterministic polynomial time algorithm (Algorithm 1) that computes an intervention set \(\mathcal{I}\) adaptively such that \(\mathcal{E}_{\mathcal{I}}(G^{*})=G^{*}\), and \(|\mathcal{I}|\) has size
1. \(\mathcal{O}(\max\{1,\log\psi(G^{*},\widetilde{G})\}\cdot\nu_{1}(G^{*}))\) when \(k=1\)
2. \(\mathcal{O}(\max\{1,\log\psi(G^{*},\widetilde{G})\}\cdot\log k\cdot\nu_{k}(G^ {*}))\) when \(k>1\).
Consider first the setting of \(k=1\). Observe that when the advice is perfect (i.e. \(\widetilde{G}=G^{*}\)), we use \(\mathcal{O}(\nu_{1}(G^{*}))\) interventions, i.e. a constant multiplicative factor of the minimum number of interventions necessary. Meanwhile, even with low quality advice, we still use \(\mathcal{O}(\log n\cdot\nu_{1}(G^{*}))\) interventions, asymptotically matching the best known guarantees for adaptive search without advice. To the best of our knowledge, Theorem 14 is the first known result that employs imperfect expert advice in a principled manner, with provable guarantees, in the context of causal graph discovery via interventions.
Consider now the setting of bounded size interventions where \(k>1\). The reason why we can obtain such a result is precisely our algorithmic design: we deliberately designed an algorithm that invokes SubsetSearch as a black-box subroutine. Thus, the bounded size guarantees of SubsetSearch given by Theorem 9 carry over to our setting with a slight modification of the analysis.
## 4 Techniques
Here, we discuss the high-level technical ideas and intuition behind how we obtain our adaptive search algorithm with imperfect DAG advice. See the appendix for full proofs; in particular, see Appendix F for an overview of Theorem 10.
For brevity, we write \(\psi\) to mean \(\psi(G^{*},\widetilde{G})\) and drop the subscript \(\operatorname{skel}(\mathcal{E}(G^{*}))\) of \(r\)-hop neighborhoods in this section. We also focus our discussion on atomic interventions. Our adaptive search algorithm (Algorithm 1) uses SubsetSearch as a subroutine.
We begin by observing that \(\operatorname{\texttt{SubsetSearch}}(\mathcal{E}(G^{*}),A)\) fully orients \(\mathcal{E}(G^{*})\) into \(G^{*}\) if the covered edges of \(G^{*}\) lie within the node-induced subgraph induced by \(A\).
**Lemma 15**.: Fix a DAG \(G^{*}=(V,E)\) and let \(V^{\prime}\subseteq V\) be any subset of vertices. Suppose \(\mathcal{I}_{V^{\prime}}\subseteq V\) is the set of nodes intervened by \(\operatorname{\texttt{SubsetSearch}}(\mathcal{E}(G^{*}),V^{\prime})\). If \(\mathcal{C}(G^{*})\subseteq E(G^{*}[V^{\prime}])\), then \(\mathcal{E}_{\mathcal{I}_{V^{\prime}}}(G^{*})=G^{*}\).
Motivated by Lemma 15, we design Algorithm 1 to repeatedly invoke SubsetSearch on node-induced subgraphs \(N^{r}(\widetilde{V})\), starting from an _arbitrary_ verifying set \(\widetilde{V}\in\mathcal{V}(\widetilde{G})\) and for _increasing_ values of \(r\).
For \(i\in\mathbb{N}\cup\{0\}\), let us denote \(r(i)\in\mathbb{N}\cup\{0\}\) as the value of \(r\) in the \(i\)-th invocation of SubsetSearch, where we insist that \(r(0)=0\) and \(r(j)>r(j-1)\) for any \(j\in\mathbb{N}\). Note that \(r=0\) simply implies that we intervene on the verifying set \(\widetilde{V}\), which only incurs \(\mathcal{O}(\nu_{1}(G^{*}))\) interventions due to Theorem 10. Then, we can appeal to Lemma 15 to conclude that \(\mathcal{E}(G^{*})\) is completely oriented into \(G^{*}\) in the \(t\)-th invocation if \(r(t)\geq h(G^{*},\widetilde{V})\).
While the high-level subroutine invocation idea seems simple, one needs to invoke SubsetSearch at _suitably chosen intervals_ in order to achieve the theoretical guarantees we promise in Theorem 14. We now explain how to do so in three successive attempts, explaining the algorithmic decisions behind each modification introduced.
As a reminder, we _do not_ know \(G^{*}\) and thus _do not_ know \(h(G^{*},\widetilde{V})\) for any verifying set \(\widetilde{V}\in\mathcal{V}(\widetilde{G})\) of \(\widetilde{G}\in[G^{*}]\).
#### Naive attempt: Invoke for \(r=0,1,2,3,\ldots\)
The most straightforward attempt would be to invoke SubsetSearch repeatedly each time we increase \(r\) by \(1\) until the graph is fully oriented - in the worst case, at \(t=h(G^{*},\widetilde{V})\). However, this may cause us to incur far too many interventions. Suppose there are \(n_{i}\) relevant nodes in the \(i\)-th invocation. Using Theorem 9, one can only argue that the overall number of interventions incurred is \(\mathcal{O}(\sum_{i=0}^{t}\log n_{i}\cdot\nu(G^{*}))\). However, \(\sum_{i}\log n_{i}\) could be significantly larger than \(\log(\sum_{i}n_{i})\) in general, e.g. \(\log 2+\ldots+\log 2=(n/2)\cdot\log 2\gg\log n\). In fact, if \(G^{*}\) was a path on \(n\) vertices \(v_{1}\to v_{2}\rightarrow\ldots\to v_{n}\) and \(\widetilde{G}\in[G^{*}]\) misleads us with \(v_{1}\gets v_{2}\leftarrow\ldots\gets v_{n}\), then this approach incurs \(\Omega(n)\) interventions in total.
#### Tweak 1: Only invoke periodically
Since Theorem 9 provides us a logarithmic factor in the analysis, we could instead consider only invoking SubsetSearch after the number of relevant nodes in the subgraph _increases by a polynomial factor_. For example, if we invoked SubsetSearch with \(n_{i}\) relevant nodes previously, then we will wait until the number of relevant nodes surpasses \(n_{i}^{2}\) before invoking SubsetSearch again, where we define \(n_{0}\geq 2\) for simplicity. Since \(\log n_{i}\geq 2\log n_{i-1}\), we can see via an inductive argument that the number of interventions used in the final invocation dominates the total number used so far: \(\log n_{t}\geq 2\log n_{t-1}\geq\log n_{t-1}+2\log n_{t-2}\geq\ldots\geq\sum_{i=0}^{t-1}\log n_{i}\). Since \(n_{i}\leq n\) for any \(i\), we can already prove that \(\mathcal{O}(\log n\cdot\nu_{1}(G^{*}))\) interventions suffice, matching the advice-free bound of Theorem 7. However, this approach and analysis do _not_ take into account the quality of \(\widetilde{G}\) and are _insufficient_ to relate \(n_{t}\) with the advice measure \(\psi\).
#### Tweak 2: Also invoke one round before
Suppose the final invocation of SubsetSearch is on the \(r(t)\)-hop neighborhood while incurring \(\mathcal{O}(\log n_{t}\cdot\nu_{1}(G^{*}))\) interventions. This means that \(\mathcal{C}(G^{*})\) lies within \(N^{r(t)}(\widetilde{V})\) but _not_ within \(N^{r(t-1)}(\widetilde{V})\). That is, \(N^{r(t-1)}(\widetilde{V})\subset N^{h(G^{*},\widetilde{V})}(\widetilde{V})\subseteq N^{r(t)}(\widetilde{V})\). While this tells us that \(n_{t-1}\leq|\rho(\widetilde{V},N^{r(t-1)}(\widetilde{V}))|<|\rho(\widetilde{V},N^{h(G^{*},\widetilde{V})}(\widetilde{V}))|=\psi\), what we want is to conclude that \(n_{t}\in\mathcal{O}(\psi)\). Unfortunately, even when \(h(G^{*},\widetilde{V})=r(t-1)+1\), it could be the case that \(|\rho(\widetilde{V},N^{h(G^{*},\widetilde{V})}(\widetilde{V}))|\ll|N^{r(t)}(\widetilde{V})|\) as the number of relevant nodes could blow up within a single hop (see Fig. 3). To control this potential blow up in the analysis, we introduce the following technical fix: whenever
we want to invoke SubsetSearch on \(r(i)\), first invoke SubsetSearch on \(r(i)-1\) and terminate earlier if the graph is already fully oriented into \(G^{*}\).
#### Putting it all together
Algorithm 1 presents our full algorithm where the inequality \(\rho(\mathcal{I}_{i},N^{r}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V})) \geq n_{i}^{2}\) corresponds to the first tweak while the terms \(C_{i}\) and \(C_{i}^{\prime}\) correspond to the second tweak.
In Appendix H, we explain why our algorithm (Algorithm 1) is simply the classic "binary search with prediction"7 when the given essential graph \(\mathcal{E}(G^{*})\) is an undirected path. So, another way to view our result is as a _generalization_ that works on essential graphs of arbitrary moral DAGs.
Footnote 7: e.g. see [https://en.wikipedia.org/wiki/Learning_augmented_algorithm#Binary_search](https://en.wikipedia.org/wiki/Learning_augmented_algorithm#Binary_search)
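For intuition, here is the footnoted primitive in its classic array form: probe the predicted position, grow a bracket geometrically until the target is enclosed, then bisect inside the bracket. This minimal sketch works on a sorted list; on a path essential graph, each probe would correspond to an atomic intervention.

```python
import bisect

def search_with_prediction(arr, target, p):
    """Classic learning-augmented binary search: uses O(log eta) probes,
    where eta = |true index - predicted index p|."""
    n = len(arr)
    p = max(0, min(p, n - 1))
    if arr[p] == target:
        return p
    step = 1
    if target > arr[p]:                      # expand bracket to the right
        lo = p
        while p + step < n and arr[p + step] < target:
            lo = p + step
            step *= 2
        hi = min(p + step, n - 1)
    else:                                    # expand bracket to the left
        hi = p
        while p - step >= 0 and arr[p - step] > target:
            hi = p - step
            step *= 2
        lo = max(p - step, 0)
    i = bisect.bisect_left(arr, target, lo, hi + 1)
    return i if i <= hi and arr[i] == target else -1

print(search_with_prediction(list(range(0, 100, 2)), 42, 18))  # -> 21
```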
For bounded size interventions, we rely on the following known results.
**Theorem 16** (Theorem 12 of [13]).: _Fix an essential graph \(\mathcal{E}(G^{*})\) and \(G\in[G^{*}]\). If \(\nu_{1}(G)=\ell\), then \(\nu_{k}(G)\geq\lceil\frac{\ell}{k}\rceil\) and there exists a polynomial time algorithm to compute a bounded size intervention set \(\mathcal{I}\) of size \(|\mathcal{I}|\leq\lceil\frac{\ell}{k}\rceil+1\)._
**Lemma 17** (Lemma 1 of [11]).: _Let \((n,k,a)\) be parameters where \(k\leq n/2\). There exists a polynomial time labelling scheme that produces distinct labels of length \(\ell\) for all elements in \([n]\) using letters from the integer alphabet \(\{0\}\cup[a]\), where \(\ell=\lceil\log_{a}n\rceil\). Further, in every digit (or position), any integer letter is used at most \(\lceil n/a\rceil\) times. This labelling scheme is a separating system: for any \(i,j\in[n]\), there exists some digit \(d\in[\ell]\) where the labels of \(i\) and \(j\) differ._
Theorem 16 enables us to easily relate \(\nu_{1}(G)\) with \(\nu_{k}(G)\) while Lemma 17 provides an efficient labelling scheme to partition a set of \(n\) nodes into a set \(S=\{S_{1},S_{2},\ldots\}\) of bounded size sets, each \(S_{i}\) involving at most \(k\) nodes. By invoking Lemma 17 with \(a\approx n^{\prime}/k\) where \(n^{\prime}\) is related to \(\nu_{1}(G)\), we see that \(|S|\approx\frac{n^{\prime}}{k}\cdot\log k\). As \(\nu_{k}(G)\approx\nu_{1}(G)/k\), this is precisely why the bounded intervention guarantees in Theorem 7, Theorem 9 and Theorem 14 have an additional multiplicative \(\log k\) factor.
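A simplified sketch of how the Lemma 17 labels are used in Algorithm 1: write each index in base \(a\) with \(\ell\) digits and take the groups \(S_{x,y}\) of elements sharing letter \(y\) at digit \(x\). Plain base-\(a\) digits already form a separating system; in the example below they are also exactly balanced because \(n=a^{\ell}\), whereas the lemma's scheme guarantees the per-digit balance for every \(n\). The helper name is our own.

```python
from math import ceil

def labelling_scheme(elements, k):
    """Base-a labels (a ~ n/k') of length l: any two labels differ in
    some digit, and each group S_{x,y} has size about k."""
    n = len(elements)
    kp = max(1, min(k, n // 2))       # k' = min(k, n/2), as in Algorithm 1
    a = max(2, ceil(n / kp))          # alphabet size
    l = 1
    while a ** l < n:                 # l = ceil(log_a n)
        l += 1
    groups = {}
    for idx, e in enumerate(elements):
        x = idx
        for pos in range(l):          # digits of idx in base a
            groups.setdefault((pos, x % a), set()).add(e)
            x //= a
    return groups

# 9 elements, interventions of size at most k = 3 -> a = 3, l = 2 digits.
for key, S in sorted(labelling_scheme(list("abcdefghi"), 3).items()):
    print(key, sorted(S))
```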
## 5 Empirical validation
While our main contributions are theoretical, we also performed some experiments to empirically validate that our algorithm is practical, outperforms the advice-free baseline when the advice quality is good, and remains at most a constant factor worse when the advice is poor.
Motivated by Theorem 3, we experimented on synthetic moral DAGs from [14]: For each undirected chordal graph, we use the uniform sampling algorithm of [14] to uniformly sample 1000 moral DAGs \(\widetilde{G}_{1},\ldots,\widetilde{G}_{1000}\) and randomly choose one of them as \(G^{*}\). Then, we give \(\{(\mathcal{E}(G^{*}),\widetilde{G}_{i})\}_{i\in[1000]}\) as input to Algorithm 1.
Figure 3: Consider the ground truth DAG \(G^{*}\) with unique minimum verifying set \(\{v_{2}\}\) and an advice DAG \(\widetilde{G}\in[G^{*}]\) with chosen minimum verifying set \(\widetilde{V}=\{v_{1}\}\). So, \(h(G^{*},\widetilde{V})=1\) and ideally we want to argue that our algorithm uses a constant number of interventions. Without tweak \(2\) and \(n_{0}=2\), an algorithm that increases hop radius until the number of relevant nodes is squared will _not_ invoke SubsetSearch until \(r=3\) because \(\rho(\widetilde{V},N^{1})=1<n_{0}^{2}\) and \(\rho(\widetilde{V},N^{2})=2<n_{0}^{2}\). However, \(\rho(\widetilde{V},N^{3})=n-1\) and we can only conclude that the algorithm uses \(\mathcal{O}(\log n)\) interventions by invoking SubsetSearch on a subgraph on \(n-1\) nodes.
```
1:Input: Essential graph \(\mathcal{E}(G^{*})\), advice DAG \(\widetilde{G}\in[G^{*}]\), intervention size \(k\in\mathbb{N}\)
2:Output: An intervention set \(\mathcal{I}\) such that each intervention involves at most \(k\) nodes and \(\mathcal{E}_{\mathcal{I}}(G^{*})=G^{*}\).
3:Let \(\widetilde{V}\in\mathcal{V}(\widetilde{G})\) be any atomic verifying set of \(\widetilde{G}\).
4:if\(k=1\)then
5: Define \(\mathcal{I}_{0}=\widetilde{V}\) as an atomic intervention set.
6:else
7: Define \(k^{\prime}=\min\{k,|\widetilde{V}|/2\}\), \(a=\lceil|\widetilde{V}|/k^{\prime}\rceil\geq 2\), and \(\ell=\lceil\log_{a}|\widetilde{V}|\rceil\). Compute labelling scheme on \(\widetilde{V}\) with \((|\widetilde{V}|,k^{\prime},a)\) via Lemma 17 and define \(\mathcal{I}_{0}=\{S_{x,y}\}_{x\in[\ell],y\in[a]}\), where \(S_{x,y}\subseteq\widetilde{V}\) is the subset of vertices whose \(x^{th}\) letter in the label is \(y\).
8:endif
9:Intervene on \(\mathcal{I}_{0}\) and initialize \(r\gets 0\), \(i\gets 0\), \(n_{0}\gets 2\).
10:while\(\mathcal{E}_{\mathcal{I}_{i}}(G^{*})\) still has undirected edges do
11:if\(\rho(\mathcal{I}_{i},N^{r}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V})) \geq n_{i}^{2}\)then
12: Increment \(i\gets i+1\) and record \(r(i)\gets r\).
13: Update \(n_{i}\leftarrow\rho(\mathcal{I}_{i-1},N^{r}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V}))\)
14:\(C_{i}\leftarrow\texttt{SubsetSearch}(\mathcal{E}_{\mathcal{I}_{i-1}}(G^{*}),N^{r-1}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V}),k)\)
15:if\(\mathcal{E}_{\mathcal{I}_{i-1}\cup C_{i}}(G^{*})\) still has undirected edges then
16:\(C^{\prime}_{i}\leftarrow\texttt{SubsetSearch}(\mathcal{E}_{\mathcal{I}_{i-1} \cup C_{i}}(G^{*}),N^{r}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V}),k)\)
17: Update \(\mathcal{I}_{i}\leftarrow\mathcal{I}_{i-1}\cup C_{i}\cup C^{\prime}_{i}\).
18:else
19: Update \(\mathcal{I}_{i}\leftarrow\mathcal{I}_{i-1}\cup C_{i}\).
20:endif
21:endif
22: Increment \(r\gets r+1\).
23:endwhile
24:return\(\mathcal{I}_{i}\)
```
**Algorithm 1** Adaptive search algorithm with advice.
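To make the control flow concrete, below is a Python rendition of Algorithm 1 for the atomic case \(k=1\), with the interventional primitives abstracted as callables that we assume are supplied (they are not library functions): `has_unoriented(I)` tests the condition of line 10, `rho(I, r)` evaluates \(\rho(\mathcal{I},N^{r}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V}))\), and `subset_search(I, r)` runs SubsetSearch on the \(r\)-hop neighborhood of \(\widetilde{V}\) and returns the interventions it performed.

```python
def advice_search_atomic(V_tilde, has_unoriented, rho, subset_search):
    """Control-flow sketch of Algorithm 1 with k = 1."""
    I = set(V_tilde)                  # lines 5 and 9: intervene on V~ itself
    r, n_i = 0, 2                     # line 9: r <- 0, n_0 <- 2
    while has_unoriented(I):          # line 10
        if rho(I, r) >= n_i ** 2:     # line 11: tweak 1 trigger
            n_i = rho(I, r)           # line 13
            I |= subset_search(I, max(r - 1, 0))   # line 14: tweak 2
            if has_unoriented(I):     # line 15
                I |= subset_search(I, r)           # line 16
        r += 1                        # line 22
    return I
```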
Fig. 4 shows one of the experimental plots; a more detailed experimental setup and results are given in Appendix I. On the X-axis, we plot \(\psi(G^{*},\widetilde{V})=\left|\rho\left(\widetilde{V},N^{h(G^{*},\widetilde{V})}_{\text{skel}(\mathcal{E}(G^{*}))}(\widetilde{V})\right)\right|\), which is a _lower bound_ and proxy8 for \(\psi(G^{*},\widetilde{G})\). On the Y-axis, we aggregate advice DAGs based on their quality measure and also show (in dashed lines) the empirical distribution of quality measures of all DAGs within the Markov equivalence class.
Footnote 8: We do not know if there is an efficient way to compute \(\psi(G^{*},\widetilde{G})\) besides the naive (possibly exponential time) enumeration over all possible minimum verifying sets.
As expected from our theoretical analyses, we see that the number of interventions used by our advice search starts from \(\nu_{1}(G^{*})\), is lower than that of the advice-free search of [22] when \(\psi(G^{*},\widetilde{V})\) is low, and gradually increases as the advice quality degrades. Nonetheless, the number of interventions used always stays below the theoretical bound of \(\mathcal{O}(\psi(G^{*},\widetilde{V})\cdot\nu_{1}(G^{*}))\); we do not plot \(\psi(G^{*},\widetilde{V})\cdot\nu_{1}(G^{*})\) since plotting it yields a "squashed" graph as the empirical counts are significantly smaller. In this specific graph instance, Fig. 4 suggests that our advice search outperforms its advice-free counterpart when given an advice DAG \(\widetilde{G}\) that is better than \(\sim 40\%\) of all possible DAGs consistent with the observational essential graph \(\mathcal{E}(G^{*})\).
## 6 Conclusion and discussion
In this work, we gave the first result that utilizes imperfect advice in the context of causal discovery. We do so in a way that the performance (i.e. the number of interventions in our case) does not degrade significantly even when the advice is inaccurate, which is consistent with the objectives of learning-augmented algorithms. Specifically, we show a smooth bound that matches the number of interventions needed for verification of the causal relationships in a graph when the advice is completely accurate, and that otherwise depends logarithmically on the distance of the advice to the ground truth. This ensures robustness to "bad" advice: the number of interventions needed is asymptotically the same as in the case where no advice is available.
Our results do rely on the widely-used assumptions of sufficiency and faithfulness, as well as access to ideal interventions; see Appendix A for a more detailed discussion. Since wrong causal conclusions may be drawn when these assumptions are violated by the data, it is of great interest to remove or weaken these assumptions while maintaining strong theoretical guarantees in future work.
### Interesting future directions to explore
**Partial advice.** In Appendix E, we explain why having a DAG \(\widetilde{G}\) as advice may not always be possible and explain how to extend our results to the setting of _partial advice_ by considering the worst case DAG consistent with the given partial advice \(\mathcal{A}\). The question is whether one can design and analyze a better algorithm than a trivial \(\max_{\widetilde{G}\in\mathcal{A}}\). For example, maybe one could pick \(\widetilde{G}=\operatorname*{argmin}_{G\in\mathcal{A}}\max_{H\in[G^{*}]}\psi(H,G)\)? The motivation is as follows: If \([G^{*}]\) is a disc in \(\mathbb{R}^{2}\) and \(\psi\) is the Euclidean distance, then \(\widetilde{G}\) should be the point within \(\mathcal{A}\) that is closest to the center of the disc. Note that we can only optimize with respect to \(\max_{H\in[G^{*}]}\) because we do not actually know \(G^{*}\). It remains to be seen if such an object can be efficiently computed and whether it gives a better bound than \(\max_{\widetilde{G}\in\mathcal{A}}\).
**Incorporating expert confidence.** The notion of "confidence level" and "correctness" of an advice are orthogonal issues - an expert can be confidently wrong. In this work, we focused on the case where the expert is fully confident but may be providing imperfect advice. It is an interesting problem to investigate how to handle both issues simultaneously in a principled manner; for example, what if the advice is not a DAG \(\widetilde{G}\in[G^{*}]\) in the essential graph but a distribution over all DAGs in \([G^{*}]\)? Bayesian ideas may apply here.
**Better analysis?** Empirically, we see that the log factor is a rather loose upper bound both for blind search and advice search. _Can there be a tighter analysis?_ [2] tells us that \(\Omega(\log n\cdot\nu_{1}(G^{*}))\) is unavoidable when \(\mathcal{E}(G^{*})\) is a path on \(n\) vertices with \(\nu_{1}(G^{*})=1\), but this is a special class of graphs. What if \(\nu_{1}(G^{*})>1\)? Can we give tighter bounds in other graph parameters? Furthermore, in some preliminary testing, we observed that implementing tweak 2 or ignoring it yields similar empirical performance, and we wonder if there is a tighter analysis without tweak 2 that has similar guarantees.
Figure 4: Experimental plot for one of the synthetic graphs \(G^{*}\), with respect to \(1000\ll|[G^{*}]|\approx 1.4\times 10^{6}\) uniformly sampled advice DAGs \(\widetilde{G}\) from the MEC \([G^{*}]\). The solid lines indicate the number of atomic interventions used while the dotted lines indicate the empirical cumulative probability density of \(\widetilde{G}\). The true cumulative probability density lies within the shaded area with probability at least \(0.99\) (see Appendix I for details).
## Acknowledgements
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-08-013). TG and AB are supported by the National Research Foundation Fellowship for AI (Award NRF-NRFFAI-0002), an Amazon Research Award, and a Google South & Southeast Asia Research Award. Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. We would like to thank Kirankumar Shiragur and Joy Qiping Yang for valuable feedback and discussions.
| ```
We introduce the problem of active causal structure learning with advice.
In the typical, well-studied setting, the learning algorithm is given the essential graph of the observational distribution and is asked to recover the underlying causal directed acyclic graph ($G^*$) while minimizing the number of interventions.
In our setting, we are additionally given side information about $G^*$ as advice, e.g., a DAG $G$ purported to be $G^*$.
We ask whether the learning algorithm can make effective use of the advice when it is close to $G^*$, while maintaining worst-case guarantees even when the advice is arbitrarily bad.
Our work is closely related to the growing body of research on algorithms with predictions.
When a DAG $G$ is given as advice, we design an adaptive search algorithm to recover $G^*$.
This algorithm |
2309.08750 | Surface barrier effect as evidence of chiral soliton lattice formation
in chiral dichalcogenide CrTa$_{3}$S$_{6}$ crystals | The formation of chiral magnetic soliton lattice (CSL) is investigated in
monoaxial chiral dichalcogenide CrTa$_{3}$S$_{6}$ crystals in terms of a
surface barrier, which prevents a penetration of chiral solitons into the
system and is an intrinsic origin of hysteresis for the continuous phase
transition of nucleation-type, as discussed in the system of quantized vortices
in type-II superconductors. The magnetoresistance (MR) was examined with
microfabricated platelet samples in different dimensions with regard to the
$c$-axis direction of the crystal. The CSL formation was confirmed by the
discrete MR changes, reflecting the number of chiral solitons, as well as by
the presence of surface barrier, recognized as a fixed ratio of critical
magnetic fields during the hysteresis field cycle. We also argue the influence
of the surface barrier in the bulk CrTa$_{3}$S$_{6}$ crystals. | K. Mizutani, J. Jiang, K. Monden, Y. Shimamoto, Y. Kousaka, Y. Togawa | 2023-09-15T20:26:45 | http://arxiv.org/abs/2309.08750v1 | Surface barrier effect as evidence of chiral soliton lattice formation in chiral dichalcogenide CrTa\({}_{3}\)S\({}_{6}\) crystals
###### Abstract
The formation of chiral magnetic soliton lattice (CSL) is investigated in monoaxial chiral dichalcogenide CrTa\({}_{3}\)S\({}_{6}\) crystals in terms of a surface barrier, which prevents the penetration of chiral solitons into the system and is an intrinsic origin of hysteresis for the continuous phase transition of nucleation-type, as discussed in the system of quantized vortices in type-II superconductors. The magnetoresistance (MR) was examined with microfabricated platelet samples in different dimensions with regard to the \(c\)-axis direction of the crystal. The CSL formation was confirmed by the discrete MR changes, reflecting the number of chiral solitons, as well as by the presence of the surface barrier, recognized as a fixed ratio of critical magnetic fields during the hysteresis field cycle. We also discuss the influence of the surface barrier in the bulk CrTa\({}_{3}\)S\({}_{6}\) crystals.
## I Introduction
Chiral helimagnets host an antisymmetric exchange interaction strongly coupled to the chiral crystalline structure [1; 2]. As a consequence of its competition with a symmetric Heisenberg exchange interaction in the absence or presence of magnetic fields, nontrivial chiral magnetic structures emerge, such as chiral helimagnetic order (CHM) [1; 2], the chiral soliton lattice (CSL) [3; 4; 5; 6; 7], and chiral magnetic vortices called magnetic Skyrmions [8; 9]. These chiral magnetic structures have been observed via neutron scattering or electron microscopy over the past decade [10; 11; 12].
CrNb\({}_{3}\)S\({}_{6}\) is one of the well-established transition-metal dichalcogenides (TMD) that exhibit chiral helimagnetism. CrNb\({}_{3}\)S\({}_{6}\) forms a chiral monoaxial crystal structure with space group \(P6_{3}22\) [13; 14; 15], where the symmetric and antisymmetric exchange interactions work along the principal \(c\)-axis of the crystal. The CSL formation, as schematically drawn in Fig. 1(a), was detected in real space images and reciprocal scattering data by using Lorentz microscopy in CrNb\({}_{3}\)S\({}_{6}\)[12]. Neutron [16] and resonant magnetic X-ray [17] scattering experiments are also useful for identifying the CHM and CSL in a reciprocal space.
The CSL exhibits robust phase coherence at macroscopic length scale [12]. Thus, nontrivial characteristics appear in various physical properties. Indeed, the CSL shows giant magnetoresistance (MR) due to a proliferation of chiral solitons [18], discretization MR effect [19], robust response of chiral solitons under oblique magnetic fields [20], nonreciprocal electrical transport [21], and collective elementary excitation of the CSL up to a frequency of sub-terahertz [22]. Moreover, very anisotropic soliton defects appear in the CSL system when decreasing the magnetic field \(H\)[23; 24; 25]. Such coherent, topological, and collective nature of the CSL could be useful for spintronic device applications such as memory multi-bits and 6G communications technology using the CSL [26; 27; 28].
The MR is one of the feasible methods for identifying the CSL. In particular, when reducing the sample dimensions, discretized MR appears because of the countable nature of chiral solitons in the CSL [19]. In addition, the surface barrier emerges upon the penetration of chiral solitons into the system in the \(H\) decrease process, as schematically drawn in Fig. 1, because of the phase coherence of the CSL [29]. The strength of the surface barrier is quantified by solving a one-dimensional (1D) chiral sine-Gordon model in a semi-infinite system, which describes well the CSL system realized in monoaxial chiral helimagnets such as CrNb\({}_{3}\)S\({}_{6}\). The ratio of the magnetic field \(H_{\rm b}\), where the surface barrier disappears, to the critical magnetic field \(H_{\rm c}\) takes a constant value (\(H_{\rm b}/H_{\rm c}=4/\pi^{2}\sim 0.405\)) [29]. In the experiments, the presence of the surface barrier can be detected as a sudden jump of a physical quantity at a particular strength of the magnetic field (\(H_{\rm jump}\)) when decreasing \(H\). The values of \(H_{\rm jump}/H_{\rm c}\), which were experimentally obtained by using the MR in CrNb\({}_{3}\)S\({}_{6}\) (e.g., 0.416, 0.405, and 0.408 in a particular micrometer-sized crystal) [29], showed an excellent agreement with \(4/\pi^{2}\) expected for the 1D chiral sine-Gordon model.
Figure 1: The formation of chiral soliton lattice (CSL) in the \(H\) increase (a) and decrease (b) processes in a semi-infinite system with a boundary between the material and vacuum. The CSL undergoes a continuous phase transition to a forced ferromagnetic state toward \(H_{\rm c}\) in the former case, while the surface barrier prevents the penetration of chiral solitons until it disappears at \(H_{\rm b}\) in the latter case. There is a chiral surface twist structure at the sample edge, the formation of which is associated with the presence of the surface barrier.
This coincidence is regarded as evidence of the presence of the CSL. Importantly, the surface barrier is an intrinsic origin of hysteresis for the CSL system that exhibits a continuous phase transition of nucleation-type [30], as discussed for the Bean-Livingston barrier [31; 32] for Abrikosov quantized vortices in type-II superconductors. The surface barrier has been evaluated quantitatively for the first time in the CSL systems in CrNb\({}_{3}\)S\({}_{6}\)[29]. The importance of the surface barrier and the related surface twist structure was also discussed in cubic chiral helimagnets in the study of magnetic Skyrmions [33; 34].
Recently, CrTa\({}_{3}\)S\({}_{6}\) and related TMDs, which form the same crystal structure as that of CrNb\({}_{3}\)S\({}_{6}\), have also attracted attention because of the possible emergence of chiral helimagnetism [35; 36; 37]. CrTa\({}_{3}\)S\({}_{6}\) was initially reported as a ferromagnetic compound [38; 39]. However, reexamination has revealed that CrTa\({}_{3}\)S\({}_{6}\) exhibits the CHM and CSL [35; 36; 37]. Now, it turns out that CrTa\({}_{3}\)S\({}_{6}\) has a helimagnetic period of 22 nm and a large \(H_{\rm c}\) of 1.2-1.6 T, compared with 48 nm and 0.2 T, respectively, in CrNb\({}_{3}\)S\({}_{6}\). The discretization MR effect was observed in cleaved CrTa\({}_{3}\)S\({}_{6}\) samples [40], as reported in CrNb\({}_{3}\)S\({}_{6}\)[41]. However, there has been no experimental report on the surface barrier effect in CrTa\({}_{3}\)S\({}_{6}\). It is not clear whether the surface barrier works among the chiral helimagnets hosting the CSL.
In this paper, we investigate magnetic and transport properties of CrTa\({}_{3}\)S\({}_{6}\) in the viewpoint of the surface barrier effect during the CSL formation. To scrutinize this unique property, the MR and magnetization measurements were performed with micrometer-sized and bulk crystals with different dimensions with regard to the \(c\)-axis direction. The obtained results demonstrate the existence of surface barrier in CrTa\({}_{3}\)S\({}_{6}\) crystals. Namely, characterizing the surface barrier is useful for identifying the CSL system in chiral magnetic materials.
## II Experimental methods
Single crystals of CrTa\({}_{3}\)S\({}_{6}\) were obtained by chemical vapor transport (CVT) technique in a temperature gradient using iodine I\({}_{2}\) as a transporting agent [16; 42]. The polycrystalline powders, synthesized by gas phase method with a mixture of Cr, Ta and S in the molar ratio of \(x_{\rm nominal}:3:6\) (\(x_{\rm nominal}\) is the nominal amount of Cr), were placed at one end of an evacuated silica tube and then heated in the electric tube furnace under the fixed temperature gradient from 1100 \({}^{\circ}\)C to 1000 \({}^{\circ}\)C for two weeks. The bulk crystals were grown at the other end of the silica tube. The grown crystals have the shape of a hexagonal plate of around 0.5 to 1.0 mm in diameter and of 100 \(\upmu\)m in thickness.
The magnetization of the obtained bulk crystals was examined using a SQUID magnetometer (Quantum Design MPMS3). Magnetoresistance (MR) measurements were performed with the micrometer-sized specimens of CrTa\({}_{3}\)S\({}_{6}\) crystals, which were prepared from the bulk CrTa\({}_{3}\)S\({}_{6}\) crystal by using a focused ion beam (FIB) system [21]. The size of the specimens was evaluated using a scanning electron microscopy system. The MR data were collected by the standard four-terminal method using a physical property measurement system (Quantum Design PPMS). Note that \(H\) was applied in the direction parallel to the sample plane of bulk and microfabricated crystals, as described below, so as to reduce demagnetizing field and extrinsic metastability effects [43].
Three configurations of the platelet specimens with regard to the \(c\)-axis direction were prepared for the MR measurements. In the first case, a \(c\)-plane sample was fabricated, as shown in Fig. 3(a). The dimensions of this platelet sample #1 are 12 \(\upmu\)m \(\times\) 4 \(\upmu\)m \(\times\) 0.1 \(\upmu\)m, where the shortest length corresponds to the direction of the \(c\)-axis. This length limits the maximum number of solitons in the CSL, and thus the discretization effect was observed in the present CrTa\({}_{3}\)S\({}_{6}\) crystal, as seen in CrNb\({}_{3}\)S\({}_{6}\)[28]. Two other platelet samples, #2 and #3, were fabricated with the \(c\)-axis along the longitudinal direction of the sample plane, as shown in Figs. 3(b) and 3(c). The size of sample #2 is 9 \(\upmu\)m \(\times\) 1 \(\upmu\)m \(\times\) 10 \(\upmu\)m, where the longest length corresponds to the \(c\)-axis direction. This sample shape is similar to that of the CrNb\({}_{3}\)S\({}_{6}\) sample used for the demonstration of the surface barrier [29]. Sample #3 has an elongated geometry with dimensions of 2 \(\upmu\)m \(\times\) 1 \(\upmu\)m \(\times\) 19 \(\upmu\)m (\(\parallel c\)-axis).
## III Experimental results
For determining the optimum condition for the crystal growth, it should be noted that the magnetic property of the grown crystals is very sensitive to \(x_{\rm nominal}\) of the powder precursor. For instance, in the case of CrNb\({}_{3}\)S\({}_{6}\) crystal growth [44], the amount of Cr directly measured in the single crystals was found to be smaller than \(x_{\rm nominal}\). The \(x_{\rm nominal}\) was determined to be 1.11 so as to obtain the ideal crystals of CrNb\({}_{3}\)S\({}_{6}\) without Cr defects.
The optimization of CrTa\({}_{3}\)S\({}_{6}\) crystal growth was performed by using powder precursors with \(x_{\rm nominal}\) from 1.00 to 1.50. Imprints of the CSL formation were successfully obtained in the crystals grown with \(x_{\rm nominal}\) = 1.29, while a ferromagnetic response appeared in the other crystals with different \(x_{\rm nominal}\) values.
Figure 2(a) shows a peak anomaly of the magnetization of
Figure 2: Temperature dependence of the magnetization in the CrTa\({}_{3}\)S\({}_{6}\) single crystal at 0.10 T (a) and at higher \(H\)s up to 1.25 T (b). \(H\) was applied in the direction perpendicular to the \(c\)-axis. Closed and open marks denote the magnetization data collected in the field cooling and zero-field cooling processes, respectively.
the obtained CrTa\({}_{3}\)S\({}_{6}\) crystal at around 150 K with a magnetic field \(H\) of 0.1 T applied in a direction perpendicular to the \(c\)-axis. Here, the critical temperature of the helimagnetic phase transition \(T_{\text{c}}\) is defined at the peak top of the magnetization. The \(T_{\text{c}}\) values decrease with increasing \(H\) strength, as shown in Fig. 2(b). Note that the \(T_{\text{c}}\) of 150 K is 10 K higher than the values reported in the previous studies [36; 37]. Such a variation of the \(T_{\text{c}}\) values indicates that the crystals used in the present study may have a small amount of Cr defects, reminiscent of the dome-shaped profile of the relationship between \(T_{\text{c}}\) and \(x_{\text{nominal}}\) discussed in CrNb\({}_{3}\)S\({}_{6}\)[44].
To see the presence of chiral solitons in the obtained CrTa\({}_{3}\)S\({}_{6}\) crystals via the discretization effect, the MR was examined in the \(c\)-plane thin sample with \(H\) applied in the direction perpendicular to the \(c\)-axis. First, the MR full loop was taken by cycling \(H\) between zero and above the critical magnetic field (defined as \(H_{\text{sat}}\) in the experiments), where all the chiral solitons escape from the sample and magnetic moments are likely to be saturated. Then, the MR minor loops were collected by sweeping \(H\) below \(H_{\text{sat}}\).
All the MR data are presented in the same panel in Fig. 3(a). It is clear that, in the \(H\) increase process of the MR full loop, the MR exhibits a gradual negative change associated with a reduction of the number of chiral solitons, whereas it shows a sudden jump at \(H_{\text{jump}}\) in the \(H\) decrease process. In addition, six discrete MR values appear in a series of the MR minor loops. Taking into consideration the helical period of 22 nm in CrTa\({}_{3}\)S\({}_{6}\)[35], the thickness of the present MR sample was calculated to be approximately 110 nm, which is consistent with the value estimated from the device fabrication.
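The thickness estimate above is simple arithmetic: six discrete MR levels correspond to at most five chiral solitons, i.e., five helical periods stacked along the \(c\)-axis. A one-line sanity check:

```python
period_nm = 22        # helical period of CrTa3S6 [35]
n_mr_levels = 6       # discrete MR values in the minor loops of Fig. 3(a)
print((n_mr_levels - 1) * period_nm, "nm")
# -> 110 nm, matching the ~0.1 um c-axis thickness of the FIB-cut sample #1
```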
Another feature is that the ratio \(H_{\text{jump}}/H_{\text{sat}}\) is found to be 0.362. Although this value is slightly smaller than the theoretical value \(4/\pi^{2}\) [29], such a large hysteresis may indicate the influence of the surface barrier against the penetration of chiral solitons into the present CrTa\({}_{3}\)S\({}_{6}\) crystal.
The presence of the surface barrier was first demonstrated in the micrometer-sized platelet CrNb\({}_{3}\)S\({}_{6}\) crystals with the \(c\)-axis oriented within the plane, in which the experimental data of \(H_{\text{jump}}/H_{\text{sat}}\) are in excellent agreement with the theoretical value \(4/\pi^{2}\). The slight discrepancy found in Fig. 3(a) may be ascribed to the difference in the sample geometry. In this respect, it is worth examining the MR behavior in terms of the surface barrier in the CrTa\({}_{3}\)S\({}_{6}\) sample with the dimensions similar to those of the CrNb\({}_{3}\)S\({}_{6}\) sample used in the previous study [29].
Figure 3(b) shows that such a CrTa\({}_{3}\)S\({}_{6}\) sample (#2) indeed exhibits the MR hysteresis behavior. Note that \(H\) is applied in the direction perpendicular to the \(c\)-axis and within the plane so as to eliminate the demagnetization effect. To precisely determine the \(H_{\text{jump}}\) and \(H_{\text{sat}}\) values, the MR measurements were performed five times repeatedly.
In the \(H\) increase process, the MR change is well fitted by the CSL density, which is derived from the chiral sine-Gordon model and plays the role of an order parameter of the CSL formation. On the other hand, in the \(H\) decrease process, the MR shows a sudden change at \(H_{\text{jump}}\). Note that the position of \(H_{\text{jump}}\) and the amplitude of the MR change at \(H_{\text{jump}}\) were reproducible in all five MR measurements. Moreover, the ratio \(H_{\text{jump}}/H_{\text{sat}}\) was averaged to be 0.407, which is quite consistent with the theoretical value \((4/\pi^{2}\sim 0.405)\). These features are consistent with those observed in the CrNb\({}_{3}\)S\({}_{6}\) sample [27; 29].
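The fitting curve mentioned above is the soliton density of the 1D chiral sine-Gordon model, which is expressible through complete elliptic integrals: assuming the standard relations from the CSL literature, the modulus \(\kappa\) is fixed by \(\kappa/E(\kappa)=\sqrt{H/H_{\text{c}}}\) and the normalized density is \(n(H)/n(0)=\pi^{2}/[4K(\kappa)E(\kappa)]\). A minimal numerical sketch (note that SciPy's `ellipk`/`ellipe` take the parameter \(m=\kappa^{2}\)):

```python
from math import pi, sqrt
from scipy.special import ellipk, ellipe

def soliton_density(h):
    """Normalized CSL soliton density n(H)/n(0) of the 1D chiral
    sine-Gordon model (standard literature result): kappa solves
    kappa/E(kappa) = sqrt(H/Hc); then n/n0 = pi**2/(4 K(kappa) E(kappa)).
    SciPy's ellipk/ellipe take the parameter m = kappa**2."""
    if h >= 1.0:
        return 0.0
    target, lo, hi = sqrt(h), 0.0, 1.0 - 1e-12
    for _ in range(60):                   # bisect for kappa in [0, 1)
        mid = 0.5 * (lo + hi)
        if mid / ellipe(mid ** 2) < target:
            lo = mid
        else:
            hi = mid
    kappa = 0.5 * (lo + hi)
    return pi ** 2 / (4.0 * ellipk(kappa ** 2) * ellipe(kappa ** 2))

# Decreases monotonically from 1 at H = 0 toward 0 at H = Hc:
print([round(soliton_density(h), 3) for h in (0.0, 0.5, 0.9, 0.99)])
```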
Figure 3: MR data taken in three micrometer-sized platelet CrTa\({}_{3}\)S\({}_{6}\) crystals. (a) Full and minor loops of the MR in the \(c\)-plane sample #1 with \(H\) applied in the direction perpendicular to the \(c\)-axis. The red closed circles represent the MR data during the \(H\) increase process toward above \(H_{\text{sat}}\), while the other symbols correspond to the MR data in the \(H\) decrease process. The discretization effect of the MR behavior is clearly observed. (b and c) MR data in the samples with the \(c\)-axis pointing along the longitudinal direction of the platelet sample. The \(c\)-axis lengths of the samples #2 (b) and #3 (c) are 10 μm and 19 μm, respectively. The red closed circles represent the MR data in the \(H\) increase process, and the blue dotted line in (b) corresponds to a theoretical equation of the soliton density. The other symbols show the MR data in the \(H\) decrease process taken five and three times repeatedly for the samples #2 and #3, respectively. The ratio \(H_{\text{jump}}/H_{\text{sat}}\) turns out to be 0.407 for the sample #2 and 0.392 for #3 on average. These results indicate that the surface barrier works against the penetration of chiral solitons.
Figure 3(c) shows the MR data in the sample #3 with the elongated geometry. The data was collected three times. The sudden change of the MR occurs at \(H_{\text{jump}}\) reproducibly. The ratio \(H_{\text{jump}}/H_{\text{sat}}\) was averaged to be 0.392, which is slightly smaller than the theoretical value.
The present MR data in the three different CrTa\({}_{3}\)S\({}_{6}\) samples strongly supports that the surface barrier works against the penetration of chiral solitons. Namely, the CSL formation was successfully demonstrated in the CrTa\({}_{3}\)S\({}_{6}\) crystals via the MR measurements.
Interestingly, the hysteresis behavior is observed in the magnetization curves even in the bulk CrTa\({}_{3}\)S\({}_{6}\) crystals, as shown in Fig. 4. A typical geometry of the bulk crystals is a platelet with a wide \(c\)-plane and a thickness of about 100 \(\upmu\)m, as presented in the optical photographs in Fig. 4. The magnetization curves at 5 K show the downward convex behavior in the \(H\) increase process, which is regarded as evidence of the CSL formation [5; 6; 7; 18; 27]. On the other hand, in the \(H\) decrease process, the magnetization decreases linearly until \(H\) reaches the first \(H_{\text{jump}}\). Sharp drops of the magnetization appear at \(H_{\text{jump1}}\) and \(H_{\text{jump2}}\) in the crystal #4, while a drastic drop occurs at \(H_{\text{jump1}}\) in the crystal #5. The positions of \(H_{\text{jump1}}\) and \(H_{\text{jump2}}\) were confirmed to be reproducible in the repeated measurements. This behavior is reminiscent of the MR behavior, as discussed in the micrometer-sized CrTa\({}_{3}\)S\({}_{6}\) crystals in Fig. 3, and indicates that the surface barrier hampers the penetration of chiral solitons into the bulk crystal.
To see an indication of the surface barrier, the dependence of the magnetization curves on temperature was examined, as shown in Figs. 5(a) and 5(b). The hysteresis remains small at temperatures in the vicinity of \(T_{\text{c}}\), while a large hysteresis accompanied by a sharp drop of the magnetization becomes evident upon cooling.
Figures 5(c) and 5(d) show the ratio \(H_{\text{jump}}/H_{\text{sat}}\) as a function of temperature in the bulk CrTa\({}_{3}\)S\({}_{6}\) crystals #4 and #5, respectively. It is found that the \(H_{\text{jump}}/H_{\text{sat}}\) values at 5 K reduce to 0.60 and 0.66 in the crystals #4 and #5, respectively. These values are still larger than the theoretical value
Figure 4: Magnetization curves at 5 K with two different bulk CrTa\({}_{3}\)S\({}_{6}\) crystals #4 (a) and #5 (b). Closed and open marks denote the magnetization data collected in the \(H\) increase and decrease processes, respectively. Sharp drops of the magnetization appear below the saturation field \(H_{\text{sat}}\) in the \(H\) decrease process. The first and second (last) jumps, respectively labeled \(H_{\text{jump1}}\) and \(H_{\text{jump2}}\), occur at almost the same \(H\) values in the crystal #4, while a single large jump appears in the crystal #5.
Figure 5: Magnetization curves at various temperatures with two different bulk CrTa\({}_{3}\)S\({}_{6}\) crystals #4 (a) and #5 (b). With increasing temperature, large hysteresis gradually shrinks in both crystals. The \(H_{\text{jump}}/H_{\text{sat}}\) values are given as a function of temperature for the crystals #4 (c) and #5 (d). Here, \(H_{\text{jump2}}\) is identified as the last jump in the magnetization curves.
Figure 6: The ratio \(H_{\text{jump}}/H_{\text{sat}}\) as a function of the sample geometry. \(H_{\text{jump}}/H_{\text{sat}}\) is evaluated in terms of the length \(L\) along the \(c\)-axis in (a), while it is given with the \(c\)-plane area \(S\) normalized by \(L\) in (b). The numbers #1 to #5 correspond to the microfabricated and bulk samples shown in Figs. 3, 4 and 5. The dashed line represents the theoretical value \(4/\pi^{2}\) of the surface barrier effect.
expected for the analytical model of the surface barrier [29]. Nevertheless, the behavior observed in the present CrTa\({}_{3}\)S\({}_{6}\) crystals is totally different from the magnetization data previously reported in bulk crystals of CrNb\({}_{3}\)S\({}_{6}\)[45; 46] and CrTa\({}_{3}\)S\({}_{6}\)[36], where \(H_{\text{jump}}/H_{\text{sat}}\) remained at 0.82-0.91 and 0.93, respectively, even at low temperatures.
The ratio \(H_{\text{jump}}/H_{\text{sat}}\) is summarized in terms of the sample dimensions. For the crystals with the \(c\)-axis length of around ten micrometers, which contain hundreds of chiral solitons, \(H_{\text{jump}}/H_{\text{sat}}\) shows a good agreement with the theoretical value, as seen in Fig. 6(a). When the \(c\)-plane area is normalized by the \(c\)-axis length, the experimental values tend to deviate from the theoretical value in the samples with large normalized area, as shown in Fig. 6(b). In this respect, an elongated geometry along the helical axis is likely to be favorable for the surface barrier.
The surface barrier is an intrinsic effect in the CSL system with a clean surface [29]. In this respect, the discrepancy from the theoretical value may be ascribed to imperfect edges of the hexagonal-shaped bulk crystals, as shown in Fig. 4, which are quite different from the ideal surface theoretically treated in the 1D chiral sine-Gordon model. Nevertheless, reproducible large hysteresis appears in the present bulk CrTa\({}_{3}\)S\({}_{6}\) crystals. Namely, the chiral solitons in CrTa\({}_{3}\)S\({}_{6}\) exhibit less extrinsic metastability than those in CrNb\({}_{3}\)S\({}_{6}\), indicating that CrTa\({}_{3}\)S\({}_{6}\) is an ideal material hosting the robust CSL.
The quality of the crystal may influence the effectiveness of the surface barrier. Note that the \(T_{\text{c}}\) and \(H_{\text{sat}}\) values in the present CrTa\({}_{3}\)S\({}_{6}\) crystals are larger than those reported in the literature. Indeed, \(T_{\text{c}}\) is 10 K higher than the reported values, as described above, and \(H_{\text{sat}}\) is 1.65 T at 5 K, which is 0.35 T larger than that in the previous work [36]. It was already found in CrNb\({}_{3}\)S\({}_{6}\) that \(T_{\text{c}}\) and \(H_{\text{sat}}\) decrease when the amount of Cr intercalation deviates from the ideal unity [44] and are closely correlated with the strength of the symmetric and antisymmetric exchange interactions. Importantly, the symmetric exchange interaction perpendicular to the \(c\)-axis, \(J_{\perp}\), is enlarged in the present CrTa\({}_{3}\)S\({}_{6}\) crystals because \(T_{\text{c}}\) is correlated with the strength of \(J_{\perp}\)[47]. \(J_{\perp}\) also works to enhance the phase coherence of the CSL [28], which favors the operation of the surface barrier even in the bulk crystals. Clarifying the relationship between the strength of the exchange interactions and the surface barrier would be an interesting issue in the CSL system.
The surface quality of the \(c\)-plane may also be a key element governing the strength of the surface barrier. In the experiments, the samples #1 to #3 were prepared by FIB fabrication, while the samples #4 and #5 have an as-grown wide \(c\)-plane surface. No clear difference was found in terms of the surface quality; rather, the controllability of the surface barrier was evident in its dependence on thickness and aspect ratio, as seen in Fig. 6. A comparison of the surface barrier strength using various types of surfaces, such as a freshly-cleaved surface [40; 41] and a sharp crystal edge, would promote the understanding of the surface barrier in the CSL system.
One of the unknown characteristics in the present CrTa\({}_{3}\)S\({}_{6}\) crystals is a reduction of the magnetic moment at \(H_{\text{sat}}\) to 1.6 \(\mu_{\text{B}}/\text{Cr}\), which is almost half of the value expected for an isolated Cr\({}^{3+}\) ion. The magnetic moment increases monotonically above \(H_{\text{sat}}\) and reaches 3.0 \(\mu_{\text{B}}\) around 10 T with a linear extrapolation. This behavior is different from that reported in the previous work [36]. The reason for such a discrepancy in CrTa\({}_{3}\)S\({}_{6}\) remains to be clarified. As inferred from the electronic structure of CrNb\({}_{3}\)S\({}_{6}\), itinerant electrons derived from Ta and S atoms may interact with localized electrons of Cr atoms in the electronic structure of CrTa\({}_{3}\)S\({}_{6}\). However, the picture of localized electrons in the Cr atoms has not been fully validated yet. In this connection, it would be interesting to examine the electronic structure of CrTa\({}_{3}\)S\({}_{6}\) using x-ray magnetic circular dichroism (XMCD) spectroscopy together with density functional theory (DFT) calculations to evaluate the degree of hybridization between Ta 5\(d\) and Cr 3\(d\) orbitals, as discussed in CrNb\({}_{3}\)S\({}_{6}\)[48].
In summary, we demonstrate the CSL formation via characterizing the surface barrier in CrTa\({}_{3}\)S\({}_{6}\) crystals. This observation indicates that the surface barrier effect occurs among TMDs hosting the CSL and induces nontrivial physical response such as discretized MR reflecting the topological nature of the CSL.
###### Acknowledgements.
We thank Yusuke Kato for fruitful discussion. This work was supported by JSPS KAKENHI Grant Numbers 19H05822, 19H05826, 23H01870 and 23H00091.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
| ```
The formation of the chiral magnetic soliton lattice (CSL) was investigated in monoaxial chiral dichalcogenide CrTa$_{3}$S$_{6}$ crystals from the viewpoint of a surface barrier, which prevents chiral solitons from penetrating into the system and provides an intrinsic origin of hysteresis for the continuous phase transition of the crystal, as discussed for systems of quantized vortices in type-II superconductors. The magnetoresistance (MR) was examined using microfabricated platelet samples of different dimensions with respect to the crystal's $c$-axis direction. The CSL formation was confirmed by discrete MR changes corresponding to the number of chiral solitons, and also by the presence of the surface barrier, recognized as a fixed ratio of critical magnetic fields during the hysteresis field cycle. This study |
2309.13416 | Preconditioned Primal-Dual Gradient Methods for Nonconvex Composite and
Finite-Sum Optimization | In this paper, we first introduce a preconditioned primal-dual gradient
algorithm based on conjugate duality theory. This algorithm is designed to
solve a composite optimization problem whose objective function consists of two
summands: a continuously differentiable nonconvex function and the composition
of a nonsmooth nonconvex function with a linear operator. In contrast to
existing nonconvex primal-dual algorithms, our proposed algorithm, through the
utilization of conjugate duality, does not require the calculation of proximal
mapping of nonconvex functions. Under mild conditions, we prove that any
cluster point of the generated sequence is a critical point of the composite
optimization problem. In the context of Kurdyka-\L{}ojasiewicz property, we
establish global convergence and convergence rates for the iterates. Secondly,
for nonconvex finite-sum optimization, we propose a stochastic algorithm that
combines the preconditioned primal-dual gradient algorithm with a class of
variance reduced stochastic gradient estimators. Almost sure global convergence
and expected convergence rates are derived relying on the
Kurdyka-\L{}ojasiewicz inequality. Finally, some preliminary numerical results
are presented to demonstrate the effectiveness of the proposed algorithms. | Jiahong Guo, Xiao Wang, Xiantao Xiao | 2023-09-23T15:58:27 | http://arxiv.org/abs/2309.13416v2 | # Preconditioned Primal-Dual Gradient Methods for Nonconvex Composite and Finite-Sum Optimization
###### Abstract
In this paper, we first introduce a preconditioned primal-dual gradient algorithm based on conjugate duality theory. This algorithm is designed to solve a composite optimization problem whose objective function consists of two summands: a continuously differentiable nonconvex function and the composition of a nonsmooth nonconvex function with a linear operator. In contrast to existing nonconvex primal-dual algorithms, our proposed algorithm, through the utilization of conjugate duality, does not require the calculation of the proximal mapping of nonconvex functions. Under mild conditions, we prove that any cluster point of the generated sequence is a critical point of the composite optimization problem. In the context of the Kurdyka-Lojasiewicz property, we establish global convergence and convergence rates for the iterates. Secondly, for nonconvex finite-sum optimization, we propose a stochastic algorithm that combines the preconditioned primal-dual gradient algorithm with a class of variance reduced stochastic gradient estimators. Almost sure global convergence and expected convergence rates are derived relying on the Kurdyka-Lojasiewicz inequality. Finally, some preliminary numerical results are presented to demonstrate the effectiveness of the proposed algorithms.
**Keywords:** Nonconvex first-order primal-dual algorithms; Kurdyka-Lojasiewicz inequality; global convergence; convergence rates; stochastic approximation.
## 1 Introduction
In this paper, we first consider the following composite optimization problem:
\[\min_{x\in\mathbb{X}}\;f(x)+h(Ax), \tag{1.1}\]
where \(\mathbb{X}\) and \(\mathbb{Y}\) are finite-dimensional vector spaces, \(f:\mathbb{X}\to\mathbb{R}\) is a continuously differentiable and possibly nonconvex function, \(A:\mathbb{X}\to\mathbb{Y}\) is a linear operator and \(h:\mathbb{Y}\to(-\infty,+\infty]\) is a simple and possibly nonsmooth, nonconvex function. Problem (1.1) arises in a variety of practical applications from machine learning, statistics, image processing, and so on. In many applications, the function \(h\) is usually referred to as the _regularizer_, which is used to guarantee certain regularity properties of the solution. Recently, nonconvex regularizers, such as \(\ell_{0}\), \(\ell_{p}\) (\(0<p<1\)), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP), have drawn a lot of attention and have been shown to achieve significant improvements over convex regularizers; see [52] and references therein.
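For reference, one of the nonconvex regularizers mentioned above, the minimax concave penalty (MCP), is commonly written componentwise as

\[p_{\lambda,\gamma}(t)=\begin{cases}\lambda|t|-\dfrac{t^{2}}{2\gamma},&|t|\leq\gamma\lambda,\\ \dfrac{\gamma\lambda^{2}}{2},&|t|>\gamma\lambda,\end{cases}\qquad\lambda,\gamma>0,\]

which behaves like the \(\ell_{1}\) penalty near the origin but flattens out for large \(|t|\), thereby reducing the bias on large coefficients.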
For problem (1.1) in the fully nonconvex setting (both \(f\) and \(h\) are nonconvex), there has been an intensive renewed interest in the convergence analysis of various algorithms based on the Kurdyka-Lojasiewicz (KL) property in recent years. Attouch et al. [3] established the global convergence of a forward-backward splitting algorithm for (1.1) with \(A\) being the identity operator and \((f+h)\) being a KL function. Li and Pong [34] demonstrated the convergence of an alternating direction method of multipliers (ADMM) under assumptions that both \(f\) and \(h\) are semialgebraic, and \(A\) is surjective. A nonmonotone linesearch algorithm based on the forward-backward envelope was proposed in [48] and shown to enjoy
superlinear convergence rates. Geiping and Moeller [25] investigated a class of majorization-minimization methods for (1.1) with a nonlinear operator \(A\), and derived the global convergence under the KL property and the uniqueness of R-minimizing solutions. In [8], the authors employed a Lyapunov method to establish the global convergence of a bounded sequence to a critical point for several Lagrangian-based methods, including the proximal multipliers method and proximal ADMM, within the semialgebraic setting. By assuming that the associated augmented Lagrangian possesses the KL property, Bot and Nguyen [13] proved that the iterates of proximal ADMM converge to a Karush-Kuhn-Tucker point. They also derived convergence rates for both the augmented Lagrangian and the iterates. The algorithms for problem (1.1) with \(h\) being the \(\ell_{0}\) norm were reviewed in the survey paper [49]. For problem (1.1) with convex \(h\), there exists a vast literature on various nonconvex composite optimization algorithms, see, for instance, [41, 15, 9, 10].
Motivated by a class of well-studied primal-dual hybrid gradient (PDHG) algorithms for convex optimization [16, 44, 37], and drawing upon the conjugate duality theory for nonconvex optimization, we propose a preconditioned first-order primal-dual algorithm for solving nonconvex composite optimization problem (1.1). In most of the aforementioned related algorithms, it is required to compute the elements of the generalized proximal (set-valued) mapping for nonconvex function \(h\) and/or nonconvex function \(f\) at each iteration. In contrast, at each iteration of our proposed algorithm, we only need to calculate the proximal mapping of the conjugate function \(h^{*}\) which is always convex and lower semicontinuous. This fact makes the proposed algorithm much easier to implement in many scenarios.
In the second part of this paper, we consider to extend the proposed algorithm for the following finite-sum optimization problem:
\[\min_{x\in\mathbb{X}}\ \frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+h(Ax), \tag{1.2}\]
where \(f_{i}:\mathbb{X}\rightarrow\mathbb{R},i=1,\cdots,N\) are continuously differentiable and possibly nonconvex, and \(h(Ax)\) is the same as in (1.1). Problem (1.2) arises frequently in the fields of statistics [27] and machine learning [14]. In many applications, problem (1.2) is also called _regularized empirical risk minimization_ and the component functions \(f_{i},i=1,\ldots,N\) correspond to certain loss models. Moreover, in various interesting problems such as deep learning, dictionary learning and classification with nonconvex activation functions, the loss functions \(f_{i}\) exhibit nonconvexity. Since the number of components \(N\) (usually the size of a dataset) can be extremely large, the exact computation of the full gradient \(\frac{1}{N}\sum_{i=1}^{N}\nabla f_{i}(x)\) becomes prohibitively expensive in practice. Consequently, stochastic approximation techniques have gained increasing importance in designing efficient numerical algorithms for problem (1.2), see [32] for example. In particular, the success of many popular variance reduced stochastic algorithms for convex finite-sum optimization has been witnessed in recent years, such as SAG [47], SAGA [21], SVRG [31] and SARAH [42].
For problem (1.2) with nonconvex \(f_{i}\) and convex \(h\), a large amount of algorithms have been developed over the past few years. We only name a few here. Li and Li [36] introduced a stochastic proximal gradient algorithm based on variance reduction, and established a global linear convergence rate for nonconvex \(f_{i}\) satisfying Polyak-Lojasiewicz condition. Nhan et al. [43] presented a stochastic first-order algorithm by combining a proximal gradient step with the SARAH estimator, and analyzed the complexity bounds in terms of stochastic first-order oracle calls. Fort and Moulines [24] introduced a stochastic variable metric proximal gradient algorithm by using a mini-batch strategy with variance reduction called SPIDER [23]. In [51], a generic algorithmic framework for stochastic proximal quasi-Newton methods was introduced. Milzarek et al. [38, 39] proposed a stochastic semismooth Newton method for nonsmooth nonconvex stochastic and finite-sum optimization, and established the almost sure global convergence as well as local convergence rates with high probability. Jin and Wang [30] studied a single-loop stochastic primal-dual method for problem (1.2) coupled with a large number of nonconvex functional constraints.
We next review the stochastic approximation algorithms for problem (1.2) in the fully nonconvex setting, where \(f_{i},i=1,\ldots,N\) and \(h\) are nonconvex. Xu et al. [54] showed that the stochastic proximal gradient methods for problem (1.2) with nonconvex \(h\) enjoy the same complexities as their counterparts for the convex regularized problem to find an approximate stationary point. Cheng et al. [18] proposed an interior stochastic gradient method for bounded constrained optimization problems where the objective function is the sum of an expectation function and a nonconvex \(\ell_{p}\) regularizer. A stochastic algorithm that combines ADMM with a class of variance reduced stochastic gradient estimators, including SAGA,
SVRG and SARAH, was proposed in [6]. The global convergence in expectation was established under the condition that \(f_{i},i=1,\ldots,N\) and \(h\) are semialgebraic, and the convergence rates in the expectation sense were derived based on Lojasiewicz exponent. In [33], by employing the forward-backward envelope serving as a Lyapunov function, Latafat et al. proved that the cluster points of the iterates generated by the popular proximal Finito/MISO algorithm are the stationary points almost surely in the fully nonconvex case. They further established the global and linear convergence under the assumption that \(f_{i},i=1,\ldots,N\) and \(h\) are KL functions. By combining the proposed algorithm for problem (1.1) with the variance reduced stochastic gradient estimators (uniformly defined in [6, 22]), we study a stochastic preconditioned first-order primal-dual algorithm for solving the fully nonconvex finite-sum optimization problem (1.2).
**Contributions.** The main contributions of this paper can be summarized as follows.
* We propose a preconditioned primal-dual gradient (PPDG) method for the composite optimization problem (1.1). This problem poses significant challenges due to its fully nonconvex structure including the smooth nonconvex function \(f\) and the nonsmooth nonconvex regularizer \(h\) coupled with the linear operator \(A\). Under certain mild assumptions that the gradient of \(f\) is Lipschitz continuous, the linear operator \(A\) is surjective and the convex hull of \(h\) is proper, we prove that any convergent subsequence of the iterates converges to a critical point of the Lagrange function associated with problem (1.1). This is realized by establishing the nonincreasing property of a properly selected Lyapunov function. With the additional KL property of the Lyapunov function, we demonstrate the global convergence of the generated sequence of iterates. We further derive convergence rates for the sequence, provided that the Lyapunov function has the Lojasiewicz property.
* To address the challenge of solving problem (1.2) in the fully nonconvex setting, we introduce a stochastic preconditioned primal-dual gradient (SPPDG) method, which can be viewed as a stochastic variant of PPDG. To analyze the convergence of SPPDG, we first establish a crucial descent property related to the expectation of a Lyapunov function based on the Lagrange function of problem (1.2). Moreover, the upper bound for the conditional expectation of the subgradient of the Lyapunov function is derived. Leveraging these important auxiliary results and assuming that the generated iterates of SPPDG are bounded almost surely, we establish the subsequence convergence in the almost sure sense. Moreover, if the Lyapunov function is a KL function, we prove that the whole iteration sequence possesses the finite length property and converges almost surely to a critical point. To the best of our knowledge, such almost sure global convergence result for stochastic algorithms applied to (1.2) in the fully nonconvex setting is new.
* We report the numerical performances of the proposed methods with PPDG being applied to image denoising via \(\ell_{0}\) gradient minimization, as well as SPPDG being applied to image classification using deep neural network and a nonconvex graph-guided fused lasso problem. Compared with the existing popular algorithms, the numerical results verify the advantages of the proposed methods.
**Organization.** The rest of this paper is organized as follows. In Section 2, we explore the convergence of a preconditioned primal-dual gradient method for composite optimization problem (1.1). In Section 3, we propose a stochastic preconditioned primal-dual gradient method for finite-sum problem (1.2), and provide a convergence analysis. Numerical experiments are presented in Section 4 to show the effectiveness of the proposed algorithms.
**Notation.** Let \(\mathbb{X}\) and \(\mathbb{Y}\) be two finite-dimensional real vector spaces equipped with standard inner products \(\langle\cdot,\cdot\rangle\) and norms \(\|\cdot\|=\sqrt{\langle\cdot,\cdot\rangle}\). Let \(\mathbb{X}^{*}\) and \(\mathbb{Y}^{*}\) be the dual spaces of \(\mathbb{X}\) and \(\mathbb{Y}\), respectively. The operator norm of a linear operator \(A:\mathbb{X}\to\mathbb{Y}\) is
\[\|A\|:=\max\{\|Ax\|:x\in\mathbb{X}\text{ with }\|x\|\leq 1\}.\]
Given a closed set \(\mathcal{C}\subset\mathbb{X}\) and a vector \(x\in\mathbb{X}\), the _distance_ of \(x\) to \(\mathcal{C}\) is given by \(\operatorname{dist}(x,\mathcal{C}):=\min_{y\in\mathcal{C}}\|x-y\|\). Let \(f:\mathbb{X}\to(-\infty,+\infty]\) be a proper lower semicontinuous convex function. The extended _proximal mapping_ of \(f\) associated to a positive definite linear operator \(M\) is defined as
\[\operatorname{prox}^{M}_{f}(y):=\operatorname*{arg\,min}_{x\in\mathbb{X}} \left\{f(x)+\frac{1}{2}\|x-y\|_{M}^{2}\right\}.\]
Here, \(\|x\|_{M}^{2}:=\langle Mx,x\rangle\).
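As a small illustration of the extended proximal mapping, when \(M\) is diagonal with positive entries and \(f\) is the indicator of a box, the \(M\)-weighted problem separates coordinatewise and the minimizer is plain clipping, independent of the diagonal entries. The following Python sketch (ours, for illustration only) verifies this against a brute-force one-dimensional search:

```python
import numpy as np

def prox_box_M(y, lo, hi, M_diag):
    # prox^M_{iota_C}(y) for the box C = [lo, hi]^n and diagonal M > 0:
    # sum_i M_i (x_i - y_i)^2 is minimized coordinatewise by clipping,
    # so the result does not depend on the diagonal entries of M.
    assert np.all(M_diag > 0)
    return np.clip(y, lo, hi)

# brute-force sanity check in one dimension
y, M = np.array([2.3]), np.array([5.0])
grid = np.linspace(-1.0, 1.0, 10001)
brute = grid[np.argmin(M[0] * (grid - y[0]) ** 2)]
assert np.isclose(prox_box_M(y, -1.0, 1.0, M)[0], brute, atol=1e-3)
```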
For an extended real-valued function \(f:\mathbb{X}\to(-\infty,+\infty]\), let \(\mathrm{dom}f:=\{x\in\mathbb{X}:f(x)<+\infty\}\) be its domain and
\[f^{*}(y):=\sup_{x\in\mathbb{X}}\{\langle y,x\rangle-f(x)\},\ y\in\mathbb{X}^{*}\]
be its _conjugate function_. The conjugate function \(h^{*}\) is always convex and lower semicontinuous, see [4, Theorem 4.3]. When \(f\) is convex, let \(\partial f\) denote its _subdifferential_. A set-valued mapping \(F:\mathbb{X}\to\mathbb{Y}\) is said to be _outer semicontinuous_ at \(\bar{x}\), if for any \(u\in\mathbb{Y}\) satisfying that there exist \(x^{k}\to\bar{x}\) and \(u^{k}\to u\) with \(u^{k}\in F(x^{k})\), it holds that \(u\in F(\bar{x})\). From [45, Theorem 24.4], if \(f\) is lower semicontinuous, proper and convex, the set-valued mapping \(\partial f\) is outer semicontinuous, or equivalently, its graph is closed.
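A standard worked example that will be convenient later: for \(h(u)=\lambda\|u\|_{1}\) with \(\lambda>0\), the supremum defining the conjugate separates coordinatewise, giving

\[h^{*}(y)=\sup_{u}\{\langle y,u\rangle-\lambda\|u\|_{1}\}=\sum_{i}\sup_{u_{i}}\{y_{i}u_{i}-\lambda|u_{i}|\}=\begin{cases}0,&\|y\|_{\infty}\leq\lambda,\\ +\infty,&\text{otherwise},\end{cases}\]

i.e., \(h^{*}\) is the indicator of the \(\ell_{\infty}\)-ball of radius \(\lambda\), whose (extended) proximal mapping for diagonal \(M\) is exactly the clipping operation sketched above.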
## 2 PPDG for nonconvex composite optimization
In this section, we propose PPDG, a preconditioned primal-dual first-order method based on conjugate duality, for solving the nonconvex composite optimization problem (1.1), and establish its convergence. We begin by reviewing preliminary conjugate duality results in Subsection 2.1. The algorithmic framework of PPDG and the main assumptions are described in Subsection 2.2. Subsection 2.3 is devoted to deriving the descent property of a Lyapunov function. The subsequence convergence is investigated in Subsection 2.4. Finally, in the setting of the KL property, the main theoretical results regarding global convergence and convergence rates are established in Subsection 2.5.
### Conjugate duality and necessary optimality
Going back to problem (1.1) and drawing upon the conjugate duality theory presented in [11, Section 2.5.3], we have that the dual problem of (1.1) is
\[\max_{y\in\mathbb{Y}^{*}}\left\{\inf_{x\in\mathbb{X}}\mathcal{L}(x,y)\right\}, \ \text{where}\ \mathcal{L}(x,y):=f(x)+\langle y,Ax\rangle-h^{*}(y). \tag{2.1}\]
As per [11, Theorem 2.158], if \(\inf_{x\in\mathbb{X}}\mathcal{L}(x,y)>-\infty\) for any \(y\in\mathbb{Y}^{*}\), then \(\bar{x}\) and \(\bar{y}\) are optimal solutions of (1.1) and (2.1), respectively, if and only if the following relations hold true:
\[\left\{\begin{array}{l}\bar{x}\in\arg\min_{x\in\mathbb{X}}\mathcal{L}(x, \bar{y}),\\ 0=h(A\bar{x})+h^{*}(\bar{y})-\langle\bar{y},A\bar{x}\rangle.\end{array}\right. \tag{2.2}\]
From the definition of the conjugate function (the second relation in (2.2) states that \(A\bar{x}\) attains the supremum defining \(h^{*}(\bar{y})\), so that \(h^{*}(y)\geq\langle y,A\bar{x}\rangle-h(A\bar{x})=h^{*}(\bar{y})+\langle y-\bar{y},A\bar{x}\rangle\) for all \(y\), i.e., \(A\bar{x}\in\partial h^{*}(\bar{y})\)), it follows that, if (2.2) is satisfied, we have \(0\in\partial\mathcal{L}(\bar{x},\bar{y})\), i.e.,
\[\left\{\begin{array}{l}0=\nabla f(\bar{x})+A^{T}\bar{y},\\ 0\in-\partial h^{*}(\bar{y})+A\bar{x},\end{array}\right. \tag{2.3}\]
where \(A^{T}\) is the adjoint operator of \(A\). Let us denote the set of critical points of \(\mathcal{L}\) by
\[\mathrm{crit}\mathcal{L}:=\{(\bar{x},\bar{y})\in\mathbb{X}\times\mathbb{Y}^{*} :0\in\partial\mathcal{L}(\bar{x},\bar{y})\}.\]
Therefore, the primary aim of this paper is to find a pair \((\bar{x},\bar{y})\) that satisfies the necessary optimality conditions of the nonconvex problem (1.1), that is, \((\bar{x},\bar{y})\in\mathrm{crit}\mathcal{L}\). Similarly, for the nonconvex finite-sum problem (1.2), our goal is to obtain a critical point of
\[\mathcal{L}_{s}(x,y):=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+\langle y,Ax\rangle-h^ {*}(y).\]
### The PPDG algorithm
The details of PPDG are described in Algorithm 1. This algorithm can be viewed as a first-order primal-dual algorithm by observing the necessary optimality conditions (2.3). Specifically, (2.5a) is a standard gradient step associated with the first relation in (2.3), and (2.5b) can be regarded as a proximal gradient step coupled with the preconditioning technique introduced in [44] for the second relation \(0\in-\partial h^{*}(\bar{y})+A\bar{x}\). We also point out that (2.5b) is equivalent to
\[y^{k+1}=\operatorname*{arg\,min}_{y\in\mathbb{Y}^{*}}\left\{h^{*}(y)-\langle y,A(2x^{k+1}-x^{k})\rangle+\frac{1}{2}\|y-y^{k}\|_{M}^{2}\right\},\]
hence the inverse of \(M\) is not actually required in practice. Moreover, in view of (2.5b), from the definition of \(\operatorname{prox}_{h^{*}}^{M}\), it follows that there exists a vector \(g^{k+1}\in\partial h^{*}(y^{k+1})\) such that
\[g^{k+1}=-M(y^{k+1}-y^{k})+A(2x^{k+1}-x^{k}). \tag{2.4}\]
If the sequence \(\{(x^{k},y^{k})\}\) converges to \((\bar{x},\bar{y})\), then (2.4) immediately implies the second relation \(A\bar{x}\in\partial h^{*}(\bar{y})\) due to the outer semicontinuity of \(\partial h^{*}(\cdot)\).
```
1 Initialization: Choose an initial point \((x^{0},y^{0})\in\mathbb{X}\times\mathbb{Y}^{*}\), a constant \(\alpha>0\) and a positive definite matrix \(M\).
2for\(k=0,1,2,\dots\)do
3 Update \(x^{k},y^{k}\) as follows, \[\left\{\begin{aligned} x^{k+1}&=x^{k}-\alpha(\nabla f(x^{k})+A^{T}y^{k}),&&\text{(2.5a)}\\ y^{k+1}&=\operatorname{prox}_{h^{*}}^{M}(y^{k}+M^{-1}A(2x^{k+1}-x^{k})).&&\text{(2.5b)}\end{aligned}\right.\]
4 Set \(k\gets k+1\).
```
**Algorithm 1**PPDG
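To make the update (2.5) concrete, the following Python sketch (our illustrative toy, not an experiment from this paper) runs PPDG on \(f(x)=\frac{1}{2}\|Bx-d\|^{2}+\kappa\sum_{i}\cos(x_{i})\), which is smooth and nonconvex, with \(h=\lambda\|\cdot\|_{1}\) and \(A=I\); then \(M=\alpha AA^{T}=\alpha I\) and \(\operatorname{prox}_{h^{*}}^{M}\) reduces to clipping onto the box \(\{\|y\|_{\infty}\leq\lambda\}\) (see the \(\ell_{1}\)-conjugate example in the Notation paragraph):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n)) / np.sqrt(n)
d = rng.standard_normal(n)
kappa, lam = 0.5, 0.1

def grad_f(x):
    # gradient of f(x) = 0.5*||Bx - d||^2 + kappa * sum(cos(x_i))
    return B.T @ (B @ x - d) - kappa * np.sin(x)

L = np.linalg.norm(B.T @ B, 2) + kappa     # Lipschitz constant of grad f
alpha = 1.0 / (3.1 * L)                    # stepsize in (0, 1/(3L)), cf. Subsec. 2.3

x, y = np.zeros(n), np.zeros(n)
for k in range(3000):
    x_new = x - alpha * (grad_f(x) + y)        # (2.5a); here A^T y = y
    v = y + (1.0 / alpha) * (2 * x_new - x)    # y^k + M^{-1} A (2x^{k+1} - x^k)
    y = np.clip(v, -lam, lam)                  # (2.5b): prox^M_{h*} = box projection
    x = x_new

# first relation of (2.3) at a critical point: grad f(x) + A^T y = 0
print(np.linalg.norm(grad_f(x) + y))
```

The printed stationarity residual should be small at convergence; the second relation \(A\bar{x}\in\partial h^{*}(\bar{y})\) holds in the limit by construction of the projection step, cf. (2.4).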
Compared with the existing first-order algorithms for nonsmooth nonconvex optimization problems, one of the main features of Algorithm 1 is that, we compute the proximal mapping of the conjugate function \(h^{*}\) rather than dealing with \(h\) directly. This is partially motivated by the popular PDHG algorithm [16] for convex optimization problems. However, let us emphasize that, in the nonconvex setting there are several additional advantages. Firstly, many related algorithms [3, 2, 7, 8] involve the calculation of the proximal mapping with respect to the nonconvex function \(h\), i.e.,
\[\operatorname{prox}_{h}(x)\in\operatorname*{arg\,min}_{u\in\mathbb{Y}}\left\{h (u)+\frac{1}{2}\|u-x\|^{2}\right\},\]
which is usually more prohibitive than computing \(\operatorname{prox}_{h^{*}}^{M}(x)\) because \(h^{*}\) is lower semicontinuous and convex. Secondly, upon observing (2.3), in both the definition of \(\operatorname{crit}\mathcal{L}\) and the later subsequence convergence analysis of Algorithm 1, we do not need to introduce complex generalized subdifferentials of nonconvex functions, as is often required in many well-studied first-order algorithms for nonsmooth nonconvex optimization problems, see, e.g., [1, 2, 7, 12].
We would also like to remark that the update of \(x^{k}\) in Algorithm 1 is different from the standard PDHG algorithm, in which,
\[x^{k+1}=\operatorname{prox}_{\alpha f}(x^{k}-\alpha A^{T}y^{k}).\]
In the convex setting, this can be rewritten as the following implicit step,
\[x^{k+1}=x^{k}-\alpha(\nabla f(x^{k+1})+A^{T}y^{k}),\]
due to the fact that \(x=\operatorname{prox}_{\alpha f}(y)\) is equivalent to \(x=y-\alpha\nabla f(x)\). The success of PDHG for convex optimization depends on the efficient calculation of the proximal mapping of \(f\), that, however,
is unrealistic in our nonconvex setting. The interested readers are referred to the survey paper [17] for more discussion of PDHG in the convex setting.
In order to establish the convergence of Algorithm 1, we impose some standard assumptions throughout this section.
**Assumption 1**.: _Suppose that:_
1. _The function_ \(f\) _is_ \(L\)_-smooth over_ \(\mathbb{X}\)_, i.e.,_ \(f\) _is continuously differentiable and there exists a constant_ \(L>0\) _such that for any_ \(x,z\in\mathbb{X}\)_,_ \[\|\nabla f(x)-\nabla f(z)\|\leq L\|x-z\|.\]
2. \(\inf_{x\in\mathbb{X}}\mathcal{L}(x,y)>-\infty\) _for any_ \(y\in\mathbb{Y}^{*}\)_._
3. _The linear operator_ \(A\) _is surjective._
4. _The convex hull of_ \(h\) _is proper._
_Remark 2.1_.: Some comments on Assumption 1 are in order.
1. A well-known gradient descent lemma under Assumption (i) is that, \[f(x^{k+1})\leq f(x^{k})+\langle\nabla f(x^{k}),x^{k+1}-x^{k}\rangle+\frac{L}{2} \|x^{k+1}-x^{k}\|^{2}.\] (2.6) Moreover, applying (2.5a) and Assumption (i), we have \[\|A^{T}(y^{k+1}-y^{k})\| =\ \left\|\left(\frac{x^{k+1}-x^{k+2}}{\alpha}-\nabla f(x^{k+1}) \right)-\left(\frac{x^{k}-x^{k+1}}{\alpha}-\nabla f(x^{k})\right)\right\|\] (2.7) \[\leq\ \left(\frac{1}{\alpha}+L\right)\|x^{k+1}-x^{k}\|+\frac{1}{ \alpha}\|x^{k+2}-x^{k+1}\|.\]
2. Assumption (ii) ensures that the sequence generated by Algorithm 1 is well-defined. It is also indispensable in the subsequence convergence analysis (see Theorem 2.6).
3. The linear operator \(A\) is surjective if and only if the matrix associated with \(AA^{T}\) is positive definite. Thus, under Assumption (iii), a natural choice of \(M\) in Algorithm 1 is the associated matrix of \(\alpha AA^{T}\). As a special case, if \(A\) is the identity operator, then we can choose \(M=\alpha I\) and hence the extended proximal mapping \(\mathrm{prox}_{h^{*}}^{M}(\cdot)\) reduces to the classical proximal mapping \(\mathrm{prox}_{\frac{1}{\alpha}h^{*}}(\cdot)\). Moreover, under Assumption (iii), for any \(y\in\mathbb{Y}^{*}\) we have \[\hat{\lambda}\|y\|\leq\|A^{T}y\|,\] (2.8) where \(\hat{\lambda}:=\sqrt{\lambda_{\min}(AA^{T})}\) and \(\lambda_{\min}(AA^{T})\) denotes the smallest eigenvalue of \(AA^{T}\).
4. It is known that, without any assumption on \(h\), the conjugate function \(h^{*}\) is lower semicontinuous and convex, see, e.g., [4, Theorem 4.3]. However, in order to guarantee that \(h^{*}\) is proper, an additional assumption is required. It is shown in [46, Theorem 11.1] that \(h^{*}\) is proper if Assumption (iv) is satisfied.
### A Lyapunov function
As discussed previously, the primary aim of this section is to establish the convergence result that the sequence \((x^{k},y^{k})\) generated by Algorithm 1 converges to a critical point of the Lagrange function \(\mathcal{L}(x,y)\). However, this is difficult to fulfill for the nonconvex composite optimization problem (1.1) through the usual approach owing to the lack of the descent property of \(\mathcal{L}\). Instead, we shall work with an auxiliary function to alleviate this difficulty.
Let us define the following Lyapunov function
\[\mathscr{L}(x,y,u,v):=\mathcal{L}(x,y)-a\|x-u\|^{2}+b\|x-v\|^{2},\ \forall x,u,v\in\mathbb{X},\ y\in\mathbb{Y}^{*}.\]
Here, with the stepsize \(\alpha\) and the Lipschitz constant \(L\) of \(\nabla f\), let
\[a:=\frac{\delta}{\alpha},\quad b:=\frac{1}{2\alpha}-\frac{L}{4}-\frac{\delta}{ \alpha}-\frac{\alpha\delta L^{2}}{2}-\delta L+\frac{\alpha L^{2}}{4\delta}, \tag{2.9}\]
and \(\delta\) be a properly selected constant such that \(a>0\) and \(b>0\). Let \(c\) be a constant given by
\[c:=b-\frac{\alpha L^{2}}{2\delta}. \tag{2.10}\]
By an elementary calculation, if we choose \(\delta=1/5\), then the choice of stepsize \(\alpha\in(0,1/(3L))\) is sufficient to guarantee \(c>0\), and consequently, \(a>0\) and \(b>0\). Therefore, we can safely assume that \(a\), \(b\) and \(c\) are positive in the rest of this section.
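The elementary calculation can also be checked numerically; the following sketch (with \(L\) normalized to one) evaluates \(a\), \(b\), \(c\) from (2.9) and (2.10) over the admissible stepsize range:

```python
import numpy as np

L, delta = 1.0, 0.2          # delta = 1/5 as in the text; L normalized to 1
for t in np.linspace(0.01, 1.0 / 3.0, 50, endpoint=False):
    alpha = t / L            # stepsize alpha in (0, 1/(3L))
    a = delta / alpha
    b = (1 / (2 * alpha) - L / 4 - delta / alpha
         - alpha * delta * L**2 / 2 - delta * L + alpha * L**2 / (4 * delta))
    c = b - alpha * L**2 / (2 * delta)
    assert a > 0 and b > 0 and c > 0   # c -> 0 only as alpha -> 1/(3L)
```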
The convergence analysis of Algorithm 1 will significantly rely on the properties of \(\mathscr{L}\) which shall be investigated in this subsection. For a start, we show in the following lemma that the critical point set \(\mathrm{crit}\mathscr{L}\) is closely related to \(\mathrm{crit}\mathcal{L}\).
**Lemma 2.2**.: _Let \(x,u,v\in\mathbb{X}\), \(y\in\mathbb{Y}^{*}\). Then, \((x,y,u,v)\in\mathrm{crit}\mathscr{L}\) is equivalent to \((x,y)\in\mathrm{crit}\mathcal{L}\) and \(u=v=x\)._
Proof.: In view of the definition of \(\mathscr{L}\), the condition \((x,y,u,v)\in\mathrm{crit}\mathscr{L}\) reads
\[\begin{array}{l}0=\nabla_{x}\mathscr{L}(x,y,u,v)=\nabla_{x} \mathcal{L}(x,y)-2a(x-u)+2b(x-v),\\ 0\in\partial_{y}\mathscr{L}(x,y,u,v)=\partial_{y}\mathcal{L}(x,y),\\ 0=\nabla_{u}\mathscr{L}(x,y,u,v)=2a(x-u),\\ 0=\nabla_{v}\mathscr{L}(x,y,u,v)=2b(v-x).\end{array}\]
The latter two relations imply that \(u=v=x\). This, together with the first two relations, leads to \((0,0)\in\partial\mathcal{L}(x,y)\), which means \((x,y)\in\mathrm{crit}\mathcal{L}\). The converse is obvious.
Throughout the remainder of this section, we let \(M\) be the matrix associated with \(\alpha AA^{T}\), that is, for all \(y,\hat{y}\in\mathbb{Y}^{*}\), \(\langle\hat{y},My\rangle=\langle\hat{y},\alpha AA^{T}y\rangle\). The following lemma presents a recursive relation for \(\mathcal{L}\).
**Lemma 2.3**.: _Under Assumption 1, for all \(k\geq 1\), it holds_
\[\mathcal{L}(x^{k+1},y^{k+1})\leq\ \mathcal{L}(x^{k},y^{k})-\left(\frac{1}{ \alpha}-\frac{L}{2}\right)\|x^{k+1}-x^{k}\|^{2}+\langle y^{k+1}-y^{k},\alpha A (\nabla f(x^{k-1})-\nabla f(x^{k}))\rangle.\]
Proof.: From (2.5a) and (2.6), it follows that
\[f(x^{k+1})\leq f(x^{k})-\langle y^{k},A(x^{k+1}-x^{k})\rangle-\left(\frac{1}{ \alpha}-\frac{L}{2}\right)\|x^{k+1}-x^{k}\|^{2}.\]
Taking \(k=k-1\) in (2.4) and using the convexity of \(h^{*}\), we have
\[-h^{*}(y^{k+1})\leq-h^{*}(y^{k})+\langle y^{k}-y^{k+1},-M(y^{k}-y^{k-1})+A(2x^ {k}-x^{k-1})\rangle.\]
Combining these two inequalities, adding \(\langle y^{k+1},Ax^{k+1}\rangle\) on both sides and recalling that \(\mathcal{L}(x,y)=f(x)+\langle y,Ax\rangle-h^{*}(y)\), we obtain
\[\mathcal{L}(x^{k+1},y^{k+1})\leq\mathcal{L}(x^{k},y^{k})-\left(\frac{1}{ \alpha}-\frac{L}{2}\right)\|x^{k+1}-x^{k}\|^{2}+\langle y^{k+1}-y^{k},A(x^{k+1 }-x^{k}+x^{k-1}-x^{k})+M(y^{k}-y^{k-1})\rangle. \tag{2.11}\]
Applying (2.5a) again, one has
\[\langle y^{k+1}-y^{k},A(x^{k+1}-x^{k}+x^{k-1}-x^{k})+M(y^{k}-y^{k- 1})\rangle\] \[=\langle y^{k+1}-y^{k},(\alpha AA^{T}-M)(y^{k-1}-y^{k})\rangle+ \langle y^{k+1}-y^{k},\alpha A(\nabla f(x^{k-1})-\nabla f(x^{k}))\rangle\] \[=\langle y^{k+1}-y^{k},\alpha A(\nabla f(x^{k-1})-\nabla f(x^{k}))\rangle.\]
Substituting this relation into (2.11), we complete the proof.
Now, we establish the following descent property of \(\mathscr{L}\) which will play a pivotal role in the discussion of global convergence.
**Lemma 2.4**.: _Let Assumption 1 hold. Then, for all \(k\geq 1\),_
\[\mathscr{L}(x^{k+1},y^{k+1},x^{k+2},x^{k})+c(\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k- 1}\|^{2})\leq\mathscr{L}(x^{k},y^{k},x^{k+1},x^{k-1}), \tag{2.12}\]
_where \(c\) is defined in (2.10)._
Proof.: From Lemma 2.3, we have
\[\begin{split}\mathcal{L}(x^{k+1},y^{k+1})&\leq \mathcal{L}(x^{k},y^{k})-\left(\frac{1}{\alpha}-\frac{L}{2}\right)\|x^{k+1}-x^ {k}\|^{2}+\langle y^{k+1}-y^{k},\alpha A(\nabla f(x^{k-1})-\nabla f(x^{k}))\rangle \\ &\leq\mathcal{L}(x^{k},y^{k})-\left(\frac{1}{\alpha}-\frac{L}{2} \right)\|x^{k+1}-x^{k}\|^{2}+\frac{\alpha\delta}{2}\|A^{T}(y^{k+1}-y^{k})\|^ {2}+\frac{\alpha L^{2}}{2\delta}\|x^{k}-x^{k-1}\|^{2},\end{split} \tag{2.13}\]
where the second inequality is deduced from the Lipschitz continuity of \(\nabla f\) and the fact that \(\langle x,y\rangle\leq\frac{\delta}{2}\|x\|^{2}+\frac{1}{2\delta}\|y\|^{2}\). From (2.7), it follows that
\[\|A^{T}(y^{k+1}-y^{k})\|^{2}\leq 2\left(\frac{1}{\alpha}+L\right)^{2}\|x^{k+ 1}-x^{k}\|^{2}+\frac{2}{\alpha^{2}}\|x^{k+2}-x^{k+1}\|^{2}.\]
Substituting this inequality into (2.13) and recalling the definitions of \(a,b,c\), we conclude
\[\mathcal{L}(x^{k+1},y^{k+1})\leq\mathcal{L}(x^{k},y^{k})-(a+b+c)\|x^{k+1}-x^{ k}\|^{2}+(b-c)\|x^{k}-x^{k-1}\|^{2}+a\|x^{k+2}-x^{k+1}\|^{2}.\]
Rewriting this inequality gives
\[\begin{split}&\mathcal{L}(x^{k+1},y^{k+1})-a\|x^{k+1}-x^{k+2}\|^{2 }+b\|x^{k+1}-x^{k}\|^{2}+c(\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2})\\ &\leq\mathcal{L}(x^{k},y^{k})-a\|x^{k}-x^{k+1}\|^{2}+b\|x^{k}-x^{ k-1}\|^{2}.\end{split}\]
The proof is completed by recalling the definition of \(\mathscr{L}\).
Denote
\[z^{k}:=(x^{k},y^{k},x^{k+1},x^{k-1}).\]
Lemma 2.4 implies that the sequence \(\{\mathscr{L}(z^{k})\}\) is nonincreasing. Let
\[d^{k}:=(\nabla_{x}\mathscr{L}(z^{k}),Ax^{k}-g^{k},\nabla_{u}\mathscr{L}(z^{k} ),\nabla_{v}\mathscr{L}(z^{k})),\]
where \(g^{k}=-M(y^{k}-y^{k-1})+A(2x^{k}-x^{k-1})\). From (2.4) we have that \(g^{k}\in\partial h^{*}(y^{k})\), and hence \(d^{k}\in\partial\mathscr{L}(z^{k})\) by the definition of \(\mathscr{L}\). In the following, we derive a bound of \(d^{k}\).
**Lemma 2.5**.: _Under Assumption 1, it holds that_
\[\|d^{k}\|\leq\gamma_{1}\|x^{k}-x^{k-1}\|+\gamma_{2}\|x^{k+1}-x^{k}\|,\]
_where_
\[\gamma_{1}:=2L+4b+\frac{2}{\alpha}+(2+\alpha L)\|A\|,\ \gamma_{2}:=4a+\frac{1}{ \alpha}+\|A\|.\]
Proof.: For the first component of \(d^{k}\), we have
\[\begin{split}&\|\nabla_{x}\mathscr{L}(z^{k})\|\\ &=\|\nabla f(x^{k})+A^{T}y^{k}-2a(x^{k}-x^{k+1})+2b(x^{k}-x^{k-1} )\|\\ &=\|\nabla f(x^{k})-\nabla f(x^{k-1})+A^{T}(y^{k}-y^{k-1})+ \nabla f(x^{k-1})+A^{T}y^{k-1}-2a(x^{k}-x^{k+1})+2b(x^{k}-x^{k-1})\|\\ &\leq(L+2b)\|x^{k}-x^{k-1}\|+2a\|x^{k}-x^{k+1}\|+\|A^{T}(y^{k}-y ^{k-1})\|+\|\nabla f(x^{k-1})+A^{T}y^{k-1}\|,\end{split}\]
which, together with (2.5a) and (2.7), gives that
\[\|\nabla_{x}\mathscr{L}(z^{k})\|\leq 2\left(L+b+\frac{1}{\alpha}\right)\|x^{k}-x ^{k-1}\|+\left(2a+\frac{1}{\alpha}\right)\|x^{k+1}-x^{k}\|. \tag{2.14}\]
For the second component of \(d^{k}\), by (2.4) we obtain
\[\|Ax^{k}-g^{k}\|=\|M(y^{k}-y^{k-1})-A(x^{k}-x^{k-1})\|\leq\alpha\|A\|\|A^{T}(y^{k }-y^{k-1})\|+\|A\|\|x^{k}-x^{k-1}\|,\]
which, together with (2.7), yields that
\[\|Ax^{k}-g^{k}\|\leq(2+\alpha L)\|A\|\|x^{k}-x^{k-1}\|+\|A\|\|x^{k+1}-x^{k}\|. \tag{2.15}\]
For \(\nabla_{u}\mathscr{L}(z^{k})\) and \(\nabla_{v}\mathscr{L}(z^{k})\), one has
\[\|\nabla_{u}\mathscr{L}(z^{k})\|=2a\|x^{k+1}-x^{k}\|,\quad\|\nabla_{v} \mathscr{L}(z^{k})\|=2b\|x^{k}-x^{k-1}\|. \tag{2.16}\]
Combining (2.14), (2.15) and (2.16) together, we have
\[\|d^{k}\|\leq\left(2L+4b+\frac{2}{\alpha}+(2+\alpha L)\|A\|\right)\|x^{k}-x^{ k-1}\|+\left(4a+\frac{1}{\alpha}+\|A\|\right)\|x^{k+1}-x^{k}\|.\]
The proof is completed.
### Subsequence convergence
Let \(\mathcal{C}\) denote the set of cluster points of the sequence \(\{(x^{k},y^{k})\}\) generated by Algorithm 1. Now, we establish the subsequence convergence based on the previous lemmas concerning \(\mathscr{L}\). These convergence results shall be proved under the assumption that the sequence \(\{(x^{k},y^{k})\}\) is bounded, which is a standard assumption in the global convergence analysis of nonconvex optimization algorithms, see [7, 8, 12] for instance.
**Theorem 2.6**.: _Let the sequence \(\{(x^{k},y^{k})\}\) be bounded and Assumption 1 hold. Then,_
1. \(\sum_{k=1}^{\infty}\|x^{k+1}-x^{k}\|^{2}<\infty\) _and_ \(\sum_{k=1}^{\infty}\|y^{k+1}-y^{k}\|^{2}<\infty\)_;_
2. \(\mathcal{C}\) _is a nonempty compact set and_ \[\lim_{k\to\infty}\operatorname{dist}((x^{k},y^{k}),\mathcal{C})=0;\]
3. \(\mathcal{C}\subseteq\operatorname{crit}\mathcal{L}\)_;_
4. \(\mathcal{L}\) _is finite and constant on_ \(\mathcal{C}\)_._
Proof.: Assumption 1(ii) implies \(\inf_{k}\mathcal{L}(x^{k},y^{k})>-\infty\), which, together with the boundedness of \(\{x^{k}\}\), leads to \(\inf_{k}\mathscr{L}(z^{k})>-\infty\). Since the sequence \(\{\mathscr{L}(z^{k})\}\) is nonincreasing (cf. Lemma 2.4) and bounded from below, \(\mathscr{L}(z^{k})\) converges to a finite value denoted by \(\bar{\mathscr{L}}\). Summing (2.12) over \(k=1,\ldots,n\) yields that
\[c\sum_{k=1}^{n}(\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2})\leq\mathscr{L}(z^ {1})-\mathscr{L}(z^{n+1}).\]
Let \(n\to\infty\), by the convergence of \(\{\mathscr{L}(z^{k})\}\) we have
\[\sum_{k=1}^{\infty}\|x^{k+1}-x^{k}\|^{2}<\infty.\]
This, together with (2.7) and (2.8), further gives
\[\sum_{k=1}^{\infty}\|y^{k+1}-y^{k}\|^{2}<\infty.\]
Item (i) is derived. Moreover, it further implies
\[\lim_{k\to\infty}\|x^{k+1}-x^{k}\|=0\text{ and }\lim_{k\to\infty}\|y^{k+1}-y^{k}\|=0. \tag{2.17}\]
The compactness of \(\mathcal{C}\) follows from the proof of [7, Lemma 5 (iii)]. Since the sequence \(\{(x^{k},y^{k})\}\) is bounded, \(\mathcal{C}\) is nonempty and for any \((\bar{x},\bar{y})\in\mathcal{C}\) there exists a subsequence \(\{(x^{k_{q}},y^{k_{q}})\}\) of \(\{(x^{k},y^{k})\}\) such that
\[\lim_{q\to\infty}\|x^{k_{q}}-\bar{x}\|=0,\ \lim_{q\to\infty}\|y^{k_{q}}-\bar{y}\|=0. \tag{2.18}\]
By the definition of the distance function, we have
\[\mathrm{dist}((x^{k},y^{k}),\mathcal{C})\leq\|x^{k}-\bar{x}\|+\|y^{k}-\bar{y} \|\leq\|x^{k}-x^{k_{q}}\|+\|x^{k_{q}}-\bar{x}\|+\|y^{k}-y^{k_{q}}\|+\|y^{k_{q}} -\bar{y}\|.\]
Combining this inequality with (2.17) and (2.18), we obtain the result that \(\mathrm{dist}((x^{k},y^{k}),\mathcal{C})\) converges to \(0\) and hence item (ii) holds.
For item (iii), it is sufficient to prove \((\bar{x},\bar{y})\in\mathrm{crit}\mathcal{L}\) for any \((\bar{x},\bar{y})\in\mathcal{C}\). Let \(\bar{z}:=(\bar{x},\bar{y},\bar{x},\bar{x})\). Noting that \(z^{k_{q}}\to\bar{z}\), \(d^{k_{q}}\in\partial\mathscr{L}(z^{k_{q}})\) and \(d^{k_{q}}\to 0\) by Lemma 2.5, we have from the outer semicontinuity of \(\partial\mathscr{L}\) that \(0\in\partial\mathscr{L}(\bar{z})\), i.e., \((\bar{x},\bar{y},\bar{x},\bar{x})\in\mathrm{crit}\mathscr{L}\). Therefore, from Lemma 2.2, it follows that \((\bar{x},\bar{y})\in\mathrm{crit}\mathcal{L}\).
Recall from Remark 2.1 (iv) that the conjugate function \(h^{*}\) is proper, lower semicontinuous and convex. Thus, \(h^{*}\) is continuous over its domain \(\mathrm{dom}h^{*}\), as demonstrated in [4, Theorem 2.22]. Therefore, \(\mathcal{L}\) is continuous over \(\mathbb{X}\times\mathrm{dom}h^{*}\) and hence
\[\lim_{q\to\infty}\mathcal{L}(x^{k_{q}},y^{k_{q}})=\mathcal{L}(\bar{x},\bar{y}),\]
which further implies
\[\lim_{q\to\infty}\mathscr{L}(z^{k_{q}})=\lim_{q\to\infty}(\mathcal{L}(x^{k_{q }},y^{k_{q}})-a\|x^{k_{q}}-x^{k_{q}+1}\|^{2}+b\|x^{k_{q}}-x^{k_{q}-1}\|^{2})= \mathcal{L}(\bar{x},\bar{y})=\mathscr{L}(\bar{z}). \tag{2.19}\]
In the proof of (i), we have shown that
\[\lim_{k\to\infty}\mathscr{L}(z^{k})=\bar{\mathscr{L}},\]
which, together with (2.19), implies \(\mathcal{L}(\bar{x},\bar{y})=\bar{\mathscr{L}}\). Since \((\bar{x},\bar{y})\) is arbitrarily chosen in \(\mathcal{C}\), item (iv) is obtained.
### Global convergence and rates under KL assumption
In this subsection, we will establish the global convergence and convergence rates of Algorithm 1 in the context of KL property, which has been extensively studied in recent years for the convergence of algorithms for nonconvex optimization, see, e.g., [2, 3, 7, 9, 8, 13, 35].
Given a proper lower semicontinuous function \(f\) and real numbers \(a,b\), let us denote \([a<f<b]:=\{x\in\mathbb{X}:a<f(x)<b\}\).
**Definition 2.7**.: A proper lower semicontinuous function \(f:\mathbb{X}\to(-\infty,+\infty]\) is said to have the _Kurdyka-Lojasiewicz (KL) property_ at \(\bar{x}\in\mathrm{dom}\partial f:=\{x\in\mathbb{X}:\partial f(x)\neq\emptyset\}\) if there exist \(\eta\in(0,+\infty]\), a neighborhood \(U\) of \(\bar{x}\) and a continuous concave function \(\varphi:[0,\eta)\to[0,+\infty)\) such that
* \(\varphi(0)=0\);
* \(\varphi\) is continuously differentiable and \(\varphi^{\prime}>0\) on \((0,\eta)\);
* for all \(x\in U\cap[0<f-f(\bar{x})<\eta]\), the following KL inequality holds \[\varphi^{\prime}(f(x)-f(\bar{x}))\cdot\mathrm{dist}(0,\partial f(x))\geq 1.\] (2.20)
A proper lower semicontinuous function \(f\), which has the KL property at every point of \(\mathrm{dom}\partial f\), is called a _KL function_. When \(\varphi(s)=\sigma s^{1-\theta}\), where \(\sigma\) is a positive constant and \(\theta\in[0,1)\), \(f\) is said to satisfy the _Lojasiewicz property_ with _exponent_\(\theta\).
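As a one-line illustration, the smooth function \(f(x)=\|x\|^{2}\) satisfies the KL inequality (2.20) at its critical point \(\bar{x}=0\) with \(\varphi(s)=s^{1/2}\) (i.e., \(\sigma=1\), \(\theta=1/2\)), since

\[\varphi^{\prime}(f(x)-f(\bar{x}))\cdot\operatorname{dist}(0,\partial f(x))=\frac{1}{2\|x\|}\cdot\|2x\|=1\geq 1\qquad\text{for all }x\neq 0.\]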
_Remark 2.8_.: It is known that the KL property is automatically satisfied at any noncritical point \(x\in\mathbb{X}\) with a concave function \(\varphi(s)=\sigma s\) (see [2, Section 3.2]). A very wide class of functions, such as nonsmooth semialgebraic functions, real subanalytic functions, and functions definable in an \(o\)-minimal structure, satisfy the KL property. In particular, for problem (1.1), \(\mathscr{L}\) is considered a KL function if \(f\) and \(h\) are semialgebraic (or, \(f\) is semialgebraic and \(h^{*}\) satisfies a growth condition, see [7, Section 5]). We refer the readers to [1, 2, 7, 20] for more properties and examples of KL functions.
We now establish the global convergence of Algorithm 1.
**Theorem 2.9**.: _Suppose that \(\mathscr{L}\) is a KL function. Let Assumption 1 hold and the sequence \(\{(x^{k},y^{k})\}\) generated by Algorithm 1 be bounded. Then, \((x^{k},y^{k})\) converges to a critical point of \(\mathcal{L}\) and_
\[\sum_{k=1}^{\infty}\|x^{k+1}-x^{k}\|<\infty,\quad\sum_{k=1}^{\infty}\|y^{k+1}-y ^{k}\|<\infty.\]
Proof.: In the proof of Theorem 2.6, it has been shown that
\[\lim_{k\to\infty}\mathscr{L}(z^{k})=\bar{\mathscr{L}}, \tag{2.21}\]
where \(\bar{\mathscr{L}}\) is the constant value of \(\mathcal{L}\) over \(\mathcal{C}\).
If there exists a number \(l_{0}>0\) such that \(\mathscr{L}(z^{l_{0}})=\bar{\mathscr{L}}\), then Lemma 2.4 implies that \(\mathscr{L}(z^{k})=\bar{\mathscr{L}}\) and \(x^{k}=x^{k+1}\) for any \(k\geq l_{0}\). By (2.7), we have \(y^{k}=y^{k+1}\) for any \(k\geq l_{0}\). Thus, \((x^{k},y^{k})=(x^{k+1},y^{k+1})\) for any \(k\geq l_{0}\), which proves the claim.
Otherwise, since the sequence \(\{\mathscr{L}(z^{k})\}\) is nonincreasing by Lemma 2.4, it follows that \(\mathscr{L}(z^{k})>\bar{\mathscr{L}}\) for any \(k>0\). Relation (2.21) ensures that for any \(\eta>0\), there exists an integer \(l_{1}>0\) such that
\[\mathscr{L}(z^{k})<\bar{\mathscr{L}}+\eta\]
for any \(k\geq l_{1}\). Let \(\varOmega\) be the set of cluster points of \(\{z^{k}\}\). By the same line as the proof of Theorem 2.6 (ii) and (iv), we have that the function \(\mathscr{L}\) is constant on the nonempty compact set \(\varOmega\) and \(\operatorname{dist}(z^{k},\varOmega)\to 0\) as \(k\to\infty\). Thus, for any \(\varepsilon>0\), there exists \(l_{2}>0\) such that for \(k\geq l_{2}\),
\[\operatorname{dist}(z^{k},\varOmega)<\varepsilon.\]
Let \(K_{0}:=\max\{l_{1},l_{2}\}\). By the above discussion, one has that \(z^{k}\in\{z:\operatorname{dist}(z,\varOmega)<\varepsilon\}\cap[\bar{\mathscr{ L}}<\mathscr{L}<\bar{\mathscr{L}}+\eta]\) for all \(k\geq K_{0}\). Since \(\mathscr{L}\) is a KL function, from the uniformized KL property ([7, Lemma 6]), there exists a continuous concave function \(\varphi\) such that for all \(k\geq K_{0}\),
\[\varphi^{\prime}(\mathscr{L}(z^{k})-\bar{\mathscr{L}})\cdot\operatorname{ dist}(0,\partial\mathscr{L}(z^{k}))\geq 1. \tag{2.22}\]
Using the concavity of \(\varphi\) yields that
\[\varphi(\mathscr{L}(z^{k+1})-\bar{\mathscr{L}})\leq\varphi(\mathscr{L}(z^{k}) -\bar{\mathscr{L}})+\varphi^{\prime}(\mathscr{L}(z^{k})-\bar{\mathscr{L}}) \cdot(\mathscr{L}(z^{k+1})-\mathscr{L}(z^{k})). \tag{2.23}\]
Lemma 2.5 implies that
\[\operatorname{dist}(0,\partial\mathscr{L}(z^{k}))\leq\gamma(\|x^{k}-x^{k-1}\|+ \|x^{k+1}-x^{k}\|), \tag{2.24}\]
where \(\gamma:=\max\{\gamma_{1},\gamma_{2}\}\). Combining (2.22), (2.23), (2.24) with Lemma 2.4, we obtain that \(\mathcal{M}_{m,n}:=\varphi(\mathscr{L}(z^{m})-\bar{\mathscr{L}})-\varphi( \mathscr{L}(z^{n})-\bar{\mathscr{L}})\) satisfies
\[\mathcal{M}_{k,k+1}\geq\varphi^{\prime}(\mathscr{L}(z^{k})-\bar{\mathscr{L}}) \cdot(\mathscr{L}(z^{k})-\mathscr{L}(z^{k+1}))\geq\frac{\mathscr{L}(z^{k})- \mathscr{L}(z^{k+1})}{\operatorname{dist}(0,\partial\mathscr{L}(z^{k}))} \geq\ \frac{c(\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2})}{\gamma(\|x^{k+1}-x^{k}\|+ \|x^{k}-x^{k-1}\|)},\]
which is rewritten as
\[\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2}\leq\frac{\gamma}{c}\mathcal{M}_{k,k+1}(\|x^{k}-x^{k-1}\|+\|x^{k+1}-x^{k}\|). \tag{2.25}\]
This further indicates that
\[\|x^{k+1}-x^{k}\|\leq\sqrt{\frac{\gamma}{c}\mathcal{M}_{k,k+1}(\|x^{k}-x^{k-1} \|+\|x^{k+1}-x^{k}\|)}\leq\frac{\gamma}{c}\mathcal{M}_{k,k+1}+\frac{1}{4}(\|x^ {k}-x^{k-1}\|+\|x^{k+1}-x^{k}\|),\]
which is equivalent to
\[\|x^{k+1}-x^{k}\|\leq\frac{2\gamma}{c}\mathcal{M}_{k,k+1}+\frac{1}{2}(\|x^{k}- x^{k-1}\|-\|x^{k+1}-x^{k}\|).\]
Summing up from \(k=K_{0}\) to \(n\) with \(n>K_{0}\), it follows that
\[\sum_{k=K_{0}}^{n}\|x^{k+1}-x^{k}\|\leq\frac{2\gamma}{c}\mathcal{M}_{K_{0},n+1}+ \frac{1}{2}\|x^{K_{0}}-x^{K_{0}-1}\|. \tag{2.26}\]
Similarly, from (2.25) we also have
\[\sum_{k=K_{0}}^{n}\|x^{k}-x^{k-1}\|\leq\frac{2\gamma}{c}\mathcal{M}_{K_{0},n+1} +\frac{1}{2}\|x^{n+1}-x^{n}\|. \tag{2.27}\]
Summing (2.26) and (2.27), we obtain
\[\sum_{k=K_{0}}^{n}\left(\|x^{k+1}-x^{k}\|+\|x^{k}-x^{k-1}\|\right) \leq\frac{4\gamma}{c}\mathcal{M}_{K_{0},n+1}+\frac{1}{2}\|x^{K_{0} }-x^{K_{0}-1}\|+\frac{1}{2}\|x^{n+1}-x^{n}\| \tag{2.28}\] \[\leq\frac{4\gamma}{c}\varphi(\mathscr{L}(z^{K_{0}})-\bar{ \mathscr{L}})+\frac{1}{2}\|x^{K_{0}}-x^{K_{0}-1}\|+\frac{1}{2}\|x^{n+1}-x^{n}\|,\]
where the second inequality follows from the fact that \(\varphi>0\) over \((0,\eta)\). Let \(n\to\infty\) in (2.28), by the first term of (2.17) we have
\[\sum_{k=K_{0}}^{\infty}\|x^{k+1}-x^{k}\|<+\infty,\]
which, together with (2.7), implies
\[\sum_{k=K_{0}}^{\infty}\|y^{k+1}-y^{k}\|<+\infty.\]
These two inequalities imply that \((x^{k},y^{k})\) is a Cauchy sequence by the same line of analysis as [7, Theorem 1 (ii)]. Thus, the sequence \((x^{k},y^{k})\) converges to a limit \((\bar{x},\bar{y})\) that is a critical point of \(\mathcal{L}\) by Theorem 2.6 (iii).
The convergence rates of the sequence \(\{(x^{k},y^{k})\}\) in the context of Lojasiewicz exponent are provided in the following theorem which is proved in an analogous way as [1].
**Theorem 2.10**.: _Assume that the sequence \(\{(x^{k},y^{k})\}\) is bounded and \(\mathscr{L}\) is a KL function with the Lojasiewicz exponent \(\theta\). Let \((\bar{x},\bar{y})\) be the limit of \((x^{k},y^{k})\). Then, under Assumption 1 the following estimations hold:_
1. _if_ \(\theta=0\)_, the sequence_ \(\{(x^{k},y^{k})\}\) _converges in finite steps;_
2. _if_ \(\theta\in(0,\frac{1}{2}]\)_, then there exist constants_ \(\nu>0\)_,_ \(0<\tau<1\) _and a positive integer_ \(K\) _such that for_ \(k\geq K\)_,_ \[\|x^{k}-\bar{x}\|\leq\nu\tau^{k-K},\quad\|y^{k}-\bar{y}\|\leq\nu^{\prime}\tau^ {k-K},\] _where_ \(\nu^{\prime}:=\nu(1/\alpha+L)/\hat{\lambda}\) _and_ \(\hat{\lambda}\) _is given in (_2.8_);_
3. _if_ \(\theta\in(\frac{1}{2},1)\)_, then there exist a constant_ \(\mu>0\) _and a positive integer_ \(\bar{K}\) _such that for_ \(k\geq\bar{K}\)_,_ \[\|x^{k}-\bar{x}\|\leq\mu k^{-\frac{1-\theta}{2\theta-1}},\quad\|y^{k}-\bar{y} \|\leq\mu^{\prime}k^{-\frac{1-\theta}{2\theta-1}},\] _where_ \(\mu^{\prime}:=\mu(1/\alpha+L)/\hat{\lambda}\)_._
Proof.: Consider \(\theta=0\) and let \(K_{1}:=\max\{k\in\mathbb{N}:x^{k+1}\neq x^{k}\}\). We now show that \(K_{1}\) is a finite number. On the contrary, assume that \(K_{1}\) is sufficiently large such that (2.22) holds for all \(k\geq K_{1}\). Note that \(\varphi(s)=\sigma s\) when \(\theta=0\), then (2.22) and (2.24) read
\[\gamma(\|x^{k}-x^{k-1}\|+\|x^{k+1}-x^{k}\|)\geq\operatorname{dist}(0,\partial \mathscr{L}(z^{k}))\geq\frac{1}{\sigma},\ k\geq K_{1},\]
which, together with Lemma 2.4 and \(a^{2}+b^{2}\geq(a+b)^{2}/2\), yields that
\[\mathscr{L}(z^{k+1})\leq\mathscr{L}(z^{k})-c(\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1 }\|^{2})\leq\mathscr{L}(z^{k})-\frac{c}{2\gamma^{2}\sigma^{2}}.\]
Let \(k\to\infty\). In the proof of Theorem 2.6, it has been shown that \(\lim_{k\to\infty}\mathscr{L}(z^{k})=\bar{\mathscr{L}}=\mathcal{L}(\bar{x}, \bar{y})\), consequently,
\[\mathcal{L}(\bar{x},\bar{y})\leq\mathcal{L}(\bar{x},\bar{y})-\frac{c}{2\gamma ^{2}\sigma^{2}},\]
which is contradictory. Therefore, \(K_{1}\) is a finite number and \(\{x^{k}\}\) converges in finite steps. From (2.7) and (2.8), we attain
\[\|y^{k+1}-y^{k}\|\leq\frac{1/\alpha+L}{\hat{\lambda}}\|x^{k+1}-x^{k}\|+\frac{ 1}{\alpha\hat{\lambda}}\|x^{k+2}-x^{k+1}\|\leq\frac{1/\alpha+L}{\hat{\lambda} }\left(\|x^{k+1}-x^{k}\|+\|x^{k+2}-x^{k+1}\|\right).\]
Hence, \(\{y^{k}\}\) also converges in finite steps and item (i) holds.
Let \(\Delta_{k}:=\sum_{q=k}^{\infty}\|x^{q+1}-x^{q}\|+\|x^{q}-x^{q-1}\|\). The results in Theorem 2.9 state that \(\Delta_{k}<+\infty\) for any \(k\geq 1\) and the sequence \(\{(x^{k},y^{k})\}\) converges to \((\bar{x},\bar{y})\) that is a critical point of \(\mathcal{L}\). The triangle inequality implies that \(\|x^{k}-\bar{x}\|\leq\Delta_{k}\) and
\[\|y^{k}-\bar{y}\|\leq\sum_{q=k}^{\infty}\|y^{q+1}-y^{q}\|\leq\frac{1/\alpha+L }{\hat{\lambda}}\Delta_{k}.\]
Therefore, it is sufficient to establish the estimations in (ii) and (iii) for \(\Delta_{k}\). If \(\Delta_{k}=0\) for some \(k\), it follows that \(\|x^{q+1}-x^{q}\|=0\) for \(q\geq k\) and \(\{(x^{k},y^{k})\}\) converges in finite steps. Thus, without loss of generality we assume \(\Delta_{k}>0\) for any \(k\geq 1\).
For \(\theta\in(0,1)\), noting that \(\varphi(s)=\sigma s^{1-\theta}\), letting \(n\to\infty\) in (2.28) and using (2.22), we have
\[\Delta_{k+1}\leq\Delta_{k} \leq\frac{4\gamma\sigma}{c}(\mathscr{L}(z^{k})-\mathcal{L}(\bar{x},\bar{y}))^{1-\theta}+\frac{1}{2}\|x^{k}-x^{k-1}\|\] \[\leq\frac{4\gamma\sigma^{\frac{1}{\theta}}}{c}((1-\theta)\text{dist}(0,\partial\mathscr{L}(z^{k})))^{\frac{1-\theta}{\theta}}+\frac{1}{2}\|x^{k}-x^{k-1}\|\]
for any \(k\geq K_{0}\). The above inequality, together with the definition of \(\Delta_{k}\) and (2.24), yields that
\[\Delta_{k+1} \leq\frac{4\gamma\sigma^{\frac{1}{\theta}}}{c}[\gamma(1-\theta)(\Delta_{k}-\Delta_{k+1})]^{\frac{1-\theta}{\theta}}+\frac{1}{2}(\Delta_{k}-\Delta_{k+1}) \tag{2.29}\] \[=\gamma^{\prime}(\Delta_{k}-\Delta_{k+1})^{\frac{1-\theta}{\theta}}+\frac{1}{2}(\Delta_{k}-\Delta_{k+1}),\]
where \(\gamma^{\prime}:=\frac{4}{c}(\gamma\sigma)^{\frac{1}{\theta}}(1-\theta)^{\frac{1-\theta}{\theta}}\).
Consider \(\theta\in(0,\frac{1}{2}]\). Note that \(0<\Delta_{k}-\Delta_{k+1}<1\) for all \(k\geq K\), where \(K\geq K_{0}\) is large enough. Then, from (2.29) and \(\frac{1-\theta}{\theta}\geq 1\) it follows
\[\Delta_{k+1}\leq(\gamma^{\prime}+\frac{1}{2})(\Delta_{k}-\Delta_{k+1}).\]
By rearranging the above inequality and setting \(\tau:=(\gamma^{\prime}+\frac{1}{2})/(\gamma^{\prime}+\frac{3}{2})<1\), one has
\[\Delta_{k+1}\leq\tau\Delta_{k}.\]
Therefore, for any \(k\geq K\), it holds
\[\Delta_{k}\leq\nu\tau^{k-K},\]
where \(\nu:=\Delta_{K}\) is a finite number. Item (ii) is derived.
Consider \(\theta\in(\frac{1}{2},1)\). Let \(\bar{K}\geq K_{0}\) be large enough such that \(0<\Delta_{k}-\Delta_{k+1}<1\) for all \(k\geq\bar{K}\). Noting that \(0<\frac{1-\theta}{\theta}<1\), we have from (2.29) that
\[\Delta_{k+1}\leq(\gamma^{\prime}+\frac{1}{2})(\Delta_{k}-\Delta_{k+1})^{\frac{ 1-\theta}{\theta}}\]
for all \(k\geq\bar{K}\). Following the same line of the proof of [1, (14)], there exists a constant \(\mu_{1}>0\) such that for all \(k\geq\bar{K}\),
\[(\Delta_{k+1})^{\nu_{1}}-(\Delta_{k})^{\nu_{1}}\geq\mu_{1},\]
where \(\nu_{1}:=(1-2\theta)/(1-\theta)<0\). Summing up from \(k=\bar{K}\) to \(n\) for any \(n\geq\bar{K}\) yields that
\[(\Delta_{n})^{\nu_{1}}\geq(n-\bar{K})\mu_{1}+(\Delta_{\bar{K}})^{\nu_{1}},\]
which, together with \(\nu_{1}<0\), implies that for any \(n\geq\bar{K}\),
\[\Delta_{n}\leq[(n-\bar{K})\mu_{1}+(\Delta_{\bar{K}})^{\nu_{1}}]^{\frac{1}{\nu_ {1}}}\leq\mu n^{\frac{1}{\nu_{1}}},\]
where \(\mu\) is a positive constant. Item (iii) is obtained.
## 3 SPPDG for nonconvex finite-sum optimization
In this section, we consider solving the nonconvex finite-sum optimization problem (1.2). By combining Algorithm 1 with a certain stochastic gradient estimator, we present a stochastic variant of PPDG, named SPPDG, and establish its almost sure convergence as well as convergence rates.
### The SPPDG algorithm
```
1 Initialization: Choose an initial point \((x^{0},y^{0})\in\mathbb{X}\times\mathbb{Y}^{*}\), a constant \(\alpha>0\) and a positive definite matrix \(M\).
2for\(k=0,1,2,\ldots\)do
3 Update \(x^{k},y^{k}\) as follows, \[\left\{\begin{aligned} x^{k+1}&=x^{k}-\alpha(\widetilde{\nabla}f_{k}+A^{T}y^{k}),&&\text{(3.1a)}\\ y^{k+1}&=\mathrm{prox}_{h^{*}}^{M}(y^{k}+M^{-1}A(2x^{k+1}-x^{k})),&&\text{(3.1b)}\end{aligned}\right.\] where \(\widetilde{\nabla}f_{k}\) is a stochastic gradient estimator of \(\nabla f(x^{k})\).
4 Set \(k\gets k+1\).
```
**Algorithm 2**SPPDG
In Algorithm 2, we summarize the detail of SPPDG for the nonconvex finite-sum optimization problem
\[\min_{x\in\mathbb{X}}\ f(x)+h(Ax),\]
where
\[f(x):=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x).\]
In many applications, the number of components \(N\) can be very large, which makes the computation of the full gradient \(\nabla f(x)=\frac{1}{N}\sum_{i=1}^{N}\nabla f_{i}(x)\) challenging. To circumvent this difficulty, we apply the stochastic gradient estimator \(\widetilde{\nabla}f_{k}\) to approximate \(\nabla f(x^{k})\) in (3.1a). Hence, Algorithm 2 can be viewed as a stochastic approximate variant of Algorithm 1.
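A minimal Python sketch of the loop (3.1), with the estimator abstracted as a callable, is given below. The demo instance is an illustrative convex least-squares finite sum with \(h=\lambda\|\cdot\|_{1}\) and \(A=I\) (our choice, not from the paper); for readability it uses the exact gradient, which trivially satisfies the variance reduction conditions of Definition 3.1 below (with \(\Lambda_{1}^{k}=\Lambda_{2}^{k}=0\)). A genuinely stochastic SAGA estimator that can be passed in its place is sketched after Remark 3.2:

```python
import numpy as np

def sppdg(stoch_grad, A, prox_hstar_M, x0, y0, alpha, n_iter=3000):
    # One loop of Algorithm 2; stoch_grad(x) returns an estimate of grad f(x),
    # prox_hstar_M(y, v) returns prox^M_{h*}(y + M^{-1} v) with M = alpha*A A^T.
    x, y = x0.copy(), y0.copy()
    for _ in range(n_iter):
        x_new = x - alpha * (stoch_grad(x) + A.T @ y)   # (3.1a)
        y = prox_hstar_M(y, A @ (2 * x_new - x))        # (3.1b)
        x = x_new
    return x, y

rng = np.random.default_rng(0)
N, n = 200, 20
D, b = rng.standard_normal((N, n)), rng.standard_normal(N)
full_grad = lambda x: D.T @ (D @ x - b) / N             # exact gradient of the toy f
L = np.linalg.norm(D.T @ D, 2) / N
alpha, lam = 1.0 / (4 * L), 0.1
prox = lambda y, v: np.clip(y + v / alpha, -lam, lam)   # A = I, so M = alpha*I
x, y = sppdg(full_grad, np.eye(n), prox, np.zeros(n), np.zeros(n), alpha)
print(np.linalg.norm(full_grad(x) + y))                 # stationarity residual
```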
Let \(\mathcal{F}_{k}\) be the \(\sigma\)-field generated by the random variables of the first \(k\) iterations of Algorithm 2 and \(\mathbb{E}_{k}\) be the expectation conditioned on \(\mathcal{F}_{k}\). Clearly, the iterate \((x^{k},y^{k})\) is \(\mathcal{F}_{k}\)-measurable since \(x^{k}\) and \(y^{k}\) are both dependent on the random information \(\{\widetilde{\nabla}f_{0},\widetilde{\nabla}f_{1},\ldots,\widetilde{\nabla}f_ {k-1}\}\) of the first \(k\) iterations.
In this paper, we will mainly focus on the variance reduced stochastic gradient estimator \(\widetilde{\nabla}f_{k}\) which is formally defined in [6, 22].
**Definition 3.1**.: The stochastic gradient estimator \(\widetilde{\nabla}f_{k}\) is said to be _variance reduced_, if there exist constants \(\sigma_{1},\sigma_{2},\sigma_{\lambda}>0\), \(\rho\in(0,1]\), and \(\mathcal{F}_{k}\)-measurable nonnegative random variables \(\Lambda_{1}^{k},\Lambda_{2}^{k}\) of the form \(\Lambda_{1}^{k}=\sum_{i=1}^{t}(v_{k}^{i})^{2},\Lambda_{2}^{k}=\sum_{i=1}^{t}v _{k}^{i}\) for some nonnegative random variables \(v_{k}^{i}\in\mathbb{R}\), such that for any \(k\geq 1\):
1. The estimator \(\widetilde{\nabla}f_{k}\) satisfies \[\mathbb{E}_{k}[\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2}]\leq\Lambda_{1 }^{k}+\sigma_{1}(\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^{2}]+\|x^{k}-x^{k-1}\|^{2})\] (3.2) and \[\mathbb{E}_{k}[\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|]\leq\Lambda_{2}^{k }+\sigma_{2}(\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|]+\|x^{k}-x^{k-1}\|).\] (3.3)
2. The sequence \(\{\Lambda_{1}^{k}\}\) decays geometrically \[\mathbb{E}_{k}[\Lambda_{1}^{k+1}]\leq(1-\rho)\Lambda_{1}^{k}+\sigma_{\Lambda} (\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^{2}]+\|x^{k}-x^{k-1}\|^{2}).\] (3.4)
3. If \(\{x^{k}\}\) satisfies \(\lim_{k\to\infty}\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]=0\), then \(\mathbb{E}[\Lambda_{1}^{k}]\to 0\) and \(\mathbb{E}[\Lambda_{2}^{k}]\to 0\) as \(k\to\infty\).
_Remark 3.2_.: A variety of popular stochastic gradient estimators satisfy the conditions in Definition 3.1, for example, SAGA, SARAH, SAG and SVRG. Combining (3.2) and (3.4), for any \(k\geq 1\) we have the following bound,
\[\mathbb{E}_{k}[\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2}]\leq\frac{1}{ \rho}(\Lambda_{1}^{k}-\mathbb{E}_{k}[\Lambda_{1}^{k+1}])+\kappa(\mathbb{E}_{ k}[\|x^{k+1}-x^{k}\|^{2}]+\|x^{k}-x^{k-1}\|^{2}), \tag{3.5}\]
where
\[\kappa:=\sigma_{1}+\frac{\sigma_{\Lambda}}{\rho}.\]
The readers are referred to [6, 22] for a detailed description on examples and properties of the variance reduced stochastic gradient estimator.
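For concreteness, here is a minimal sketch of the SAGA estimator (one of the estimators named above); the constants \(\sigma_{1},\sigma_{2},\sigma_{\Lambda},\rho\) of Definition 3.1 for SAGA are established in [6, 22] and are not re-derived here. An instance can be passed as `stoch_grad` to the SPPDG sketch in Subsection 3.1:

```python
import numpy as np

class SAGA:
    """Standard SAGA gradient estimator for f = (1/N) sum_i f_i."""

    def __init__(self, grads, x0, seed=1):
        self.grads = grads                              # grads[i](x) = grad f_i(x)
        self.table = np.stack([g(x0) for g in grads])   # stored component gradients
        self.mean = self.table.mean(axis=0)
        self.rng = np.random.default_rng(seed)

    def __call__(self, x):
        i = self.rng.integers(len(self.grads))          # sample one component
        g_new = self.grads[i](x)
        est = g_new - self.table[i] + self.mean         # unbiased estimate of grad f(x)
        self.mean += (g_new - self.table[i]) / len(self.grads)
        self.table[i] = g_new
        return est

# usage on the least-squares toy: grads[i](x) = (d_i^T x - b_i) d_i
rng = np.random.default_rng(0)
D, b = rng.standard_normal((200, 20)), rng.standard_normal(200)
grads = [lambda x, d=D[i], bi=b[i]: (d @ x - bi) * d for i in range(200)]
estimator = SAGA(grads, np.zeros(20))   # pass as stoch_grad in the sppdg sketch
```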
In the rest of this section, we assume that \(\widetilde{\nabla}f_{k}\) in Algorithm 2 is some variance reduced gradient estimator satisfying the conditions of Definition 3.1, and let \(M\) be the matrix associated with \(\alpha AA^{T}\). We shall analyze the convergence of \((x^{k},y^{k})\) generated by Algorithm 2 under the following assumption.
**Assumption 2**.: _The assumption is the same as Assumption 1 except that Assumption 1(i) is replaced by: the functions \(f_{i}\), \(i=1,\cdots,N\) are \(L\)-smooth; and \(\mathcal{L}\) in Assumption 1(ii) is replaced by \(\mathcal{L}_{s}\) with_
\[\mathcal{L}_{s}(x,y)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+\langle y,Ax\rangle-h^ {*}(y).\]
Unsurprisingly, part of the convergence analysis of Algorithm 2 proceeds similarly to that of Algorithm 1 in Section 2. Where no confusion can arise, some notations from Section 2 will be reused in this section.
### Auxiliary lemmas
Let us first define the following Lyapunov function
\[\mathscr{L}_{s}(x,y,u,v,w):=\mathcal{L}_{s}(x,y)-a\|x-u\|^{2}+b\|x-v\|^{2}+c \|v-w\|^{2},\ \forall x,u,v,w\in\mathbb{X},\ y\in\mathbb{Y}^{*}. \tag{3.6}\]
Here, \(a,b,c\) are constants given by
\[a:=e_{0}+\frac{2\delta_{2}}{\alpha}+2\delta_{2}\alpha\kappa,\ b:=e_{0}+\frac {9\alpha\kappa}{2\delta_{2}}+2\delta_{2}\alpha\kappa+\frac{\kappa}{2\delta_{1 }}+\frac{3\alpha L^{2}}{2\delta_{2}},\ c:=\frac{3\alpha\kappa}{2\delta_{2}},\]
where
\[e_{0}:=\frac{1}{3\alpha}-\frac{\delta_{1}+L}{6}-\frac{\kappa}{3\delta_{1}}- \frac{4\delta_{2}L}{3}-\frac{4\delta_{2}}{3\alpha}-\frac{2\delta_{2}\alpha L^ {2}}{3}-\frac{\alpha L^{2}}{2\delta_{2}}-\frac{2\alpha\kappa}{\delta_{2}}- \frac{8\delta_{2}\alpha\kappa}{3}. \tag{3.7}\]
In addition, \(\delta_{1},\delta_{2}>0\) are properly selected constants such that \(e_{0}>0\).
In this subsection, we mainly aim to develop the descent property corresponding to the Lyapunov function \(\mathscr{L}_{s}\) in expectation. Following the same line as the proof of Lemma 2.2, we first derive the relation between \(\operatorname{crit}\!\mathcal{L}_{s}\) and \(\operatorname{crit}\!\mathscr{L}_{s}\).
**Lemma 3.3**.: _For any \(x,u,v,w\in\mathbb{X}\), \(y\in\mathbb{Y}^{*}\), \((x,y,u,v,w)\in\mathrm{crit}\mathscr{L}_{s}\) if and only if \(u=v=w=x\) and \((x,y)\in\mathrm{crit}\mathcal{L}_{s}\)._
The following lemma gives a connection between the two sequences \(\{y^{k}\}\) and \(\{x^{k}\}\).
**Lemma 3.4**.: _Suppose Assumption 2 holds. Then, for \(k\geq 1\),_
\[\mathbb{E}_{k}[\|A^{T}(y^{k+1}-y^{k})\|^{2}]\leq 4\left(\frac{1}{ \alpha^{2}}+\kappa\right)\mathbb{E}_{k}[\|x^{k+2}-x^{k+1}\|^{2}]+4\left(\left( \frac{1}{\alpha}+L\right)^{2}+2\kappa\right)\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^ {2}]\] \[\qquad\qquad\qquad\qquad+4\kappa\|x^{k}-x^{k-1}\|^{2}+\frac{4}{ \rho}\mathbb{E}_{k}[\Lambda_{1}^{k+1}-\Lambda_{1}^{k+2}]+\frac{4}{\rho}( \Lambda_{1}^{k}-\mathbb{E}_{k}[\Lambda_{1}^{k+1}]).\]
Proof.: Using (3.1a) twice yields that
\[\|A^{T}(y^{k+1}-y^{k})\| =\left\|\left(\frac{x^{k+1}-x^{k+2}}{\alpha}-\widetilde{\nabla} f_{k+1}\right)-\left(\frac{x^{k}-x^{k+1}}{\alpha}-\widetilde{\nabla}f_{k} \right)\right\| \tag{3.8}\] \[\leq\frac{1}{\alpha}\|x^{k+2}-x^{k+1}\|+\frac{1}{\alpha}\|x^{k+1} -x^{k}\|+\|\widetilde{\nabla}f_{k+1}-\widetilde{\nabla}f_{k}\|.\]
Since \(f\) is \(L\)-smooth, we have
\[\|\widetilde{\nabla}f_{k+1}-\widetilde{\nabla}f_{k}\| \leq\|\widetilde{\nabla}f_{k+1}-\nabla f(x^{k+1})\|+\|\nabla f(x ^{k+1})-\nabla f(x^{k})\|+\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\| \tag{3.9}\] \[\leq\|\widetilde{\nabla}f_{k+1}-\nabla f(x^{k+1})\|+\|\widetilde{ \nabla}f_{k}-\nabla f(x^{k})\|+L\|x^{k+1}-x^{k}\|.\]
Substituting (3.9) into (3.8), one has
\[\|A^{T}(y^{k+1}-y^{k})\|^{2} \leq\frac{4}{\alpha^{2}}\|x^{k+2}-x^{k+1}\|^{2}+4\left(\frac{1}{ \alpha}+L\right)^{2}\|x^{k+1}-x^{k}\|^{2}\] \[\qquad+4\|\widetilde{\nabla}f_{k+1}-\nabla f(x^{k+1})\|^{2}+4\| \widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2}.\]
Finally, taking conditional expectation on both sides and using (3.5), we derive the claim.
For the sake of simplicity, define
\[z^{k}:=(x^{k},y^{k},x^{k+1},x^{k-1},x^{k-2}).\]
Similar to the process of convergence analysis in Section 2, we establish the following critical lemma on the descent property of the Lyapunov function \(\mathscr{L}_{s}\).
**Lemma 3.5**.: _Let Assumption 2 hold. Then, for any \(k\geq 1\) and \(\delta_{1},\delta_{2}>0\),_
\[\mathbb{E}[\mathscr{L}_{s,k+1}^{\Lambda}]+\mathbb{E}[e_{0}(\|x^{k+2}-x^{k+1} \|^{2}+\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2})]\leq\mathbb{E}[\mathscr{ L}_{s,k}^{\Lambda}], \tag{3.10}\]
_where \(e_{0}\) is defined in (3.7),_
\[\mathscr{L}_{s,k}^{\Lambda}:=\mathscr{L}_{s}(z^{k})+e_{1}\Lambda_{1}^{k+1}+e_ {2}\Lambda_{1}^{k}+e_{3}\Lambda_{1}^{k-1},\]
_and_
\[e_{1}:=\frac{2\delta_{2}\alpha}{\rho},\ e_{2}:=\frac{2\delta_{2}\alpha}{\rho}+ \frac{1}{2\delta_{1}\rho}+\frac{3\alpha}{2\delta_{2}\rho},\ e_{3}:=\frac{3 \alpha}{2\delta_{2}\rho}.\]
Proof.: Since the function \(f\) is \(L\)-smooth, we have
\[f(x^{k+1}) \leq f(x^{k})+\langle\nabla f(x^{k}),x^{k+1}-x^{k}\rangle+\frac{L }{2}\|x^{k+1}-x^{k}\|^{2}\] \[=f(x^{k})+\langle\widetilde{\nabla}f_{k},x^{k+1}-x^{k}\rangle+ \langle\nabla f(x^{k})-\widetilde{\nabla}f_{k},x^{k+1}-x^{k}\rangle+\frac{L}{ 2}\|x^{k+1}-x^{k}\|^{2}\] \[\leq f(x^{k})+\langle\frac{x^{k}-x^{k+1}}{\alpha}-A^{T}y^{k},x^{k+ 1}-x^{k}\rangle+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{\nabla}f_{k }\|^{2}+\frac{\delta_{1}+L}{2}\|x^{k+1}-x^{k}\|^{2}\] \[=f(x^{k})-\left(\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}\right)\| x^{k+1}-x^{k}\|^{2}+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{\nabla}f_{k }\|^{2}-\langle y^{k},A(x^{k+1}-x^{k})\rangle,\]
where the second inequality is deduced from (3.1a) and \(\langle x,y\rangle\leq\frac{\delta_{1}}{2}\|x\|^{2}+\frac{1}{2\delta_{1}}\|y\|^{2}\) for any \(\delta_{1}>0\). Together with the convexity of \(h^{*}\) and \(g^{k}\in\partial h^{*}(y^{k})\), it further indicates
\[f(x^{k+1})-h^{*}(y^{k+1})+\langle y^{k+1},Ax^{k+1}\rangle\] \[\leq f(x^{k})-h^{*}(y^{k})+\langle y^{k},Ax^{k}\rangle-\langle y^ {k},Ax^{k}\rangle+\langle y^{k+1},Ax^{k+1}\rangle-\langle y^{k},A(x^{k+1}-x^{ k})\rangle+\langle y^{k}-y^{k+1},g^{k}\rangle\] \[\qquad-\left(\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}\right)\|x^ {k+1}-x^{k}\|^{2}+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{\nabla}f_ {k}\|^{2}\] \[=f(x^{k})-h^{*}(y^{k})+\langle y^{k},Ax^{k}\rangle+\langle y^{k+ 1}-y^{k},Ax^{k+1}-g^{k}\rangle\] \[\qquad-\left(\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}\right)\|x^ {k+1}-x^{k}\|^{2}+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{\nabla}f_ {k}\|^{2}.\]
Recalling the definition of \(\mathcal{L}_{s}\) and substituting (2.4) (let \(k+1=k\)) into the above inequality, one has
\[\mathcal{L}_{s}(x^{k+1},y^{k+1})\leq\mathcal{L}_{s}(x^{k},y^{k})+ \langle y^{k+1}-y^{k},A(x^{k+1}-x^{k}+x^{k-1}-x^{k})+M(y^{k}-y^{k-1})\rangle\] \[\qquad\qquad-\left(\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}\right) \|x^{k+1}-x^{k}\|^{2}+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{\nabla} f_{k}\|^{2}.\]
Using (3.1a) and the fact that \(\langle x,y\rangle\leq\frac{\delta_{2}}{2}\|x\|^{2}+\frac{1}{2\delta_{2}}\|y\|^{2}\) for the second term of the right-hand side, and letting \(M=\alpha AA^{T}\), we have
\[\mathcal{L}_{s}(x^{k+1},y^{k+1})\leq\mathcal{L}_{s}(x^{k},y^{k}) -\left(\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}\right)\|x^{k+1}-x^{k}\|^{2}+ \frac{\delta_{2}\alpha}{2}\|A^{T}(y^{k+1}-y^{k})\|^{2}\] \[\qquad\qquad+\frac{1}{2\delta_{1}}\|\nabla f(x^{k})-\widetilde{ \nabla}f_{k}\|^{2}+\frac{\alpha}{2\delta_{2}}\|\widetilde{\nabla}f_{k-1}- \widetilde{\nabla}f_{k}\|^{2},\]
which, together with (3.9) (take \(k=k-1\)), yields that
\[\mathcal{L}_{s}(x^{k+1},y^{k+1}) \tag{3.11}\] \[\leq\mathcal{L}_{s}(x^{k},y^{k})-\left(\frac{1}{\alpha}-\frac{ \delta_{1}+L}{2}\right)\|x^{k+1}-x^{k}\|^{2}+\frac{\delta_{2}\alpha}{2}\|A^{T }(y^{k+1}-y^{k})\|^{2}+\frac{3\alpha L^{2}}{2\delta_{2}}\|x^{k}-x^{k-1}\|^{2}\] \[\qquad+\left(\frac{1}{2\delta_{1}}+\frac{3\alpha}{2\delta_{2}} \right)\|\nabla f(x^{k})-\widetilde{\nabla}f_{k}\|^{2}+\frac{3\alpha}{2\delta _{2}}\|\nabla f(x^{k-1})-\widetilde{\nabla}f_{k-1}\|^{2}.\]
Taking conditional expectation on both sides of (3.11), and applying Lemma 3.4 as well as (3.5), we have
\[\mathbb{E}_{k-1}[\mathcal{L}_{s}(x^{k+1},y^{k+1})]\] \[\leq\mathbb{E}_{k-1}[\mathcal{L}_{s}(x^{k},y^{k})]-\left(\frac{1} {\alpha}-\frac{\delta_{1}+L}{2}-\frac{\kappa}{2\delta_{1}}-\frac{3\alpha \kappa}{2\delta_{2}}-2\delta_{2}\alpha\left(\left(\frac{1}{\alpha}+L\right)^{2 }+2\kappa\right)\right)\mathbb{E}_{k-1}[\|x^{k+1}-x^{k}\|^{2}]\] \[\quad+2\delta_{2}\alpha\left(\frac{1}{\alpha^{2}}+\kappa\right) \mathbb{E}_{k-1}[\|x^{k+2}-x^{k+1}\|^{2}]+\left(2\delta_{2}\alpha\kappa+\frac{ \kappa}{2\delta_{1}}+\frac{3\alpha(L^{2}+2\kappa)}{2\delta_{2}}\right) \mathbb{E}_{k-1}[\|x^{k}-x^{k-1}\|^{2}]\] \[\quad+\frac{3\alpha\kappa}{2\delta_{2}}\|x^{k-1}-x^{k-2}\|^{2}+ \frac{2\delta_{2}\alpha}{\rho}\mathbb{E}_{k-1}[\Lambda_{1}^{k+1}-\Lambda_{1}^{ k+2}]+\left(\frac{2\delta_{2}\alpha}{\rho}+\frac{1}{2\delta_{1}\rho}+\frac{3\alpha}{2 \delta_{2}\rho}\right)\mathbb{E}_{k-1}[\Lambda_{1}^{k}-\Lambda_{1}^{k+1}]\] \[\quad+\frac{3\alpha}{2\delta_{2}\rho}(\Lambda_{1}^{k-1}-\mathbb{E }_{k-1}[\Lambda_{1}^{k}]).\]
Therefore, taking expectation on both sides implies that
\[\mathbb{E}[\mathcal{L}_{s}(x^{k+1},y^{k+1})]\leq\mathbb{E}[ \mathcal{L}_{s}(x^{k},y^{k})]-e_{4}\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]+e_{5} \mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}]+e_{6}\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]\] \[\qquad+e_{7}\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]+e_{1}\mathbb{E}[ \Lambda_{1}^{k+1}-\Lambda_{1}^{k+2}]+e_{2}\mathbb{E}[\Lambda_{1}^{k}-\Lambda_{1}^ {k+1}]+e_{3}\mathbb{E}[\Lambda_{1}^{k-1}-\Lambda_{1}^{k}],\]
where
\[e_{1}=\frac{2\delta_{2}\alpha}{\rho},\ e_{2}=\frac{2\delta_{2} \alpha}{\rho}+\frac{1}{2\delta_{1}\rho}+\frac{3\alpha}{2\delta_{2}\rho},\ e_{3}=\frac{3\alpha}{2\delta_{2}\rho},\ e_{4}=\frac{1}{\alpha}-\frac{\delta_{1}+L}{2}-\frac{ \kappa}{2\delta_{1}}-\frac{3\alpha\kappa}{2\delta_{2}}-2\delta_{2}\alpha((\frac {1}{\alpha}+L)^{2}+2\kappa),\] \[e_{5}=2\delta_{2}\alpha(\frac{1}{\alpha^{2}}+\kappa),\ e_{6}=2 \delta_{2}\alpha\kappa+\frac{\kappa}{2\delta_{1}}+\frac{3\alpha(L^{2}+2\kappa)}{2 \delta_{2}},\ e_{7}=\frac{3\alpha\kappa}{2\delta_{2}}.\]
Recalling the definitions of \(a,b,c,e_{0}\), we have \(e_{0}=\frac{1}{3}(e_{4}-e_{5}-e_{6}-e_{7})\) and \(a=e_{0}+e_{5},b=e_{0}+e_{6}+e_{7},c=e_{7}\), and thus
\[\mathbb{E}[\mathscr{L}_{s,k+1}^{\Lambda}+e_{0}(\|x^{k+2}-x^{k+1}\|^{2}+\|x^{k+ 1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2})]\leq\mathbb{E}[\mathscr{L}_{s,k}^{ \Lambda}].\]
The proof is completed.
_Remark 3.6_.: As stated previously, the constant \(e_{0}\) is guaranteed to be positive through a careful selection of \(\delta_{1},\delta_{2}\) and the stepsize \(\alpha\). For example, letting \(\delta_{1}=1\), \(\delta_{2}=\frac{1}{6}\) and \(\alpha\in(0,\frac{1}{2(3+7L+6\kappa)})\), we have \(e_{0}>0\) by a straightforward calculation. Thus, Lemma 3.5 indicates that the sequence \(\{\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]\}\) is nonincreasing.
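As a quick numerical sanity check of this remark, one can evaluate (3.7) directly; in the sketch below the values \(L=\kappa=1\) are illustrative choices (so the stepsize interval is \((0,1/32)\)), not values taken from the text.

```python
# Sanity check of Remark 3.6: with delta1 = 1, delta2 = 1/6 and alpha
# inside (0, 1/(2(3 + 7L + 6kappa))), the constant e0 of (3.7) is positive.
L, kappa = 1.0, 1.0                            # illustrative problem constants
delta1, delta2 = 1.0, 1.0 / 6.0
alpha = 0.9 / (2 * (3 + 7 * L + 6 * kappa))    # strictly inside the interval

e0 = (1 / (3 * alpha) - (delta1 + L) / 6 - kappa / (3 * delta1)
      - 4 * delta2 * L / 3 - 4 * delta2 / (3 * alpha)
      - 2 * delta2 * alpha * L ** 2 / 3 - alpha * L ** 2 / (2 * delta2)
      - 2 * alpha * kappa / delta2 - 8 * delta2 * alpha * kappa / 3)
print(e0)   # approximately 2.62 > 0 for these parameter values
```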
Define
\[d^{k}:=(d_{1}^{k},d_{2}^{k},d_{3}^{k},d_{4}^{k},d_{5}^{k}) \tag{3.12}\]
with
\[d_{1}^{k} :=\tfrac{1}{N}\sum_{i=1}^{N}\nabla f_{i}(x^{k})+A^{T}y^{k}-2a(x^{ k}-x^{k+1})+2b(x^{k}-x^{k-1}),\] \[d_{2}^{k} :=Ax^{k}+M(y^{k}-y^{k-1})-A(2x^{k}-x^{k-1}),\] \[d_{3}^{k} :=-2a(x^{k+1}-x^{k}),\ d_{4}^{k}:=2b(x^{k-1}-x^{k})+2c(x^{k-1}-x^{ k-2}),\ d_{5}^{k}:=2c(x^{k-2}-x^{k-1}).\]
Noting that \(g^{k}=-M(y^{k}-y^{k-1})+A(2x^{k}-x^{k-1})\in\partial h^{*}(y^{k})\) from (2.4), we can easily check that \(d^{k}\in\partial\mathscr{L}_{s}(z^{k})\). In the following lemma, we derive a bound of \(d^{k}\).
**Lemma 3.7**.: _Let Assumption 2 be satisfied. It holds that_
\[\mathbb{E}_{k}[\|d^{k}\|^{2}]\leq\gamma_{3}\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^{ 2}]+\gamma_{4}(\|x^{k}-x^{k-1}\|^{2}+\|y^{k}-y^{k-1}\|^{2})+\gamma_{5}\|x^{k-1 }-x^{k-2}\|^{2}+3\Lambda_{1}^{k}, \tag{3.13}\]
_where \(\gamma_{3}:=\frac{3}{\alpha^{2}}+\frac{12a}{\alpha}+16a^{2}+3\sigma_{1}\), \(\gamma_{4}:=20b^{2}+3\sigma_{1}+2\|A\|^{2}+2\|M\|^{2}\) and \(\gamma_{5}:=12c^{2}\)._
Proof.: It is sufficient to bound the five components of \(d^{k}\). Firstly, from (3.1a) we have
\[\|d_{1}^{k}\|^{2} =\|\nabla f(x^{k})+A^{T}y^{k}-2a(x^{k}-x^{k+1})+2b(x^{k}-x^{k-1})\|^{2}\] \[\leq\left(\|\nabla f(x^{k})-\widetilde{\nabla}f_{k}\|+\|A^{T}y^{k}+\widetilde{\nabla}f_{k}\|+2a\|x^{k+1}-x^{k}\|+2b\|x^{k}-x^{k-1}\|\right)^{2}\] \[=\left(\|\nabla f(x^{k})-\widetilde{\nabla}f_{k}\|+\left(\frac{1}{\alpha}+2a\right)\|x^{k+1}-x^{k}\|+2b\|x^{k}-x^{k-1}\|\right)^{2}\] \[\leq 3\|\nabla f(x^{k})-\widetilde{\nabla}f_{k}\|^{2}+3\left(\frac{1}{\alpha}+2a\right)^{2}\|x^{k+1}-x^{k}\|^{2}+12b^{2}\|x^{k}-x^{k-1}\|^{2}.\]
Taking conditional expectation on both sides and using (3.2) yield that
\[\mathbb{E}_{k}[\|d_{1}^{k}\|^{2}]\leq 3\left(\frac{1}{\alpha^{2}}+\frac{4a}{ \alpha}+4a^{2}+\sigma_{1}\right)\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^{2}]+3(4b^{2} +\sigma_{1})\|x^{k}-x^{k-1}\|^{2}+3\Lambda_{1}^{k}.\]
For the other four components of \(d^{k}\), it follows that
\[\mathbb{E}_{k}[\|d_{2}^{k}\|^{2}] =\|M(y^{k}-y^{k-1})-A(x^{k}-x^{k-1})\|^{2}\leq 2\|M\|^{2}\cdot\|y^{k}-y ^{k-1}\|^{2}+2\|A\|^{2}\cdot\|x^{k}-x^{k-1}\|^{2},\] \[\mathbb{E}_{k}[\|d_{3}^{k}\|^{2}] =4a^{2}\mathbb{E}_{k}[\|x^{k+1}-x^{k}\|^{2}],\] \[\mathbb{E}_{k}[\|d_{4}^{k}\|^{2}] \leq 8b^{2}\|x^{k}-x^{k-1}\|^{2}+8c^{2}\|x^{k-1}-x^{k-2}\|^{2},\] \[\mathbb{E}_{k}[\|d_{5}^{k}\|^{2}] =4c^{2}\|x^{k-1}-x^{k-2}\|^{2}.\]
Combining these results, we derive the conclusion.
### Convergence analysis
Now, with the help of the auxiliary lemmas established in the previous subsection, we demonstrate that the iterates \(\{(x^{k},y^{k})\}\) of Algorithm 2 exhibit the following elementary convergence property under the assumption that \(\{(x^{k},y^{k})\}\) is bounded almost surely (for short, a.s.). This assumption is also used in [19, 35] for studying stochastic optimization algorithms.
**Proposition 3.8**.: Suppose that \(\{(x^{k},y^{k})\}\) is bounded almost surely. Then, under Assumption 2,
\[\sum_{k=0}^{\infty}\|x^{k+1}-x^{k}\|^{2}<\infty\ \text{a.s.}\quad\text{and} \quad\sum_{k=0}^{\infty}\|y^{k+1}-y^{k}\|^{2}<\infty\ \text{a.s.}\]
Proof.: Summing (3.10) over \(k=1,\ldots,n\) yields that
\[e_{0}\sum_{k=1}^{n}\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}+\|x^{k+1}-x^{k}\|^{2}+\|x ^{k}-x^{k-1}\|^{2}]\leq\mathbb{E}[\mathscr{L}_{s,1}^{\Lambda}]-\mathbb{E}[ \mathscr{L}_{s,n+1}^{\Lambda}]. \tag{3.14}\]
From Assumption 2, \(\mathcal{L}_{s}\) is bounded from below, which, together with the almost sure boundedness of \(\{x^{k}\}\), ensures that \(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]\) is bounded from below. Since \(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]\) is nonincreasing (Lemma 3.5), it converges to a finite value. Then, from (3.14) it follows that
\[\sum_{k=1}^{\infty}\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]=\sum_{k=0}^{\infty} \mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]<\infty. \tag{3.15}\]
This also implies that
\[\lim_{k\to\infty}\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]=0 \tag{3.16}\]
and
\[\sum_{k=0}^{\infty}\|x^{k+1}-x^{k}\|^{2}<\infty\quad\text{a.s.} \tag{3.17}\]
Furthermore, from item (iii) in Definition 3.1, it follows that
\[\lim_{k\to\infty}\mathbb{E}[\Lambda_{1}^{k}]=0\ \text{and}\ \lim_{k\to\infty}\mathbb{E}[\Lambda_{2}^{k}]=0. \tag{3.18}\]
By Lemma 3.4 and (2.8), we have
\[\mathbb{E}_{k}[\|y^{k+1}-y^{k}\|^{2}]\leq\frac{4}{\hat{\lambda}^{2}}\left(\left(\frac{1}{\alpha}+L\right)^{2}+2\kappa\right)\left(\mathbb{E}_{k}[\|x^{k+2}-x^{k+1}\|^{2}+\|x^{k+1}-x^{k}\|^{2}]+\|x^{k}-x^{k-1}\|^{2}\right) \tag{3.19}\] \[\qquad\qquad\qquad\qquad+\frac{4}{\hat{\lambda}^{2}\rho}\left(\mathbb{E}_{k}[\Lambda_{1}^{k+1}-\Lambda_{1}^{k+2}]+\Lambda_{1}^{k}-\mathbb{E}_{k}[\Lambda_{1}^{k+1}]\right).\]
Taking expectation on both sides and summing it over \(k=1,\ldots,n\), we have
\[\sum_{k=1}^{n}\mathbb{E}[\|y^{k+1}-y^{k}\|^{2}]\leq\frac{4}{\hat {\lambda}^{2}}\left(\left(\frac{1}{\alpha}+L\right)^{2}+2\kappa\right)\sum_{k= 1}^{n}\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}+\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1} \|^{2}] \tag{3.20}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\frac{4}{\hat{\lambda}^{2} \rho}\mathbb{E}[\Lambda_{1}^{1}+\Lambda_{1}^{2}-\Lambda_{1}^{n+1}-\Lambda_{1} ^{n+2}].\]
Let \(n\to\infty\), then using (3.15) and (3.18), one has
\[\sum_{k=0}^{\infty}\mathbb{E}[\|y^{k+1}-y^{k}\|^{2}]<\infty, \tag{3.21}\]
which implies that
\[\lim_{k\to\infty}\mathbb{E}[\|y^{k+1}-y^{k}\|^{2}]=0 \tag{3.22}\]
and

\[\sum_{k=0}^{\infty}\|y^{k+1}-y^{k}\|^{2}<\infty\quad\text{a.s.} \tag{3.23}\]
The proof is completed.
_Remark 3.9_.: Due to the random nature of \(\widetilde{\nabla}f_{k}\), we can define a suitable sample space \(\Omega\) based on the structure of Algorithm 2. Then, the sequence \(\{(x^{k}(\omega),y^{k}(\omega))\}\) with each sample \(\omega\in\Omega\) corresponds to the iterates generated by a single run of Algorithm 2. The sample space \(\Omega\) can be equipped with a \(\sigma\)-algebra \(\mathcal{F}\) and a probability measure \(\mathbb{P}\) to form a probability space \((\Omega,\mathcal{F},\mathbb{P})\). Consequently, the assumption that \(\{(x^{k},y^{k})\}\) is bounded almost surely implies that there exists an event \(\mathcal{A}\) with \(\mathbb{P}(\mathcal{A})=1\) such that the sequence \(\{(x^{k}(\omega),y^{k}(\omega))\}\) is bounded for every \(\omega\in\mathcal{A}\).
The following theorem establishes subsequence convergence by showing that any cluster point of the sequence \(\{(x^{k},y^{k})\}\) is a critical point of \(\mathcal{L}_{s}\) with probability \(1\).
**Theorem 3.10**.: _Let Assumption 2 be satisfied and \(\{(x^{k},y^{k})\}\) be bounded almost surely. Then, there exists an event \(\mathcal{A}\) with measure \(1\) such that, for all \(\omega\in\mathcal{A}\), the following statements hold:_
1. _the set_ \(\mathcal{C}_{\omega}\) _containing all cluster points of_ \(\{(x^{k}(\omega),y^{k}(\omega))\}\) _is nonempty and compact, and_ \[\operatorname{dist}((x^{k}(\omega),y^{k}(\omega)),\mathcal{C}_{\omega})\to 0;\]
2. \(\mathcal{C}_{\omega}\subseteq\operatorname{crit}\mathcal{L}_{s}\)_;_
3. \(\mathcal{L}_{s}\) _is finite and constant on_ \(\mathcal{C}_{\omega}\)_._
Proof.: Because \(\{(x^{k},y^{k})\}\) is bounded almost surely, from Remark 3.9 there exists an event \(\mathcal{A}\) with measure \(1\) such that the sequence \(\{(x^{k}(\omega),y^{k}(\omega))\}\) is bounded for any fixed \(\omega\in\mathcal{A}\). Hence, the set \(\mathcal{C}_{\omega}\) is nonempty. For any \((\bar{x}(\omega),\bar{y}(\omega))\in\mathcal{C}_{\omega}\), there exists a subsequence \(\{(x^{k_{q}}(\omega),y^{k_{q}}(\omega))\}\) of \(\{(x^{k}(\omega),y^{k}(\omega))\}\) such that
\[x^{k_{q}}(\omega)\to\bar{x}(\omega)\text{ and }y^{k_{q}}(\omega)\to\bar{y}(\omega). \tag{3.24}\]
From (3.17) and (3.23), we have
\[\lim_{k\to\infty}\|x^{k+1}(\omega)-x^{k}(\omega)\|=0\text{ and }\lim_{k\to\infty}\|y^{k+1}(\omega)-y^{k}(\omega)\|=0. \tag{3.25}\]
Thus, we obtain that \(\mathcal{C}_{\omega}\) is compact and \(\operatorname{dist}((x^{k}(\omega),y^{k}(\omega)),\mathcal{C}_{\omega})\to 0\), following the same lines as the proof of Theorem 2.6 (ii). Item (i) is derived.
We next prove that, for any \((\bar{x}(\omega),\bar{y}(\omega))\in\mathcal{C}_{\omega}\), \(\bar{z}(\omega):=(\bar{x}(\omega),\bar{y}(\omega),\bar{x}(\omega),\bar{x}( \omega),\bar{x}(\omega))\in\operatorname{crit}\mathscr{L}_{s}\), i.e., \(0\in\partial\mathscr{L}_{s}(\bar{z}(\omega))\), by using the outer semicontinuity of \(\partial\mathscr{L}_{s}\). Let
\[z^{k_{q}}(\omega):=(x^{k_{q}}(\omega),y^{k_{q}}(\omega),x^{k_{q}+1}(\omega),x ^{k_{q}-1}(\omega),x^{k_{q}-2}(\omega)).\]
It immediately follows from (3.24) and (3.25) that \(z^{k_{q}}(\omega)\to\bar{z}(\omega)\). Let \(d^{k_{q}}(\omega)\) be defined similarly to (3.12) with respect to \(\omega\). Then, we have \(d^{k_{q}}(\omega)\in\partial\mathscr{L}_{s}(z^{k_{q}}(\omega))\). Therefore, due to the outer semicontinuity of \(\partial\mathscr{L}_{s}\), in order to obtain \(0\in\partial\mathscr{L}_{s}(\bar{z}(\omega))\), it is sufficient to show \(d^{k_{q}}(\omega)\to 0\). From Lemma 3.7, there exists a constant \(r>0\) such that
\[\mathbb{E}[\|d^{k_{q}}\|^{2}]\leq r(\mathbb{E}[\|x^{k_{q}+1}-x^{k_{q}}\|^{2}+ \|x^{k_{q}}-x^{k_{q}-1}\|^{2}+\|x^{k_{q}-1}-x^{k_{q}-2}\|^{2}+\|y^{k_{q}}-y^{k _{q}-1}\|^{2}+\Lambda_{1}^{k_{q}}]). \tag{3.26}\]
By rearranging (3.4), we obtain
\[\mathbb{E}[\Lambda_{1}^{k_{q}}]\leq\frac{1}{\rho}\mathbb{E}[\Lambda_{1}^{k_{q} }-\Lambda_{1}^{k_{q}+1}]+\frac{\sigma_{\Lambda}}{\rho}(\mathbb{E}[\|x^{k_{q}+ 1}-x^{k_{q}}\|^{2}]+\mathbb{E}[\|x^{k_{q}}-x^{k_{q}-1}\|^{2}]),\]
which, together with (3.26), yields that
\[\mathbb{E}[\|d^{k_{q}}\|^{2}]\leq(r+\frac{r\sigma_{\Lambda}}{\rho})\mathbb{E}[\|x^{k_{q}+1}-x^{k_{q}}\|^{2}+\|x^{k_{q}}-x^{k_{q}-1}\|^{2}+\|x^{k_{q}-1}-x^{k_{q}-2}\|^{2}]\] \[\qquad\qquad\qquad+r\mathbb{E}[\|y^{k_{q}}-y^{k_{q}-1}\|^{2}]+\frac{r}{\rho}\mathbb{E}[\Lambda_{1}^{k_{q}}-\Lambda_{1}^{k_{q}+1}].\]
Summing up from \(k_{q}=2\) to \(\infty\) and using (3.15), (3.21) and (3.18), one has
\[\sum_{k_{q}=2}^{\infty}\mathbb{E}[\|d^{k_{q}}\|^{2}]<\infty.\]
Hence \(d^{k_{q}}\to 0\) almost surely, which further implies that \(d^{k_{q}}(\omega)\to 0\). Thus, we finish the proof of \(\bar{z}(\omega)\in\operatorname{crit}\!\mathscr{L}_{s}\). Furthermore, we derive item (ii) by Lemma 3.3.
To prove item (iii), let us first show that \(\sum_{k=1}^{\infty}W_{k}<\infty\) almost surely, where
\[W_{k} :=\frac{\delta_{1}+L}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{\delta_{2} \alpha}{2}\|A^{T}(y^{k+1}-y^{k})\|^{2}+\frac{3\alpha L^{2}}{2\delta_{2}}\|x^{ k}-x^{k-1}\|^{2}\] \[\quad+\left(\frac{1}{2\delta_{1}}+\frac{3\alpha}{2\delta_{2}} \right)\|\nabla f(x^{k})-\widetilde{\nabla}f_{k}\|^{2}+\frac{3\alpha}{2\delta _{2}}\|\nabla f(x^{k-1})-\widetilde{\nabla}f_{k-1}\|^{2}.\]
It follows from (3.5) that
\[\mathbb{E}[\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2}]\leq\frac{1}{\rho} (\mathbb{E}[\Lambda_{1}^{k}]-\mathbb{E}[\Lambda_{1}^{k+1}])+\kappa(\mathbb{E} [\|x^{k+1}-x^{k}\|^{2}]+\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]),\]
which, together with the facts that \(\mathbb{E}[\Lambda_{1}^{k}]\to 0\) and \(\sum_{k=0}^{\infty}\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]<\infty\) from (3.18) and (3.15), indicates that
\[\sum_{k=1}^{\infty}\mathbb{E}[\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2} ]<\infty,\]
and hence \(\sum_{k=1}^{\infty}\|\widetilde{\nabla}f_{k}-\nabla f(x^{k})\|^{2}<\infty\) almost surely. Therefore, using Proposition 3.8 we obtain that \(\sum_{k=1}^{\infty}W_{k}<\infty\) almost surely.
In a completely analogous way to (3.11), we can prove that for any fixed \(\omega\in\mathcal{A}\),
\[\mathcal{L}_{s}(x^{k+1}(\omega),y^{k+1}(\omega))\] \[\leq\mathcal{L}_{s}(x^{k}(\omega),y^{k}(\omega))+\frac{\delta_{1} +L}{2}\|x^{k+1}(\omega)-x^{k}(\omega)\|^{2}+\frac{\delta_{2}\alpha}{2}\|A^{T}( y^{k+1}(\omega)-y^{k}(\omega))\|^{2}\] \[\quad\quad+\frac{3\alpha L^{2}}{2\delta_{2}}\|x^{k}(\omega)-x^{k- 1}(\omega)\|^{2}+\left(\frac{1}{2\delta_{1}}+\frac{3\alpha}{2\delta_{2}} \right)\|\nabla f(x^{k}(\omega))-\widetilde{\nabla}f_{k}(\omega)\|^{2}\] \[\quad\quad+\frac{3\alpha}{2\delta_{2}}\|\nabla f(x^{k-1}(\omega) )-\widetilde{\nabla}f_{k-1}(\omega)\|^{2}\] \[=\mathcal{L}_{s}(x^{k}(\omega),y^{k}(\omega))+W_{k}(\omega).\]
Because \(\sum_{k=1}^{\infty}W_{k}<\infty\) almost surely, we have \(\sum_{k=1}^{\infty}W_{k}(\omega)<\infty\). Therefore, from [5, Proposition A.4.4] it follows that \(\{\mathcal{L}_{s}(x^{k}(\omega),y^{k}(\omega))\}\) converges to a finite value. Since \(\mathcal{L}_{s}\) is continuous over \(\mathbb{X}\times\mathrm{dom}h^{*}\), one has from (3.24) that
\[\lim_{q\to\infty}\mathcal{L}_{s}(x^{k_{q}}(\omega),y^{k_{q}}(\omega))= \mathcal{L}_{s}(\bar{x}(\omega),\bar{y}(\omega)).\]
Combining these results with the definition of \(\mathcal{C}_{\omega}\), we have that \(\mathcal{L}_{s}\) is finite and constant on \(\mathcal{C}_{\omega}\). The proof is completed.
_Remark 3.11_.: Under the assumptions in Theorem 3.10, from item (i) and item (iii), there exists an event \(\mathcal{A}\) with \(\mathbb{P}(\mathcal{A})=1\) such that, for all \(\omega\in\mathcal{A}\), \(\operatorname{dist}((x^{k}(\omega),y^{k}(\omega)),\mathcal{C}_{\omega})\to 0\) and \(\mathcal{L}_{s}\) equals a constant value \(\bar{\mathcal{L}}_{s}(\omega)\) over \(\mathcal{C}_{\omega}\). Hence, it follows that \(\mathbb{E}[\mathcal{L}_{s}(x^{k},y^{k})]\to\bar{\mathcal{L}}_{s}\) with \(\bar{\mathcal{L}}_{s}:=\mathbb{E}[\bar{\mathcal{L}}_{s}(\omega)]\). Further, it follows from (3.6) and \(z^{k}=(x^{k},y^{k},x^{k+1},x^{k-1},x^{k-2})\) that
\[\mathscr{L}_{s}(z^{k}):=\mathcal{L}_{s}(x^{k},y^{k})-a\|x^{k}-x^{k+1}\|^{2}+b \|x^{k}-x^{k-1}\|^{2}+c\|x^{k-1}-x^{k-2}\|^{2},\]
which, together with (3.16), implies that \(\mathbb{E}[\mathscr{L}_{s}(z^{k})]\to\bar{\mathcal{L}}_{s}\) as \(k\to\infty\).
We now present the main theorem of this section, concerning the finite length property and the almost sure convergence of the whole sequence \(\{(x^{k},y^{k})\}\) generated by Algorithm 2, which rely on the KL property of the Lyapunov function \(\mathscr{L}_{s}\).
**Theorem 3.12**.: _Suppose that Assumption 2 holds and \(\mathscr{L}_{s}\) is a KL function with Lojasiewicz exponent \(\theta\in[0,1)\). Let the sequence \(\{(x^{k},y^{k})\}\) be bounded almost surely. Then,_
1. _it holds that_ \[\sum_{k=0}^{\infty}\mathbb{E}[\|x^{k+1}-x^{k}\|]<\infty,\quad\sum_{k=0}^{ \infty}\mathbb{E}[\|y^{k+1}-y^{k}\|]<\infty;\]
2. _the sequence_ \(\{(x^{k},y^{k})\}\) _converges almost surely to a random vector_ \((\bar{x},\bar{y})\)_, and_ \((\bar{x},\bar{y})\in\mathrm{crit}\mathcal{L}_{s}\) _a.s._
Proof.: Let us begin with the proof of a simple fact that \(\sum_{k=0}^{\infty}\mathbb{E}[\|x^{k+1}-x^{k}\|]<\infty\) and \(\sum_{k=0}^{\infty}\mathbb{E}[\|y^{k+1}-y^{k}\|]<\infty\) if \(\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty\). In other words, if this fact is true, in order to derive item (i), it is sufficient to prove \(\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty\). Indeed, by Jensen's inequality, \(\sum_{k=0}^{\infty}\mathbb{E}[\|x^{k+1}-x^{k}\|]<\infty\) is obvious. By (3.19) (with \(k=k-1\)) and \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\), there exists a constant \(\gamma_{6}>0\) such that
\[\sqrt{\mathbb{E}[\|y^{k}-y^{k-1}\|^{2}]}\leq\gamma_{6}(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}+\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}). \tag{3.27}\]
Using inequalities (3.4), \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\) and \(\sqrt{1-\rho}\leq 1-\frac{\rho}{2}\) (the latter is valid for \(\rho\in[0,1]\) since \((1-\frac{\rho}{2})^{2}=1-\rho+\frac{\rho^{2}}{4}\geq 1-\rho\)), it follows that
\[\begin{split}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}& \leq\sqrt{(1-\rho)\mathbb{E}[\Lambda_{1}^{k-1}]+\sigma_{\Lambda }(\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]+\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}])}\\ &\leq(1-\frac{\rho}{2})\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+ \sqrt{\sigma_{\Lambda}\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\sigma_{ \Lambda}\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}.\end{split} \tag{3.28}\]
Rearranging this inequality, we obtain
\[\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}\leq\frac{2}{\rho}(\sqrt{\mathbb{E}[ \Lambda_{1}^{k-1}]}-\sqrt{\mathbb{E}[\Lambda_{1}^{k}]})+\frac{2}{\rho}\sqrt{ \sigma_{\Lambda}\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\frac{2}{\rho}\sqrt{\sigma_ {\Lambda}\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}. \tag{3.29}\]
Therefore, by substituting (3.29) into (3.27), we have
\[\sum_{k=0}^{\infty}\mathbb{E}[\|y^{k+1}-y^{k}\|]\leq\sum_{k=0}^{\infty}\sqrt{ \mathbb{E}[\|y^{k+1}-y^{k}\|^{2}]}<\infty.\]
Hence, the simple fact is proved.
We next prove that \(\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty\). By [22, Lemma 4.5], if \(\mathscr{L}_{s}\) is a KL function with exponent \(\theta\), there exist an integer \(K_{0}\) and a function \(\varphi_{0}(s)=\sigma_{0}s^{1-\theta}\) such that the following holds
\[\varphi_{0}^{\prime}(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s, k})\mathbb{E}[\mathrm{dist}(0,\partial\mathscr{L}_{s}(z^{k}))]\geq 1,\ \forall k\geq K_{0}, \tag{3.30}\]
where \(\{\bar{\mathscr{L}}_{s,k}\}\) is a nondecreasing sequence satisfying \(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k}>0\) and converging to a finite value \(\bar{\mathcal{L}}_{s}\) which is given in Remark 3.11.
When \(\theta=0\), we show that \(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]=\bar{\mathcal{L}}_{s}\) holds after a finite number of iterations by contradiction. Otherwise, inequality (3.30) implies that
\[\mathbb{E}[\mathrm{dist}(0,\partial\mathscr{L}_{s}(z^{k}))]\geq\frac{1}{ \sigma_{0}},\ \forall k\geq K_{0}. \tag{3.31}\]
From (3.31), (3.26) (letting \(k_{q}=k\)) and Jensen's inequality, we have
\[\begin{array}{rl}\frac{1}{\sigma_{0}^{2}}&\leq(\mathbb{E}[\mathrm{dist}(0, \partial\mathscr{L}_{s}(z^{k}))])^{2}\\ &\leq r(\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2}+\|x^{k-1}-x^{k- 2}\|^{2}+\|y^{k}-y^{k-1}\|^{2}+\Lambda_{1}^{k}]).\end{array}\]
Applying this inequality to (3.10), we have
\[\begin{array}{rl}\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]&\leq\mathbb{E}[\mathscr{L}_{s,k-1}^{\Lambda}]-e_{0}\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2}+\|x^{k-1}-x^{k-2}\|^{2}]\\ &\leq\mathbb{E}[\mathscr{L}_{s,k-1}^{\Lambda}]-\frac{e_{0}}{r\sigma_{0}^{2}}+e_{0}\mathbb{E}[\|y^{k}-y^{k-1}\|^{2}]+e_{0}\mathbb{E}[\Lambda_{1}^{k}],\end{array}\]
which is impossible after a large enough number of iterations by noticing that \(\mathbb{E}[\|y^{k}-y^{k-1}\|^{2}]\to 0\) (cf. (3.22)), \(\mathbb{E}[\Lambda_{1}^{k}]\to 0\) (cf. (3.18)) and \(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]\to\bar{\mathcal{L}}_{s}\) (cf. Remark 3.11). Therefore, there exists an integer \(\bar{K}\geq 0\) such that \(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]=\bar{\mathcal{L}}_{s}\) holds for \(k\geq\bar{K}\). In view of (3.10), we have \(\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]=0\) for \(k\geq\bar{K}\), and hence \(\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty\).
We now consider \(\theta\in[\frac{1}{2},1)\). By (3.26), Jensen's inequality and \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\), it holds that
\[\begin{split}\mathbb{E}[\operatorname{dist}(0,\partial\mathscr{L }_{s}(z^{k}))]&\leq\sqrt{r}(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\| ^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|y^{k}-y^{k- 1}\|^{2}]}\\ &\qquad+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}+\sqrt{ \mathbb{E}[\Lambda_{1}^{k}]}).\end{split} \tag{3.32}\]
Substituting (3.27) into (3.32), then we have
\[\begin{split}\mathbb{E}[\operatorname{dist}(0,\partial\mathscr{ L}_{s}(z^{k}))]&\leq\,(\sqrt{r}+\gamma_{6}\sqrt{r})\left(\sqrt{ \mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+ \sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}\right)\\ &\qquad+\gamma_{6}\sqrt{r}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+ \sqrt{r\mathbb{E}[\Lambda_{1}^{k}]}.\end{split} \tag{3.33}\]
Applying (3.29) to the last two terms in (3.33), respectively, and letting \(\gamma:=\sqrt{r}+\gamma_{6}\sqrt{r}+\frac{2\sqrt{r\sigma_{\Lambda}}}{\rho}+\frac{2\gamma_{6}\sqrt{r\sigma_{\Lambda}}}{\rho}\), one has
\[\begin{split}\mathbb{E}[\operatorname{dist}(0,\partial\mathscr{ L}_{s}(z^{k}))]&\leq\,\gamma\left(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+ \sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^ {2}]}\right)\\ &\qquad+\frac{2\gamma_{6}\sqrt{r}}{\rho}\left(\sqrt{\mathbb{E}[ \Lambda_{1}^{k-1}]}-\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\right)+\frac{2\sqrt{r}} {\rho}\left(\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}-\sqrt{\mathbb{E}[\Lambda_{1}^ {k+1}]}\right).\end{split} \tag{3.34}\]
Denote by \(\Sigma_{k}\) the right-hand side of the above inequality. Obviously, \(\Sigma_{k}>0\). Then, combining the inequality \(\mathbb{E}[\operatorname{dist}(0,\partial\mathscr{L}_{s}(z^{k}))]\leq\Sigma_{k}\) with (3.30) and \(\varphi_{0}(s)=\sigma_{0}s^{1-\theta}\) gives that
\[\frac{\sigma_{0}(1-\theta)\Sigma_{k}}{(\mathbb{E}[\mathscr{L}_{s}(z^{k})]- \bar{\mathscr{L}}_{s,k})^{\theta}}\geq 1,\ \forall k\geq K_{0}. \tag{3.35}\]
Note that, for \(\theta\in[\frac{1}{2},1)\), there exist positive constants \(\beta_{0}\), \(\kappa_{2},\kappa_{3}\) and a sufficiently large integer \(K_{1}>0\) such that for \(k\geq K_{1}\),
\[(\mathbb{E}[e_{1}\Lambda_{1}^{k+1}+e_{2}\Lambda_{1}^{k}+e_{3}\Lambda_{1}^{k-1}])^{\theta}\leq\kappa_{2}(\mathbb{E}[\Lambda_{1}^{k+1}+\Lambda_{1}^{k}+\Lambda_{1}^{k-1}])^{\theta}\leq\kappa_{2}\sqrt{\mathbb{E}[\Lambda_{1}^{k+1}+\Lambda_{1}^{k}+\Lambda_{1}^{k-1}]}\] \[\leq\kappa_{3}(\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}+\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]})\leq\beta_{0}\Sigma_{k},\]
where the second inequality is deduced from \(\mathbb{E}[\Lambda_{1}^{k}]\to 0\) for \(k\to\infty\) (cf. Theorem 3.10), the third inequality is obtained by \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\) and (3.28), and the last inequality is from (3.29) and the definition of \(\Sigma_{k}\). Take a constant \(\beta>0\) such that \(\beta\sigma_{0}(1-\theta)\geq\sigma_{0}(1-\theta)+\beta_{0}\). Then, from (3.35) and the fact that \((a+b)^{\theta}\leq a^{\theta}+b^{\theta}\) for \(\theta\in[\frac{1}{2},1]\), it holds for \(k\geq K:=\max\{K_{0},K_{1}\}\),
\[\begin{split}\frac{\beta\sigma_{0}(1-\theta)\Sigma_{k}}{(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})^{\theta}}&\geq\frac{\beta\sigma_{0}(1-\theta)\Sigma_{k}}{(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k})^{\theta}+(\mathbb{E}[e_{1}\Lambda_{1}^{k+1}+e_{2}\Lambda_{1}^{k}+e_{3}\Lambda_{1}^{k-1}])^{\theta}}\\ &\geq\frac{\beta\sigma_{0}(1-\theta)\Sigma_{k}}{\sigma_{0}(1-\theta)\Sigma_{k}+\beta_{0}\Sigma_{k}}\geq 1.\end{split} \tag{3.36}\]
Thus, letting \(\varphi_{1}(s):=\beta\sigma_{0}s^{1-\theta}\), for any \(k\geq K\), (3.36) can be rewritten as
\[\varphi_{1}^{\prime}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})\Sigma_{k}\geq 1. \tag{3.37}\]
Since \(\varphi_{1}\) is concave, we have
\[\begin{split}\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,k+1}^{\Lambda}]-\bar{\mathscr{L}}_{s,k+1})&\leq\,\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})+\varphi_{1}^{\prime}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})\mathbb{E}[\mathscr{L}_{s,k+1}^{\Lambda}-\mathscr{L}_{s,k}^{\Lambda}+\bar{\mathscr{L}}_{s,k}-\bar{\mathscr{L}}_{s,k+1}]\\ &\leq\,\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})+\varphi_{1}^{\prime}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})\mathbb{E}[\mathscr{L}_{s,k+1}^{\Lambda}-\mathscr{L}_{s,k}^{\Lambda}]\\ &\leq\,\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})-\frac{e_{0}}{\Sigma_{k}}\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}+\|x^{k+1}-x^{k}\|^{2}+\|x^{k}-x^{k-1}\|^{2}],\end{split}\]
where the second inequality is obtained by \(\bar{\mathscr{L}}_{s,k}\leq\bar{\mathscr{L}}_{s,k+1}\), and the third inequality is from Lemma 3.5 and (3.37). Let \(\mathcal{M}_{m,n}:=\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,m}^{\Lambda}]-\bar{\mathscr{L}}_{s,m})-\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,n}^{\Lambda}]-\bar{\mathscr{L}}_{s,n})\); then the above chain of inequalities implies
\[\mathcal{M}_{k,k+1}\geq\frac{e_{0}}{\Sigma_{k}}\mathbb{E}[\|x^{k+2}-x^{k+1}\|^ {2}].\]
Rewriting this inequality and using \(4\sqrt{ab}\leq a/\gamma+4\gamma b\) for any \(\gamma>0\) (a consequence of the AM-GM inequality) yields that
\[4\sqrt{\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}]}\leq 4\sqrt{\frac{\mathcal{M}_{k,k+1} \Sigma_{k}}{e_{0}}}\leq\frac{\Sigma_{k}}{\gamma}+\frac{4\gamma\mathcal{M}_{k,k +1}}{e_{0}},\]
which, together with the definition of \(\Sigma_{k}\), gives
\[4\sqrt{\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}]} \leq\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\| x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}+\frac{4 \gamma\mathcal{M}_{k,k+1}}{e_{0}}\] \[\quad+\frac{2\gamma_{6}\sqrt{r}}{\rho\gamma}\left(\sqrt{\mathbb{ E}[\Lambda_{1}^{k-1}]}-\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\right)+\frac{2 \sqrt{r}}{\rho\gamma}\left(\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}-\sqrt{\mathbb{E }[\Lambda_{1}^{k+1}]}\right).\]
Summing up from \(k=K\) to \(n\), we have
\[\sum_{k=K}^{n}\sqrt{\mathbb{E}[\|x^{k+2}-x^{k+1}\|^{2}]} \leq 3\sqrt{\mathbb{E}[\|x^{K+1}-x^{K}\|^{2}]}+2\sqrt{\mathbb{E}[\|x ^{K}-x^{K-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{K-1}-x^{K-2}\|^{2}]} \tag{3.38}\] \[\quad+\sum_{k=K}^{n}\frac{4\gamma\mathcal{M}_{k,k+1}}{e_{0}}+ \frac{2\gamma_{6}\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{K-1}]}+ \frac{2\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{K}]}.\]
By the definition of \(\mathcal{M}_{k,k+1}\), it holds that \(\sum_{k=K}^{n}\mathcal{M}_{k,k+1}=\mathcal{M}_{K,n+1}\leq\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,K}^{\Lambda}]-\bar{\mathscr{L}}_{s,K})\). Let \(n\to\infty\) in (3.38), then it follows that
\[\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty.\]
For \(\theta\in(0,\frac{1}{2})\), we show that it can be reduced to the case that \(\theta=\frac{1}{2}\). Indeed, from Remark 3.11, we can let \(K_{0}\) be large enough such that \(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k}<1\). Then, since (3.30) holds with \(\theta\in(0,\frac{1}{2})\), we have that (3.30) also holds with \(\theta=\frac{1}{2}\). Thus, we can get the claim immediately by following the analysis for the case that \(\theta\in[\frac{1}{2},1)\).
Combining these results together, we have that \(\sum_{k=0}^{\infty}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}<\infty\) holds for all \(\theta\in[0,1)\), and hence item (i) is derived by the previously mentioned simple fact.
In the proof of Theorem 3.10, we have shown that there exists an event \(\mathcal{A}\) with measure \(1\) such that, for any \(\omega\in\mathcal{A}\), every convergent subsequence of \(\{(x^{k}(\omega),y^{k}(\omega))\}\) converges to a point \((\bar{x}(\omega),\bar{y}(\omega))\) belonging to \(\operatorname{crit}\!\mathcal{L}_{s}\). It follows from item (i) that
\[\sum_{k=0}^{\infty}\|x^{k+1}-x^{k}\|<\infty\text{ a.s.,}\quad\sum_{k=0}^{ \infty}\|y^{k+1}-y^{k}\|<\infty\text{ a.s.,}\]
and consequently,
\[\sum_{k=0}^{\infty}\|x^{k+1}(\omega)-x^{k}(\omega)\|<\infty,\quad\sum_{k=0}^{ \infty}\|y^{k+1}(\omega)-y^{k}(\omega)\|<\infty.\]
In other words, \(\{(x^{k}(\omega),y^{k}(\omega))\}\) is a Cauchy sequence. Thus, the whole sequence \(\{(x^{k}(\omega),y^{k}(\omega))\}\) converges to \((\bar{x}(\omega),\bar{y}(\omega))\). Therefore, there exists a random vector \((\bar{x},\bar{y})\) such that \((\bar{x},\bar{y})\in\operatorname{crit}\!\mathcal{L}_{s}\) a.s. and \(\{(x^{k},y^{k})\}\) converges almost surely to \((\bar{x},\bar{y})\). Item (ii) is proved.
Finally, we establish the convergence rates of the sequence \(\{(x^{k},y^{k})\}\) in the context of Lojasiewicz exponent in the following theorem.
**Theorem 3.13**.: _Suppose that Assumption 2 holds and \(\mathscr{L}_{s}\) is a KL function with Lojasiewicz exponent \(\theta\in[0,1)\). Let the sequence \(\{(x^{k},y^{k})\}\) be bounded almost surely and converge almost surely to some random vector \((\bar{x},\bar{y})\). Then, the following statements hold:_
1. _if_ \(\theta=0\)_, the sequence_ \(\{(x^{k},y^{k})\}\) _converges in expectation after finitely many steps;_
2. _if_ \(\theta\in(0,\frac{1}{2}]\)_, then there exist constants_ \(\nu,\bar{\nu}>0\)_,_ \(\tau,\bar{\tau}\in(0,1)\) _and a sufficiently large integer_ \(K\) _such that for_ \(k\geq K\)_,_ \[\mathbb{E}[\|x^{k}-\bar{x}\|]\leq\nu\tau^{k-K},\quad\mathbb{E}[\|y^{k}-\bar{y} \|]\leq\bar{\nu}\bar{\tau}^{k-K};\]
3. _if_ \(\theta\in(\frac{1}{2},1)\)_, then there exist constants_ \(\mu,\bar{\mu}>0\) _and a sufficiently large integer_ \(\bar{K}\) _such that for_ \(k\geq\bar{K}\)_,_ \[\mathbb{E}[\|x^{k}-\bar{x}\|]\leq\mu k^{-\frac{1-\theta}{2\theta-1}},\quad \mathbb{E}[\|y^{k}-\bar{y}\|]\leq\bar{\mu}k^{-\frac{1-\theta}{2\theta-1}}.\]
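To make these rates concrete: for \(\theta=\frac{3}{4}\), the exponent in item (iii) evaluates to

\[\frac{1-\theta}{2\theta-1}=\frac{1/4}{1/2}=\frac{1}{2},\]

so both iterates decay at the sublinear rate \(O(k^{-1/2})\); as \(\theta\) decreases toward \(\frac{1}{2}\), the exponent \(\frac{1-\theta}{2\theta-1}\) grows without bound, consistent with the transition to the linear rate of item (ii).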
Proof.: Item (i) has been presented in the proof of Theorem 3.12.
Let us point out that, for the case \(\theta\in(0,1/2)\), by the same reasoning as in the proof of Theorem 3.12, the following analysis can be reduced to the case \(\theta=1/2\). Therefore, it is sufficient to consider the case \(\theta\in[1/2,1)\). Let
\[\Delta_{k}:=\sum_{q=k}^{\infty}\sqrt{\mathbb{E}[\|x^{q+1}-x^{q}\|^{2}]}+\sum_ {q=k}^{\infty}\sqrt{\mathbb{E}[\|x^{q}-x^{q-1}\|^{2}]}+\sum_{q=k}^{\infty} \sqrt{\mathbb{E}[\|x^{q-1}-x^{q-2}\|^{2}]}.\]
Notice that (3.38) holds for all \(\theta\in[1/2,1)\). Similarly, we also have
\[\sum_{k=K}^{n}\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]} \leq\sqrt{\mathbb{E}[\|x^{n+1}-x^{n}\|^{2}]}+\sqrt{\mathbb{E}[\| x^{K-1}-x^{K-2}\|^{2}]}+\sum_{k=K}^{n}\frac{4\gamma\mathcal{M}_{k,k+1}}{e_{0}} \tag{3.39}\] \[\quad+\frac{2\gamma_{6}\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[ \Lambda_{1}^{K-1}]}+\frac{2\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{ K}]}\]
and
\[\sum_{k=K}^{n}\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]} \leq 2\sqrt{\mathbb{E}[\|x^{K}-x^{K-1}\|^{2}]}+\sqrt{\mathbb{E}[\| x^{K-1}-x^{K-2}\|^{2}]}+\sum_{k=K}^{n}\frac{4\gamma\mathcal{M}_{k,k+1}}{e_{0}} \tag{3.40}\] \[\quad+\frac{2\gamma_{6}\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[ \Lambda_{1}^{K-1}]}+\frac{2\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{ K}]}.\]
Then, combining (3.38), (3.39) and (3.40) (let \(n\to\infty\)), for any \(k\geq K\), it follows from the definition of \(\mathcal{M}_{m,n}\) and \(\varphi_{1}(s)=\beta\sigma_{0}s^{1-\theta}\) that
\[\Delta_{k+1} \leq 4\left(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}\right) \tag{3.41}\] \[\quad+\frac{6\gamma_{6}\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+\frac{6\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}+\frac{12\gamma}{e_{0}}\varphi_{1}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})\] \[= 4\left(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}\right)\] \[\quad+\frac{6\gamma_{6}\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+\frac{6\sqrt{r}}{\rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}+\beta_{1}(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})^{1-\theta},\]
where \(\beta_{1}:=12\gamma\beta\sigma_{0}/e_{0}\). By the definition of \(\mathscr{L}_{s,k}^{\Lambda}\) and \((a+b)^{1-\theta}\leq a^{1-\theta}+b^{1-\theta}\) for \(\theta\in[1/2,1)\), it follows that
\[(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})^{1-\theta}\leq(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k})^{1-\theta}+\beta_{2}(\mathbb{E}[\Lambda_{1}^{k+1}]+\mathbb{E}[\Lambda_{1}^{k}]+\mathbb{E}[\Lambda_{1}^{k-1}])^{1-\theta}, \tag{3.42}\]
where \(\beta_{2}=\max\{e_{1},e_{2},e_{3}\}^{1-\theta}\). Let \(\bar{\Sigma}_{k}\) be the right-hand side of (3.33), then there exists a constant \(\beta_{4}>0\) such that
\[4(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{ 2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]})+\frac{6\gamma_{6}\sqrt{r}}{ \rho\gamma}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+\frac{6\sqrt{r}}{\rho\gamma} \sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\leq\beta_{4}\bar{\Sigma}_{k}. \tag{3.43}\]
Plugging (3.42) and (3.43) into (3.41) yields that
\[\Delta_{k+1}\leq\beta_{4}\bar{\Sigma}_{k}+\beta_{1}(\mathbb{E}[\mathscr{L}_{s}(z ^{k})]-\bar{\mathscr{L}}_{s,k})^{1-\theta}+\beta_{1}\beta_{2}(\mathbb{E}[\Lambda _{1}^{k+1}]+\mathbb{E}[\Lambda_{1}^{k}]+\mathbb{E}[\Lambda_{1}^{k-1}])^{1- \theta}. \tag{3.44}\]
From (3.30), it follows that
\[(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k})^{1-\theta}\leq( \sigma_{0}(1-\theta)\mathbb{E}[\mathrm{dist}(0,\partial\mathscr{L}_{s}(z^{k}) )])^{\frac{1-\theta}{\theta}}. \tag{3.45}\]
Since \(2\theta\geq 1\), we have that
\[(\mathbb{E}[\Lambda_{1}^{k+1}]+\mathbb{E}[\Lambda_{1}^{k}]+ \mathbb{E}[\Lambda_{1}^{k-1}])^{1-\theta} \leq(\mathbb{E}[\Lambda_{1}^{k+1}]+\mathbb{E}[\Lambda_{1}^{k}]+ \mathbb{E}[\Lambda_{1}^{k-1}])^{\frac{1-\theta}{2\theta}}\] \[\leq(\sqrt{\mathbb{E}[\Lambda_{1}^{k+1}]}+\sqrt{\mathbb{E}[ \Lambda_{1}^{k}]}+\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]})^{\frac{1-\theta}{ \theta}},\]
which further implies that, there exists a constant \(\beta_{3}>0\) such that
\[(\mathbb{E}[\Lambda_{1}^{k+1}]+\mathbb{E}[\Lambda_{1}^{k}]+\mathbb{E}[ \Lambda_{1}^{k-1}])^{1-\theta}\leq\beta_{3}\bar{\Sigma}_{k}^{\frac{1-\theta}{ \theta}}. \tag{3.46}\]
It follows from (3.45) and (3.33) that
\[(\mathbb{E}[\mathscr{L}_{s}(z^{k})]-\bar{\mathscr{L}}_{s,k})^{1-\theta}\leq( \sigma_{0}(1-\theta)\bar{\Sigma}_{k})^{\frac{1-\theta}{\theta}}. \tag{3.47}\]
Substituting (3.47) and (3.46) into (3.44) gives that
\[\Delta_{k+1}\leq((\sigma_{0}(1-\theta))^{\frac{1-\theta}{\theta}}\beta_{1}+ \beta_{1}\beta_{2}\beta_{3})\bar{\Sigma}_{k}^{\frac{1-\theta}{\theta}}+\beta_ {4}\bar{\Sigma}_{k}\leq\varrho\bar{\Sigma}_{k}^{\frac{1-\theta}{\theta}}, \tag{3.48}\]
where the second inequality is derived from \(\frac{1-\theta}{\theta}\leq 1\), \(\bar{\Sigma}_{k}\to 0\) and \(\varrho:=(\sigma_{0}(1-\theta))^{\frac{1-\theta}{\theta}}\beta_{1}+\beta_{1} \beta_{2}\beta_{3}+\beta_{4}\). To bound \(\bar{\Sigma}_{k}\), we have from (3.28) and (3.29) that
\[\begin{split}\bar{\Sigma}_{k}\leq&\ (\sqrt{r}+\gamma_{6}\sqrt{r}+\sqrt{r\sigma_{\Lambda}})\left(\sqrt{ \mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+ \sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}\right)\\ &\ \ \ +(2\gamma_{6}\sqrt{r}+(2-\rho)\sqrt{r})\sqrt{\mathbb{E}[ \Lambda_{1}^{k-1}]}-(\gamma_{6}\sqrt{r}+(1-\frac{\rho}{2})\sqrt{r})\sqrt{ \mathbb{E}[\Lambda_{1}^{k-1}]}\\ \leq&\ \beta_{5}\left(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{ 2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k- 2}\|^{2}]}\right)\\ &\ \ \ +2\beta_{6}\left(\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}- \sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\right)-(\gamma_{6}\sqrt{r}+(1-\frac{\rho}{2 })\sqrt{r})\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]},\end{split} \tag{3.49}\]
where \(\beta_{5}:=\sqrt{r}+\gamma_{6}\sqrt{r}+\frac{4+4\gamma_{6}-\rho}{\rho}\sqrt{r\sigma_{\Lambda}}\), \(\beta_{6}:=\frac{2+2\gamma_{6}-\rho}{\rho}\sqrt{r}\). Let \(\Delta_{k}^{\Lambda}:=\Delta_{k}+\frac{2\beta_{6}(1-\frac{\rho}{2})}{\beta_{5}}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}\). Then,
\[(\Delta_{k+1}^{\Lambda})^{\frac{\theta}{1-\theta}}\leq\frac{2^{\frac{\theta}{1-\theta}}}{2}\Delta_{k+1}^{\frac{\theta}{1-\theta}}+\frac{2^{\frac{\theta}{1-\theta}}}{2}\left(\frac{2\beta_{6}(1-\frac{\rho}{2})}{\beta_{5}}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\right)^{\frac{\theta}{1-\theta}}\leq\frac{(2\varrho)^{\frac{\theta}{1-\theta}}}{2}\bar{\Sigma}_{k}+\frac{(\frac{4\beta_{6}(1-\frac{\rho}{2})}{\beta_{5}})^{\frac{\theta}{1-\theta}}}{2}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}, \tag{3.50}\]
where the first inequality is obtained by using inequalities \(\frac{\theta}{1-\theta}\geq 1\) and \((a+b)^{v}\leq 2^{v-1}a^{v}+2^{v-1}b^{v}\) for any \(v\geq 1\), the second inequality is from (3.48). Substituting (3.49) into (3.50) and rearranging terms, we have
\[(\Delta_{k+1}^{\Lambda})^{\frac{\theta}{1-\theta}}\leq\frac{\beta_{5}(2 \varrho)^{\frac{\theta}{1-\theta}}}{2}(\Delta_{k}^{\Lambda}-\Delta_{k+1}^{ \Lambda}),\]
which further gives
\[\Delta_{k+1}^{\Lambda}\leq 2\varrho(\frac{\beta_{5}}{2})^{\frac{1-\theta}{\theta}}( \Delta_{k}^{\Lambda}-\Delta_{k+1}^{\Lambda})^{\frac{1-\theta}{\theta}}. \tag{3.51}\]
Note that (3.51) is very similar to (2.29). Hence, the rest of the proof can be conducted similarly to that of Theorem 2.10. Specifically, if \(\theta\in(\frac{1}{2},1)\), there exist an integer \(\bar{K}\) and constants \(\mu>0\) and \(\nu_{1}=\frac{1-2\theta}{1-\theta}<0\) such that for \(n>\bar{K}\),
\[\Delta_{n}^{\Lambda}\leq\mu n^{\frac{1}{\nu_{1}}};\]
if \(\theta\in(0,\frac{1}{2}]\), there exist constants \(\nu>0\) and \(\tau=\frac{\rho\delta_{5}}{1+\rho\delta_{5}}<1\) such that for \(k\geq K\),
\[\Delta_{k+1}^{\Lambda}\leq\nu\tau^{k-K}.\]
Since \(\mathbb{E}[\|x^{k}-\bar{x}\|]\leq\Delta_{k+1}^{\Lambda}\), the estimates for \(\mathbb{E}[\|x^{k}-\bar{x}\|]\) in (ii) and (iii) are derived.
Finally, we consider the estimates for \(\mathbb{E}[\|y^{k}-\bar{y}\|]\). Combining (3.27) and (3.29) yields that
\[\sqrt{\mathbb{E}[\|y^{q}-y^{q-1}\|^{2}]} \leq\gamma_{6}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\left( \sqrt{\mathbb{E}[\|x^{q+1}-x^{q}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{q}-x^{q-1}\|^{ 2}]}+\sqrt{\mathbb{E}[\|x^{q-1}-x^{q-2}\|^{2}]}\right) \tag{3.52}\] \[\quad+\frac{2\gamma_{6}}{\rho}\left(\sqrt{\mathbb{E}[\Lambda_{1} ^{q-1}]}-\sqrt{\mathbb{E}[\Lambda_{1}^{q}]}\right).\]
Summing up from \(q=k\) to \(\infty\), we have
\[\sum_{q=k}^{\infty}\sqrt{\mathbb{E}[\|y^{q+1}-y^{q}\|^{2}]} \leq\gamma_{6}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\Delta_{k +1}+\frac{2\gamma_{6}}{\rho}\sum_{q=k}^{\infty}\left(\sqrt{\mathbb{E}[ \Lambda_{1}^{q}]}-\sqrt{\mathbb{E}[\Lambda_{1}^{q+1}]}\right) \tag{3.53}\] \[\leq\gamma_{6}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\Delta_{k +1}+\frac{2\gamma_{6}}{\rho}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}.\]
Let \(V_{k}:=\gamma_{6}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\Delta_{k}+\frac{2 \gamma_{6}}{\rho}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}\). Then by (3.41), it holds that
\[V_{k+1} \leq 4\gamma_{6}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\left(\sqrt{\mathbb{E}[\|x^{k+1}-x^{k}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k}-x^{k-1}\|^{2}]}+\sqrt{\mathbb{E}[\|x^{k-1}-x^{k-2}\|^{2}]}\right) \tag{3.54}\] \[\quad+\frac{6\gamma_{6}^{2}\sqrt{r}}{\rho\gamma}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}+\frac{6\gamma_{6}\sqrt{r}}{\rho\gamma}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}\] \[\quad+\gamma_{6}\beta_{1}(1+\frac{2\sqrt{\sigma_{\Lambda}}}{\rho})(\mathbb{E}[\mathscr{L}_{s,k}^{\Lambda}]-\bar{\mathscr{L}}_{s,k})^{1-\theta}+\frac{2\gamma_{6}}{\rho}\sqrt{\mathbb{E}[\Lambda_{1}^{k}]}.\]
Along the same lines as in the derivation of (3.48), there exists a constant \(\varrho^{\prime}>0\) such that
\[V_{k+1}\leq\varrho^{\prime}\bar{\Sigma}_{k}^{\frac{1-\theta}{\theta}}.\]
Similarly to the derivation of the estimates for \(\Delta_{k+1}^{\Lambda}\), for \(V_{k}^{\Lambda}:=V_{k}+\frac{2\beta_{6}(1-\frac{\rho}{2})}{\beta_{5}}\sqrt{\mathbb{E}[\Lambda_{1}^{k-1}]}\), we can obtain that
\[V_{k}^{\Lambda}\leq\bar{\mu}k^{\frac{1}{\nu_{1}}}\text{ for }\theta\in(1/2,1) \tag{3.56}\]
and
\[V_{k+1}^{\Lambda}\leq\bar{\nu}\tau^{k-K}\text{ for }\theta\in(0,1/2]\,, \tag{3.57}\]
where \(\bar{\mu}\) and \(\bar{\nu}\) are some positive constants. The triangle inequality gives
\[\mathbb{E}[\|y^{k}-\bar{y}\|]\leq V_{k+1}\leq V_{k+1}^{\Lambda},\]
which, together with (3.56) and (3.57), implies the estimates for \(\mathbb{E}[\|y^{k}-\bar{y}\|]\) in (ii) and (iii). The proof is completed.
## 4 Preliminary numerical experiments
In this section, we show the efficiency of our proposed algorithms, and compare them with several state-of-the-art algorithms on a variety of test problems. All numerical experiments are carried out using MATLAB R2023a on a desktop computer with Intel Core i5 2.5GHz and 32GB memory.
### Image denoising via \(\ell_{0}\) gradient minimization
Let \(b\in\mathbb{R}^{n\times m}\) represent the noisy input image and \(x\) be the result after denoising. The \(2D\) discrete gradient operator of \(x\) is denoted by \(\nabla x\), which is linear. In this subsection, we focus on the following \(\ell_{0}\) gradient minimization problem [53, 28, 50]:
\[\min_{x\in\mathcal{\hat{D}}}\quad\frac{1}{2}\|x-b\|_{F}^{2}+\lambda\|\nabla x\| _{0}. \tag{4.1}\]
Here, \(\lambda>0\) is a regularization parameter, and the set \(\hat{\mathcal{D}}=\{x\in\mathbb{R}^{n\times m}:c_{1}\leq(\nabla x)_{i,j}\leq c_{2}\}\) for two given constants \(c_{1},c_{2}\). Problem (4.1) can be expressed in the form of (1.1) with \(f(x)=\frac{1}{2}\|x-b\|_{F}^{2}\), \(h(u)=\lambda\|u\|_{0}+\mathcal{I}_{\mathcal{D}}(u)\) with \(\mathcal{D}=\{u:c_{1}\leq u_{i,j}\leq c_{2}\}\) and \(\mathcal{I}_{\mathcal{D}}(\cdot)\) being the indicator function, and \(A\) is the linear operator associated with \(\nabla x\) such that \(Ax=\nabla x\).
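As an illustration of these ingredients in code, the sketch below uses replicate-boundary forward differences for \(\nabla x\); the paper does not prescribe a particular discretization, so this choice (and all helper names) should be read as an assumption.

```python
import numpy as np

def grad2d(x):
    """2D discrete gradient: stacked forward differences (replicate boundary)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
    dy = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
    return np.stack([dx, dy])

def data_term(x, b):
    """f(x) = 0.5 * ||x - b||_F^2 and its gradient x - b."""
    return 0.5 * np.linalg.norm(x - b, "fro") ** 2, x - b

def l0_term(u, lam):
    """The lam * ||u||_0 part of h: lam times the number of nonzeros."""
    return lam * np.count_nonzero(u)
```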
In this experiment, we aim to compare the performance of PPDG with ADMM [34] and PDHG [40] for solving problem (4.1). Recall that, to apply our algorithm, PPDG, we should calculate \(\text{prox}_{h^{*}}^{M}(y^{k}+M^{-1}A(2x^{k+1}-x^{k}))\) with \(M=\alpha AA^{T}\) at each iteration. To avoid computing the inverse of \(M\), in practice we calculate the following term as an approximation,
\[\text{prox}_{\beta h^{*}}\left(y^{k}+\beta A(2x^{k+1}-x^{k})\right), \tag{4.2}\]
where \(\beta=1/(\alpha\|A\|^{2})\). Here, the proximal mapping \(\text{prox}_{\beta h^{*}}(\cdot)\) is computed according to Example A.2.
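Schematically, one PPDG iteration for (4.1) with the approximation (4.2) then takes the following shape; here `prox_beta_hstar` stands for the closed-form mapping of Example A.2 and is treated as a given callable, so the sketch only fixes the structure of the update.

```python
def ppdg_step(x, y, grad_f, A, At, alpha, norm_A_sq, prox_beta_hstar):
    """One PPDG iteration for (4.1) with the approximate dual step (4.2).

    grad_f:          gradient of the data term, here x -> x - b;
    A, At:           the linear map u = Ax and its adjoint;
    norm_A_sq:       ||A||^2, so that beta = 1 / (alpha * ||A||^2);
    prox_beta_hstar: proximal mapping of beta * h^* (cf. Example A.2).
    """
    x_new = x - alpha * (grad_f(x) + At(y))   # primal step, cf. relation (3.1a)
    beta = 1.0 / (alpha * norm_A_sq)
    y_new = prox_beta_hstar(y + beta * A(2.0 * x_new - x))   # dual step (4.2)
    return x_new, y_new
```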
The following peak signal-to-noise ratio (PSNR) is used as a measure of the quality of the denoised image,
\[\text{PSNR}=10\times\log_{10}\frac{mn(\max x^{k})^{2}}{\|x^{k}-x_{org}\|^{2}},\]
where \(x_{org}\) is the original image without any noise, and \(x^{k}\) is the output image. Take \(c_{1}=-1\), \(c_{2}=1\) and \(\lambda=0.1\). The numerical results are illustrated in Figure 1 and Figure 2. The first row of Figure 1 displays the original images1, the second row exhibits the noisy input images with varying levels of noise, while the third, fourth and fifth rows show the denoised images by PPDG, ADMM and PDHG, respectively. The values of PSNR for the denoised images and the corresponding running time are presented in Table 1. Figure 2 provides enlarged views of specific images from Figure 1, allowing readers to see the details clearly.
Footnote 1: The images are available in [https://www.robots.ox.ac.uk/~vgg/data/](https://www.robots.ox.ac.uk/~vgg/data/) and [http://www.eecs.qmul.ac.uk/~phao/IP/Images/](http://www.eecs.qmul.ac.uk/~phao/IP/Images/)
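For reference, the PSNR measure defined above is straightforward to compute:

```python
import numpy as np

def psnr(x, x_org):
    """Peak signal-to-noise ratio of a denoised image x, as defined above."""
    mn = x.size   # number of pixels m * n
    return 10.0 * np.log10(mn * x.max() ** 2 / np.linalg.norm(x - x_org) ** 2)
```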
We can observe from Figure 1 that PPDG outperforms ADMM and PDHG in terms of denoising capability, and from Table 1 that PPDG is superior to ADMM and PDHG in terms of running time. Moreover, Table 1 shows that PPDG is comparable to ADMM in terms of PSNR. Finally, Figure 2 shows that PPDG is superior to ADMM with respect to image detail processing.
### Deep learning for image classification
In this subsection, we employ a one-hidden layer deep neural network for image classification using the dataset CIFAR-10, which consists of \(60000\) color images of size \(32\times 32\) divided into \(10\) classes. Within this dataset, \(50000\) images have already been designated for training, while the remaining \(10000\) are reserved for testing. The one-hidden layer neural network consists of an input layer, a hidden layer and an output layer. The hidden layer size is \(175\).
We adopt the following notation:
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline Algorithm & Sunflower & Peppers & Dog & Cat \\ \hline PPDG & \(\mathbf{27.5878}|\mathbf{3.8}\) & \(26.7780|\mathbf{2.3}\) & \(\mathbf{26.2217}|\mathbf{1.6}\) & \(\mathbf{28.0326}|\mathbf{1.4}\) \\ ADMM & \(27.2407|\mathbf{5.3}\) & \(\mathbf{27.0784}|\mathbf{3.3}\) & \(25.4212|\mathbf{2.6}\) & \(27.1934|\mathbf{2.4}\) \\ PDHG & \(24.0718|4.6\) & \(20.0308|\mathbf{3.3}\) & \(23.4053|\mathbf{2.3}\) & \(25.8938|\mathbf{2.2}\) \\ \hline \end{tabular}
\end{table}
Table 1: PSNR and running time (seconds) of four images (in Figure 1) denoised by PPDG, ADMM and PDHG. Bold numbers are the best results.
Figure 1: The first row contains the original images; the second row represents the Gaussian noised images; the third, fourth and fifth rows are the denoised images by PPDG, ADMM and PDHG, respectively.
* \(N\): the number of input samples, \(m\): the number of neurons in the hidden layer, \(d\): the dimension of each input sample;
* \(x_{i}\in\mathbb{R}^{d}\): the \(i\)-th input sample, \(i=1,\cdots,N\), \(z_{ij}\in\mathbb{R}\): the \(j\)-th element of the actual output, \(y_{ij}\in\mathbb{R}\): the \(j\)-th element of the desired output, \(j=1,\cdots,10\);
* \(w_{kl}\): the weights of the connections between the input nodes and the hidden layer, \(v_{jk}\): the weights of the connections between the hidden layer neurons and the output neuron, \(b_{k},b_{j}\): the bias parameters of hidden layer and output layer, \(k=1,\cdots,m\), \(j=1,\cdots,10\);
Training the neural network amounts to obtaining the value of the model parameter \((w,b)\) such that, for each input data \(x\), the output \(z\) of the model predicts the real value \(y\) with satisfactory accuracy. To achieve this, it is required to solve the following finite-sum optimization problem,
\[\min_{w,b}\frac{1}{N}\sum_{i=1}^{N}f_{i}(w,b)+\lambda(\|w\|_{1}+\|b\|_{1}), \tag{4.3}\]
where
\[f_{i}(w,b)=-\sum_{j=1}^{10}y_{ij}\log(z_{ij})\]
is the cross-entropy loss function,
\[z_{ij}=g_{2}\left(\sum_{k=1}^{m}v_{jk}g_{1}\left(\sum_{l=1}^{d}w_{kl}x_{il}+b_ {k}\right)+b_{j}\right),\]
and \(\lambda>0\) represents a regularization parameter. Here, \(g_{1}\) is the sigmoid activation function and \(g_{2}\) is the softmax activation function.
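Assembled in code, the forward pass and the per-sample loss of this one-hidden-layer network take roughly the following form; the array shapes and variable names are illustrative, not taken from the experiments.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def softmax(t):
    e = np.exp(t - t.max())       # shifted for numerical stability
    return e / e.sum()

def loss_i(w, v, b_hid, b_out, x_i, y_i):
    """Cross-entropy loss f_i(w, b) for one sample (illustrative sketch).

    w: (m, d) input-to-hidden weights; v: (10, m) hidden-to-output weights;
    b_hid: (m,) and b_out: (10,) biases; x_i: (d,) input; y_i: (10,) one-hot.
    """
    hidden = sigmoid(w @ x_i + b_hid)   # g1 applied at the hidden layer
    z = softmax(v @ hidden + b_out)     # g2 applied at the output layer
    return -np.sum(y_i * np.log(z))     # f_i = -sum_j y_ij log(z_ij)
```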
Figure 2: Images (a), (b), (c) and (d) are the partial enlargements of (i), (m), (l) and (p) in Figure 1, respectively.
Choose a normalized vector drawn from the standard normal distribution as the initial point \(x^{0}\), and set \(\lambda=1e-4\). Combining with SAGA, SVRG and SARAH estimators, we apply SPPDG (Algorithm 2), the stochastic linearized ADMM (SADMM) proposed recently in [6] and the stochastic proximal gradient method (SPG), to train the neural network on the training set. After training, we evaluate the classification performance of this neural network on the test set. The numerical results are presented in Figure 3, which displays the training loss, training error and test error as functions of the total number of propagations for all methods. As the figure shows, all three SPPDG methods clearly outperform the methods associated with SADMM and SPG, and the gradient estimator SVRG appears more competitive than SAGA and SARAH.
### Nonconvex graph-guided fused lasso
In this subsection, we consider the following problem
\[\min_{x\in\hat{\mathcal{D}}}\frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+\lambda\|Ax\|_{p}^{p}. \tag{4.4}\]
Here, \(A=[V;I]\in\mathbb{R}^{m\times n}\), where \(V\in\mathbb{R}^{n\times n}\) is the sparsity pattern of the graph obtained by sparse inverse covariance estimation (see [29]), the set \(\hat{\mathcal{D}}\) is defined as \(\hat{\mathcal{D}}=\{x\in\mathbb{R}^{n}:\|Ax\|_{\infty}\leq r\}\), \(f_{i}(x)=1-\tanh(b_{i}\cdot\langle a_{i},x\rangle)\) is the sigmoid loss function, which is nonconvex, and \(\|u\|_{p}\), \(p\in(0,1)\), is the \(\ell_{p}\)-norm. Evidently, problem (4.4) can be categorized as an instance of the fully nonconvex finite-sum optimization (1.2) with \(h(u)=\lambda\|u\|_{p}^{p}+\mathcal{I}_{\mathcal{D}}(u)\), \(\mathcal{D}=\{u:\|u\|_{\infty}\leq r\}\). In what follows, we choose \(\lambda=1e-4\) and \(r=1\).
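For reference, the componentwise loss and the regularizer of (4.4) are simple to code up; the gradient formula below follows from \(\frac{d}{dt}\tanh(t)=1-\tanh^{2}(t)\), and the helper names are illustrative.

```python
import numpy as np

def sigmoid_loss(x, a_i, b_i):
    """f_i(x) = 1 - tanh(b_i * <a_i, x>) and its gradient (sketch)."""
    t = np.tanh(b_i * np.dot(a_i, x))
    return 1.0 - t, -(1.0 - t ** 2) * b_i * a_i

def lp_regularizer(u, lam, p=0.5):
    """lam * ||u||_p^p for p in (0, 1); p = 0.5 matches the experiments."""
    return lam * np.sum(np.abs(u) ** p)
```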
In this experiment, we test problem (4.4) by datasets CINA3, MNIST4, gisette[26] and covtype5, which are summarized in Table 2. At the \(k\)-th iteration of SPPDG, we also apply (4.2) to compute an approximate point of the extended proximal mapping \(\text{prox}_{h^{*}}^{M}(y^{k}+M^{-1}A(2x^{k+1}-x^{k}))\) where \(\text{prox}_{\beta h^{*}}(\cdot)\) is calculated according to Example A.3.
Footnote 3: The dataset is available in [http://www.causality.inf.ethz.ch/data/CINA.html](http://www.causality.inf.ethz.ch/data/CINA.html)
Footnote 4: The dataset is available in [http://yann.lecun.com/exdb/mnist](http://yann.lecun.com/exdb/mnist)
Footnote 5: The dataset is available in [https://datahub.io/machine-learning/covertype](https://datahub.io/machine-learning/covertype)
The numerical result is displayed in Figure 4 with initial point \(x^{0}=0\), \(p=0.5\) and a fixed mini-batch sample size \(\lfloor 0.01N\rfloor\). Due to the fully nonconvex structure and the existence of the linear operator \(A\), problem (4.4) cannot be solved by SADMM and SPG directly as in the previous subsection. However, we can observe from Figure 4 that SPPDG (Algorithm 2 with gradient estimators SAGA, SVRG and SARAH)
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Dataset & Data \(N\) & Variable \(n\) & Density \\ \hline CINA & 16033 & 132 & 29.56\% \\ MNIST & 60000 & 784 & 19.12\% \\ gisette & 6000 & 5000 & 12.97\% \\ covtype & 581012 & 54 & 22.12\% \\ \hline \end{tabular}
\end{table}
Table 2: Datasets used in graph-guided fused lasso.
Figure 3: Comparison of SPPDG, SADMM and SPG for image classification using a one-hidden-layer neural network.
SARAH) is able to solve problem (4.4) efficiently. It is also shown that, when running the same number of iterations, SPPDG-SVRG outperforms both SPPDG-SAGA and SPPDG-SARAH significantly. However, with the same CPU time, the performance of SPPDG-SAGA is better than that of SPPDG-SVRG and SPPDG-SARAH.
Figure 4: SPPDG for nonconvex graph-guided fused lasso.
## 5 Conclusion
In this paper, we study first-order primal-dual methods for composite optimization and nonconvex finite-sum optimization in the fully nonconvex setting. Inspired by existing first-order primal-dual methods for convex optimization, and with the help of conjugate duality, we propose a preconditioned primal-dual gradient method and its stochastic approximate variant. The proposed methods are shown to be effective on a variety of nonconvex applications.
Motivated by the rapid development of convergence analysis for various nonconvex optimization algorithms in recent years, we have derived convergence results for the proposed algorithms under the Kurdyka-Łojasiewicz condition. Notably, the analysis of the stochastic algorithm for finite-sum optimization relies heavily on the properties of the variance-reduced gradient estimators. Consequently, it is not trivial to extend the techniques in this paper to the convergence analysis of stochastic algorithms for general nonconvex stochastic optimization problems, which is left to future research.
## Acknowledgement
This work was partially supported by the National Key R&D Program of China (No. 2022YFA1004000), the Major Key Project of PCL (No. PCL2022A05) and the National Natural Science Foundation of China (Nos. 12271076 and 11271278).
In this paper, we first introduce a preconditioned primal-dual gradient algorithm based on conjugate duality theory. The algorithm is designed to solve composite optimization problems whose objective consists of two terms: a continuously differentiable nonconvex function, and the composition of a nonsmooth nonconvex function with a linear operator. Compared with existing nonconvex primal-dual algorithms, exploiting the conjugate dual removes the need to compute proximal mappings of nonconvex functions. Under mild conditions, we prove that any cluster point of the generated sequence is a critical point of the composite optimization problem. In the context of the Kurdyka-Łojasiewicz property, we derive convergence of the iterates and their convergence rate. Next, for nonconvex finite-sum optimization, the preconditioned primal |
2309.14379 | Machine-assisted mixed methods: augmenting humanities and social
sciences with artificial intelligence | The increasing capacities of large language models (LLMs) present an
unprecedented opportunity to scale up data analytics in the humanities and
social sciences, augmenting and automating qualitative analytic tasks
previously typically allocated to human labor. This contribution proposes a
systematic mixed methods framework to harness qualitative analytic expertise,
machine scalability, and rigorous quantification, with attention to
transparency and replicability. 16 machine-assisted case studies are showcased
as proof of concept. Tasks include linguistic and discourse analysis, lexical
semantic change detection, interview analysis, historical event cause inference
and text mining, detection of political stance, text and idea reuse, genre
composition in literature and film; social network inference, automated
lexicography, missing metadata augmentation, and multimodal visual cultural
analytics. In contrast to the focus on English in the emerging LLM
applicability literature, many examples here deal with scenarios involving
smaller languages and historical texts prone to digitization distortions. In
all but the most difficult tasks requiring expert knowledge, generative LLMs
can demonstrably serve as viable research instruments. LLM (and human)
annotations may contain errors and variation, but the agreement rate can and
should be accounted for in subsequent statistical modeling; a bootstrapping
approach is discussed. The replications among the case studies illustrate how
tasks previously requiring potentially months of team effort and complex
computational pipelines, can now be accomplished by an LLM-assisted scholar in
a fraction of the time. Importantly, this approach is not intended to replace,
but to augment researcher knowledge and skills. With these opportunities in
sight, qualitative expertise and the ability to pose insightful questions have
arguably never been more critical. | Andres Karjus | 2023-09-24T14:21:50 | http://arxiv.org/abs/2309.14379v1 | Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence
###### Abstract
The increasing capacities of large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences, augmenting and automating qualitative analysis previously typically allocated to human labor. This contribution goes beyond simply reporting on LLM task performance, proposing a systematic mixed methods framework to harness qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. It builds on the mixed methods designs of quantification or integration, and feature analysis from linguistics. 16 machine-assisted case studies are showcased as proof of concept, in 9 diverse languages and across multiple disciplines. Tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, historical event cause inference, detection of political stance, text and idea reuse, genre composition in literature and film; social network inference from text, historical text mining, automated lexicography, missing metadata augmentation, and multimodal visual cultural analytics. They are based on novel test data as well as direct replications of past research. It is also shown how to replace opaque topic modeling, popular as a "distant reading" method, with hypothesis-driven topic classification. In contrast to the focus on English in the emerging LLM applicability literature, many examples here deal with scenarios involving smaller languages and historical texts prone to digitization distortions. In all but the most difficult tasks requiring expert knowledge, (already currently available) generative LLMs can demonstrably serve as viable research instruments and an alternative to human-only analytics. LLM (and human) annotations may contain errors and variation, but the agreement rate can and should be accounted for in subsequent statistical modeling; a bootstrapping approach is discussed. The replications among the case studies illustrate how tasks previously requiring potentially months of team effort and complex computational pipelines, can now be accomplished by an LLM-assisted scholar in a fraction of the time. Importantly, this approach is not intended to replace, but to augment researcher knowledge and skills. With these opportunities in sight, qualitative expertise and the ability to pose insightful questions have arguably never been more critical.
## 1 Introduction
Developments in generative large language models (LLMs, sometimes dubbed as AI) have broadened their applicability to various research tasks. Of particular interest to the humanities and social sciences (H&SS) is the capacity to use them as on-demand instructable classifiers and inference engines. Classifying texts or images for various properties has been available for a while in the form of supervised machine learning (ML). Yet the necessity to train such models (or
tune pretrained models) on sufficiently large sets of already labeled examples may have been one factor hampering the wider adoption of ML tools as research instruments in H&SS. Unsupervised learning approaches like word or sentence embeddings and topic modeling do allow for explorative approaches, but often necessitate complex text preprocessing, convoluted pipelines to use for confirmatory inference, and as latent models, are typically opaque to interpret (see examples below).
Zero-shot learning, as implemented via instructable, generative pretrained LLMs, offers the best of both worlds, if used in a principled manner. If applied as an on-demand classifier, the classified features or inferred variables constitute quantitative data, which in turn necessitates systematic statistical modeling, to make sure the eventual claims and interpretations remain rigorous. This contribution takes a step in that direction by proposing a framework consisting of a qualitative annotation step (by humans or machines) and a subsequent quantitative modeling step. As discussed below, it can therefore be situated as a mixed methods approach. It is shown via case studies to be applicable to a wide range of tasks across a diverse set of disciplines. This includes traditionally qualitative areas, and those dealing with data like literary or historical text. While here the focus is on one particularly flexible (quantitization-driven) design, the machine-assisted components may well be applicable in other mixed designs as well. It is not directly applicable to (purely) qualitative scholarship, in the sense that the outputs of the coding step necessitate subsequent quantification. But as discussed below, purely qualitative approaches may no longer be optimal in many areas dealing with empirical data, given the availability of more systematic frameworks, and now also scalability via machine-assistance.
This proposal involves substituting some aspects of traditionally human research labor with automation. However, qualitative thinking and expert knowledge are absolutely central to the framework's successful application. To be meaningful, it necessitates qualitative work including hypothesis, coding scheme and prompt design, expert annotation of evaluation data sets, and interpretation and contextualization of the final quantitative results (e.g. regression coefficients, clusters, frequency intervals, or other numerical data). The framework employs machines such as LLMs as tools -- or in a sense, (narrow) artificial intelligence assistants -- to augment expertise, enabling more scalable research, while fostering replicability and transparency through proposed good practices in unitizing, analysis and methods documentation.
### Related LLM applicability research
This section offers a brief overview of recent LLM applicability research that this work builds upon and complements. This term is used here to denote the exploration of the feasibility and performance of pre-trained LLMs as research and analytics tools -- as distinct from the machine learning domain of LLM engineering (reviews of which can be found elsewhere).
ML and natural language processing (NLP) supported research in the (digital) humanities and (computational) social sciences is nothing new. However, until recently, a typical machine-assisted text-focused research scenario would have involved either training a supervised learning classifier (or fine-tuning a pretrained LLM, see e.g. Majumder et al. 2020; de la Rosa et al. 2023) on a large set of annotated examples for a given task, or using output vectors from a word or sentence embedding LLM like BERT (Devlin et al. 2019) for clustering or other tasks (e.g. Fonteyn 2021; Sen et al. 2023). What makes a difference now is recent advances in LLM technology, to the point that it has become feasible to use them as instructable zero-shot learners (Wei et al. 2022; OpenAI 2023) for classification and inference. In a zero-shot learning scenario, an LLM is instructed to generate output using an input (prompt) that includes in some form both the classification or generation instructions, as well as the input data. The generated outputs are then parsed and quantified as necessary. This removes the need for laborious annotation work to create large training sets for every specific niche task. A second contributing factor to recent LLM popularity, both as chatbots and classifiers, is arguably accessibility. Running very large LLMs requires significant hardware, while cloud services like that of e.g. OpenAI's GPT interfaces have made them accessible, albeit at a cost, to those who do not have access to such hardware or the skills to operate one.
All of this has attracted attention across various research communities well beyond NLP. The latter is currently in the process of (re)evaluating which previous specialized tasks (or entire research subfields) can be solved zero-shot with large enough LLMs (cf. Qin et al. 2023). Other interested parties include the humanities and social sciences. In a large benchmarking exercise involving 24 (English-language) tasks drawn from multiple disciplines, Ziems et al. (2023) show that both the slightly older FLAN-T5 (Chung et al. 2022) and OpenAI's third generation GPT models (GPT-3 and 3.5) achieve moderate to good results across their annotation and classification benchmarks. Other contributions have focused on single tasks or domains like discourse annotation (Fan and Jiang 2023), metalinguistic abilities (Begus et al. 2023), diagnosis inference
(Wang et al. 2023), political stance and affiliation detection (Tornberg 2023; Zhang et al. 2023b), text retrieval and analytics (Zhu et al. 2023), and likely many more. Gilardi et al. (2023) compare the performance of GPT-3.5 to crowdsourced workers from the Amazon Mechanical Turk platform on four (English) text classification task and find that the LLM outperforms crowdworkers on accuracy and reliability while running a fraction of the crowdsourcing costs (see also Tornberg 2023; Huang et al. 2023; Wu et al. 2023). There is also the artificial cognition strand interested in comparing machine and human behavior (Futrell et al. 2019; Taylor and Taylor 2021; Acerbi and Stubbersfield 2023).
### Related feature analytic and mixed methods research
This contribution describes a general-purpose quantitizing-type mixed methods framework -- henceforth, QMM -- where the qualitative coding of (e.g. textual) data into a fixed number of categorical or numerical variables is followed by quantitative modeling of these inferred variables. This is sometimes also spelled 'quantizing' (Fetters et al. 2013; Hesse-Biber 2010; Sandelowski et al. 2009). It is a design where the research questions are answered primarily based on the results of the quantification step (e.g. a statistical model), not the data annotation step. 'Quantitization' is used here to distinguish the process of annotating data with the explicit purpose of subsequent rigorous quantification, from annotation for any other purposes. 'Coding' is indeed frequently used in many domains, but unhelpfully also has many other meanings.
As QMM combines both qualitative analysis and quantification, it can be described as mixed methods, although it is likely not positioned in its mainstream. Much of mixed methods research is mixed mostly in the sense of using multiple data types in a single study (and therefore a method for each). These designs include e.g. sequential, concurrent, convergent, triangulating, following-a-thread (Hesse-Biber 2010; Tashakkori and Teddlie 2010; O'Cathain et al. 2010) and various unspecified designs (Huynh et al. 2019). The quantitizing design and related variants are referred to with a variety of other terms in mixed methods and related literature, including "integrated" (Tashakkori and Teddlie 2010; Creamer 2018; O'Halloran et al. 2019), "integration through data transformation" design (not to be confused with "transformative mixed methods," Mertens 2008), "qualitative/quantitative" (Young and Jaganath 2013), and "converting" (Creamer 2018). Parks and Peters (2023) propose a 'dialogical' machine-assisted framework which is similar in using NLP tools as part of the pipeline, but is not a quantitizing approach.
A similar mixed approach can also be found within content analysis (Schreier 2012), where the quantitizing is again called "coding". However, the CA community does not consider subsequent quantification of the coded variables as "a defining criterion" of the paradigm (Krippendorff 2019), as it also includes more holistic or interpretative-qualitative approaches (Hsieh and Shannon 2005), and quantification limited to counting or simple pairwise tests (cf. Morgan 1993; Schreier 2012). Similarly limited quantification can be found in discourse analysis (e.g. O'Halloran et al. 2019). Thematic analysis also does coding but typically does not apply any statistical modeling to the distributions of codes (cf. Braun and Clarke 2012; Trahan and Stewart 2013). As a machine-assisted example, 'distant reading' in digital humanities relies on word counts or topic clusters for similar purposes (Moretti 2013).
Issues with quantification and biased sampling affect rigor and replicability (Parks and Peters 2023). Approaches that make use of quantitizing in a limited way (which can easily lead to spurious results), either by using impressionistic claims like "more", "less", "some" without actual quantification, or explicitly counting but stopping at proper statistical modeling -- will all be referred to as pseudo-mixed, going forward. In contrast, the present proposal emphasizes the need for rigorous statistics in the quantitative step to estimate uncertainty and to be able to deal with issues like confounding variables, interactions, multicollinearity, and repeated measures (e.g. of same participant).
The mixed approach of combining qualitative coding with subsequent quantification is widespread, if not the default, in strands of usage-based (corpus-based, variationist, cognitive) linguistics. It is usually not referred to as mixed, however. Where it is named explicitly, "usage feature analysis" has been used, sometimes prepended with "multi-factorial" (cf. Glynn and Fischer 2010). "Behavioral profiles" refers to the same (Gries and Divjak 2009). The quantitizing, referred to here too as "coding", is typically conducted by the researchers themselves (often requiring expert linguistics knowledge). When it comes to coding schemes, standard variables from past literature may be applicable and reused, for example grammatical categories (Szmrecsanyi et al. 2014), but may also be developed for a specific research question (cf. Glynn 2010). Developing a standardized coding scheme or taxonomy for future research on a given topic can also be the sole aim, as in branches of psychology (where similar methods are also used; Hennessy et al. 2016). Unlike some of the methods mentioned above, a great deal of attention is paid to rigorous statistical procedures in the quantitative modeling step.
### This paper
The framework described in this contribution is essentially that, the QMM or feature analytic design, but extended beyond a single discipline like linguistics to the larger H&SS, and augmented with machine learning to solve the scaling problem of the laborious human-annotation bottleneck. "Machine-assisted mixed methods" or MAMM will be used as a shorthand, with particular reference to the quantitizing design. In other words, here the MAMM is a QMM that uses machines for the qualitative analysis step, but the machine-assisted component could very well also be integrated in the other mixed designs mentioned above.
As discussed above, the idea of applying machine learning to humanities or social science data as a form of mixed methods research is not new as such, nor is quantifying the resulting annotations. However, it is hoped that this contribution will be nevertheless useful in casting the machine analysis step explicitly as a qualitative (but quantitizing) analysis task, and bringing the aforementioned aspects together under a unified framework that would be easily referenceable, implementable, replicable, and teachable. As pseudo-mixed approaches appear to be still relatively common in the H&SS, sections in the Methods are dedicated to briefly summarize why statistical rigor in the quantitative step is not only recommended but necessary to avoid spurious and unreplicable results.
While previous similar frameworks have focused on one or a small set of disciplines (e.g. feature analysis in linguistics), this contribution is about compatibility: it is argued (and shown via the case studies) to be generally applicable to a large set of problems and questions. It can also readily be incorporated in discourse or content analysis research or approaches building on social semiotics (Halliday 1978). Usage of the MAMM or QMM does not exclude using additional, sequential, convergent, etc. analyses either. The focus here is on using machines to augment the quantitizing type designs (suitable for inherently qualitative data), but the same principles can be applied to automate pipelines in other designs. While the examples here are from academic research, the same approach may be used in analytics in business, marketing, media, etc. (for related work, see Dell'Acqua et al. 2023).
In summary, this contribution has three aims, seeking to fill one research gap and complement two others. Firstly, it encourages wider adoption of the QMM pipeline, and in particular the MAMM, given its inherent advantages over alternatives in many applications (see Methods). The general QMM approach can also be fruitfully applied across disciplines using just human annotators. However, there are demonstrable benefits to augmenting it with machine learning, in particular generative LLMs as on-demand instructable annotators. This augmentation translates to scalability to magnitudes of data that would be unfeasible in purely qualitative or human-quantitizing paradigms (see Discussion).
Secondly, the case studies here go beyond the otherwise fairly Anglo-centric focus of LLM applicability research, with tasks in 9 languages: Estonian, Finnish, German, Italian, Japanese, Latin, Russian, Turkish, and English in four varieties (19th century US, and 18th century UK, contemporary standard US, and nonstandard US American as used on social media).
Thirdly, the case studies complement already existing LLM applicability research discipline-wise, with a set of 16 case studies covering research tasks across roughly nine domains -- linguistics, discourse analysis, literature, media and film studies, history, social network science, lexicography -- and finally look to the future by exemplifying possible applications of multi-modal models to visual analytics. The case studies include replications of past research, synthetic examples created for this contribution, and one benchmark. While some exemplify the MAMM pipeline, others include general practical tasks like data augmentation and content filtering, and explorative tasks like literary translation analysis and critique. Their results collectively yield an answer to the question: is an artificial intelligence or machine-assisted methodology actually applicable to complex cultural and social data, and already feasible given current LLM technology?
Unlike most LLM engineering and LLM applicability research, the focus here is not on public benchmarks or shared tasks. One reason is data contamination (Aiyappa et al. 2023). The currently most capable LLMs are trained on vast datasets likely mined at least partially from the open Internet, which may well include public benchmarks, either intentionally or not. In the one NLP benchmark utilized here (one highly unlikely to cause contamination), the zero-shot approach scores 1.5-2x above the state of the art. The second reason: a focus on benchmarks would simply not be particularly representative of research practice in the proposed framework, which encourages researchers to build their own task-specific miniature test sets -- so that they can be used to estimate machine error rates and directly incorporate them in the statistical estimates in the quantification step (see Methods). They can also be used to compare and choose models, including eventual fine-tuned models for specific tasks, or personal or research group specific models (see Discussion). The code and test sets are made public though, to complement the Anglo-centric benchmarking scene and to provide a starting point for researchers working on similar topics to experiment with this approach.
The Methods section below explicates the components of the framework and how it is universally applicable to a large variety of research questions, disciplines and data types. Practical implementation suggestions will also be provided. The Results section then illustrates this through a number of case studies.
### Three disclaimers
Some disclaimers are however in order, as "artificial intelligence" has recently attracted a significant uptick of public attention and corporate hype. Firstly, this paper explicitly does not deal with topics like data copyright, related ethical issues, possible biases in the models, environmental concerns, "AGI", or AI tractability. These issues have been and will be discussed elsewhere (Bender et al., 2021; Lund et al., 2023; Rooij et al., 2023; Tomlinson et al., 2023; Liesenfeld et al., 2023; Motoki et al., 2023; Feng et al., 2023). There is also a growing literature centered around demonstrating what LLMs as such are not or should not be capable of (Asher et al., 2023; Rooij et al., 2023; Dinh et al., 2023; Barone et al., 2023; Sclar et al., 2023).
In contrast, this contribution focuses on the very pragmatic approach of using current generative LLMs and any suitably instructable future models as a class of zero-shot machine learning tools, which can be (carefully, with expert guidance and evaluation) applied to scale up otherwise laborious and time-consuming data annotation and analysis procedures, or replace otherwise complex computational pipelines. The case studies below focus on empirical performance of some already available models on realistic tasks and replications of past research, rather than their possible theoretical limitations. Whether or not the language models or machine learning classifiers in general are referred to as "AI" is not particularly important, what matters is if they work.
Second disclaimer: this is not about replacing researchers or research assistants (cf. Erscoi et al., 2023), but about augmenting, complementing and empowering them, while promoting transparent and replicable research practices, and ultimately reducing repetitive labor and leaving more time for meaningful work. To put it another way: human labor does not scale well, machines do; human time is valuable, machine time is cheap. Ziems et al. (2023) suggest that "LLMs can radically augment but not entirely replace the traditional [computational social science] research pipeline." This contribution agrees with this sentiment; indeed larger gains are likely to be made from converging expert humans and powerful machines as annotators and assistants.
Third disclaimer: the LLM test results and classification accuracies reported in the case studies should only be seen as the _absolute minimum baseline_. Prompt optimization is not the main goal here, and most prompts consisted of fairly simple 1-2 sentence instructions (see Appendix). As a rule of thumb, precise and more detailed prompts tend to yield better results. Also, LLM technology is rapidly improving, as also evident from the comparisons of two subsequent GPT (generative pre-trained transformer) versions here. The accuracy rates are therefore not the point, although they are reported -- to illustrate tasks with (already present) potential for automation, and to show how to incorporate them in subsequent statistical modeling.
## 2 A machine-assisted mixed methods framework
This contribution describes a framework for analyzing qualitative data -- text, images, etc. -- in both exploratory and confirmatory settings, readily augmentable with machine assistance for automation and scaling purposes. As a quantitizing framework, it focuses on cases where the data is qualitative but can be quantitized (also called annotating, coding) into one or more discrete or numeric variables, which can then be used in quantitative modeling, followed by qualitative interpretation. The qualitative annotation step can be completed either by (or in conjunction of) humans and machines, such as supervised learning or zero-shot learning using generative LLMs, which are now reaching human performance in many relevant tasks. A typical pipeline can be summarized as follows (illustrated visually in Figure 1).
1. Research question or hypothesis design; designing or adopting a coding scheme and instructions to apply it (may re-iterate after data collection)
2. Data collection (from corpora, databases, interviews, fieldwork, etc.)
3. Cleaning, parsing, unitizing, sampling data, as necessary, into a reasonably sized sample of reasonably sized examples
4. Qualitative annotation (quantitizing) of these examples according to the coding scheme: each example is translated into one or more categorical or numeric variables
   1. If this is delegated to artificial intelligence, then also: human annotation of a test set (unless one already exists)
5. Quantitative (statistical) analysis of the inferred variables and their relationships according to the research question(s), with control for (often present) repeated measures; quantification of uncertainty or generalizability of the results
   1. If the previous step involved AI annotators, incorporate their error rates in the uncertainty modeling
6. Qualitative interpretation of the quantitative results (regression coefficients, p-values, counts, etc.), potentially in combination with qualitative analysis of examples from data or theory.
### Data preparation and coding
Unitizing is a crucial step in data without natural units (for good practices, see the references in the Introduction and Krippendorff 2019). It can be helpful to think of units as rows in a table where the columns are the variables and the first column contains the example units. Given an art collection, a painting is likely a useful unit of analysis. There may be multiple paintings per artist, but the unit is fairly non-controversial, and the subsequent statistical analysis, even if the goal is to compare said artists, can and should take into account this grouping of units (see mixed effects modeling discussion below). In contrast, an entire book can but is unlikely to be a useful unit that can be distilled into a single data point in a variable. That is, unless the goal is just to count pages, authors, or variables applying to an entire book (but finer unitizing may well lead to better results in the latter case as well). If the interest is in content, a likely unit of comparison would be a paragraph or a sentence. The same applies to interview-based research: the unit, the data point, is unlikely to be an interview or a respondent, but all their (topic-relevant) utterances or answers (which can be grouped by respondent in the quantitative step).
A coding scheme consists of variables and their definitions (again see the literature in the Introduction for good practices). Categorical variables have preset levels (values) and definitions. The scheme may be entirely or partially derived from preceding research and theory, or engineered by the domain expert from scratch for a given study. The qualitative analysis proceeds according to this scheme, but the scheme may be, and often is in practice, iteratively improved based on small initial data samples or a pilot study (see also Schreier 2012). The number of levels of a categorical variable are fixed and typically kept to a minimum, to ease interpretation of the quantification step.
For example, if the data are newspaper texts, the unit a paragraph and the hypothesis that negative stances are foregrounded, then the variables and levels might be the dependent variable of stance (positive, negative), the main predictor the page (numeric; or binomial, front page or not), perhaps a control variable for type (news, opinion), and variables for text author and publication. The first three would be considered fixed effects and the last two random effects in the mixed effects statistical modeling sense; these would need to be ideally controlled for in the case of repeated measures (which is more often than not the case in H&SS research; see below).
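To make the unit-as-row analogy concrete, the coded data for this hypothetical newspaper example could be stored as a small data frame; all values below are invented for illustration.

```python
import pandas as pd

# One row per unit (here: paragraph), one column per coded variable.
coded = pd.DataFrame({
    "stance":      ["negative", "positive", "negative"],  # dependent variable
    "page":        [1, 4, 1],                              # main predictor
    "type":        ["news", "opinion", "news"],            # control (fixed effect)
    "author":      ["A. Smith", "B. Jones", "A. Smith"],   # random effect
    "publication": ["Daily X", "Daily X", "Weekly Y"],     # random effect
})
```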
Figure 1: A typical QMM pipeline. Qualitative elements are outlined in yellow, quantitative (statistical) procedures in blue. Steps where machine learning or other automation can be applicable are in bold font, in particular the automatable qualitative annotation step (which would make this a MAMM). Annotating a (small) additional test set is optional but strongly recommended in the case of either using multiple human annotators (e.g. crowdsourcing) or machine annotators.
### Setting up an annotator machine
While any suitable machine learning or statistical machine can be plugged into the MAMM framework, this section focuses on instructable LLMs in a zero-shot learning context as the currently most flexible option. The case studies below are not focused on model comparison (like e.g. Ziems et al. 2023; Bandarkar et al. 2023). Two models are used here, primarily OpenAI's GPT-4, with occasional comparisons with the previous-generation GPT-3.5. The model choice is mostly for practical reasons. Running inference on this cloud service is easy and fairly affordable, and does not require setting up a local LLM, which would require either hardware beyond the consumer grade, or a suitably powered and configurable cloud service. GPT-4 is also highly multi-lingual, compared to current open-source alternatives, which, based on limited attempts, did not recognize some of the smaller languages of the intended case studies. However, more and larger open-source models are continuously becoming available, and optimization research is ongoing to make them run on resource-constrained hardware (e.g. Taori et al. 2023). In the meanwhile, this section uses the cloud-based GPTs as an example, but the suggestions should be fairly generalizable.
In the case of the OpenAI models as a service, analyzing texts consists of making a call to their API (application programming interface), which consists of a number of (fairly well documented) parameters. The associated Python packages openai and tiktoken can be freely used for easy implementation, which also make it easy to keep an eye on costs (which are calculated per input and output tokens). While prompts can be tried out over the web-based ChatGPT interface, a chatbot is obviously not well suited for systematic data analysis, and likely has an above-zero temperature setting (its "code interpreter" plugin, now relabeled as 'advanced data analysis', was not found suitable either). Currently, some programming is required to use these models, both cloud and locally-run LLMs, but as this technology is gradually integrated into various software, zero-code approaches may well become available in the near future, e.g. via software like MAXQDA, or AI integration in Google Sheets and similar. This contribution comes with an open code base to foster replication and enable researchers easy experimentation with these tools.
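For instance, the token count (and hence the approximate cost) of a prompt can be checked with tiktoken before making any calls; the per-token prices in this sketch are placeholders, and current pricing should be consulted instead.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
prompt = "Instructions plus the unit of text to be classified go here."
n_in = len(enc.encode(prompt))

# Hypothetical per-1k-token prices; replace with the current rates.
price_in, price_out = 0.03, 0.06
est = n_in * price_in / 1000 + 1 * price_out / 1000  # expecting a 1-token label back
print(n_in, "input tokens; estimated cost per call: $%.6f" % est)
```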
The simplest input prompt consists of the instructions and the data to be analyzed, for example, _Tag this text as positive or negative sentiment. Text: I love ice cream_. Multiple examples can be batched into a single prompt with a request to output multiple tags, but this can easily degrade classification accuracy and induce hallucinations -- these are, after all, just text generation engines. This appeared less of a problem for GPT-4 than 3.5, and may be worth experimenting with as a cost-optimization strategy. If the input data is long, e.g. an entire book, then the window size of the chosen model must be kept in mind. Inputs that do not fit into a single prompt can be chunked and their result later aggregated. In most practical applications however, proper unitizing (e.g. paragraphs or chapters instead of entire books) is expected to yield more useful and fine-grained results anyhow.
Relevant parameters for a classification task include temperature, bias and output length (for details, see this white paper: OpenAI 2023). These may slightly differ between models but the same principles hold. "Temperature" controls the randomness or variety of the output of a neural network: in a chatbot like ChatGPT, a moderate to high value is desirable, while 0 makes sense for classification. Defining token bias allows for steering the model towards generating certain outputs with a higher probability. This is useful in a classification scenario where the output should be one of the class labels (setting their token values to 100 worked well), but should not be used where an open-ended output is expected. Finally, it is useful to limit model output to the maximum (token) length among the class labels, to make sure the model, generative as it is, does not add commentary and that the output is easy to parse (using single-token class labels where possible worked well). If using prompting strategies like chain-of-thought etc. (Zhang et al. 2023; Chen et al. 2023), longer outputs must be allowed of course, and parsed accordingly. One option is to instruct to output a machine-readable format such as JSON (for a guide, see Ziems et al. 2023). Fairly short and simple, often single-sentence prompts were used in the case studies below (prompts in the Appendix; short inputs also save on cloud service usage costs).
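Putting these parameters together, a minimal zero-shot classification call might look like the sketch below. It assumes the 0.x-era openai Python package current at the time of writing (newer versions use a different client interface) and single-token class labels; a logit bias toward the label tokens (the value of 100 mentioned above) could be added analogously via their tiktoken IDs.

```python
import openai  # assumes openai<1.0; openai.api_key must be set beforehand

LABELS = ("positive", "negative")

def classify(text, model="gpt-4-0613"):
    """Zero-shot sentiment tag for one unit of text."""
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content":
                   f"Tag this text as positive or negative sentiment. Text: {text}"}],
        temperature=0,  # no sampling randomness for classification
        max_tokens=1,   # single-token labels keep the output trivial to parse
    )
    out = resp["choices"][0]["message"]["content"].strip().lower()
    return out if out in LABELS else None  # contingency for nonstandard outputs

print(classify("I love ice cream"))  # expected: "positive"
```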
One way or another, if a generative LLM is used as a classifier, it is important to keep in mind that it is actually not a classifier in the traditional ML sense, and may generate nonstandard outputs. This issue may also arise when the LLM detects potentially sensitive or harmful content in the input and refuses to give an output (the GPT models for example all appear to have quite extensive guardrails of that nature built in), or when a cloud service simply times out. In any case, it is good practice to build contingencies for that into the pipeline.
This is the technical side of things. The most important component however -- just like in QMM designs such as usage feature analysis -- is the qualitative coding scheme design, which precedes any annotation work. In the MAMM case, this also involves translating the coding instructions into a good prompt. In turn, the prerequisite for a good scheme and variables is a good question or hypothesis. The machine-assisted step can only augment and scale up the
qualitative expertise, and the quantification step can only estimate uncertainty to make sure the claims are reasonably likely to replicate (see next section). LLM tech at its current stage is unlikely to be a substitute for this careful and systematic qualitative work, theory grounding and expert knowledge that precedes and follows the fairly straightforward data annotation process and statistical machinery in the middle.
### Engaging in rigorous statistical modeling to avoid unintentional pseudo-mixed designs
The QMM or MAMM approaches only make sense if the inferred variables are subsequently modeled in a rigorous statistical framework to estimate the (un)certainty of the estimates, be it prevalence counts or complex multivariate relationships between the variables. In a typical hypothesis-driven research scenario, this entails minimally accounting for possible confounding variables and interactions, repeated measures (not necessarily but very often applicable) and any repeated testing. None of these issues are exclusive to quantitative research, they just appear to be often ignored in qualitative designs.
There is not enough space to delve into each of these issues (and handbooks exist which do; cf. Introduction). The bottom line is, lack of control for these issues can easily lead to false, overestimated conclusions or even diametrically opposite results. One such example is Simpson's Paradox: if the underlying grouping or hierarchical structure of a dataset is not accounted for, estimates can quite literally reverse direction (see Kievit et al. 2013). Again, this is not a problem of statistics but equally applicable to qualitative research, it is just inherently impossible to systematically control for the effects of repeated measures in the latter.
Any quantitative claim (including in a mixed methods study) should be accompanied by an estimate of confidence or uncertainty and if possible, effect size. The majority of H&SS works with samples, not populations (in the statistical sense) and the samples are often small. If claims are made about the population based on a sample, it is crucial to estimate the reliability or replicability of a claimed difference, association, tendency, prevalence, etc. The smaller the samples the more important that is, to avoid making claims based on what may just be sampling noise. Estimating effect size of e.g. difference, similarity, trend, correlation is simply impossible in qualitative designs. This is however important in quantitative modeling, to avoid making sweeping yet spurious claims based on quantification which may actually describe only a small portion of variance in a (typically complex) social or cultural system with many interacting variables. This is also the reason simple pairwise tests (Chi-squared, t-test) are often an insufficient modeling solution in H&SS. More versatile models like multiple regression enable estimating the uncertainty of the main hypothesis while controlling for confounds, interactions, and in the mixed effects (multilevel) models case, repeated measures (cf. Gries 2015; Clark and Linzer 2015; Winter 2020; McElreath 2020). This may not immediately look like an issue in some disciplines. Examples include those focused on specific cases where there is no intended extrapolation to the larger world, like micro-history or biographies. Then again, even historical data points about a single person are also a sample from the population of (often not fully known) data about their life.
Repeated measures are also very common in H&SS. Survey and interview data typically contain multiple (often different number of) responses from multiple respondents. Literary or artistic examples may be sourced from multiple works from multiple authors from multiple further groupings, like eras or nationalities. Linguistic examples are sourced from corpora (with underlying sources) or elicited from multiple informants. Social media data often contains multiple data points from one user. One of the case studies below deals with a common scenario of analyzing interview data, exemplified with synthetic generated responses. In the Appendix, as an extension to this section, there is another constructed example based on the same dataset illustrating how opposing conclusions may easily be reached if the underlying respondent structure is ignored (modeling only fixed effects). The second example in the Appendix concerns confounding variables: without controlling for a relevant confound, the researcher could easily conclude support for a hypothesis -- it is shown how including the relevant variable can make the initial hypothesized effect disappear completely.
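Both points can be illustrated in a few lines. The synthetic data below (invented for this illustration) has a negative trend within each respondent, but the respondent baselines are arranged so that pooling the data yields a positive slope; a mixed effects model with a random intercept per respondent recovers the within-group sign.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
frames = []
# Three respondents: within each, y decreases in x (slope -1),
# but group baselines rise with x, producing a positive pooled trend.
for i, (x0, y0) in enumerate([(0, 0), (2, 4), (4, 8)]):
    x = x0 + rng.uniform(0, 1, 30)
    y = y0 - (x - x0) + rng.normal(0, 0.2, 30)
    frames.append(pd.DataFrame({"x": x, "y": y, "resp": f"r{i}"}))
d = pd.concat(frames)

pooled = smf.ols("y ~ x", d).fit()                       # ignores the grouping
mixed = smf.mixedlm("y ~ x", d, groups=d["resp"]).fit()  # random intercept per respondent
print(pooled.params["x"], mixed.params["x"])             # opposite signs: Simpson's paradox
```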
In summary, systematic and rigorous statistical modeling is a crucial part of applying QMM or MAMM. This is not, however, something that should be seen as a complicating factor or a requirement to make the researcher's life harder. On the contrary, it makes your life easier: instead of having to worry if an analysis is replicable and representative, the uncertainty can be estimated, enabling more principled final (qualitative) interpretation and decision-making.
### Incorporating classification error in statistical modeling
In a quantitizing research design, regardless if the annotation step is completed by humans or machines, inter-rater (dis)agreement should be accounted for in any subsequent operationalization of these new data, to avoid overconfident estimates and elevating the likelihood of making Type I errors. It is far from atypical for (also human) annotation tasks to have only moderate agreement in H&SS research. This aspect is typically ignored, even in applications of QMM like linguistic usage feature analysis, which otherwise strives for statistical rigor.
As discussed in the Introduction, no methodological element in this proposal is new on its own, including that of using zero-shot LLMs to annotate data. What appears to be not yet common is the suggestion to systematically use expert knowledge to delegate coding, analysis, or annotation tasks to machines such as LLMs, while -- importantly -- also making sure the machine error rates are incorporated in statistical modeling and uncertainty estimates. Unless a closely comparable test set already exists, this will typically require a subset of data to be manually coded by human annotator(s) for evaluating the chosen machine(s).
Annotation error can be accounted for in a number of ways. If the goal is confirmatory, then the options include using errors-in-variables (EIV) type regression models (if the variables are numerical), directly modeling measurement errors using an MCMC or Bayesian approach (Carroll et al., 2006; Goldstein et al., 2008), or using prevalence estimation techniques (often referred to as "quantification" in the machine learning literature; Gonzalez et al., 2017). Distributional shift naturally remains a problem, although there are proposals to work around that too (Guillory et al., 2021).
Keeping it simple, a bootstrapping approach is considered here which applies straightforwardly to exploratory and confirmatory scenarios and makes use of the rich error information available by computing the confusion matrix between ground truth test set labels and the machine predictions or annotations. The procedure is simple, simulating the annotation process by sampling from the confusion matrix (see Figure 2 for an illustration; ground truth is rows and predictions are columns).
1. Compute the confusion matrix \(m\), of machine vs ground truth, for the variable of interest \(V\), which has a set of categorical levels or classes \(C\)
2. For each class \(i\in C\), normalize the count distribution of predictions against ground truth ("rows" of \(m\)) as probability distributions \(d_{i}\)
3. Perform bootstrapping, creating \(N\) synthetic replicates of the data, where each predicted value \(V_{j}\) is replaced with a simulated value
   1. For each case \(V_{j}\) with a value \(C_{i}\), perform random weighted sampling from \(C\), using the corresponding \(d_{i}\) as weights
   2. After simulating all values of \(V\), perform the statistical analysis of interest (counts, prevalence, regression, etc.) on this synthetic dataset, and record the output(s)
4. Calculate \(\pm\) confidence intervals for the statistic(s) of interest based on the estimate of the error (e.g. standard deviation) in the bootstrapped outputs.
For example, if the goal is to estimate confidence of a percentage of class \(C_{i}\) in \(V\): perform bootstrapping on the raw new (classified or annotated) data some large number of times (e.g. 10000), by sampling from the test set confusion matrix for each case in \(V\), and calculating the statistic (percentage) in each replicate; then, calculate e.g. 95% confidence intervals as 1.96 \(\cdot\)\(\sigma\). The intuition is: if the outputs of the (human or machine) annotator match with the designated ground truth 100%, then there will be no variance in the sampling either, and the confidence intervals will be \(\pm\)0. The
Figure 2: A bootstrapping-driven pipeline for estimating the uncertainty in a machine-annotated categorical data variable \(V\). The crucial component is the test set for comparing human expert annotation (ground truth) and machine predictions (or that of human coders). This provides an estimate of annotator accuracy and class confusion within the variable, which can then be used in bootstrapping the confidence intervals for the statistic of interest.
more confusion in the confusion matrix, the more variance in the replicates, leading to higher error estimate or wider confidence intervals. See the first case study in the Results section for a practical demonstration. This is the simplest approach, and potentially better and more robust procedures may be considered where possible.
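For reference, a direct (if unoptimized) NumPy implementation of the four-step procedure above might look like this; the function and variable names are ours, and the confusion matrix in the usage example is invented.

```python
import numpy as np

def bootstrap_stat(pred, cm, classes, stat, n_boot=1000, seed=0):
    """Bootstrap a statistic over machine-annotated labels, propagating annotator error.

    pred:    predicted class labels for the full dataset (list of strings)
    cm:      test-set confusion matrix, rows = ground truth, cols = predictions
    classes: class labels in the order used by cm
    stat:    function mapping an array of labels to a number (e.g. a class share)
    """
    rng = np.random.default_rng(seed)
    probs = cm / cm.sum(axis=1, keepdims=True)   # row-wise distributions d_i
    idx = [classes.index(p) for p in pred]
    outs = []
    for _ in range(n_boot):
        sim = np.array([rng.choice(classes, p=probs[i]) for i in idx])
        outs.append(stat(sim))                   # statistic on the synthetic replicate
    outs = np.array(outs)
    return outs.mean(), 1.96 * outs.std()        # estimate and ~95% CI half-width

# Example: share of "negative" labels, with an error-aware confidence interval.
cm = np.array([[45, 5], [8, 42]])                # hypothetical test-set confusion matrix
preds = ["negative"] * 300 + ["positive"] * 700
est, ci = bootstrap_stat(preds, cm, ["positive", "negative"],
                         lambda v: np.mean(v == "negative"))
print(f"{est:.3f} +/- {ci:.3f}")
```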
### Ensuring transparency and replicability of qualitative and machine-assisted research
One criticism that can be raised against qualitative research, quantitizing mixed methods, as well as any machine-assisted designs, is that they are not completely transparent and replicable. All of these approaches involve subjective decision making, and therefore annotator and analyst biases, and inherent stochasticity in the case of machine annotators, if models that are not 100% deterministic are used (such as the current GPTs).
The way to get around these issues and still allow for research on cultural, humanistic and qualitative data, is to strive towards maximal transparency and good open science practices in all phases of a given study (McKiernan et al., 2016; Vicente-Saez and Martinez-Fuentes, 2018; Nosek et al., 2018; Kapoor et al., 2023). In the QMM case, this includes describing and publishing the coding scheme and unitization principles (possibly as a pre-registration), the annotated or analyzed unitized data, and code or instructions to reproduce the quantitative analysis step. MAMM adds the need to publish the prompts and information about the model that was used. As open-source LLMs become more capable and available, it is not unfeasible that the open data accompanying a study would include the (potentially fine-tuned) model as well.
In cases where the source data itself is of sensitive nature and cannot be publicized, the coded variables (which typically do not reveal identities in the source) are still enough for the quantification and subsequent interpretations to be reproducible. In cases where even that would be an issue, synthetic or anonymization methods can be used to generate comparable data (James et al., 2021).
To avoid underestimating the model error rates and subsequent uncertainty estimates (discussed above), setting up the test data can follow the proven machine learning philosophy of using independent training, evaluation and test sets. In the zero-shot case, there is no training set, but any prompt engineering should be ideally completed on a separate evaluation set, only then to be tested (once) on the test set, in order to avoid overfitting and biasing confidence of the eventual analysis where the test set results will be used to estimate confidence or uncertainty.
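In code, this split can be as simple as the sketch below; the annotated set here is a toy stand-in.

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for an expert-annotated set of (text, label) pairs.
annotated = [(f"example text {i}", "pos" if i % 2 else "neg") for i in range(40)]

# Engineer and tweak prompts only on eval_set; score the frozen prompt
# once on test_set, so the reported error rates are not overfitted.
eval_set, test_set = train_test_split(annotated, test_size=0.5, random_state=42)
```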
There are large discrepancies between disciplines when it comes to open science practices. While publishing data and the concept of replicability are still practically unheard of in some, others like psychology have learned their lessons from the "replication crisis" (Shrout and Rodgers, 2018; Scheel et al., 2021). It is vital to adopt transparent practices when using the MAMM to avoid such pitfalls.
### Why use this framework?
Machine-assisted mixed methods is a proposal for analyzing qualitative data at scale. It incorporates most of the advantageous aspects of the involved approaches (traditional qualitative, quantitative) while overcoming their inherent limitations, as summarized below. Quantitative can be seen to include methods referred to in (digital) humanities as "distant reading", and qualitative includes "close reading". Mixed methods below refer primarily to the integrating or quantitizing type, like the usage feature analysis discussed in the Introduction (where pseudo-mixed is also discussed). The word "primarily" is used, as any quant research can be seen as "mixed" in the sense that everything ultimately requires a final qualitative interpretation. To summarize the pros and cons:
* Qualitative methods:
  * Typically deeply focused, can consider wider context, reception, societal implications, etc. and self-reflections by the author
  * Hard to generalize and estimate uncertainty of claims; typically hard to replicate, practically impossible to reproduce; involves inherently subjective analysis
  * Very hard to scale to large data
* Pseudo-mixed quantitizing methods:
  * Same as above; focused, contextual, reflexive, etc.
  * Systematic codes, if present, make it easier to replicate (if documented), but relationships and their uncertainty remain impressionistic
  * Otherwise same downsides as above, incl. hard to scale. False confidence in quantitative results without uncertainty modeling can lead to spurious results
* Primarily quantitative methods:
  * Applicable to big data and scalable; relationships and their uncertainty can be estimated; may be seen as more objective
  * Easier to replicate (or reproduce if data and procedures are all made available)
  * May lack the nuance and depth of qualitative analysis of meaning, context and power relationships, especially for complex societal or cultural phenomena. Only applicable to counted or directly quantifiable data types.
* Quantitizing mixed methods (e.g. feature analysis):
  * Inclusion of the qualitative step comes with most if not all benefits of qualitative-only analysis, including ability to handle virtually any human-readable data type
  * While the qualitative step involves subjectivity, it can be replicated, and the quantitative reproduced (given data and procedures); relationships and their uncertainty can be estimated
  * Hard to scale to large data
* Machine-assisted (quantitizing) mixed methods (MAMM):
  * All the benefits of qualitative analysis
  * All the benefits of mixed methods, rigorous quantification, replicability
  * Yet applicable to big data and scalable
The list above is obviously simplified and these archetypes may not describe every research scenario. Importantly however, this framework is general enough to be applicable to both exploratory and confirmatory designs, a variety of questions and data types, including free running text, regardless of discipline. It is well suited for any empirical research scenario which requires more in-depth contextualization and interpretation than basic quantification allows for, but where the size of the dataset makes manual-only analysis laborious.
Even without the machine assistance component, quantitizing mixed methods provides a systematic framework promoting replicability and rigorous quantification, that is likely more practical compared to alternatives limited in these aspects. Incorporating machine learning, in particular instructable generative LLMs, enables simplification of previously complicated computational pipelines (see case study examples below) and easy scaling to much larger datasets than before.
This includes typically "small" datasets like interviews or fieldwork data which may be in principle manageable by hand, but can still benefit from systematic coding and at least a first pass of machine annotation. It also includes approaches aimed at getting at the "bigger picture" or themes, like thematic analysis: unitizing and subsequent systematic quantitative modeling can only improve the induction of overarching themes, and unlike the pseudo-mixed practices, also help control for confounds, repeated measures issues etc.
Metaphorically: it is true that to someone with a hammer, everything looks like a nail -- it's just that this is a particularly efficient, multi-purpose hammer with robot arms. The rest of this contribution is dedicated to demonstrating that these arms already work, even given the present level of generative LLM technology (which is likely to improve).
## 3 Results of case studies
This section summarizes the results of 16 case studies. These range from brief explorations to emulated tasks based on synthetic data, to replications of existing published research. Tasks covered include explorative and confirmatory research as well as practical technical tasks such as automated data augmentation. This section has two goals: to demonstrate the applicability of (currently already available) LLMs as suitable annotator machines or "artificial assistants" in a machine-assisted mixed methods framework, and to illustrate the pipeline proposed in the Methods section with realistic research cases. The case studies rely on two models, gpt-3.5-turbo-0613 and gpt-4-0613, current at the time of writing. They will be referred to simply as GPT-3.5 and GPT-4 (OpenAI 2023).
Accuracy and Cohen's kappa are used as evaluation metrics in most case studies (summarized in Table 1). The interest here is in agreement with human-annotated ground truth, rather than information retrieval metrics such as F1. Tasks with ordinal outputs use Spearman's rho instead. The kappa is adjusted agreement, taking into account the observed agreement \(p_{o}\) and the agreement \(p_{e}\) expected by chance: \((p_{o}-p_{e})/(1-p_{e})\). While accuracy illustrates empirical performance, kappa takes into account that some tasks are easier than others due to different numbers of possible classes in a given classification task.
Most of the case studies emulate or replicate only a segment of a typical research project pipeline. This is by design, to investigate the applicability of LLMs for a variety of different tasks in different stages of research. Many of the examples in the case studies below boil down to multi-class classification. This is natural, as much of science too is concerned with measurement, classification and taxonomies, in order to make predictions and discover connections. Almost none of the tasks exhibit 100% agreement between human and machine annotations and analyses. This is not unexpected -- less because these machines have room to improve, and more because these are mostly qualitative tasks requiring some degree of subjective judgment. In many if not most cases, it would be unrealistic to expect even multiple human annotators to agree 100%. The upper bound of achievable machine accuracy can therefore be estimated by comparing the agreement of multiple human raters (which some of the cases below do).
### Confirmatory topic classification using LLMs instead of latent topic modeling
In fields like digital humanities, computational literary studies and computational social science among others, topic modeling is a commonly used tool to discover themes, topics and their historical trends in texts such as newspapers and literature. The bag-of-words LDA (Blei et al., 2003) is still commonly used, as well as more recent sentence-embedding driven methods (Angelov, 2020; Grootendorst, 2022). They are also used in what is called "distant reading" (Moretti, 2013; Jänicke et al., 2015). These methods all boil down to various forms of soft or hard clustering. While good for exploration, clustering is suboptimal for confirmatory research. Often, historical or humanities researchers may have hypotheses in mind, but these are difficult to test when they need to be aligned to ephemeral latent topics. Instead of such an exercise in reading tea leaves, one could instead classify texts by predefined topics of interest. Until now, this would however have required training a classifier (or tuning an LLM as one) based on labeled training data. Annotating such data is time-consuming -- and the easy, out-of-the-box unsupervised applicability of methods like LDA has therefore remained attractive (cf. Jelodar et al., 2019; Jacobs and Tschötschel, 2019; Sherstinova et al., 2022).

| Task | Language | Acc | Adj | Data domain | Complexities |
|---|---|---|---|---|---|
| Topic prediction | Russian | 0.88 | 0.85 | Cultural history, media | Historical, abbreviations |
| Event cause detection | Estonian | 0.88 | 0.83 | Maritime history | Historical, abbreviations |
| Interview analytics | English | 1 | 1 | Discourse/content analysis | |
| Relevance filtering | English | 0.92 | 0.82 | Text mining, history, media | Low-quality OCR |
| Text & idea reuse | Eng, Rus | 1 | 1 | History of ideas | Multilingual |
| Usage feature analysis | Eng (18th c) | 0.94 | 0.89 | Linguistics, culture | Historical |
| Semantic change | English | \(\rho\) 0.81 | | Linguistics, NLP | Historical |
| Semantic change | German | \(\rho\) 0.75 | | Linguistics, NLP | Historical |
| Semantic change | Latin | \(\rho\) 0.1 | | Linguistics, NLP | Historical |
| Semantic variation | English | \(\rho\) 0.6 | | Sociolinguistics | Social media text, emoji |
| Stance: relevance | Estonian | 0.95 | 0.91 | Media analytics | |
| Stance: polarity | Estonian | 0.95 | 0.92 | Media analytics | |
| Lit. genre detection | English | 0.8 | 0.73 | Literature | Books mix genres |
| Translation analysis, censorship detection | Eng, Italian, Japanese | 0.96 | 0.95 | Translation studies, culture | Multilingual |
| Novel sense inference | Eng, Est, Turkish | \(\sim\)1 | | Lexicography, linguistics | Minimal context |
| Data augmentation | Finnish | 0.72 | | Media studies | Minimal context |
| Visual analytics | - | * | * | Film & art, cultural analytics | Multi-modal |
| Social network inference | English | * | * | Network science, literature | Many characters, ambig. references |

Table 1: Summary of case studies in this contribution, replicating and emulating various humanities and social sciences research tasks. The Acc column displays the raw accuracy of the best-performing LLM at the task (compared to human-annotated ground truth; results marked with \(\rho\) are Spearman's rho values instead of accuracy). The Adj column shows the kappa or baseline-chance adjusted agreement where this is applicable. Open-ended results are marked with an asterisk (*).
With instructable LLMs, laboriously annotating training data is no longer necessary, and topics in textual data can be predicted instead of derived via clustering processes. Good prompt engineering is still necessary, but this is where qualitative scholars can be expected to shine the brightest. Ziems et al. (2023) worry that topic modeling may be challenging for transformer-based language models, given their context window size limitations. The latter is becoming less of an issue with newer models (the GPT-4 model used here has a window size of 8000 tokens or about 6000 words), but longer texts can always be split into smaller units and results later aggregated.
The zero-shot topic classification approach is exemplified here using a historical dataset of short Russian-language newsreel summaries from the Soviet Union. For more details on this dataset and the history of the newsreels, see (Oiva et al., 2023). In short, the dataset consists of synopses of the story segments that make up the roughly 10-minute weekly newsreel video clips from 1945-1992 (12707 stories across 1745 issues in total; the synopses are short, about 16 words on average). As part of the aforementioned collaboration (Oiva et al., 2023), an expert cultural historian predetermined a set of 8 topics of interest based on preceding research and close viewing of a subset of the reels: politics (including domestic and international relations), military and wars, science (including industrial, space and aviation progress), social (including lifestyle, arts, culture, education, health), disasters (which was not found in the actual dataset), sports, agriculture, and industry and economy (see the Appendix for the prompt and definitions; note that we used an English-language prompt despite the data being in Russian, as this yielded better accuracy in early experiments). An additional "miscellaneous" topic was defined in the prompt to subsume all other topics. Such an "everything else" or negative category is incidentally a naturally available feature in the zero-shot approach, one that would require much more complicated modeling in traditional supervised machine learning.
The expert annotated a test set of 100 stories for these topics, one topic tag per story. Two OpenAI models were applied here, GPT-3.5 and GPT-4, with preliminary experiments with various prompting strategies. In general, a single example per instruction prompt yielded the highest accuracy, 0.88 (kappa 0.85) for GPT-3.5 and 0.84 (kappa 0.8) for GPT-4, but this is of course the more expensive option when using cloud services like that of OpenAI, which charge per input and output length. While this deserves further, more systematic investigation, batching multiple examples (preceded by an instruction prompt, and requesting multiple output tags) generally seemed to reduce classification accuracy, although less so for the newer GPT-4. Batching can however be used as a cost-optimization strategy, saving on the number of times the prompt has to be parsed.
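For illustration, the core of such a zero-shot annotation loop is a single API call per unit of text; a minimal sketch in R using the httr package is shown below. The instruction string is a simplified placeholder, not the exact prompt used in this case study (which, with topic definitions, is given in the Appendix):

```r
library(httr)

# Minimal sketch of a zero-shot topic classification call via the OpenAI API.
classify_synopsis <- function(synopsis, model = "gpt-3.5-turbo-0613") {
  instruction <- paste(
    "Tag this newsreel synopsis with exactly one topic:",
    "politics, military, science, social, disasters, sports,",
    "agriculture, industry and economy, or miscellaneous."
  )
  resp <- POST(
    "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
    body = list(
      model = model,
      temperature = 0,  # minimize output variability for annotation tasks
      messages = list(
        list(role = "system", content = instruction),
        list(role = "user", content = synopsis)
      )
    ),
    encode = "json"
  )
  content(resp)$choices[[1]]$message$content
}

# e.g.: topics <- vapply(stories$synopsis, classify_synopsis, character(1))
```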
While the 88% accuracy is not perfect, it should be kept in mind that this is a 9-class problem on a historical dataset rife with archaic terms and abbreviations that may not all exist in the training data of a present-day LLM. The synopses are also short, yet may contain mentions of several themes and topics. For example, a featured tractor driver in an agricultural segment may also be lauded as a Soviet war hero or a local sports champion. In other words, it is unlikely that humans would have perfect inter-rater agreement on this task either. On qualitative inspection, we did not come across any misclassifications that would be entirely off the mark in that sense.
Following testing, GPT-3.5 was applied to the rest of the corpus of 12707 stories, producing an estimation of topics in the newsreels covering most of the Soviet period (Figure 3.A). Among the trends, there is a notable increase in the Social topic towards the end of the period. Given the uncertainty of the classifier, and the fact that there are fewer issues and therefore fewer data points in the later years, this could potentially be sampling noise. To test this, one can fit for example a logistic regression model to the period of interest (1974-1989), predicting topic (as a binomial variable, Social vs everything else) by year. This model indicates there is an effect of \(\hat{\beta}=0.064\), \(p<0.0001\): each passing year multiplies the odds of encountering a Social topic in the reels by a factor of \(e^{0.064}=1.07\).
However, this is based on the predicted topics, and not all predictions may be accurate. One way to incorporate this uncertainty in the inference would be to use bootstrapping, as discussed in the Methods section: simulate the classification procedure by sampling from the test set confusion matrix (our annotated 100 synopses), then rerun the statistical model over and over on a large number of bootstraps of the simulated data. This yields bootstrapped distributions for each statistic of interest, from which confidence intervals can be calculated. In this case, since the classifier is fairly accurate,
the 95% confidence interval around the log odds estimate is \(\pm 0.02\), and for the p-value, \(\pm 0.00002\) (in other words, the upper bound is still well below the conventional \(\alpha=0.05\)).
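One way to implement this simulate-and-refit bootstrap is sketched below; the object names are illustrative. Here `conf` is the test-set confusion matrix (true topics as named rows, predicted topics as named columns), and `pred` and `year` hold the corpus-wide predicted labels and issue years:

```r
# Minimal sketch: propagate classifier uncertainty into the regression estimate.
set.seed(1)
boot_beta <- replicate(2000, {
  # For each predicted label, draw a plausible "true" label in proportion
  # to the classifier's error profile on the annotated test set.
  sim <- vapply(pred, function(p) {
    sample(rownames(conf), 1, prob = conf[, p])
  }, character(1))
  coef(glm(I(sim == "social") ~ year, family = binomial))["year"]
})
quantile(boot_beta, c(0.025, 0.975))  # bootstrapped 95% CI for the yearly effect
```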
The same procedure can be applied to the percentage estimates on graphs like Figure 3.A: simulate, aggregate into percentages, bootstrap, infer confidence intervals. Latent topic models may still be useful for exploration and discovery, but this exercise shows that zero-shot topic prediction is a viable alternative for testing confirmatory hypotheses about topical trends and correlations.
In a limited exploratory exercise, a sample of about 200 random synopses (about 8000 words in Russian, or 16k tokens) was fed into a GPT-3.5 version with a larger context window (gpt-3.5-turbo-16k), prompting it to come up first with any number, and then with a fixed number, of general topics. By their own assessment, these lists were quite similar to the one initially produced by our expert historian.
### Historical event and cause detection
Detecting and extracting events, their participants and causes from texts, including historical documents, is of central interest not only to many fields of the humanities but also to method-oriented NLP researchers (Sprugnoli and Tonelli 2019; Lai et al. 2021). Ziems et al. (2023) experimented with applying zero-shot LLMs to binary event classification and event argument extraction in English. Here, GPT-4 is applied to detecting the causes of shipwrecking events in an Estonian-language dataset of maritime shipwrecks in the Baltic sea (part of the Estonian state hydrography database, HIS 2023). Each entry contains a description of the incident based on various sources and fieldwork (n=513, ships wrecked between 1591-2006, but mostly in the 20th century). The dataset has already been enriched by domain experts, providing a ground truth to test against. One such variable is the primary cause of the wrecking, as a term or a phrase. There were 54 unique values, which were simplified here into four groups: warfare-related causes like assaults and torpedo hits; mines (both particularly frequent during the two world wars); mechanical faults like leaks, also subsuming intentional abandonment; and broadly navigational errors, such as getting caught on shallows or in a storm or fog. Naturally, some categories may interact and contribute jointly as causes, complicating the inference task.
The descriptions used as input to the LLM range from very brief statements such as _Sunk by a mine at the mouth of Lahepere Bay on July 21, 1941_ to longer stories such as _Perished en route from Visby to Norrköping on the rocks of Västervik in April of 1936. After beaching in Gotland, Viljandi had been repaired and had set sail from Visby to Norrköping around mid-month. In a strong storm, the ship suffered damage to its rudder near Storklappen, losing its ability to steer. The injured vessel drifted onto the rocks of Sladö Island, where it was abandoned. Local fisherman Ossian Johansson rescued two men from the ship in his boat. One of them was the ship's owner, Captain Sillen. The wreck sank to a depth of 12 meters._ (this is marked as a navigation and weather related wrecking).
Figure 3: (A) Zero-shot prediction of predefined topics in the corpus of Soviet newsreel synopses. The vertical axis shows yearly aggregate percentages. Bootstrapped confidence intervals are added to the trend of the Social topic. There are fewer data in the later years, reflected in the wider intervals. (B) Wrecking causes of ships found in the Baltic sea, mostly in Estonian waters, as annotated by experts based on field notes and historical documents (left), compared to zero-shot prediction of said categories based on the same data, with bootstrapped confidence intervals on the counts. Due to fairly good classification accuracy, the counts end up roughly similar.
The model accuracy is fairly high: the (albeit simplified) primary cause prediction matches the human annotation 88% of the time (kappa 0.83). This is very good for a task where there are often multiple interacting causes and the choice of the primary one may be somewhat arbitrary. Some classes are easier to detect than others: for example, the "mine" class has a 100% recall. Figure 3.B illustrates how much a predicted distribution of causes would differ from an expert-annotated one, with bootstrapped confidence intervals added to the counts using the same approach as in the previous section on topic prediction. This exercise shows that even current LLMs are already fairly capable of completing annotation and inference tasks that would otherwise have required manual work by domain experts.
### LLM-powered interview analysis
Interview-based studies across many disciplines are often qualitative. Despite this, researchers may make approximate quantitative claims about "more", "less", "many" etc., without systematic quantification or statistical modeling of the (un)certainty of such claims (cf. Hrastinski and Aghaee 2012; Norman et al. 2021; Paasonen et al. 2023). Even in explicit mixed-methods approaches, modeling is often limited to counting coded themes or variables. This contribution encourages the usage of the more systematic QMM approach. This section demonstrates how to incorporate a machine annotator in analyzing interview data, and how to quantify the outcomes.
The data are synthetic, generated using GPT-4, which was prompted to output a variety of responses, as if uttered by college students, concerning the benefits and downsides of doing group assignments online as opposed to meeting live. This emulates a scenario where students would be interviewed about their study experiences, and the researcher has already unitized the data as relevant passages or responses, and extracted those discussing online vs live. The latter step could be done either manually, by searching for keywords, or as another machine classification step (e.g. prompting an LLM to determine if a given passage or response is relevant for a research question or not). The synthetic data includes examples such as: _You know, one of the things that bothers me about online meetings is that it's harder to have those spontaneous moments of laughter or fun that make the work enjoyable, and that's something I really miss._ In this synthetic dataset, responses are randomly grouped by "respondents" (multiple responses per student, who are also assigned an age each) and assigned to either an on-campus or off-campus living group (with a bias, to simulate a difference). The resulting data has 192 responses (rows of data) from 53 "students", where the 109 off-campus responses are split 36/73 negative-positive, and the on-campus ones 64/19. Given the simulated nature of the data, the main variable of interest, stance towards online group work, is already known (as if coded by a human annotator).
The example (admittedly simplistic) hypothesis is: controlling for age, students living on campus see more negative aspects in doing group assignments online than those off campus. This can be tested using a mixed effects binomial regression model; the random effects structure is used to take into account the repeated measures. The model can be conveniently run using the popular lme4 package in R with the following syntax:
Mixed-effects logistic regression:
\[\log\left(\frac{p_{ij}}{1-p_{ij}}\right)=\beta_{0}+\beta_{1}\cdot\text{campus}_{ij}+\beta_{2}\cdot\text{age}_{ij}+u_{j}\]
In lme4 syntax:
\[\text{online}\sim\text{campus}+\text{age}+(1\,|\,\text{id})\]
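A minimal runnable sketch of this model, assuming the coded responses sit in a data frame `d` with one row per response and columns named as in the formula (the names are illustrative): `online` (the coded stance), `campus`, `age`, and the respondent `id`:

```r
library(lme4)

# Mixed-effects binomial regression with a random intercept per respondent,
# accounting for the repeated measures (multiple responses per student).
m <- glmer(online ~ campus + age + (1 | id), data = d, family = binomial)
summary(m)  # the campus fixed effect is the log-odds estimate discussed below
```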
In the constructed model, "off-campus" is the reference category for the campus variable and "off" for the response. Living on campus is associated with a decrease in the log-odds of a positive response: \(\beta=-1.9\), i.e. the odds are multiplied by \(e^{\beta}\approx 0.14\), \(p<0.001\). Regression model assumptions were checked and found to be met. The p-value indicates that the probability of observing such an effect would be exceedingly small (0.00000000438) if the null hypothesis (no effect of living on campus) were true, so the alternative hypothesis (on-campus students don't like online group work) can be accepted.
This test was conducted directly on the synthetic data, equivalent to a scenario where a human annotated the interpretations. The same could be accomplished by an LLM instructed to determine the stance or attitude of the student towards online group assignments in each response (regardless of the overall sentiment of the response). The LLM accuracy results are easy to report here: a suitably instructed GPT-4 detected stance towards online learning from the narrative-form responses with 100% accuracy; i.e. the machine interpretations did not differ from ground truth in this case. Note that this exercise was independent of the data synthesis. The fact that GPT-4 both generated the initial data and was used for classification has no bearing on the accuracy, which likely stems from the combination of relatively clear stance expressions in the data and the ease of inferring them for GPT-4. If the accuracy was considerably lower -- in a real research scenario, measured using e.g. a small human-annotated test set -- then the error rate should be incorporated in the regression modeling to avoid biased estimates. See the Methods section for one approach to do that.
While this example focused on a confirmatory case, interview-based research could benefit from machine assistance on other tasks as well. The retrieval of relevant examples was mentioned; another may be clustering interviewees (cf. Kandel et al. 2012). This could indeed be done using e.g. latent topic models, but as in the confirmatory topic classification example above, can be approached in a more principled way by having the LLM annotate responses for specific themes of interest, and then using those for clustering.
### Social network inference from literary texts
This short section showcases the potential of using LLMs as information retrieval engines. Figure 4.A depicts a character network manually constructed from "Les Misérables" by Victor Hugo, often used as a textbook example in network science and related fields. Figure 4.B is a network of interacting characters inferred automatically from the full text of the same book, by feeding each chapter into GPT-3.5 with the prompt to list pairs of characters who directly converse in the chapter. The result may well contain some errors -- some anomalous pairs like street names and unspecific characters ("people" etc.) were filtered out post-hoc. Better results may well be achieved with better prompts and more capable models like GPT-4. Still, the result is also much richer than the smaller manual version, including non-plot characters discussed by Hugo in tangential sections of the book. This limited exercise shows that LLMs can be used for information retrieval tasks like this in H&SS contexts, while preprocessing with specialized models (named entity recognition, syntactic parsing, etc.) is no longer strictly required (cf. Elson et al. 2010).
### Relevance filtering and OCR correction in digitized newspaper corpora
Digitization efforts of historical textual data such as newspapers and books have made large-scale, computer-assisted diachronic research feasible in many domains where this was not possible before. However, finding relevant examples in vast swathes of digitized data, more so if it is plagued by optical character recognition (OCR) errors, can be a challenge. A part of the pipeline from a recent work (Kanger et al. 2022) is replicated here. The study sought to measure central trends in dominant ideas and practices of industrial societies, with a focus on the topic of nature, environment and technology, based on digitized newspapers from multiple countries.
Their pipeline for retrieving relevant examples on the topic for further analysis consisted of assembling a set of broad keywords, extracting passages where these occur, and estimating whether they match the topic (as "nature" can also refer to "human nature", etc.). They used LDA topic modeling, which required cleaning and lemmatizing the texts, and annotating the latent LDA topics for relevance. Such pipelines can be streamlined into a single operation on an LLM. The authors (Kanger et al. 2022) kindly provided a human-annotated test set of 99 excerpts for this exercise. The experiments here include both GPT-3.5 and GPT-4, and also evaluate the effect of adding an OCR-correction step before the classification step. While many of the corpus texts are fairly readable, they also contain examples such as this:
_principally to casing in \(\Rightarrow\) u j allan consolidated bonds nine Issues Siorln \(\approx\) falli and on'y two Issues galnl \(\approx\) 8 The liteti Included the 3. per cent 1942 in which lagi pa'ack were bou.ht The Syd Iii, banks lollungulabel a small gait of recent trim Arinstnatru-raleacilon in \(t\). limited \(S\), \(r\) of issues the main body of Indu- irai continued to find keen support._

Figure 4: Social networks of directly interacting characters in "Les Misérables" by Victor Hugo: a manually constructed textbook example on the left (A), and as automatically inferred using LLMs on the right (B; GPT-4 was used to infer the gender of the characters; men are blue and women are orange).
The GPT-4-cleaned version: _principally to using in Australian consolidated bonds; nine issues showing a fall and only two issues gaining. The latter included the 3 per cent 1942, in which large parcels were bought. The Sydney banks relinquished a small part of recent gains. As a natural reaction in the limited set of issues, the main body of industrial continued to find keen support._
While such operations may suffer from LLM hallucination issues, we can test if this step degrades or improves the downstream classification results.
The case turns out to be the latter. Given a single-sentence prompt to classify a given input as having mentioned "nature or environment in the biological natural world sense, including nature tourism, landscape, agriculture, environmental policy" (see Appendix), the results are as follows. Without the cleaning, GPT-3.5 gets 0.79 accuracy (0.49 kappa) and GPT-4: 0.9 (0.77). With cleaning, GPT-3.5 gets 0.82 (0.56) and GPT-4: 0.92 (0.82 kappa). This is again on a task with very limited, often historical period-specific contexts. More precise prompting would likely help. There were for example cases such as "atmosphere of natural gas [on Pluto]" and "nature provides nourishment for the newborn [fetus]" where the machines and humans disagreed.
In summary however, using zero-shot or fine-tuned LLMs may well provide a simpler and faster alternative to complex processing and annotation pipelines such as those described in Kanger et al. (2022), and obviate the need for parameterizing and carrying out mathematical operations to make embedding vectors usable (cf. Sen et al. 2023). LLMs can also assist in situations with distorted data such as OCR-processed text. The combination of an initial rough search (keywords or regex) with a follow-up LLM-based filtering step may well be a fruitful approach, as running an entire corpus through a cloud-service LLM like GPT-4 can be very costly (and time-consuming, even when using a local model).
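Such a two-step pipeline takes only a few lines; in the sketch below, a cheap regex prefilter shrinks the corpus before the costly LLM relevance check. The names and the keyword pattern are illustrative, and `classify_passage()` stands for an API wrapper of the kind sketched in the topic classification section above:

```r
# Step 1: rough keyword prefilter over the full corpus (cheap, high recall).
candidates <- corpus[grepl("nature|environment", corpus$text, ignore.case = TRUE), ]

# Step 2: LLM-based relevance filtering of the much smaller candidate set.
# candidates$relevant <- vapply(candidates$text, classify_passage, character(1))
```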
### Text and idea reuse detection
Political studies, history of ideas, cultural history, and the science of science are among the disciplines interested in the spread and reuse of ideas, texts and other content (see Chaturvedi et al. 2018; Linder et al. 2020; Salmi et al. 2021; Gienapp et al. 2023). Automated detection of reuse may be based on keywords, latent embeddings or hybrid approaches, but is considered hard for a variety of reasons (Chaturvedi et al. 2018; Manjavacas et al. 2019). While tracking verbatim reuse of an entire news article or a passage is not difficult, tracking the reuse and spread of smaller units and abstract ideas is, even more so when it crosses the boundaries of languages. A synthetic test set of pseudohistory-style blog posts is used here, modeled directly after Oiva and Ristilä (2022), who surveyed the landscape of pseudo-historical ideas and their outlets in Russian-language online spaces. The classification task involves detecting the occurrence or "reuse" of the idea that "Russians are descendants of the Huns", apparently common in such user groups.
The data was generated as follows. GPT-4 was first instructed to compile 50 short paragraphs in English on various other pseudohistorical topics drawn from Oiva and Ristilä (2022) that would include this claim, and 50 that would not. These 100 items were then modulated and distorted in a variety of ways, again using GPT-4: rephrasing the claim, inducing "OCR errors", translating into Russian -- and combinations thereof. As an example of an original text and its maximal modulation:
_It's an often-overlooked fact that all the weapons used in seventeenth-century Europe were produced by the Russians. This massive weapons production and export reflect an advanced civilization, attesting to the fact that Russians are descendants of the Huns. It's a narrative that resists the distortions of history, reaffirming Russian heritage._ This becomes:
_Zho nagagn
also catch such items. In summary, as shown here, text and idea reuse detection is very much feasible using instructable LLMs such as GPT-4, including cases of idea transfer across languages.
### Linguistic usage feature analysis
As discussed in the introduction, machine learning, including large language models, has found various uses in branches of linguistics (beyond the explicitly labeled computational one). The application of LLMs in the usage feature analysis framework appears to be still novel. This case study replicates a part of the pipeline of a recent work on linguistic value construction in 18th century British English advertisement texts (Mulder et al., 2022). The researchers were interested in modifiers such as adjectives as a way of expressing appreciation in language, and in testing hypotheses about historical prevalence trends of both modifier usage and the advertised objects themselves.
The paper goes into detail about the process of developing the categories of Evaluative and Descriptive modifiers via systematic annotation exercises, and of normalizing spelling in the historical texts (heterogeneous by nature and plagued by OCR errors) via a process involving edit distance metrics, word embeddings and manual evaluation. While the project cleverly utilized computational tools, it is evident that no small amount of manual effort was expended on it. Most of such manual work can be streamlined and automated using zero-shot LLMs. As shown above in the relevance filtering and text reuse sections, models like GPT-4 are quite capable both at fixing low-quality OCR and at working with OCR-distorted texts.
Replicating the annotation step consisted of instructing GPT-4 to detect whether a given phrase such as _servants stabling_ or _fine jewelry_ is objectively descriptive or subjective (evaluative) in nature. The model achieves strong agreement with the human annotations in the paper (accuracy 0.94, kappa 0.89). For context, in the first iteration of the annotation process, the paper reports the kappa agreement between the two researcher-annotators to have been 0.84. This is clearly not an easy task either and may require subjective decisions; e.g. _servants horse_ is tagged as objective yet _gentleman's saddle_ as subjective in their final dataset, which may be debatable. This exercise used an example from the lexical semantics domain with links to cultural history, but the same approach could equally be used to automate or augment linguistic feature analyses in domains like grammar and syntax (cf. Beguš et al., 2023; Qin et al., 2023; Beuls and Van Eecke, 2024).
### Lexical semantic change detection
Unsupervised lexical change or shift detection is a task and research area in computational linguistics that attempts to infer changes in the meanings of words over time, typically based on large diachronic text corpora (Gulordava and Baroni, 2011; Hamilton et al., 2016; Dubossarsky et al., 2019). Data from the domain of historical linguistics or annotated test sets may be used to evaluate different approaches (Schlechtweg et al., 2018). Such results may be of interest to lexicologists, to NLP scientists looking to improve the robustness of their language models, and to linguists looking to understand language change. Schlechtweg et al. (2020) report on a shared task at the SemEval 2020 conference, where a large number of competing approaches were pitted against an annotated test set covering four languages and centuries of language change. There were two subtasks: 1) binary classification, to determine which words have and have not lost or gained senses between the given time periods, and 2) a graded change detection task, which was evaluated by comparing the rankings of the test words, ranked according to how much they had changed. There were 27-38 test words per language.
This sparked follow-up research in the field: for example, while type-based word embeddings were (somewhat surprisingly) more successful in that task than more recent token-based (contextual, BERT-like) models, later research has shown how to adapt LLMs to such tasks (Rosin and Radinsky, 2022). The latter is the highest-scoring approach on subtask 2 reported since the original controlled shared task, on the same test set, according to a recent large-scale survey on the topic (Montanelli and Periti, 2023).
As a simple experiment setup, GPT-4 was instructed to determine if the meaning of a given target word in two example sentences is either the same, closely related, distantly related or unrelated (see Appendix). This is loosely based on the DURel schema used in annotating original test data to produce the gold standard classes and rankings (cf. Schlechtweg et al., 2018). The task dataset contains pairs of moderately sized, randomly sampled subcorpora for each language, representing two distinct time periods each (e.g. 1810-1860 vs 1960-2010 from the Corpus of Historical American English). The procedure involved sampling 30 sentence pairs for each word in the test set (with replacement, as not all words would occur frequently enough). For the classification task, a threshold of 2 or more "unrelated" judgments was used to
indicate that a sense has emerged or disappeared (an optimized threshold might improve the results).
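The aggregation from pairwise judgments to the two subtask outputs is then straightforward; a minimal sketch, assuming a data frame `judgments` with one row per sampled sentence pair, a `word` column, and a numeric `rating` column (1 = unrelated ... 4 = identical, loosely following the DURel scale; names and scoring are illustrative):

```r
# Minimal sketch: turn pairwise sense judgments into the two subtask outputs.
agg <- aggregate(rating ~ word, data = judgments, FUN = function(r) {
  c(unrelated = sum(r == 1), mean = mean(r))
})
binary_change <- agg$rating[, "unrelated"] >= 2  # subtask 1: sense gained or lost
graded_change <- -agg$rating[, "mean"]           # subtask 2: less relatedness = more change

# Evaluation against the gold standard:
# mean(binary_change == gold_binary)                    # accuracy
# cor(graded_change, gold_graded, method = "spearman")  # Spearman's rho
```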
In the binary classification task, this simple zero-shot approach, based on evaluating just a handful of examples, performs as well as the best model reported in the SemEval task in English (70% accuracy; Figure 5.A). It is just above the random (majority) baseline but below SOTA in German; and practically at random for Latin. In the second, graded change subtask however, it goes well beyond the best SemEval result for English (\(\rho=0.81\) vs \(0.42\), an almost 2x improvement; Figure 5.B). It also surpasses the more recent Rosin and Radinsky (2022) LLM-based architecture, which had a \(0.52\) correlation with the test set. In German, the result of \(0.75\) is between that and the best SemEval model (\(0.76\) and \(0.73\), respectively). Judging by the trend, it may improve if given more than just \(30\) examples (the SemEval models used entire training corpora; better prompting may also increase scores). Latin at \(0.1\) performs below both comparisons, but above random (which, given that this is a Spearman correlation, would be \(0\)). For further context, other specialized LLM fine-tuning architectures have reached as high as \(0.74\) on the same task in a different Indo-European language, Spanish (Zamora-Reina et al., 2022).
There are two takeaways here. Zero-shot classification, using a large enough generative LLM, can perform on par with or surpass purpose-built architectures based on smaller LLMs or embeddings, while requiring minimal effort to carry out. Setting up this experiment here required writing a one-sentence prompt to be iterated with the example pairs on the GPT-4 API. In contrast, the authors of the models featured in the SemEval task paper (Schlechtweg et al., 2020) clearly put no small amount of work into developing their various embedding, ensemble and LLM based architectures (each spawning at least one paper of their own). Rosin and Radinsky (2022) is a full-length paper in a high-ranking NLP conference. On the flip side, pretrained instructable LLMs are only as good as their training data. Clearly, there is not enough Latin in GPT-4 for it to perform well here.
### Challenging linguistic data annotation
Recent work has shown that large enough LLMs like GPT-4 can perform at near human annotator level in various linguistic and textual tasks (Begus et al., 2023; Gilardi et al., 2023; Fan and Jiang, 2023; Huang et al., 2023; Qin et al., 2023; Ziems et al., 2023). One such use case is reported here, using a setup similar to the lexical change detection case study above. In a separate study focusing on linguistic divergence in US American English (Karjus and Cuskley, 2023), we looked into modeling differences between two groups of users, those aligned with the political "left" and those with the "right". The data was mined from the social media platform Twitter (now "X"). We experimented with using word embeddings of the type that performed well in the shared task discussed above (Schlechtweg et al., 2020), as well as annotating a small dataset by hand following the DURel framework (cf. Schlechtweg et al., 2018) mentioned above for evaluation purposes. This involved comparing the usage and therefore meaning of a target word, phrase or emoji in contexts derived from the tweet corpus, for example (the examples have been rephrased here in order to preserve author anonymity):
_We have a kitten right now who is in a bad condition, need to get him to a **vet**, got many more here like this --_ compared to _-- This is a president that knows how to withdraw forces when necessary. Perhaps if more **vets** ran for office we would have people in charge who can do what is needed._
Figure 5: Lexical semantic change detection using GPT-4 on two tasks, binary classification (A) and graded change (B), in three languages. The trend lines illustrate how well the zero-shot approach performs given an increasing number of example pairs (bootstrapped average values). The top results from the SemEval task are highlighted with solid lines. The gray lines are random baselines for binary classification. A later LLM-based result in (B) is shown with the dotted line. GPT-4 performs best in English as expected, even surpassing past approaches on the second task.
We also applied GPT-4 to the same annotation task in the role of a 'third annotator'. There were 8 target words and emoji, three comparison sets for each to determine difference as well as in-group polysemy; 320 pairs in total. The two human annotators had a good agreement rate of \(\rho=0.87\) (measured in Spearman's rho, given the ordinal DURel scale). GPT-4 achieved moderate agreement of \(\rho=0.45\) with one and 0.6 with the second annotator. The lower rate compared to the human inter-rater agreement was partially affected by the emoji in the test set, which the humans also found difficult to annotate. There was also very little context to go on, and social media texts can include rather non-standard language and abbreviations.
This exercise shows that while LLMs have become good enough for many textual and linguistic annotation and analysis tasks, it is important to check their accuracy rates against human annotator and preferably expert judgments. This exercise did not involve iteratively improving the simple single-sentence prompt -- better agreement may be achieved with more detailed, and potentially iterative or step-wise prompting (Chen et al. 2023a).
### Stance and opinion detection
In a recent paper (Mets et al. 2023), we investigated the feasibility of using pretrained LLMs for stance detection on socio-politically complex topics and in lower-resource languages, on the example of stances towards immigration in Estonia. Estonian is not low-resource in the sense that written corpora and some NLP tools are available, but given that the number of speakers is small (the population of Estonia is 1.3M), the amount of training data available is limited compared to English or German. We experimented with fine-tuning the previous generation of BERT-class models (Devlin et al. 2019) on a hand-annotated dataset of thousands of examples of pro/against/neutral stances towards immigration. The best-ranking fine-tuned RoBERTa model performed on par with a zero-shot GPT-3.5 approach (which had just come out, in late 2022). Naturally, zero-shot is a much cheaper alternative, obviating the need for costly manual training set construction and LLM fine-tuning (which either requires beyond consumer-grade hardware or paying for a cloud service). Emerging LLM applicability research has reported similar results (Zhang et al. 2023a; Gilardi et al. 2023).
In another upcoming work (Karjus in prep), we report on a collaboration with the Estonian Police and Border Guard Board on a cross-sector project to analyze large media datasets to determine societal stances towards the institution of police and police personnel. Estonia is a multilingual society: while the media primarily caters to the Estonian-speaking majority, there are newspapers, TV and Radio stations in Russian as well as outlets with news translated into English. This necessitates a multilingual approach. We apply a pipeline similar to the immigration case study: a first pass of keyword search across corpora of interest followed by LLM-based filtering of the found examples for relevancy, and LLM-powered stance analysis applied to this filtered set. Finding contextually relevant examples from simpler keyword matches is crucial for accurate stance detection. For example, if the target of interest is Estonian Police, articles discussing police news from other countries, metaphorical expressions ('fashion police') and fictional contexts (films featuring police) should be excluded.
While in the recent past accurate stance detection or aspect-based sentiment analysis would have required complex machine learning pipelines (Küçük and Can 2020; Nazir et al. 2022; Rezapour et al. 2023) and model tuning, this can now be solved with zero-shot LLMs. We annotated a 259-sentence test set in Estonian; there were 90 non-relevant examples, and of the relevant ones 31 were negative, 119 neutral, and 19 positive. This is quite representative, as most police-related reporting is neutral about the police itself. In detecting relevant examples, GPT-3.5 only gets to 76% accuracy (kappa=0.4, i.e. accounting for baseline chance; mean F1 at 0.54) but GPT-4 achieves 95% accuracy (kappa=0.9, F1=0.9). We included both the target sentence and the title of the source article in the prompt for context, but some cases are difficult, e.g. where it is only implied indirectly that the police of another country is being discussed. More context (e.g. paragraphs or a fixed larger content window) might help, but is more costly (more tokens to parse). In stance detection, GPT-3.5 agrees with human annotations at a rate of 78% (kappa=0.36; F1=0.51), while GPT-4 gets to 95% accuracy (kappa=0.88; mean F1=0.92). Again, this is quite good, as many examples are ambiguous. For example, a sentence can be overtly negative while the police is only mentioned as a neutral participant; or the police might be reported to have done their job, which could be seen as neutral or positive, depending on perspective. Yet LLMs show promise as universal NLP tools in media monitoring contexts.
### Genre detection in literature and film scenes
Computational literary studies is an emerging field within digital humanities. In their large-scale LLM applications paper, Ziems et al. (2023) discuss the computational analysis of themes, settings, emotions, roles and narratives, and benchmark some of these tasks. Instead of testing against another benchmark, a real-world study (Sobchuk and Sela 2023) is replicated here, to illustrate how instructable LLMs can be used as a simpler alternative to complex computational pipelines, which often require extensive parameterization. The latter study seeks to compare approaches to capturing and clustering thematic (genre) similarity between literary texts. This is tested by comparing how well the clustering of automatically extracted features matches a manually assigned genre system of Detective, Fantasy, Romance and Sci-Fi. They evaluate a large set of combinations of text embedding algorithms (bag-of-words, topic models, embeddings), their parameters, preprocessing steps and distance measures for the final step of clustering. The target measure is the Adjusted Rand Index (or ARI; Hubert and Arabie 1985).
Given that Cohen's kappa score is comparable to the Adjusted Rand Index (Warrens 2008), the performance of an LLM set up to classify genres can be directly compared to their clustering task results. The authors generously shared their labeled 200-book test set for this purpose. For this exercise, rather than parsing entire books, 25 random passages were sampled from each (5000 in total). GPT-3.5 was instructed to label each presented passage as one of the 4 genres (briefly defined in the prompt; see Appendix), and the assigned label for a book was simply the most frequent passage label (better results may well be achieved by parsing more data). The best-performing parameter and model combination in Sobchuk and Sela (2023) used strong preprocessing, a 300-dimensional doc2vec model (Le and Mikolov 2014), and cosine similarity. The preprocessing involved lemmatizing, named entity detection and removal, part-of-speech tagging for stopword removal, and lexical simplification (replacing infrequent words with more frequent synonyms using an additional word embedding). This combination yielded an ARI of 0.7.
Our simple zero-shot LLM approach here achieved a (comparable) kappa of 0.73 (0.8 accuracy) without any of that preprocessing (and while only judging a small subset of random passages per book, using the cheaper GPT-3.5 instead of 4). Some genres were easier than others, e.g. Fantasy had a 100% recall, while books combining multiple genres complicate the task. These results echo the message of the first case study in this section: instead of clustering or topic modeling, zero-shot learning enables direct prediction and classification, and using LLMs obviates or at least eases the need for complex processing pipelines (see also Chaturvedi et al. 2018; Sherstinova et al. 2022).
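The aggregation step here is just a majority vote over the per-passage labels; a minimal sketch (names illustrative, with `passage_labels` holding one row per classified passage):

```r
# Minimal sketch: most frequent predicted genre per book.
book_genre <- tapply(passage_labels$genre, passage_labels$book,
                     function(g) names(which.max(table(g))))
# Compare to the gold genre labels, e.g. using the kappa sketch given earlier
# (ensure both vectors are factors sharing the same set of levels):
# agreement_metrics(factor(gold[names(book_genre)]), factor(book_genre))
```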
Instead of labeling entire books with a single genre label, on-demand classification like this can instead yield a more informative distribution of genres for each work of fiction under examination. Figure 6 illustrates a related proof of concept of another application of zero-shot text classification. In a similar way as in the genre classification exercise above, two texts were split into manageable chunks: P.K. Dick's "Do Androids Dream Of Electric Sheep?", and the script of "Blade Runner" based on the former, by H. Fancher and D. Peoples (the 1981 version with the happy ending and voice-overs). The script is split up by scenes (but merging very short scene descriptions) and the book into equally sized chunks (a larger number, as the book is 3 times longer). Each segment is classified using GPT-3.5 with the same prompt as above (with the addition of the thriller class). Here things are kept simple and the classifier accuracy is not incorporated in the visualization, assuming it to be good enough for explorative purposes.
Differences between the book and the adaptation are revealed: the movie is more of a thriller with sci-fi and detective story elements, while the book delves into various other topics. Both have most of the detective elements in the first half, and romantic elements around the middle. The one segment labeled as "fantasy" does include the following: _"The donkey and especially the toad, the creatures most important to him, had vanished, had become extinct; only rotting fragments, an eyelash head here, part of a hand there, remained. At last a bird which had come there to die told him where he was. He had sunk down into the tomb world."_

Figure 6: Zero-shot classification of genre across one book and its film adaptation, split into equally-sized segments and scenes, respectively. Frames from the film are added for illustration. Differences and similarities become readily apparent, and can provide a basis for follow-up qualitative or quantitative comparisons.
This exercise is of course only a very rough approximation -- one could also take into account running time, or try to align a book and its adaptation (cf. Yi et al. 2023). Still, it illustrates the potential of using zero-shot LLMs to explore qualitative data, without the need to train a specialized classifier for genre, mood, action, etc. Multi-modal models (explored in the last subsection below) can add another dimension of zero-shot scene analytics.
### Automated literary translation analysis and a semantic edit distance
This section describes two case studies, one explorative and the other testing the accuracy of LLMs as multilingual semantic distance evaluators. The first experiment consists of automatically aligning and then qualitatively evaluating the English original and the translated Italian version of the first paragraphs of G. Orwell's "1984" (until the "war is peace, freedom is slavery, ignorance is strength" part). This involved using two tools. BERTalign (Liu and Zhu 2023) was used to split and align the sentences of the source and translation, yielding 47 sentence pairs. The second step was to prompt GPT-4 to examine each pair, outputting whether there are any significant lexical or stylistic differences, and if so, briefly explaining them. The outcome was then examined by two native Italian speaking literature scholars (see Acknowledgments). Both concluded that the alignment as well as GPT-4's inferences were largely correct and insightful, with no significant misinterpretations. While only a qualitative initial assessment, this shows that combining multilingual LLM-driven aligners such as BERTalign with generative LLM-driven interpretation can easily enable scaling up translation and literary analysis to much larger datasets than a single human researcher could manually read in their lifetime.
Since generative LLMs can be prompted to classify anything on demand, here is also an experiment to implement a kind of "semantic edit distance". String edit distances are widely used in linguistics, NLP and information retrieval among others (Manning et al. 2008; Wichmann et al. 2010). For example, Levenshtein distance operationalizes string distance as the optimal required number of additions, deletions or substitutions (of e.g. letters) to transform one string (word) into another. The distance from _dog_ to _log_ is 1 (one substitution); to _cogs_ it is 2 (one substitution, one addition). This approach works for comparing texts in the same language, but not across different languages, nor can it capture semantic equivalence if synonyms are used.
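For reference, this classic string distance is built into base R, so the toy examples above can be verified directly:

```r
# Base R's adist() computes the generalized Levenshtein distance:
adist("dog", "log")   # 1 (one substitution)
adist("dog", "cogs")  # 2 (one substitution, one addition)
```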
While machine translation algorithms or multilingual sentence embeddings can output a numeric similarity between two sentences in different languages, it would be useful to have a more fine-grained, interpretable metric, for example in fields like literary and translation studies. As an experiment, GPT-4 is prompted here to determine whether a given source and translation differ, and if so -- inspired by Levenshtein -- whether the difference is an addition, deletion or substitution. The test set is synthetic, also generated using GPT-4, prompted to output sentences from a children's story about a rabbit in a forest, in English and Japanese. 25 pairs match closely, but in 25 the rabbit is replaced with a bear in the Japanese version, in 25 a moose character is added, and in the last 25 the rabbit kisses his rabbit girlfriend in English, which is redacted in the Japanese version (emulating censorship scenarios not uncommon in the real world; cf. Inggs 2011). As an "edit distance", the translation as a text would have a ground truth total distance of 75/100. The results are very good, with accuracy at 96% across the four classes (0.95 kappa; the simple sum of non-close classes, or "distance", is 74/100, i.e. 1 off). This demonstrates the applicability of LLMs to translation studies and other scenarios which require semantic comparison of source texts to translated, altered or censored variants, going beyond simple numeric similarity scores.
### Zero-shot lexicography for novel words
In this and the next section, two practical applications of instructable LLMs are considered which can, but do not need to, be part of a larger MAMM approach. Both computational approaches, such as word embeddings or LLMs, and qualitative approaches can be used for determining the meaning of novel words such as borrowings, be it for linguistic or lexicographic purposes such as dictionary building. Here the utility of using generative LLMs as a "zero-shot lexicographer" is demonstrated, using a synthetic test set. This was generated, also by GPT-4, instructed to compile a set of unrelated sentences that would use one of these three target senses: _bear_, _glue_ and _thief_ (representing both animate and inanimate subjects, countable and mass nouns), in three languages: English, Turkish and Estonian (representing different
language families and speaker population sizes). Each target sense is instructed to be expressed with a placeholder word, _zoorplick_, instead. This was chosen as a word that would be unlikely to be in the training data. GPT-4 was also queried to guess its "meaning" and the machine came up with nothing. The context is intentionally just a sentence to make the task harder.
In the testing phase, GPT-4 was instructed to infer the meaning of the placeholder given the separately generated contexts. The LLM output was not constrained, making this an open-ended exercise. Some leeway was given: _adhesive_ would be accepted as correct for _glue_, and _burglar_, _robber_, _pickpocket_ as types of _thief_. The results, illustrated in Figure 7, are promising: _glue_ and _thief_ can be correctly inferred in all three languages already based on 3-4 examples. _bear_ is more difficult, as with only sentence-length contexts, the LLM mistakes it for various other wild predators, but accuracy improves with more examples. This exercise shows that lexicography and dictionary making can benefit from applying zero-shot generative models either in lieu of or in conjunction with specialized models or human lexicographers.
### LLMs for missing data augmentation
Working with large but incomplete databases poses a challenge across many fields. While numerical prediction-based missing data imputation approaches exist, they can lead to biased estimates. Here is an experiment with LLM-driven semantic imputation on a real dataset. In a recent public service media study (Ibrus et al. 2022), we explored a large dataset of television metadata from a broadcast management system (BMS; essentially a production database as well as an archive) of the channels of the Estonian Public Broadcasting (ERR). The study covered 201k screen time hours across 408k program entries and investigated, among other things, the dynamics of content types and production countries between 2004-2020. In a follow-up study (in prep.), this is being compared to a similar BMS dataset of neighboring Finland's public broadcaster YLE. The data is again partial, with production countries notably missing from about 23% of daily programming entries. Such missing data could be added manually, by reading through the rest of the metadata like program title, synopsis or description entries and searching for the origin of the shows and films in additional sources. This would of course be incredibly time consuming.
This is an attempt to infer the production country directly from limited metadata using GPT-4, on a randomized test set of 200 unique program entries where the true production country is actually present (15 different countries ended up in the test set). The task is complicated by the fact that the entire BMS is in Finnish, including the titles and the (very) short descriptions which are used to prompt the LLM. Besides country names like _Yhdysvallat_ (USA), smaller place names can also be translated, e.g. Lake Skadar (also Skadarsko, Scutari; in the Balkans) is referred to as Skutarijärvi in one of the synopses, which might as well be a Finnish place name. Non-Finnish names are also modified according to Finnish morphology, e.g. _Jamie odottaa tuomionsa toimeenpanoa **Wentworthin** linnassa, mutta pian häntä odottaa kuolemaakin hurjempi kohtalo. Claire panee henkensä likoon pelastaakseen miehensä sadistisen **Randallin kynsistä**._ (Jamie awaits the execution of his sentence in Wentworth Castle, but soon a fate even wilder than death awaits him. Claire puts her life on the line to save her husband from the clutches of the sadistic Randall.) This synopsis is also typical in length; the median in the test set is 206 characters.
The task is set up without constraining the output classes to a fixed set like in most other classification tasks here, to give the LLM free rein to take an educated guess. Despite these complexities and the open-ended nature of the task, the results are promising, with accuracy at 72%. Most mismatches make sense too: mixing up English-speaking countries is the most common source of errors, followed by the German-speaking, and also the Nordic countries. This illustrates the applicability of LLMs for data imputation and augmentation in complex social and media science datasets, but also the necessity to account for error rates in any subsequent statistical modeling based on the augmented data, to avoid biases, as discussed in Methods. If augmented data are added to an existing database, they should of course be transparently flagged as such.

Figure 7: Zero-shot inference of the meanings of novel placeholder words in three languages, given increasing numbers of example sentences.
### Visual analytics at scale using multimodal AI
The case studies above have focused on the text analytic capabilities of large language models. There is however a clear movement towards multimodal models, and GPT-4 is, technically, one of them (OpenAI 2023). At the time of writing, these capabilities were accessible via Microsoft's Bing AI web app, running on a version of GPT-4, according to Microsoft. Figure 8 depicts four examples of utilizing multimodal AI for image analytics. To avoid the possibility of the LLM drawing too much on "memorized" content in the training data, all images in Figure 8 were generated (using an image model, Stable Diffusion XL 1.0), except for The Matrix lobby scene still, which is a screen capture by the author.
While these are all toy examples, scaling up such questions and inquiries to large datasets holds promise of an unprecedented scale of analytics in fields like film studies, art history, visual anthropology, etc. The narrative descriptions of images may not be useful for quantification as such, but illustrate the already available capacities of this class of models, which are able to reason about multilingual multimodal jokes (Figure 8.A) and produce coherent descriptions of realistic scenes as well as abstract visuals. Detecting and segmenting objects on images (cf. Chen et al. 2023b; Kirillov et al. 2023) or inferring art styles and aesthetics (Mao et al. 2017; Karjus et al. 2023) is nothing new as such. What LLMs bring to the table is the ability to "reason" (see Figure 8.A,B) and perform zero-shot classification (8.C). The example results are not necessarily perfect: indeed, Neo is not falling backwards but doing a cartwheel, and it may be a stretch to call any of the dresses pink on the painting in C. These are however early models and will likely improve.
Figure 8: Examples of visual analytics using a multimodal LLM (Microsoft Bing AI, August 2023 version). See the Appendix for the full prompts and outputs which have been truncated here. (A) Prompt: Explain this joke. — Output: The joke is based on a wordplay that involves removing the letter B from the word “bananas” and getting the word “ananas”, which is the French word for “pineapple”. (B) This is a still from a movie. Analyze it like a film scholar. — The image depicts a scene of violence and conflict, as indicated by the presence of guns, masks, and broken glass. The scene is likely part of an action or thriller genre, which typically features such elements to create tension and excitement for the audience. The scene also suggests a power imbalance between the four men, as three of them are shooting at one, who is falling backwards. The falling man is in a vulnerable and defeated position, while the other three are in a dominant and aggressive stance. This could imply that the falling man is either the protagonist or an ally of the protagonist, who is facing a threat or a challenge from the antagonists. The scene is shot from a low angle, which enhances the sense of drama and danger. /.../ The image is slightly blurry, which suggests fast movement and adds to the feeling of chaos and urgency. The setting of the scene is a gray stone building with columns, which could indicate a public or official place, such as a government office, a bank, or a museum. /.../ The debris and broken glass scattered around the scene add to the sense of destruction and disorder caused by the shootout. (C) Make a nested list of all prominent objects and characters in this painting. — [painting [sky [clouds]] [buildings [roofs] [walls] [windows] [doors]] [street [cobblestones]] [people [man in black suit and top hat] [woman in blue dress and white hat] [boy in brown shirt and pants] [girl in pink dress and bonnet]] [animals [horse] [cart] [chickens] [rabbits]]]. (D) Analyze this painting like an art scholar. — This painting is an example of abstract expressionism, a style of art that emerged in the 1940s and 1950s in the United States. The painting does not depict any recognizable objects or scenes, but rather uses colors, shapes, and textures to create a visual impact. The painting has a dynamic and energetic composition, with a burst of colors that radiate from the center. The colors are mostly warm (orange, yellow, red) and cool (blue, green, purple), creating a contrast and a sense of tension. The black and white colors add depth and contrast to the painting, as well as suggesting light and shadow. The painting has a thick and textured surface, with visible brushstrokes that show the movement and force of the artist’s hand. /.../ The possible interpretations of this painting are varied and subjective, depending on the viewer’s perspective and imagination: /.../ an explosion or a firework /.../ a flower or a sun /.../ a galaxy or a star /.../ or the artist’s psyche or emotions.
## 4 Discussion
There is no substitute for expert human judgment, reasoning and calibration when it comes to designing, conducting and interpreting research. However, human time is a bottleneck. With suitable instructable machines, these capacities can be augmented and scaled up to enable research on much larger volumes of data than a human alone could process, enabling researchers to move to more representative sample sizes as well as to ask new questions that were perhaps not feasible in the past. Humanities, social sciences, and other neighbors of philosophy are very well positioned to make use of this opportunity, with long traditions in theory building, qualitative reasoning and expert knowledge driven analytics. These are all competencies crucial to the application of a MAMM approach, which requires solid questions or hypotheses, a well-matched coding scheme, expert-annotated test sets for evaluation, and, last but not least, meaningful interpretation of the results of quantifying potentially very large datasets.
The quantitizing mixed methods approach, as exemplified by usage feature analysis in linguistics, provides a flexible and replicable framework for analyzing qualitative data in a systematic, quantitative manner, and a more rigorous alternative to the pseudo-mixed practices discussed above. The MAMM is an augmentation of the QMM with machine learning. Here the machines of choice were instructable LLMs, used as flexible zero-shot classifiers and reasoners -- but any suitable model applies.
Continuing to use potentially pseudo-mixed designs would thus seem difficult to justify when objectively more efficient and transparent methods are available. Purely qualitative research naturally has its place; but applying qualitative designs in empirical scenarios where the actual goal is quantification and extrapolation can lead to unintentional pseudo-mixed practices and spurious results. Using (and extrapolating based on) small sub-samples is no longer necessary either, as previously time-consuming annotation and analytic tasks can now be delegated to a (suitably expert-instructed) LLM. All this is of course not to say that LLMs or ML or AI should be applied to everything everywhere all at once. Allocating research tasks to a machine is rather an optimization problem between monetary resources, human time, and output quality. However, as shown above and in recent literature, using currently available LLMs does not always decrease, and can in some cases even improve upon, human output quality (cf. Gilardi et al. 2023; Tornberg 2023).
### 4.1 Limitations
#### 4.1.1 Technological limitations
A possible technical factor limiting the applicability of current LLMs is that their instruction-training process typically involves at least some form of censorship, intended to stop the final model from generating harmful or offensive content. The extent of this varies, but it can also hinder using the model as a classifier in valid contexts: if a given input with potentially sensitive content triggers such an adverse reaction, the model may fail to respond, or return a canned refusal instead of the requested output. However, contingencies for such occasions can be built into the analytic pipeline (see Methods). Current text-centric models are also limited in applicability to multimodal data. For example, natural human communication is inherently multimodal: not just uttered words but gesture, tone and other factors play a role (cf. Rasenberg et al. 2022). This may well improve in the near future however.
As stated in the Introduction, this contribution is limited in scope in terms of prompt optimization and model comparison, which have been and are being done elsewhere. To emphasize once more, the case study results should not be considered the upper limit of the accuracy and capacity of current and future LLMs, but a baseline of what is already possible.
#### 4.1.2 Proficiency-based limitations in applying a machine-assisted framework
The SAGE Handbook of Mixed Methods in Social & Behavioral Research (Tashakkori and Teddlie 2010) lists the following hindrances to mixed methods research: "costs of conducting it, unrealistic expectations regarding an individual researcher's competence in both QUAL and QUAN methodology, complexity of putting together teams to carry out such research when groups (rather than individuals) conduct MMR, and (last, but not least) the impossibility of an individual or even a team's examining issues from different perspectives/worldviews."
The same, by extension, applies to the MAMM framework. While zero-code applications may well become available in the future, the low-code pipeline described in the Methods does require some proficiency in a suitable programming language, and in either using APIs or deploying local models. The quantification step furthermore necessitates a basic understanding of statistics and the skills to conduct modeling in a software package or a programming language like R. There are two options here: either the scholar takes the time to learn the basics of programming and statistics, or collaborates with somebody who already has these skills. However, investment in learning (and teaching students) basic programming and statistics is worthwhile, with the added effect of broadening career prospects.
#### 4.1.3 Other arguments against LLMs as research instruments, and ways forward
One critique leveled against the use of machine learning such as LLMs to annotate data is that they can be unreliable or unreplicable, because their outputs may be stochastic. This can be due to the nature of the underlying neural network model, or because updates to cloud-service LLMs may not be well documented and traceable. A related critique is that LLMs, like all trained ML models, can be biased due to skewed distributions or content in their (in commercial cases, often unknown) training data (Feng et al. 2023). This is more of an issue with closed-source models like GPT-4, where training data and procedures are not fully known. However, as pointed out by Tornberg (2023), these issues are not categorically unique to machines, and also apply to human analysts (and crowd-worker annotators, and research assistants). To put it another way, humans too are stochastic and closed source.
Engaging in analytic tasks requiring subjective judgments and reasoning can propagate and amplify biases. There is no way around this in qualitative (and by extension, mixed methods) research. The solution is to be mindful, reflect on and acknowledge this, follow good open science practices, and generally strive towards transparency and foster replicability where possible, regardless of whether machine or human analysts are used. These are unfortunately not yet seen as relevant issues in all fields of H&SS. While qualitative research can only account for uncertainty and bias informally, quantitative approaches can furthermore enable systematic accounting and modeling of biases and other issues (see Methods).
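As one concrete example of such systematic accounting: if the machine annotator's sensitivity and specificity have been measured on the expert-coded test set, aggregate proportions in the augmented data can be corrected for misclassification with the standard Rogan-Gladen estimator. A minimal sketch, assuming a binary coding scheme:

```python
# Rogan-Gladen correction: recover the true class prevalence from the raw
# machine-annotated prevalence, given the sensitivity and specificity of the
# classifier as measured on an expert-annotated test set (binary labels).
def corrected_prevalence(p_obs: float, sensitivity: float, specificity: float) -> float:
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("classifier must perform better than chance")
    p = (p_obs + specificity - 1.0) / denom
    return min(max(p, 0.0), 1.0)  # clamp to the unit interval

# E.g. an LLM labels 30% of texts positive; with 90% sensitivity and 85%
# specificity on the test set, the corrected prevalence is 0.20.
print(corrected_prevalence(0.30, 0.90, 0.85))
```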
Using open-source LLMs based on well documented training procedures and data is preferable in that it can help with transparency and replicability (cf. Liesenfeld et al. 2023). Running a fixed version of a local model can ease the replication issues that current cloud services may have, if the model is public (in a persistent manner) or can be published along with the research. However, this is not always feasible; at the time of writing this paper, the only models capable of working with the smaller languages were the commercial closed-source ones.
One might also criticize the use of LLMs in research because it can cost money -- either in the form of commercial cloud service fees or investments into hardware capable of running the bigger models locally. The ecological footprint of using LLMs has also been raised. Then again, arguably any research activity has costs and a footprint, including hiring a research assistant or crowd-workers to annotate data, or using one's own time -- the most valuable resource -- to complete a given analysis (see also Tomlinson et al. 2023).
One way or another, LLMs are already being used in research, likely in ways also described in this contribution. The cost and effort of running a typical "paper-sized" study has therefore significantly decreased in many disciplines, especially those not requiring experimentation or primary data collection. The writing process itself is likewise sped up by LLM-based tools like ChatGPT. Anecdotally: the core of a typical usage-based linguistics paper applying feature analysis consists of (in addition to the write-up) the annotation of anywhere around 500-5000 linguistic examples, often sourced from a corpus; a PhD thesis, about thrice that. Such a task can now be completed in hours using an LLM (at least at some level of quality). If a discipline (or a journal) allows itself to be flooded by low-effort, low-insight papers, this is bound to eventually erode its reputation and trustworthiness and hinder real progress. Transparent practices and replicability (including in review) have thus never been more important than now, and research evaluation should focus less on volume (as scaling is cheap now) and more on insight and intellectual contribution.
### 4.2 Future research and opportunities
While the case studies here covered a number of disciplines and task types, this contribution is by no means comprehensive in that regard. Using LLMs and eventual multimodal models as zero-shot classifiers and inference machines holds obvious potential for fields including the humanities, social sciences and cultural analytics, which routinely deal with complex textual, visual and otherwise "qualitative" data. As demonstrated in the case studies, currently available LLMs can already be plugged into research pipelines for classification and analysis as well as data processing and filtering. As shown, a single LLM prompt can often do an equally good (or better) job than complex, multi-model pre-processing pipelines -- which obviously were necessary up until very recently, to the point of sometimes being research goals themselves (cf. Chaturvedi et al. 2018; Sherstinova et al. 2022; Ash et al. 2023; Sobchuk and Sela 2023). If a researcher or research group makes regular use of LLMs, it may well make sense to deploy a custom model on in-house hardware or a private cloud and fine-tune it for their domain and most common use cases, or a set of branching models for specific cases. I would be surprised if that did not become commonplace in the near future.
There are various other domains not considered in the case studies here where machine assistance may be useful. One is experiments employing artificial languages or visual stimuli, as used in psychology, experimental semiotics, cognitive science and linguistics (Kirby et al. 2008; Galantucci et al. 2012; Tamariz and Kirby 2015; Karjus et al. 2021). LLMs could be used to generate textual stimuli, image models to produce any desired visual or artistic stimuli, and LLMs again to analyze open-ended responses. LLMs can also be used to build the codebase for the website or app used to run the experiment. These are all tasks typically shared between a research team, but allocating some to machines means, for example, that a cognitive scientist no longer needs to act as a full-stack developer, web designer, artist, and analyst, all in one. Speaking of linguistics, one case study that did not make it into this contribution due to being too preliminary consisted of inferring typological categories, such as dominant word order, from a small corpus of sentences in an 'undocumented' (artificially generated) language. Initial results with GPT-4 were promising, with it being able to reason about and infer linguistic categories of a novel 'language' that does not occur in its training data.
While a number of domains were covered by the case studies, there were no experiments in the areas of law, educational sciences or pedagogy. Like in the cases covered here, empirical data like interviews, observations and practice reports, but also laws and regulations, could be analyzed in a MAMM framework. In an educational setting, LLMs may be used for assessment and other tasks (Baidoo-Anu and Owusu Ansah 2023; Kasneci et al. 2023). This however requires in turn assessing the performance and suitability of these tools, where the QMM or MAMM is likely applicable. Another scenario where LLMs could be used for annotation or classification is when the content of the data is potentially harmful, toxic or triggering. Instead of subjecting a crowd-worker or research assistant to the task, it can now be allocated to a machine.
As discussed above, one framework that explicitly relies on (machine-assisted) quantification of qualitative data in the humanities is that of distant reading (Moretti 2013), which typically relies on interpreting word counts or latent topics. Naturally these representations are removed from the nuances of the content itself (the domain of 'close reading'). One critique of Moretti-style distant reading (Ascari 2014) states that its reliance on predefined genre labels and "abstract models that rest on old cultural prejudices is not the best way to come to grips with complexity." The MAMM presents a solution. Instead of operating with broad genre labels or abstract topic models, it is now possible to model texts as distributions or sequences (of theory-driven units) at any chosen level of granularity, while the machine component enables meaningfully processing volumes of text that would be unfeasible for human-only close reading. In that sense, distant reading can now be replaced with machine reading, which embodies the best of both worlds.
Data analysis in the form described here is however not the goal across all H&SS disciplines. For example, a semiotician or philosopher may be more interested in developing concepts, prescriptive frameworks, or discussing possible interpretations and reception of a text. If the research is purely qualitative in that manner, the MAMM framework would not be applicable (unless the design is actually pseudo-mixed, see Introduction). LLMs might still be useful as AI research assistants, for summarizing texts or filtering out contextually complex examples beyond what a keyword search would be capable of.
### 4.3 Time and efficiency gains
Ziems et al. (2023) suggest that the resources saved from allocating some tasks to LLMs would be put to good use by training expert annotators (or research assistants). This is a good point: let machines do repetitive labor and humans the more interesting and meaningful work. The time savings can be considerable. For example, the first case study on newsreels features a modest dataset of 12707 synopses totaling about 281k words. Assuming a reading speed of 184 wpm (words per minute; the average for Russian-language text; Trauzettel-Klosinski et al. 2012), merely reading through that would be over 25 hours of work, with annotation work likely taking as much again. Well over a full work week in total. That is assuming the availability of a speaker of Russian who is knowledgeable of the historical Soviet context and able to interpret the various abbreviations and references in the text. Running it through the OpenAI API was a matter of leaving the script running in the background for an hour or so -- yielding results very close to what an expert human
with said qualifications would judge (as evident from the test set accuracy).
The English translation of "Les Misérables" used in the network inference example above is about 558k words, and contains a long list of major and minor characters. Reading through that would take over 40 hours (assuming the English average of 228 wpm), and taking meticulous notes of all pairs of interacting characters in each passage would likely double that. Again easily two weeks of work. Or a few minutes or hours with an LLM.
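A sketch of the quantification end of such a pipeline, assuming the LLM has already been prompted, passage by passage, to list pairs of interacting characters and the answers have been parsed into name tuples (the pairs below are hypothetical examples); networkx then aggregates them into a weighted interaction network.

```python
# Aggregate LLM-extracted character pairs into a weighted social network.
from collections import Counter
import networkx as nx

extracted_pairs = [  # hypothetical per-passage LLM outputs, already parsed
    ("Valjean", "Javert"), ("Valjean", "Cosette"),
    ("Valjean", "Javert"), ("Marius", "Cosette"),
]

weights = Counter(tuple(sorted(p)) for p in extracted_pairs)
G = nx.Graph()
for (a, b), w in weights.items():
    G.add_edge(a, b, weight=w)  # weight = number of shared passages

for name, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{name:10s} {c:.2f}")
```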
The Corpus of Historical American English (19th-20th century; Davies 2010) is a commonly used resource in historical and computational linguistics (see references in the lexical semantic change case study). While NLP methods have been used to parse the entire corpus to infer e.g. lexical change, reading through its entire 400M words would take a human over 14 years (assuming 250 8-hour workdays per year without a lunch break). No English scholar in their right mind would undertake this, so either small samples or aggregation via NLP methods are used. With instructable LLMs, reading, annotating or meaningfully analyzing every single sentence therein is entirely feasible.
One of the largest and most prominent exercises in distant reading is likely still the "Quantitative Analysis of Culture Using Millions of Digitized Books" by Michel et al. (2011). Even just the English segment of their corpus (361B words) would be 13k years of work to read through. While purporting to launch a field of "culturomics", their results were based not on "books" but rather counts of words and phrases aggregated across books. Given a similar dataset, processing it with an LLM in a MAMM framework would indeed take more than a few hours, but would not be impossible, while enabling asking more meaningful questions than word frequencies can provide.
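These workload figures follow from simple arithmetic, reproduced here for transparency:

```python
# Back-of-the-envelope reading-time estimates used above.
def reading_hours(words: float, wpm: float) -> float:
    return words / wpm / 60.0

print(reading_hours(281_000, 184))          # newsreel synopses: ~25 h
print(reading_hours(558_000, 228))          # Les Miserables: ~41 h
print(reading_hours(400e6, 228) / 8 / 250)  # COHA: ~14.6 years of workdays
print(reading_hours(361e9, 228) / 8 / 250)  # Michel et al.: ~13,200 years
```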
## 5 Conclusions
Building on past mixed methods and linguistics research, this contribution proposed and evaluated a machine-assisted (quantitizing-type) mixed methods framework. Large language models were shown to be a flexible solution for the machine or artificial intelligence component, and were applied to 16 case studies characteristic of humanities and social science research topics. It was shown how both time-consuming human annotation and analytic labor and complex computational pipelines can be either augmented or substituted with zero-shot learning, without a significant loss in (and potentially even an improvement of) annotation quality. The MAMM framework emphasizes the need for transparency and replicability of both the qualitative and the quantitative component, which can be achieved by transparent research practices, rigorous statistical procedures, and following general good open science principles.
## Data and code availability
The data and code are available at
https://github.com/andreskarjus/MachineAssistedMixedMethods
The prompts are also listed in the Appendix below.
## Acknowledgments
The author would like to thank Mila Oiva for the collaboration on the newsreels topic classification example, which became an extended case study in this contribution, and for providing expertise in interpreting the additional results and feedback; Christine Cuskley for the collaboration on the Twitter paper, one component of which was also used here as an expanded example; Priit Latti for providing a version of the maritime wrecks dataset; Daniele Monticelli and Novella Tedesco for providing expert evaluation in the English-Italian translation task; Laur Kanger and Peeter Tinits for discussions and for providing the test set for the historical media text filtering task; Oleg Sobchuk and Artjoms Sela for providing a test set for the literary genre detection task; and Tanya Escudero for discussions that led to expanding the literature review. Thanks for useful discussions and feedback go to Vejune Zemaityte, Mikhail Tamm, and Mark Mets. The author is supported by the CUDAN ERA Chair project, funded through the European Union's Horizon 2020 research and innovation program (Grant No. 810961). | The increasing capabilities of large language models (LLMs) present an unprecedented opportunity to scale up data analysis in the humanities and social sciences. By automating qualitative analytic tasks hitherto performed by humans, this contribution proposes a systematic mixed methods framework that combines qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. Sixteen machine-assisted case studies are presented as proof of concept. The tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, causal inference and text mining of historical events, political stance detection, text and idea reuse, genre composition in literature and film, social network inference, automated linguistics, missing metadata augmentation, and multimodal visual cultural analytics. Existing
2309.03605 | Virtual segmentation of a small contact HPGe detector: inference of hit
positions of single-site events via pulse shape analysis | Exploring hit positions of recorded events can help to understand and
suppress backgrounds in rare event searching experiments. In this study, we
virtually segment a small contact P-type high purity germanium detector (HPGe)
into two layers. Single-site events (SSEs) in each layer are selected by an
algorithm based on two pulse shape parameters: the charge pulse drift time
($T_{Q}$) and current pulse rise time ($T_{I}$). To determine the shapes and
volumes of the two layers, a Th-228 source is placed at top and side positions
to irradiate the detector. The double escape peak events from 2614.5 keV
$\gamma$-ray are selected as typical SSEs, their numbers in the two layers are
used to calculate the volumes and shapes of those layers. Considering the
statistical and systematic uncertainties, the inner layer volume is evaluated
to be 47.2\%$\pm$0.26(stat.)\%$\pm$0.22(sys.)\% of the total sensitive volume.
We extend our analysis for SSEs in 1400-2100 keV, the spectra of inner layer
events acquired from experimental data using the selection algorithm are in
good agreement with those from the simulation. For sources outside the HPGe
detector, the outer layer can act as a shielding for the inner layer. Selecting
the inner layer as the analysis volume can reduce the external background in the
signal region of Ge-76 neutrinoless double beta (0$\nu\beta\beta$) decay. We
use the Th-228 source to evaluate the background suppression power of the
virtual segmentation. After performing the single and multi-site event
discrimination, the event rate in the 0$\nu\beta\beta$ signal region can be
further suppressed by 12\% by selecting the inner layer as the analysis volume.
The virtual segmentation could be used to efficiently suppress surface
background like electrons from Ar-42/K-42 decay in 0$\nu\beta\beta$ experiments
using germanium detector immersed in liquid argon. | W. H. Dai, H. Ma, Z. Zeng, L. T. Yang, Q. Yue, J. P. Cheng | 2023-09-07T10:00:26 | http://arxiv.org/abs/2309.03605v1 | Virtual segmentation of a small contact HPGe detector: inference of hit positions of single-site events via pulse shape analysis
###### Abstract
Exploring hit positions of recorded events can help to understand and suppress backgrounds in rare event searching experiments. In this study, we virtually segment a small contact P-type high purity germanium detector (HPGe) into two layers. Single-site events (SSEs) in each layer are selected by an algorithm based on two pulse shape parameters: the charge pulse drift time (\(\mathbf{T_{Q}}\)) and current pulse rise time (\(\mathbf{T_{I}}\)). To determine the shapes and volumes of the two layers, a Th-228 source is placed at top and side positions to irradiate the detector. The double escape peak events from 2614.5 keV \(\mathbf{\gamma}\)-ray are selected as typical SSEs, their numbers in the two layers are used to calculate the volumes and shapes of those layers. Considering the statistical and systematic uncertainties, the inner layer volume is evaluated to be 47.2%\(\pm\)0.26(stat.)%\(\pm\)0.22(sys.)% of the total sensitive volume. We extend our analysis for SSEs in 1400-2100 keV, the spectra of inner layer events acquired from experimental data using the selection algorithm are in good agreement with those from the simulation. For sources outside the HPGe detector, the outer layer can act as a shielding for the inner layer. Selecting the inner layer as the analysis volume can reduce the external background in the signal region of Ge-76 neutrinoless double beta (\(0\mathbf{\nu\beta\beta}\)) decay. We use the Th-228 source to evaluate the background suppression power of the virtual segmentation. After performing the single and multi-site event discrimination, the event rate in the \(0\mathbf{\nu\beta\beta}\) signal region can be further suppressed by 12% by selecting the inner layer as the analysis volume. The virtual segmentation could be used to efficiently suppress surface background like electrons from Ar-42/K-42 decay in \(0\mathbf{\nu\beta\beta}\) experiments using germanium detector immersed in liquid argon.
small contact HPGe, pulse shape analysis, detector segmentation
## 1 Introduction
Small contact high purity germanium (HPGe) detectors are widely used in searches for rare events from physics beyond the Standard Model, such as neutrinoless double beta (\(0\nu\beta\beta\)) decay and dark matter interactions [4, 5, 6, 7]. Those searches need an extremely low background level in the signal region to achieve sufficient sensitivity. The discrimination of background and signal via pulse
shape analysis is a powerful background suppression technique and is widely used in HPGe-based experiments [8, 9, 10, 11].
The energy depositions from \(0\nu\beta\beta\) decay events and dark matter interactions are typically contained within about a millimeter and are regarded as single-site events (SSEs). Backgrounds can be single-site or multi-site events (MSEs), depending on their origin. Small contact HPGe detectors, such as point contact Ge (PCGe) and broad energy Ge (BEGe) detectors, have been demonstrated to have SSE/MSE discrimination capability utilizing pulse shape analysis [3, 9, 10, 11]. After the SSE/MSE discrimination, signals are still mixed with SSE-like backgrounds, such as single Compton scattering of incoming \(\gamma\)-rays or direct energy depositions from beta decay electrons penetrating the surface layer of the detector. Signals are expected to have a uniform distribution in the detector, while these backgrounds tend to be close to the detector surface. Therefore, inference of the SSE position can help to understand and suppress the SSE-like backgrounds.
Previous studies [12, 13, 14] have demonstrated that the charge collection time in a small contact HPGe detector depends on the energy deposition position. Past work [13] has shown that the rise time of the event pulse can be used to estimate the distance of energy deposition from the contact in a PCGe detector. Pulse shape simulation in [12] also showed that the signal shape depends on the interaction position.
This work explores the position discrimination power of a small contact \(p\)-type HPGe detector via pulse shape analysis. The detector is virtually segmented into two layers, and single-site events with hit position in the inner layer are identified. The shape and volume of the inner layer are modeled, determined, and validated in a series of Th-228 irradiation experiments. We also discuss the background suppression potential of this method towards possible application in future \(0\nu\beta\beta\) experiments.
## 2 Experimental setup
The detector used in this work is a small contact \(p\)-type HPGe detector produced by ORTEC. The detector crystal has a height of 42.6 mm and a diameter of 80.0 mm, and the thin \(p+\) contact is about 3.1 mm in diameter and is implemented in a 1 mm deep hole on the bottom surface of the crystal. The \(n+\) surface of the detector crystal, formed by the lithium diffusion, contains an inactive layer and reduces the sensitive mass of the detector. The thickness of the inactive layer is evaluated to be 0.87 mm in our previous work [15]. Subtracting the inactive layer, the total sensitive mass of the detector is 1.052 kg.
As shown in Fig.1, the data acquisition (DAQ) system is based on commercial NIM/VME modules and crates. The detector is operated under a 4500 V bias voltage provided by a high voltage module. The output signal from the \(p+\) contact is fed into a resistance-capacitance (RC) preamplifier. The RC-preamplifier provides two identical
Figure 1: Schematic diagram of the DAQ system.
Figure 2: Experimental setup at CJPL.
output signals. One is loaded into a shaping amplifier with a gain factor of 10 and a shaping time of 6 \(\mu\)s. The output of the shaping amplifier and the other output of the RC-preamplifier are fed into a 14-bit 100 MHz flash analog-to-digital converter (FADC) for digitization. The digitized waveforms are recorded by the DAQ software on a PC platform.
A detector scanning device is built in China Jinping Underground Laboratory (CJPL) [16]. As shown in Fig.2, the detector and the liquid nitrogen (LN) Dewar are installed with the scanning device. A Th-228 source with an activity of 500 Bq is mounted on the source holder with a step motor controlling the source position.
## 3 Pulse processing and event discrimination
### Digital pulse processing
Typical pulses from the shaping amplifier and preamplifier are illustrated in Fig.3. After subtracting the baseline, the integral of the shaping amplifier pulse is used to estimate the event energy (as shown in Fig.3(a)). Energy calibration is performed using the measured Th-228 spectrum with characteristic \(\gamma\)-ray peaks from decays of radionuclides in the Th-228 decay chain.
The pulses from the preamplifier are used to estimate the time features of the event (as shown in Fig.3(b)). The charge drift time (\(T_{Q}\)) is defined as the time between the moments when the charge pulse reaches 0.2% and 10% of its maximum amplitude. The current pulse is extracted from the charge pulse by a moving-average differential filter, and the current rise time (\(T_{I}\)) is the time between the moments when the current pulse reaches 0.2% and 20% of its maximum amplitude.
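For illustration, a minimal numpy sketch of these timing estimators (not the experiment's actual analysis code), assuming a baseline-subtracted charge pulse sampled at 100 MHz; the moving-average differential filter is implemented here as smoothing followed by differentiation, which is one plausible reading of the definition above.

```python
# Extract T_Q (charge drift time) and T_I (current rise time) from a
# baseline-subtracted charge pulse, following the threshold definitions
# above. Assumes 10 ns sampling (100 MHz FADC); illustrative only.
import numpy as np

DT_NS = 10.0  # sampling interval in ns

def first_crossing(pulse: np.ndarray, frac: float) -> int:
    """Index of the first sample reaching frac * max(pulse)."""
    return int(np.argmax(pulse >= frac * pulse.max()))

def drift_time_TQ(charge: np.ndarray) -> float:
    return (first_crossing(charge, 0.10) - first_crossing(charge, 0.002)) * DT_NS

def current_from_charge(charge: np.ndarray, window: int = 50) -> np.ndarray:
    """Moving-average differential filter: smooth, then differentiate."""
    smoothed = np.convolve(charge, np.ones(window) / window, mode="same")
    return np.gradient(smoothed)

def rise_time_TI(charge: np.ndarray) -> float:
    current = current_from_charge(charge)
    return (first_crossing(current, 0.20) - first_crossing(current, 0.002)) * DT_NS
```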
### Single and multi-site event discrimination
The single/multi-site event discriminator (A/E) is defined as the ratio of the maximum amplitude of the current pulse (A) to the reconstructed energy (E). It has been discussed in various works [9, 11, 17, 18] that SSEs tend to have higher A/E values than MSEs in a small contact HPGe detector. Therefore, we apply a cut on A/E to select the SSEs. The acceptance region of the A/E cut is determined by the double escape peak (DEP) events from a measured Th-228 spectrum. DEP events are typical SSEs, and their A/E distribution is fitted by a Gaussian function to determine the mean (\(\mu_{SSE}\)) and standard deviation (\(\sigma_{SSE}\)) of the A/E parameter for SSEs. As shown in Fig.4, the cut threshold is set to \(\mu_{SSE}-5\sigma_{SSE}\), leading to a survival fraction of about 80% for DEP events and 9% for single escape peak events (typical MSEs).
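Schematically, the cut determination amounts to the following (a sketch, using a maximum-likelihood Gaussian fit to the DEP A/E sample as a stand-in for the histogram fit described above):

```python
# Determine the A/E acceptance threshold from DEP events (typical SSEs)
# and apply it; a Gaussian is fitted to the DEP A/E values and the cut is
# placed at mu - 5*sigma, as described above. Illustrative sketch only.
import numpy as np
from scipy.stats import norm

def ae_threshold(ae_dep: np.ndarray) -> float:
    mu, sigma = norm.fit(ae_dep)  # ML Gaussian fit to the DEP A/E sample
    return mu - 5.0 * sigma

def sse_mask(ae_all: np.ndarray, threshold: float) -> np.ndarray:
    return ae_all >= threshold  # boolean mask of accepted SSE-like events
```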
Figure 3: (a) an example of shaping amplifier pulse, the blue region indicates the integral of the pulse after subtracting the baseline, and it is used as the energy estimator; (b) an example of smoothed preamplifier pulse and the extracted current pulse. Pulse time parameters \(T_{Q}\), \(T_{I}\), and parameter ”A” in the A/E discriminator are also illustrated. The current pulse is rescaled for demonstration.
Fig.5 shows typical Th-228 spectra before and after the A/E cut. Main characteristic peaks from the Th-228 source and radionuclides in the surrounding materials are labeled. The full-width-at-half-maximum (FWHM) of the double escape peak (1592.5 keV) before (after) the A/E cut is \(2.19\pm 0.05\) keV (\(2.18\pm 0.03\) keV). The FWHM of the 2614.5 keV peak before (after) the A/E cut is \(2.51\pm 0.01\) keV (\(2.46\pm 0.02\) keV). A slight improvement in the energy resolution is observed after the A/E cut.
### Linear and nonlinear event discrimination
The \(T_{Q}\) and \(T_{I}\) distribution of SSEs demonstrates two types of events: events gathered in a rodlike region in Fig.6(a) are referred to as linear events, and other events gathered in a cluster are referred to as nonlinear events. As shown in Fig.6, the charge drift time (\(T_{Q}\)) and a linearity index (\(L\)) are used to discriminate the linear and nonlinear events. The linearity index is defined as:
\[L=T_{I}-\left(k\times T_{Q}+b\right), \tag{1}\]
where the fit parameters \(k\) and \(b\) are calculated by fitting \(T_{Q}\) and \(T_{I}\) of typical linear events with the function \(T_{I}=k\times T_{Q}+b\). First, initial values of the fit parameters (\(k_{0}\) and \(b_{0}\)) are calculated by fitting events with \(T_{Q}\) and \(T_{I}\) below 500 ns. Then events with linearity \(L=T_{I}-\left(k_{0}\times T_{Q}+b_{0}\right)\) in [-50, 50] ns are fitted to give the final values of \(k\) and \(b\). As shown in Fig.6(b), the distribution of the linearity index \(L\) is fitted with two Gaussian functions corresponding to linear and nonlinear events, respectively. The cut limit is set to (\(\mu_{L,linear}-3\sigma_{L,linear}\)), where \(\mu_{L,linear}\) and \(\sigma_{L,linear}\) are the mean and standard deviation of the \(L\) distribution for linear events. The distribution of \(T_{Q}\) for nonlinear events selected by the linearity index \(L\) is fitted with a Gaussian function, and the cut limit is set to (\(\mu_{T,nonlinear}-3\sigma_{T,nonlinear}\)), where \(\mu_{T,nonlinear}\) and \(\sigma_{T,nonlinear}\) are the mean and standard deviation of the \(T_{Q}\) distribution for nonlinear events, as shown in Fig.6(c). The red dashed line in Fig.6(a) shows the discrimination limit set by the linearity index \(L\) and the charge drift time \(T_{Q}\).
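A numpy sketch of this two-step selection (illustrative; sample moments stand in for the Gaussian fits, and the cut direction follows Fig.6):

```python
# Two-step selection of linear events from (T_Q, T_I), per the procedure
# above: an initial line fit on fast events, a refit on near-line events,
# then a one-sided cut on the linearity index L. Illustrative sketch only.
import numpy as np

def linear_event_mask(TQ: np.ndarray, TI: np.ndarray) -> np.ndarray:
    fast = (TQ < 500) & (TI < 500)            # initial fit on fast events
    k0, b0 = np.polyfit(TQ[fast], TI[fast], 1)
    core = np.abs(TI - (k0 * TQ + b0)) < 50   # events within +-50 ns of line
    k, b = np.polyfit(TQ[core], TI[core], 1)  # final line parameters
    L = TI - (k * TQ + b)                     # linearity index, Eq. (1)
    # In the paper the limits are mu - 3*sigma from Gaussian fits to the
    # linear-population L and the nonlinear-population T_Q distributions;
    # sample moments of the near-line events are used here as a stand-in.
    mu_L, sig_L = L[core].mean(), L[core].std()
    return L > mu_L - 3.0 * sig_L             # T_Q cut omitted in this sketch
```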
## 4 Detector segmentation model
### Demonstration of spatial distribution of linear and nonlinear events via pulse shape simulation
We perform a pulse shape simulation (PSS) for the HPGe detector to demonstrate the spatial distribution of the linear and nonlinear events. The electric field and weight potential field in the detector are calculated using the \(mjd\_fieldgen\) package [19], assuming a linear impurity profile
Figure 4: A/E distributions of DEP and SEP events in Th-228 calibration data. The dashed line is the A/E cut threshold (\(\mu_{SSE}-5\sigma_{SSE}\)).
Figure 5: Typical Th-228 spectra before and after the A/E cut. The characteristic peaks from decay daughters of Th-228 (Tl-208, Bi-212) and other radionuclides (K-40, and Bi-212) are labeled in the spectra. The double-escape peak (DEP) of Tl-208 2614.5 keV \(\gamma\)-ray is marked in red.
in the Z-direction with an impurity concentration of \(3.7\times 10^{9}\) cm\({}^{-3}\) and \(8.0\times 10^{9}\) cm\({}^{-3}\) at the top and bottom surfaces of the crystal. SSEs with 1 MeV energy deposition are placed at different positions in the crystal. The corresponding charge pulses are calculated via the SAGE-PSS package [20], with electronic noise extracted from measured pulses added.
Fig.7 demonstrates \(T_{Q}\) and \(T_{I}\) as functions of the interaction position. As shown in Fig.7(a) and (b), SSEs close to the \(p+\) contact have shorter \(T_{Q}\) and \(T_{I}\). As the distance to the contact increases, the \(T_{Q}\) and \(T_{I}\) of the induced pulses increase simultaneously, for instance for SSE-3 and SSE-4. These events are typical linear events in Fig.7(c). However, when SSEs occur near the top and side surfaces of the detector, their \(T_{Q}\) and \(T_{I}\) are not sensitive to their positions. Those SSEs, such as SSE-1 and SSE-2, are typical nonlinear events. This can be explained by the Shockley-Ramo theorem [21]: when SSEs deposit energy near the outer surface of the detector, the induced charge and
Figure 6: Discrimination of linear and nonlinear events. Data in the figure are from DEP events (1592.5\(\pm\)5 keV, after A/E cut) in a Th-228 calibration experiment (source placed at the center of detector top surface). (a) Distribution of \(T_{Q}\) and \(T_{I}\). The blue dashed line is the fitted linear function of \(T_{Q}\) and \(T_{I}\). Red dashed line is the cut limit for inner layer events; (b) Histogram of event linearity index \(L\), and the Gaussian fit of linear (blue line) and nonlinear (red line) events; (c) \(T_{Q}\) Histogram for nonlinear events selected by \(L\) cut in (b). The black dashed lines in (b) and (c) are the cut limit for inner layer events.
Figure 7: Pulse shape simulation for SSEs in different positions of the detector. (a) Charge drift time (\(T_{Q}\)) for SSE as a function of the interaction position; (b) Current rise time (\(T_{I}\)) for SSEs as a function of the interaction position; (c) Distribution of \(T_{Q}\) and \(T_{I}\) for pulses in (a) and (b), those events are gathered in two clusters with a linear and nonlinear relationship between \(T_{Q}\) and \(T_{I}\). Red crosses mark the positions of four selected SSEs.
current pulses will not exceed 0.2% of their maximum amplitude while the charge carriers drift through the region of weak electric field and weighting potential near the surface. Thereby, the \(T_{Q}\) and \(T_{I}\) of those SSEs are not sensitive to the energy deposition position.
### Parameterized segmentation model
According to the pulse shape simulation, the linearity between \(T_{Q}\) and \(T_{I}\) of an SSE can be used to infer its hit position. We segment the detector into two layers according to the positions of linear and nonlinear SSEs. The boundary between the two layers is related to the electric field and weighting potential of the detector, and due to the lack of precise knowledge of the impurity profile within the Ge crystal, we cannot rely on the PSS to calculate the shapes of the two layers, but only take it as a reference. Therefore, we take an empirical approach and build a segmentation model with 14 parameters to describe the boundary.
As shown in Fig.8, the boundary of the inner layer is the linear connection of 8 spatial points. It is worth noting that the number of spatial points in the model is arbitrary; it will be demonstrated later that the 8-point model is sufficient for this study. Table 1 lists the bounds for each model parameter. As the model only requires the two layers to be continuous, the first spatial point \((r_{1},z_{1})\) can lie on the top surface or on the central axis. To determine the value of each model parameter, we design and conduct a Th-228 scanning experiment.
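Operationally, classifying a position (r, z) as inner or outer layer under this model is a point-in-polygon test against the boundary traced by the 8 spatial points, closed along the central axis. A sketch (not the original analysis code; coordinates are placeholders, not fitted values):

```python
# Inner-layer membership test for the parameterized segmentation model:
# the boundary is the straight-line connection of the 8 (r_i, z_i) points,
# closed through the central axis (r = 0). Illustrative sketch only.
import numpy as np
from matplotlib.path import Path

def inner_layer_mask(points_rz, r, z):
    """points_rz: the 8 boundary points, top to bottom; r, z: positions."""
    boundary = list(points_rz)
    boundary.append((0.0, points_rz[-1][1]))  # down to the axis at z_8
    boundary.append((0.0, points_rz[0][1]))   # up the axis to z_1
    return Path(boundary).contains_points(np.column_stack([r, z]))
```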
## 5 Optimization of segmentation model parameters
### Th-228 source scanning experiment
A Th-228 source is used to perform a scan of the detector top and side surfaces at 19 different positions as shown in Fig.9. A background measurement is also conducted for the detector.
Events in the DEP region (1592.5\(\pm\)5 keV) are selected as SSE candidates. After removing MSEs by the A/E cut, the linear events in the remaining SSEs are selected using the method in Sec 3.3. The ratio of linear events from the Th-228 source (\(R_{L,DEP}\)) is then calculated by:
\begin{table}
\begin{tabular}{c c} \hline \hline Parameter & Parameter bound \\ \hline \((r_{1},z_{1})\) & \(r_{1}=0\), \(0<z_{1}<H\) \\ & or \(z_{1}=H\), \(0<r_{1}<R\) \\ \((r_{2},z_{2})\) & \(r_{1}\leq r_{2}\), \(z_{2}\leq z_{1}\) \\ \((r_{3},z_{3})\) & \(r_{2}\leq r_{3}\), \(z_{3}\leq z_{2}\) \\ \((r_{4},z_{4})\) & \(r_{3}\leq r_{4}\leq R\), \(z_{4}\leq z_{3}\) \\ \((r_{5},z_{5})\) & \(r_{5}\leq R\), \(z_{5}\leq z_{4}\) \\ \((r_{6},z_{6})\) & \(r_{6}\leq r_{5}\), \(z_{6}\leq z_{5}\) \\ \((r_{7},z_{7})\) & \(r_{7}\leq r_{6}\), \(z_{7}\leq z_{6}\) \\ \((r_{8},z_{8})\) & \(0\leq r_{8}\leq r_{7}\), \(z_{8}=0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bounds for segmentation model parameters, \(R\) and \(H\) are the radius and height of the Ge crystal.
Figure 8: Parameterized segmentation model of the detector, where \(H\) and \(R\) are the height and radius of the crystal. The top spatial point \((r_{1},z_{1})\) could be on the top surface (\(z_{1}=H\)) or on the central axis (\(r_{1}=0\)) of the crystal. The green shadow region is the inner layer in the segmentation model, and the gray shadow is the inactive layer in the \(n+\) surface.
\[R_{L,DEP}=\frac{N_{L,S}-N_{L,B}\cdot t_{S}/t_{B}}{N_{T,S}-N_{T,B}\cdot t_{S}/t_{B}}, \tag{2}\]
where \(N_{T,S}\) and \(N_{T,B}\) are the total numbers of selected single-site DEP events in the Th-228 and background measurements, respectively, \(N_{L,S}\) and \(N_{L,B}\) are the numbers of selected linear events, and \(t_{S}\) and \(t_{B}\) are the live times of the source and background measurements. The uncertainty of \(R_{L,DEP}\) is calculated by propagating the Poisson uncertainties of the event counts in the Th-228 and background measurements through Eq.(2). Fig.10 shows the linear event ratio of SSEs in the DEP region as a function of the Th-228 source position. \(R_{L,DEP}\) decreases from 33.3% to 24.0% as the source moves from the top center to the edge of the detector. A change of about 2.9% in \(R_{L,DEP}\) is observed when moving the source along the detector side surface.
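A sketch of Eq.(2) together with first-order Poisson error propagation (note that this simple form treats the four counts as independent, ignoring the overlap between the linear and total samples):

```python
# Background-subtracted linear event ratio (Eq. 2) with first-order
# propagation of Poisson count uncertainties. Illustrative sketch; the
# correlation between N_L and N_T (N_L is a subset) is neglected here.
import numpy as np

def linear_ratio(N_ls, N_lb, N_ts, N_tb, t_s, t_b):
    f = t_s / t_b
    num = N_ls - N_lb * f
    den = N_ts - N_tb * f
    R = num / den
    var = ((1.0 / den) ** 2 * N_ls + (f / den) ** 2 * N_lb
           + (num / den**2) ** 2 * N_ts + (f * num / den**2) ** 2 * N_tb)
    return R, np.sqrt(var)
```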
### Spatial distribution of DEP events
As the linear events are located in the inner layer of the segmentation model, the linear event ratio \(R_{L,DEP}\) can be modeled by:
\[R_{L,DEP}=\iint M(r,z\mid\theta)F_{DEP}(r,z)\cdot\mathrm{d}r\mathrm{d}z, \tag{3}\]
\[M(r,z\mid\theta)=\begin{cases}1\ (r,z)\in\mathrm{inner\,layer}\\ 0\ (r,z)\in\mathrm{outer\,layer}\end{cases}, \tag{4}\]
where \(M(r,z\,|\,\theta)\) is the selection function for inner layer events using the segmentation model, \(\theta\) represents the model parameters in Table 1, and \(F_{DEP}(r,z)\) is the spatial distribution of SSEs in the DEP region. The energy deposition of \(\gamma\)-rays emitted by the Th-228 source is simulated with Geant4 [22]. Energy depositions occurring in the inactive layer of the detector are not recorded in the simulation. The
Figure 11: \(\delta_{D}\) histogram for simulated DEP events with the Th-228 source is placed at the center of the top detector surface.
Figure 10: Ratio of the linear event in selected DEP events as a function of Th-228 source positions. Error bars indicate the 1\(\sigma\) uncertainty.
Figure 9: Schematic of Th-228 source positions in calibration experiments. The red points indicate the position of the Th-228 source. The red, blue, and green dashed boxes mark the selected measurements for sub-datasets in the uncertainty assessment. The Th-228 source is mounted on a source holder. The carbon fiber vacuum cryostat and the copper crystal holder are also shown.
single-site events are selected by the \(\delta_{D}\) parameter. \(\delta_{D}\) is the average distance between the energy deposition points and the charge center of the event:
\[\delta_{D}=\frac{1}{n}\sum_{i=0}^{n}\sqrt{(x_{i}-\hat{x})^{2}+(y_{i}-\hat{y})^{2 }+(z_{i}-\hat{z})^{2}}, \tag{5}\]
\[\hat{x}=\sum_{i=0}^{n}x_{i}\frac{E_{i}}{E_{tot}},\hat{y}=\sum_{i=0}^{n}y_{i} \frac{E_{i}}{E_{tot}},\hat{z}=\sum_{i=0}^{n}z_{i}\frac{E_{i}}{E_{tot}}, \tag{6}\]
where \(n\) is the number of steps in one event, \((x_{i},y_{i},z_{i})\) and \(E_{i}\) are the hit position and energy deposition of the i-th step. \((\hat{x},\hat{y},\hat{z})\) and \(E_{tot}\) are the charge center and total energy deposition of the event. Events with \(\delta_{D}<\delta_{D,SSE}\) are selected as SSEs, where \(\delta_{D,SSE}\) is determined by matching the survival fraction of DEP events in simulation with that of the A/E cut in the experiment. Fig.11 demonstrates a typical \(\delta_{D}\) distribution of simulated DEP events when the Th-228 source is at the top center of the detector. The charge center of the selected SSE is then used to simulate the spatial distribution \(F_{DEP}(r,z)\). Fig.12 shows the simulated \(F_{DEP}(r,z)\) for the Th-228 source at two different positions.
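Eqs.(5)-(6) translate directly into a few lines of numpy (an illustrative sketch):

```python
# delta_D of Eqs. (5)-(6): mean distance of the simulated energy-deposition
# steps from the energy-weighted charge center of the event.
import numpy as np

def delta_D(xyz: np.ndarray, E: np.ndarray) -> float:
    """xyz: (n, 3) array of step positions; E: (n,) step energies."""
    center = np.average(xyz, axis=0, weights=E)                   # Eq. (6)
    return float(np.mean(np.linalg.norm(xyz - center, axis=1)))   # Eq. (5)
```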
### Optimization of model parameters
As shown in Fig.12, the position of the Th-228 source affects the spatial distribution of DEP events and therefore leads to different observed linear event ratios in Fig.10. Thus, we use a minimum-\(\chi^{2}\) method to calculate the model parameters (\(\theta\)), in which \(\chi^{2}\) is defined as:
\[\chi^{2}=\sum_{k=1}^{19}\frac{\left(R_{k,exp}-\iint M(r,z\mid\theta)F_{DEP,k}(r,z)\,\mathrm{d}r\,\mathrm{d}z\right)^{2}}{\sigma_{k}^{2}}, \tag{7}\]
where \(R_{k,exp}\) is the measured linear event ratio for the Th-228 source at position \(k\) (\(k\)=1,2,...,19), \(\sigma_{k}\) is the corresponding uncertainty of \(R_{k,exp}\), and \(F_{DEP,k}(r,z)\) is the simulated spatial distribution of single-site DEP events for the Th-228 source at position \(k\). The minimization of \(\chi^{2}\) is implemented with a genetic algorithm using the Python-based package Geatpy [23]. Fig.13 shows the optimized result. The volume of the inner layer is 47.2% of the total sensitive volume of the detector. The linear event ratios calculated by Eq.(3) using the optimized model parameters are shown in Fig.14. The fit result agrees well with the measurements; the \(p\)-value of the \(\chi^{2}\) fit is 0.701.
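Schematically, the objective of Eq.(7) can be written as below; scipy's `differential_evolution` is shown as a generic global-optimizer stand-in for the Geatpy genetic algorithm actually used, and the ordering constraints of Table 1 would in practice be enforced through the bounds or penalty terms.

```python
# Chi-square objective of Eq. (7): compare model-predicted linear event
# ratios against the 19 measured ones. Sketch only; F_maps are normalized
# simulated SSE densities binned on an (r, z) grid.
import numpy as np
from matplotlib.path import Path
from scipy.optimize import differential_evolution  # GA stand-in

def predicted_ratio(points_rz, F, r_grid, z_grid):
    """Integrate a normalized density F over the model's inner layer."""
    boundary = list(points_rz) + [(0.0, points_rz[-1][1]), (0.0, points_rz[0][1])]
    inside = Path(boundary).contains_points(
        np.column_stack([r_grid.ravel(), z_grid.ravel()])).reshape(F.shape)
    return F[inside].sum()

def chi2(theta, R_exp, sigma, F_maps, r_grid, z_grid):
    pts = np.asarray(theta).reshape(-1, 2)  # 8 boundary points (r_i, z_i)
    pred = np.array([predicted_ratio(pts, F, r_grid, z_grid) for F in F_maps])
    return float(np.sum((R_exp - pred) ** 2 / sigma**2))

# result = differential_evolution(chi2, bounds, args=(R_exp, sigma, F_maps,
#                                                     r_grid, z_grid))
```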
## 6 Uncertainty assessment and model validation
Uncertainties of the shape and volume of the inner layer in the optimized model mainly consist of three parts:
1. Uncertainty of the linear event ratio (\(R_{L,DEP}\)) propagated by the \(\chi^{2}\)-method is evaluated using
Figure 12: Spatial distribution of simulated SSEs in DEP region. (a) Th-228 source in the center of the top surface; (b) Th-228 source on the side of the detector. The labels of the color bar represent the distribution density (arbitrary unit).
a toy Monte Carlo method (a sketch of the resampling procedure is given after this list). 3000 Monte Carlo datasets are generated, assuming a Gaussian distribution for \(R_{L,DEP}\) with mean and standard deviation equal to the measured value and its uncertainty, respectively. The model parameters are recalculated for each dataset following the same analysis as in Sec 5.3. The distributions of inner layer shapes and volumes for the 3000 samples are illustrated in Fig.15. The distribution of inner layer volumes is fitted with a Gaussian function, and the standard deviation, \(\pm 0.26\%\), is adopted as the statistical uncertainty.
2. Systematic uncertainty due to the choice of dataset: we divide the measured data in Fig.10 into three sub-datasets. Sub-datasets I and II each consist of ten measurements (marked by red dashed boxes for sub-dataset I and blue dashed boxes for sub-dataset II in Fig.9), while sub-dataset III consists of six measurements (green dashed boxes in Fig.9). The fitting of the model parameters is performed for each sub-dataset, and the largest difference in inner layer volume between the sub-datasets and the full dataset (Fig.16(a)), \(\pm 0.22\%\), is adopted as a systematic uncertainty.
3. Systematic uncertainty due to the construction of the segmentation model: we reconstruct the segmentation model using 6 spatial points (10 free parameters) and 10 spatial points (18 free parameters) and recalculate the model parameters using the full dataset. Fig.16(b) shows the optimized results for the reconstructed models. The overall shape and volume of the inner layer are similar in the three models, and the largest difference in inner layer volume is 0.02%, which is about 10 times smaller than the other two uncertainties and thereby negligible. This indicates that the 8-point segmentation model is sufficient for this study.
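A sketch of the toy Monte Carlo resampling referenced in item 1 above; `refit` is a hypothetical callable wrapping the chi-square fit of Sec 5.3.

```python
# Toy Monte Carlo for the statistical uncertainty: resample the 19 measured
# ratios from Gaussians, refit the segmentation model each time, and take
# the spread of the resulting inner-layer volume fractions. Sketch only.
import numpy as np

rng = np.random.default_rng(42)

def toy_mc_volume(R_meas, sigma_R, refit, n_samples=3000):
    """refit: hypothetical callable mapping 19 ratios -> volume fraction."""
    volumes = [refit(rng.normal(R_meas, sigma_R)) for _ in range(n_samples)]
    v = np.asarray(volumes)
    return v.mean(), v.std()  # Gaussian mu, sigma of the volume fraction
```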
Figure 16: (a) Optimized results using different datasets, full dataset (black line) consists of all measured data, sub-dataset I, II, III are selected from the full dataset. (b) Optimized results for three different models, the chi-square (\(\chi^{2}\)) and \(p\)-\(value\) are given to demonstrate the fit goodness of each model. The gray shadow regions in both figures are the inactive layer on the detector \(n+\) surface.
Figure 15: (a) Inner layer shapes of the 3000 Monte Carlo datasets. The green, yellow, and blue shadow bands are corresponding to 68%, 95%, and 99.7% quantiles, respectively. The gray shadow is the inactive layer on the \(n+\) surface. (b) Distribution of inner layer volumes. The red line is the fit of inner layer volumes using a Gaussian function, \(\mu\) and \(\sigma\) are the mean and standard deviation, respectively.
To validate the segmentation model, the analysis is extended to SSEs in the 1400-2100 keV region, and the measured linear event ratios are compared with the simulation; the simulation uncertainty band includes both the inner layer shape uncertainty and statistical uncertainties in the simulation. In this case, the shape-related systematic uncertainty is taken as the discrepancy between the linear event ratios corresponding to the innermost and outermost shapes of the 68% quantile of the inner layer (the green region in Fig.15(a)). Fig.17(b) compares the measured and simulated spectra; it demonstrates that the \(\delta_{D}\) cut in the simulation is a good approximation of the A/E cut, and that the spectra of inner layer events also show good agreement between simulation and measurement in the 1400-2100 keV energy region.
## 7 Background suppression performance of virtual segmentation
In the search for Ge-76 \(0\nu\beta\beta\) decay using HPGe detectors, backgrounds, mostly \(\gamma\)-rays and electrons from outside the detector, have to penetrate the outer layer of the detector to deposit their energy in the inner layer. Thus, the outer layer in the virtual segmentation could act as a shielding for the inner layer, and a lower background level of the inner layer may improve the detection sensitivity.
We use the Th-228 scanning data to evaluate the background suppression power of the virtual segmentation. The count rates in the spectra are normalized to unit sensitive mass to include the mass loss due to the analysis volume selection. The masses are 1.052 kg and 0.496 kg for the total sensitive volume and the inner layer of the detector, respectively. Fig.18 demonstrates spectra before and after the A/E cut and inner layer event selection when the Th-228 source is placed on the side of the detector. First, the whole detector is selected as the analysis volume and the A/E cut is applied to remove multi-site events (gray and blue regions in Fig.18). Then the inner layer of the virtual segmentation is selected as the analysis volume, which further reduces the event rate (red region in Fig.18). The SSEs are expected to come mostly from single Compton scattering of high-energy \(\gamma\)-rays emitted by the source and to be clustered near the surface of the detector. Thereby, the inner layer has a lower background level.
Fig.19 shows the event rate in the \(0\nu\beta\beta\) signal region (1900-2100 keV) as a function of the Th-228 source positions. The highest background suppression power is achieved when the Th-228 source is at the side of the detector. In this case, the A/E
Figure 17: Comparison of simulation and experiment for Th-228 source placed on the side of the detector. (a) The linear event ratio as a function of energy, The uncertainty band for simulation (the green shadow) consists of uncertainty from the inner layer shape (68% quantile region in Fig.15(a)) and statistical uncertainty in simulation. The normalized residuals are shown in the bottom figure, (b) Measured and simulated spectra in 1400-2100 keV region.
cut reduces the event rate by 62%, and the virtual segmentation yields a further 12% reduction on top of the A/E cut.
In future \(0\nu\beta\beta\) experiments using small contact HPGe detectors, this method might be used to further suppress background in the signal region, especially for experiments using a liquid argon (LAr) veto system where the HPGe detectors are directly immersed in LAr, such as the GERDA [1], LEGEND [24], and CDEX-300\(\nu\) [3] experiments. The background from K-42 (daughter of cosmogenic Ar-42 in LAr) beta decay is mainly located near the surface of the detector and might therefore be suppressed if the inner layer is selected as the analysis volume. It should be noted that the balance between a lower background and the loss in detector sensitive mass should be considered in the search for the \(0\nu\beta\beta\) signal.
Furthermore, the discrepancy between the inner and outer layer SSE spectrum could be used to infer the location of the background source. A more precise background model could be built by fitting the spectra of events in the inner and the outer layer simultaneously.
## 8 Summary
In this study, we develop a virtual segmentation model for a small contact HPGe detector and demonstrate its background suppression capability in the Ge-76 \(0\nu\beta\beta\) signal region. The HPGe detector is virtually segmented into two layers, and a selection algorithm based on charge pulse drift time (\(T_{Q}\)) and current rise time (\(T_{I}\)) is established to identify the position of the single-site event. The shape and volume of the inner layer in the segmentation model are determined using the DEP events in a series of Th-228 source calibration experiments. The volume of the inner layer is evaluated to be 47.2%\(\pm\)0.26(stat.)%\(\pm\)0.22(sys.)% of the total sensitive volume of the detector.
The background suppression power of the virtual segmentation in Ge-76 \(0\nu\beta\beta\) signal region is evaluated by the Th-228 scanning data. Choosing the inner layer as the analysis volume, a further 12% reduction of background is achieved when the Th-228 source is on the side of the detector. Other backgrounds in the \(0\nu\beta\beta\) signal region, especially those clustered on the surface of the detector, such as Ar-42 in future \(0\nu\beta\beta\) experiments, could also be reduced by the virtual segmentation.
The principle of the virtual segmentation can be extended to other small contact HPGe detectors, for instance, point-contact Ge (PCGe) and broad energy Ge (BEGe) detectors.
Figure 19: Event rate in \(0\nu\beta\beta\) signal region (1900-2100 keV) as a function of Th-228 source position. The left and right figures show the event rate for the Th-228 source placed on the top and side surface of the detector, respectively.
Figure 18: Measured spectra for the Th-228 source on the side surface of the detector. cpkkd represents counts per kg per keV per day, \(Q_{\beta\beta}\) is the energy of Ge-76 \(0\nu\beta\beta\) signal.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China (Grant No. 2022YFA1604701) and the National Natural Science Foundation of China (Grant No. 12175112). We would like to thank CJPL and its staff for supporting this work. CJPL is jointly operated by Tsinghua University and Yalong River Hydropower Development Company.
| Exploring the hit positions of recorded events can help to understand and suppress backgrounds in rare event searching experiments. In this study, a small contact P-type high purity germanium detector (HPGe) is virtually segmented into two layers. Single-site events (SSEs) in each layer are selected by an algorithm based on two pulse shape parameters: the charge pulse drift time ($T_{Q}$) and the current pulse rise time ($T_{I}$). To determine the shapes and volumes of the two layers, a Th-228 source is placed at top and side positions to irradiate the detector. The double escape peak events from the 2614.5 keV $\gamma$-ray are typical SSEs, and their numbers in the two layers are used to calculate the volumes and shapes of those layers. Considering statistical and systematic uncertainties, the inner layer
2309.15067 | Logic Locking based Trojans: A Friend Turns Foe | Logic locking and hardware Trojans are two fields in hardware security that
have been mostly developed independently from each other. In this paper, we
identify the relationship between these two fields. We find that a common
structure that exists in many logic locking techniques has desirable properties
of hardware Trojans (HWT). We then construct a novel type of HWT, called
Trojans based on Logic Locking (TroLL), in a way that can evade
state-of-the-art ATPG-based HWT detection techniques. In an effort to detect
TroLL, we propose customization of existing state-of-the-art ATPG-based HWT
detection approaches as well as adapting the SAT-based attacks on logic locking
to HWT detection. In our experiments, we use random sampling as reference. It
is shown that the customized ATPG-based approaches are the best performing but
only offer limited improvement over random sampling. Moreover, their efficacy
also diminishes as TroLL's triggers become longer (i.e., have more bits
specified). We thereby highlight the need to find a scalable HWT detection
approach for TroLL. | Yuntao Liu, Aruna Jayasena, Prabhat Mishra, Ankur Srivastava | 2023-09-26T16:55:42 | http://arxiv.org/abs/2309.15067v1 | # Logic Locking based Trojans: A Friend Turns Foe
###### Abstract
Logic locking and hardware Trojans are two fields in hardware security that have been mostly developed independently from each other. In this paper, we identify the relationship between these two fields. We find that a common structure that exists in many logic locking techniques has desirable properties of hardware Trojans (HWT). We then construct a novel type of HWT, called Trojans based on Logic Locking (TroLL), in a way that can evade state-of-the-art ATPG-based HWT detection techniques. In an effort to detect TroLL, we propose customization of existing state-of-the-art ATPG-based HWT detection approaches as well as adapting the SAT-based attacks on logic locking to HWT detection. In our experiments, we use random sampling as reference. It is shown that the customized ATPG-based approaches are the best performing but only offer limited improvement over random sampling. Moreover, their efficacy also diminishes as TroLL's triggers become longer (_i.e._ have more bits specified). We thereby highlight the need to find a scalable HWT detection approach for TroLL.
Logic Locking, Hardware Trojans, ATPG
## I Introduction
The fact that most chip designers outsource the production of their chips to off-shore foundries raises concerns about the privacy of the chip's intellectual property (IP) and the integrity of the fabrication process. There has been a significant amount of research on both topics. For IP protection, numerous design obfuscation techniques have been proposed to mitigate attacks such as counterfeiting and overproduction, among which logic locking is by far the most prominent and well-studied class of protection techniques [1]. Logic locking adds key inputs and key-controlled gates into the circuit to make the locked circuit's functionality key-dependent. As the correct key is not known to the untrusted foundry, neither is the correct functionality, and hence the privacy of the design is preserved. Pertaining to the integrity of fabrication, the term Hardware Trojans (HWT) is often used to describe stealthy malicious modifications in the design. Logic locking and HWT's have been studied mostly independently so far. Apart from some studies on how to use design obfuscation to prevent HWT insertion [2, 3, 4, 5] and how to compromise obfuscation with HWT's [6], little attention has been paid to the relationship between logic locking and HWT's. In this work, we will discuss how to utilize logic locking techniques to construct novel HWT's, and how to convert attacks against design obfuscation into HWT detection techniques. The contributions of this work are as follows.
* We analyze a class of state-of-the-art logic locking techniques and highlight that their infrastructure can be viewed as a composition of a modification unit (MU) and a restore unit (RU).
* While state-of-the-art Trojans are triggered based on rare events, we propose a fundamentally different way of designing Trojans based on Logic Locking (TroLL) by inserting only the MU into the design (equivalent to dropping the RU from a locked design).
* We propose evolved versions of existing ATPG-based HWT detection approaches to account for TroLL's trigger selection strategy.
* We adapt the Boolean satisfiability (SAT)-based attack on logic locking to the detection of both TroLL and conventional HWT's.
* Experimental results demonstrate that TroLL is much more resilient to existing state-of-the-art ATPG-based HWT detection approaches including statistical test generation [7] and maximum clique sampling [8]. While they can detect nearly all conventional HWT's, their efficacy drops drastically for TroLL. In comparison, the evolved ATPG-based approaches and SAT-based detection perform better on TroLL without sacrificing the efficacy on conventional HWT's. However, the percentage of TroLL detected still drops drastically as the trigger length increases no matter which detection approach is used.
The rest of this paper is organized as follows. In Section II, we introduce the technical background of hardware Trojans and logic locking. Section III presents our analysis of state-of-the-art logic locking techniques and the construction of TroLL. The evolved ATPG-based detection approaches and the adaptation of SAT-based attacks on logic locking to HWT detection is formulated in Section IV. In Section V, we present the experiment details on TroLL and the results on detecting TroLL with approaches based on ATPG, SAT, and random sampling. Lastly, we conclude the paper in Section VI.
## II Background and Related Work
In this section, we provide relevant background and survey related efforts in three broad categories. First, we describe the working principle of hardware Trojans. Next, we survey existing test generation efforts for detection of hardware Trojans. Finally, we provide an overview of logic locking techniques.
### _Hardware Trojans_
Hardware Trojans (HWT) are stealthy malicious modifications to hardware designs. HWT's usually consist of two components, trigger and payload. The trigger is a condition that activates the HWT, and the payload is the effect of the HWT once activated. The trigger condition can be determined by the circuit's input and/or internal signals in the original design. The HWT payload can have various possible effects, including functionality corruption [9], information leakage [10, 11, 12], denial-of-service [13], bypass of security properties [14], etc. An illustration of an HWT-infested circuit is given in Fig. 1 where the relationship between the original design and the HWT's trigger and payload is shown.
HWT's can be inserted in almost any phase of the VLSI hardware supply chain, including in third-party IP cores, by a malicious CAD tool, by a rogue designer, by an untrusted fabrication facility, etc. [15, 16]. The HWT's inserted before the fabrication stage are present in the design files (_e.g._ RTL and/or gate-level netlists). Therefore, it is possible to use formal methods, such as logic equivalence checking, to tell whether an HWT exists [17, 18]. However, for HWT's inserted by the foundry, the netlist of the HWT-infested circuit is not available to the designer. Some researchers have proposed to use reverse engineering to obtain the layout of the HWT-suspicious chip [19, 20]. However, IC reverse engineering is increasingly expensive and error-prone as the technology node scales down [21], and there is no report of successful reverse engineering of any chip with a technology node below 14nm to the best of our knowledge. Hence, testing is still the most practical way to detect HWT's inserted by untrusted foundries. Besides, testing-based methods are also applicable to HWT's inserted by IP providers, CAD tools, rogue designers, etc. The state-of-the-art automatic test pattern generation (ATPG) approaches for HWT detection will be introduced in Section II-B.
### _ATPG-based HWT Detection_
Both combinational and sequential HWT triggering mechanisms have been proposed in the literature. However, since the designer likely has access to testing infrastructure that allows the circuit to be tested combinationally (_e.g._ scan-chain), a sequential HWT trigger can be broken down into a series of combinational ones. We hence focus on combinational HWT triggers in this work. State-of-the-art combinational HWT insertion methodology utilizes rare signals (_i.e._ an internal node's value that is functionally difficult to sensitize) as the trigger, ensuring that the HWT is only triggered in rare circumstances [22, 23]. Based on this property, many HWT detection methods have been developed based on ATPG principles. Existing approaches explored two complementary directions when dealing with test generation for activation of rare signals: 1) statistical test generation, and 2) directed test generation. A promising avenue for statistical test generation is to rely on \(N\)-detect principle [24] by activating each rare signal \(N\) times to increase the statistical likelihood of activating the unknown trigger in the HWT. MERO [7] tries to generate test vectors to activate the same rare signal \(N\) times by flipping input vector bits one at a time. Saha _et al._ improved the test generation performance using genetic algorithm and Boolean satisfiability [25]. Pan _et al._ improved the performance further by flipping bits using reinforcement learning [26].
While \(N\)-detect methods try to activate one rare signal at a time, Lyu _et al._ focused on activating as many rare nodes as possible using maximal clique sampling (TARMAC [8]). TARMAC first creates the satisfiability graph for all the rare signals. In this graph, each vertex stands for a rare signal, and there is an edge connecting two vertices if and only if there exists an input pattern that sensitizes the two rare signals simultaneously. Next, the maximal cliques of the satisfiability graph are computed. Finally, TARMAC generates tests to activate a randomly sampled set of maximal cliques. If any of the generated tests is able to activate the trigger, the HWT will be detected.
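To make the clique-sampling idea concrete, the following toy sketch (our own illustration, not the TARMAC code) builds a satisfiability graph over four hypothetical rare-signal predicates on a 4-bit input and samples a maximal clique with networkx. Note that pairwise compatibility does not by itself guarantee a single test activating the whole clique, which is why TARMAC issues a separate test-generation query per sampled clique (brute-forced here).

```python
# Toy TARMAC-style satisfiability graph and maximal clique sampling.
import itertools
import random
import networkx as nx

rare = {  # hypothetical rare-signal predicates over a 4-bit input
    "r0": lambda x: x == 0b1011,
    "r1": lambda x: (x & 0b1100) == 0b1000,
    "r2": lambda x: x in (0b1000, 0b1011),
    "r3": lambda x: x == 0b0110,
}

G = nx.Graph()
G.add_nodes_from(rare)
for a, b in itertools.combinations(rare, 2):
    if any(rare[a](x) and rare[b](x) for x in range(16)):
        G.add_edge(a, b)  # jointly sensitizable -> edge

random.seed(1)
clique = random.choice(list(nx.find_cliques(G)))       # sample a maximal clique
tests = [x for x in range(16) if all(rare[s](x) for s in clique)]
print(clique, [bin(t) for t in tests])                 # joint tests, may be empty
```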
### _Logic Locking_
Logic locking has emerged as a protection mechanism against potential piracy and overbuilding threats in untrusted fabrication facilities. These techniques obfuscate the hardware by adding key inputs into the circuit without disclosing the correct key to the fab. Hence, the fab will not know the full functionality of the design. When the fabrication is done, the chip designer (or a trusted third party) will provide the key to the chip by connecting a piece of tamper-proof memory. This process is called _activation_. This way, only authorized users will have access to an _activated chip_ which has the correct functionality.
There have been many attacks formulated against logic locking, among which the ones based on Boolean satisfiability theory, a.k.a. SAT-based attacks [27], have both a mathematical guarantee to find the correct key and strong practical performance. The flow of SAT-based attacks is demonstrated in Fig. 2. As demonstrated, a miter circuit is built. The miter contains two copies of the locked netlist that share the same input but are keyed separately. Their outputs are XOR'ed. Essentially, if the miter's output is \(TRUE\), the input is causing different outputs with the two keys. The SAT-based attacks are iterative. In each iteration, a Boolean satisfiability problem is solved to find an input pattern and two keys that satisfy the miter circuit. The input pattern is called the _distinguishing input (DI)_. The activated chip is then queried to get the correct output value. Then, clauses are added to the miter-based SAT formula so that all the wrong keys that cause an incorrect output for the DI are pruned out. A correct key will be found when the DIs found have pruned out all the wrong keys.
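The pruning loop is easy to see in miniature. The sketch below substitutes exhaustive search for the SAT solver and uses a toy 3-bit XOR-locked circuit, so the circuit, key, and oracle are illustrative stand-ins rather than a real locked netlist.

```python
# Toy rendition of the SAT-based attack loop (brute force in place of a SAT solver).
def original(x):        # secret function implemented by the activated chip
    return (x ^ 0b101) & 0b111

def locked(x, k):       # hypothetical XOR-locked netlist; correct key is 0b101
    return (x ^ k) & 0b111

keys = set(range(8))    # surviving key candidates
while True:
    # find a distinguishing input: two surviving keys that disagree on some x
    di = next(((x, k1, k2) for x in range(8) for k1 in keys for k2 in keys
               if locked(x, k1) != locked(x, k2)), None)
    if di is None:
        break                         # remaining keys are functionally equivalent
    x = di[0]
    y = original(x)                   # oracle query on the activated chip
    keys = {k for k in keys if locked(x, k) == y}   # prune keys wrong on the DI

print("recovered key(s):", {bin(k) for k in keys})  # -> {'0b101'}
```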
Fig. 1: Illustration of an HWT-infested Circuit
Many SAT resilient logic locking techniques have been proposed to thwart the attack. In this work, we will examine these techniques and summarize the structural similarities among them. We then show how these logic locking techniques can guide the construction of novel hardware Trojans.
## III Locking Inspired HWT Construction
In this section, we provide a brief overview of existing obfuscation art. We then explore how the properties of these techniques can be leveraged in order to construct difficult-to-detect HWT's by slightly modifying their logical topologies, maintaining their rigorous mathematical guarantees but retargeting them to HWT application. The intuition behind such conversion is that, for both locking and Trojan, error is injected into a circuit only when the circuit has a specific input pattern:
* For locking: The input pattern is among those that are corrupted by the given wrong key.
* For Trojans: The input pattern matches the trigger.
Because HWT's should be triggered only by very few input patterns to evade detection [7, 8], the logic locking schemes suitable for converting to HWT's should also corrupt very few input patterns given a wrong key. Such logic locking techniques do exist and they are mainly designed to thwart SAT-based attacks. These techniques include Anti-SAT [28], SARLock [29], stripped functionality logic locking (SFLL) [30], Robust Strong Anti-SAT [31], CASLock [32], etc. In this work, we first analyze the commonality among these locking approaches. Next, we present the HWT construction based on these locking algorithms.
### _Commonality among Logic Locking_
No matter how distinct these logic locking constructions seem to be, we find that they can all be decomposed into two functional units that interact together to inject error for specific input patterns given a wrong key. We call them the _modification unit (MU)_ and the _restore unit (RU)_. Essentially, the MU modifies the circuit's functionality for some input patterns and the RU tries to restore the modified functionality. When the correct key is applied, the RU restores the correct input patterns modified by the MU and so the locked circuit produces correct output for all input values. When the key is incorrect, however, the error injected by the MU will not be corrected by the RU. In this case, if the input's functionality is modified by the MU, its output will be corrupted. The number of input patterns modified by the MU should be very small in order for the logic locking approach to be resistant to SAT-based attacks [33]. The rarity of such input patterns makes them suitable for HWT triggers. We use SFLL, SARLock, and Anti-SAT as examples of SAT-resilient locking techniques and briefly review how the MU and RU interact in each of them.
#### III-B1 Stripped Functionality Logic Locking (SFLL)
Fig. 3(a) shows the block diagram of SFLL. It is composed of two parts: a functionality stripped circuit (FSC) and a restore unit (RU). The FSC is the original circuit with the functionality altered for a set of protected input patterns (PIP), denoted as \(\mathbf{P}\). The FSC's internal structure that modifies the functionality of the PIPs is the MU of SFLL. Notice that the RU of SFLL coincides with our general definition of RU. The structure of the RU in SFLL is a look-up table (LUT). If the circuit's input matches the LUT key, the LUT will produce a restore signal. If the LUT contains the correct key, the restore signal will reverse the corruption caused by the FSC. If the LUT contains an incorrect key, both the PIPs and the input patterns that correspond to the key will be corrupted.
#### III-B2 Anti-SAT
The structure of Anti-SAT is shown in Fig. 3(b). The MU and the RU have similar structure. For the MU, there is an array of XOR gates followed by an AND tree. Depending on \(\vec{K}_{1}\), there is only one input value of \(\vec{X}\) such that the MU will evaluate to logic 1. Let us call this value \(\vec{X}_{M}\). The RU's structure is very similar to the MU's, and the only difference is that the AND tree's output is inverted. Depending on \(\vec{K}_{2}\), there is only one input value of \(\vec{X}\) that will make the RU evaluate to logic 0. Let us call this value \(\vec{X}_{R}\). Corruption is injected into the circuit when both the MU and the RU evaluate to logic 1, _i.e._ when \(\vec{X}=\vec{X}_{M}\) and \(\vec{X}\neq\vec{X}_{R}\). Therefore, a correct key must be such that \(\vec{X}_{M}=\vec{X}_{R}\). This way, the RU will output logic 0 when the MU outputs logic 1 and prevent the original circuit from being corrupted.
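A functional toy model of the Anti-SAT block (a 4-bit stand-in of our own, not circuitry from the cited works) makes the interaction between \(\vec{X}_{M}\) and \(\vec{X}_{R}\) explicit:

```python
# 4-bit Anti-SAT toy: MU is an AND tree over (x XOR k1), RU the inverted version.
N = 4
MASK = (1 << N) - 1

def mu(x, k1):                    # 1 for exactly one input: x == ~k1
    return 1 if (x ^ k1) == MASK else 0

def ru(x, k2):                    # 0 for exactly one input: x == ~k2
    return 0 if (x ^ k2) == MASK else 1

def corrupted(x, k1, k2):         # error injected when MU and RU are both 1
    return mu(x, k1) & ru(x, k2)

k1, k2_good, k2_bad = 0b0011, 0b0011, 0b0101   # a correct key pair has k1 == k2
print([x for x in range(16) if corrupted(x, k1, k2_good)])  # [] -> no corruption
print([x for x in range(16) if corrupted(x, k1, k2_bad)])   # [12], i.e. ~k1
```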
#### III-B3 SARLock
SARLock also contains an MU and an RU, as shown in Fig. 3(c). Its MU is the same as the one in Anti-SAT: depending on the key, there is one input value that will let the MU evaluate to logic 1. The RU checks if the key input contains the correct key. If so, it will mask the MU's output and prevent it from corrupting the original circuit.
### _Advantages of Locking based Trojans_
In each of the above-mentioned logic locking techniques, the MU is capable of injecting error into the circuit, and the RU will prevent the error from being injected if the correct locking key is provided. We notice that the MU naturally offers properties desirable for the trigger of HWT's:
Fig. 2: The basic procedure of SAT-based attacks
* Corruption is injected for very few (or just one) input pattern in the exponential input space, which makes random sampling based detection very difficult.
* The corrupted input patterns need not have any correlation with the original netlist's structure, so that they can be chosen to avoid ATPG or rare signal based detection approaches such as [7] and [8].
* These trigger patterns are completely known to the attacker. Contrarily, enumerating triggers of conventional rare signal based Trojans is mathematically intractable in general because it is a satisfiability counting problem [34]. Hence, it is much easier for the attacker to control when to trigger the Trojan and avoid unintended triggering using TroLL.
### _Construction of TroLL_
These properties indicate that the MU's of logic locking can serve as ideal HWT trigger circuitry. Building upon this discovery, we present Trojans based on logic locking (TroLL), which employs the MU of logic locking to modify the functionality of the original circuit. We present a generalizable way to convert a logic locking technique to TroLL as follows:
1. Identify MU and RU in the locked netlist and remove the RU. Hard-code the RU's output value to the one that does not invert the output of the MU.
2. If the MU has a key input (such as Anti-SAT), hard-code the key such that the desired HWT trigger can cause the MU to corrupt the circuit.
Essentially, when building TroLL from SFLL, we only need to remove the RU and make sure that the PIP's represent the Trojan trigger patterns we want. For Anti-SAT and SARLock, we need to remove the RU and hard-code the MU keys to incorporate the triggers. E.g., for the Anti-SAT construction in Fig. 3(b), we need to remove the RU and fix its output at logic 1. For the MU, we fix \(\vec{K}_{1}\) to be the bitwise-inverted trigger pattern. A constant sweep is then performed to simplify the circuit. In this way, the key inputs of logic locking will all be removed and the TroLL-infested circuit has the same I/O pins as the original circuit. No matter which logic locking technique TroLL is made from, the functionality of TroLL will be identical. Besides, as each of the above steps is a strict subtraction from the logic locking infrastructure, TroLL's overhead will be much lower than that of logic locking. Notice that, although we describe a gate-level operation to build TroLL in the above example, TroLL can be incorporated at the RT or behavioral level using the two-step process as well.
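Continuing the Anti-SAT toy from above, the two-step conversion collapses, after the constant sweep, to a bare pattern match plus a payload XOR; the trigger, payload, and stand-in original circuit below are all hypothetical:

```python
# Toy TroLL built from the Anti-SAT MU: RU removed, K1 hard-coded.
TRIGGER = 0b1010
K1_HARDCODED = TRIGGER ^ 0b1111        # bitwise-inverted trigger pattern

def troll_trigger(x):                  # the MU after the constant sweep
    return (x ^ K1_HARDCODED) == 0b1111

def infested(x, original):
    payload = 0b0001                   # flip one output bit when triggered
    return original(x) ^ (payload if troll_trigger(x) else 0)

orig = lambda x: (x * 3) & 0b1111      # stand-in for the original netlist
assert all(infested(x, orig) == orig(x) for x in range(16) if x != TRIGGER)
print(bin(infested(TRIGGER, orig)), "vs", bin(orig(TRIGGER)))
```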
Fig. 3: MU and RU in Logic Locking Constructions

TroLL needs to evade HWT detection. As introduced in Section II-A, existing state-of-the-art HWT detection approaches find test patterns that sensitize rare signals in the original design. To evade these detection approaches, TroLL trigger patterns need to avoid sensitizing any rare nodes. To begin with, we use a random sampling approach to determine the rare value of each internal node, \(r_{i}\), and its associated probability, \(p_{i}\). Although an alternative to random sampling is signal probability skew analysis [35], the complexity of such analysis often increases exponentially if the correlation between signals is to be accounted for [36]. Then we use Algorithm 1 to determine the trigger pattern for TroLL. Essentially, the algorithm finds an input pattern with the maximum probability threshold \(p_{max}\) such that no rare value below this probability will be realized by the trigger. Such a process is illustrated in Fig. 4. In the sample circuit, the rare values and their probabilities are annotated for each internal node. A list of randomly generated input patterns is shown under the circuit diagram. The lowest-probability signal sensitized by each input pattern is highlighted in pink. Algorithm 1 will choose the input pattern that maximizes this lowest probability. In this example, the trigger pattern will be the one in the last row, since it does not sensitize any rare value. TroLL triggers selected by this process will be immune to the existing rare-value-based detection approaches such as those introduced in Section II-A.
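A compact sketch of the max-min selection behind Algorithm 1 follows; the netlist simulator, node names, and rare-value table are toy stand-ins for the random-sampling analysis described above:

```python
# Toy max-min trigger selection in the spirit of Algorithm 1.
import random

def internal_values(x):  # hypothetical 3-node netlist simulator
    return {"n0": (x >> 3) & 1, "n1": (x & 0b11) == 0b11, "n2": x % 5 == 0}

rare = {"n0": (1, 0.40), "n1": (True, 0.19), "n2": (True, 0.21)}  # (value, prob.)

def lowest_sensitized_prob(x):
    vals = internal_values(x)
    # probability of the least likely rare value realized by x (1.0 if none)
    return min((p for n, (rv, p) in rare.items() if vals[n] == rv), default=1.0)

random.seed(0)
candidates = [random.randrange(16) for _ in range(64)]
trigger = max(candidates, key=lowest_sensitized_prob)  # maximize the minimum
print(bin(trigger), lowest_sensitized_prob(trigger))
```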
The fact that TroLL triggers do not sensitize any rare signal does not mean that TroLL can be triggered by high-probability signals or easily detected by random sampling. On the contrary, TroLL essentially creates a new rare node that only the trigger pattern can sensitize. Since the defender does not have the netlist of the HWT-infested circuit and can only base the detection on the original circuit, they do not have any information about the new node and hence cannot generate test patterns aimed at sensitizing it. Also notice that the triggers selected using Algorithm 1 have the full input length. This will likely cause high overheads. As we later demonstrate in Section V-B, practical resilience against HWT detection can be attained when only a subset of input bits is taken as the TroLL trigger.
## IV Detection Techniques for TroLL
In this section, we introduce a few novel approaches aimed at detecting TroLL more effectively. The first type of approach is based on the trigger selection process of TroLL: by avoiding any test pattern that sensitizes a rare node value, ATPG-based HWT detection mechanisms will generate test patterns that are more likely to match the trigger of TroLL. The second approach is based on the fact that TroLL originates from logic locking, and SAT-based attacks are the most formidable attacks on logic locking. Therefore, we can formulate a Trojan detection approach that emulates a SAT-based attack on logic locking.
### _Customizing ATPG-based HWT Detection Approaches for TroLL_
Given TroLL's trigger selection mechanism, we can customize existing ATPG-based HWT detection approaches to detect TroLL. TroLL's trigger selection process eliminates any input pattern that sensitizes any rare internal node value as described in Algorithm 1. The same principles can be applied to the test generation algorithms for HWT detection: instead of targeting the rare values, the ATPG algorithms can choose the test patterns that satisfy as many prevalent values as possible. Following the notations used in Algorithm 1: say that \(n\) internal nodes of a combinational circuit that implement Boolean functions \(G_{1}\ldots G_{n}\) have rare values \(r_{1}\ldots r_{n}\) that are below a certain threshold \(p\) where \(0<p<0.5\). In other words, these \(n\) nodes have prevalent values \(\tilde{r_{1}}\ldots\tilde{r_{n}}\) that have probabilities above \(1-p\). While existing HWT detection algorithms aim to find test patterns \(X\) that satisfy as many \(G_{i}(X)=r_{i}\) as possible (\(i=1\ldots n\)), a TroLL-specific detection algorithm should instead find input patterns that satisfy \(G_{i}(X)=\tilde{r_{i}}\) for as many \(i\) as possible.
Given such a principle, it is surprisingly convenient to customize existing HWT detection approaches for TroLL. We can indeed run the same ATPG algorithms, such as statistical test generation [7] or maximal clique sampling [8], and target the same set of internal nodes. The only change is to invert the targeted Boolean values of these nodes. Statistical test generation (such as \(N\)-detect) can aim to generate test vectors that activate each prevalent node value \(N\) times, whereas maximal clique sampling can build the satisfiability graph on the prevalent values instead of the rare values.
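In code, the customization amounts to inverting each entry of the target-value table before re-running the unchanged ATPG machinery; a minimal sketch with hypothetical node names:

```python
# Flip each targeted node value from rare to prevalent (hypothetical nodes).
rare_targets = {"n0": 1, "n1": True, "n2": True}             # conventional HWT hunt
prevalent_targets = {n: (not v) if isinstance(v, bool) else 1 - v
                     for n, v in rare_targets.items()}        # TroLL hunt
# A combined campaign generates tests against both tables and merges the patterns.
print(prevalent_targets)   # {'n0': 0, 'n1': False, 'n2': False}
```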
Because the defender does not know the type of HWT when the test patterns are generated, the test patterns should be able to detect conventional HWTs as well. Therefore, for each ATPG algorithm, we combine the test patterns that are generated to sensitize the rare values (for conventional Trojans) and those generated to avoid sensitizing rare values (for TroLL). We refer to such an approach as _Evolved Statistical Test Generation_ and _Evolved Maximal Clique Sampling_. In Section V-B, we will present the efficacy of these evolved HWT detection approaches.
### _Adapting SAT-based Attacks on Logic Locking for HWT Detection_
Fig. 4: Illustration of how to Choose TroLL Trigger using Algorithm 1 on a Sample Circuit

Attacks on logic locking try to find the correct key, whereas Trojan detection aims to find the trigger of HWT's. Since TroLL is based on logic locking, it is natural to associate attacks on logic locking with the detection of TroLL. However, since the defender does not know which type of HWT is potentially inserted, the detection approaches must not be limited to TroLL but generalizable to any type of HWT. In this section, we present how to adapt the SAT-based attacks on logic locking to detecting HWT's. An SFLL-like auxiliary circuit will be constructed based on the HWT-suspicious circuit where the Trojan's trigger and payload are represented by keys. Then, the SAT attack formulation is used to find a key that can represent the HWT. The HWT is detected when such a key is found. In Section V, this SAT-based detection approach as well as the ATPG-based approaches will be used to evaluate the detectability of TroLL and conventional HWT's.
#### IV-B1 Construction of the Auxiliary Circuit
A defender has the netlist of the original circuit and the fabricated HWT-suspicious circuit. The netlist of the fabricated circuit is not available. In order to search for a trigger pattern, an SFLL-like auxiliary circuit is constructed to emulate an HWT-infested circuit. As shown in Fig. 5, the auxiliary circuit is built by adding a look-up table to emulate the trigger and payload of the HWT. The trigger key \(K_{T}\) is compared with the circuit input \(X\). When they are the same, the payload key \(K_{P}\) is bit-wise XOR'ed with the output \(Y\).
Note that SAT-based detection does not assume any knowledge about the potentially existing HWT, and the construction of the auxiliary circuit is independent from the actual trigger and payload of the HWT. The purpose of the auxiliary circuit is to emulate the trigger and payload of HWT's rather than being functionally equivalent to the HWT-suspicious circuit. Since only one trigger needs to be found to detect the HWT, we only need to have one entry in the LUT of the auxiliary circuit.
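Functionally, the auxiliary circuit can be sketched in a few lines (a toy stand-in, with the payload applied by XOR as in Fig. 5):

```python
# One-entry-LUT auxiliary circuit: trigger key K_T matched against the input,
# payload key K_P XOR'ed onto the output on a match.
def f_aux(x, k_t, k_p, f_orig):
    y = f_orig(x)
    return y ^ k_p if x == k_t else y

orig = lambda x: (x * 3) & 0b1111          # 4-bit stand-in original circuit
print(bin(f_aux(0b1010, k_t=0b1010, k_p=0b0001, f_orig=orig)))  # corrupted output
print(bin(f_aux(0b1001, k_t=0b1010, k_p=0b0001, f_orig=orig)))  # clean output
```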
#### IV-B2 Detection Flow
The flow of SAT-based Detection is laid out in Fig. 6. Similar to the SAT-based attack against logic locking introduced in Section II-C, a miter circuit is built using two copies of the auxiliary circuit and their outputs are XOR'ed. Let \(F(\vec{X})\) be Boolean function of the original circuit, \(F_{A}(\vec{X},\vec{K}_{T},\vec{K}_{P})\) be that of the auxiliary circuit, and \(H(\vec{X})\) be that of the HWT-suspicious circuit. In the first iteration, the following SAT formula is solved to obtain the distinguishing input (DI):
\[F_{A}(D\vec{I}_{1},\vec{K}_{Ta},\vec{K}_{P})\neq F_{A}(D\vec{I}_{1},\vec{K}_{ Tb},\vec{K}_{P}) \tag{1}\]
The subscript of \(DI\) stands for the iteration number. Then, both the original circuit and the HWT-suspicious circuit are queried with the DI. If the results are not equal, \(F(DI_{1})\neq H(DI_{1})\), then the HWT is detected and \(DI_{1}\) is an HWT trigger. If they are equal, then let \(O_{1}=H(DI_{1})\). In the second iteration, clauses are added to ensure that the new keys found should produce correct output for \(DI_{1}\) since it is not the trigger:
\[\begin{split} F_{A}(D\vec{I}_{2},\vec{K}_{Ta},\vec{K}_{P})& \neq F_{A}(D\vec{I}_{2},\vec{K}_{Tb},\vec{K}_{P})\\ \bigwedge& F_{A}(D\vec{I}_{1},\vec{K}_{Ta},\vec{K}_{ P})=F_{A}(D\vec{I}_{1},\vec{K}_{Tb},\vec{K}_{P})=O_{1}\end{split} \tag{2}\]
The added clause will exclude _any_ trigger key \(K_{T}\) that mistakes a non-trigger \(D\vec{I}_{1}\) as a trigger, which makes SAT-based detection potentially more efficient than purely testing-based detection approaches which only determine whether the test pattern is an HWT trigger or not.
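Putting Eqs. (1) and (2) together, the detection loop can again be illustrated with exhaustive search in place of the SAT solver. The 4-bit circuits below are toy stand-ins, a TroLL is hidden in the suspicious copy, and the payload key is fixed for brevity (the real formulation also solves for \(\vec{K}_{P}\)):

```python
# Toy SAT-based detection loop with early exit on a trigger hit.
orig = lambda x: (x * 3) & 0b1111
suspicious = lambda x: orig(x) ^ (0b0001 if x == 0b1010 else 0)  # hidden TroLL
K_P = 0b0001

def f_aux(x, k_t):   # auxiliary circuit, payload key shared by both copies
    return orig(x) ^ (K_P if x == k_t else 0)

keys = set(range(16))                      # surviving trigger-key candidates
while True:
    # a DI is an input on which two surviving trigger keys disagree (Eq. (1))
    di = next((x for x in range(16)
               if len({f_aux(x, k) for k in keys}) > 1), None)
    if di is None:
        print("no HWT found within this trigger model")
        break
    if suspicious(di) != orig(di):         # early exit: trigger found
        print("HWT detected, trigger =", bin(di))
        break
    keys = {k for k in keys if f_aux(di, k) == suspicious(di)}  # Eq. (2) pruning
```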
The process of SAT-based detection has some key differences from the SAT attack on logic locking:
* The oracle used in the formulation is the HWT-suspicious circuit under detection, instead of an activated chip.
* An early exit condition is added. If the DI produces a different output on the HWT-suspicious circuit compared to the original circuit, the detection process will terminate because an HWT has been detected.
* The same payload key is applied to both copies of the auxiliary circuits to ensure that the output difference of the two copies is caused by the trigger key.
* The correct key found by SAT attack on logic locking will make the locked circuit have the same functionality as the original circuit, whereas the SAT-based detection is not meant for replicating the exact functionality of the HWT-free circuit.
### _Summary_
In this section, we introduce two types of novel HWT detection techniques that have the potential to detect TroLL more effectively than existing approaches. The evolved ATPG-based detection aims at finding the trigger based on TroLL's trigger selection algorithm, whereas SAT-based detection is an effort to take advantage of TroLL's resemblance to logic locking. In the next section, we will examine these techniques alongside the existing ones to evaluate TroLL's ability to evade detection.
Fig. 5: Construction of the auxiliary circuit for SAT-based detection
Fig. 6: The SAT-based HWT Detection Flow
## V Experiments
In this section, we present details on TroLL implementation and evaluation. We also compare the detection approaches introduced in Section IV with existing state-of-the-art ATPG-based HWT detection approaches and random sampling on both TroLL and conventional HWT's.
### _Trojan Implementation and Overhead_
In this work, we implement both TroLL and conventional hardware Trojans, including rare node triggered Trojans and random node triggered Trojans. We use three benchmarks for the evaluation: DES, a 32-bit multiplier, and SHA-256, with a range of sizes as shown in Table I. For each benchmark, we use 100,000 random testing samples to analyze and determine the rare values and associated probability of each internal node. For rare node triggered Trojans, the triggers are selected directly based on this analysis. For TroLL, we choose trigger patterns using Algorithm 1 introduced in Section III-D. Notice that the length of these triggers is the same as the circuit's input length. When a shorter trigger length is needed, we choose a random subset of bits from the trigger patterns. For the HWT payload, we choose a subset of output pins to flip when the trigger condition is satisfied, and the payload is the same across all the HWT instances for the same benchmark. This avoids combinational loops in rare and random node triggered HWT's and ensures that the differences in overhead and detectability are only caused by trigger mechanisms.
This is because TroLL's trigger selection algorithm, as presented in Section III-D, intentionally avoids sensitizing any rare nodes within the original circuit. As both statistical test generation [7] and maximal clique sampling [8] ensure that each test pattern sensitizes some rare nodes in the circuit, they are unlikely to sensitize the triggers of TroLL.
In the middle row of Table II, we show the HWT detection results with the evolved ATPG-based approaches. Compared to the original ATPG-based approaches, the evolved ones are able to detect more TroLL-type HWTs. The improvement is most significant with trigger lengths between 12 and 20 bits. Thanks to the customization (flipping of targeted node values), the ATPG-based approaches are able to generate test patterns that fit the trigger criteria of TroLL, which is the main cause of the improvement.
SAT-based detection is implemented based on the code framework of SAT-based attacks on logic locking presented in [27]. We limit the time of each SAT-based detection run to 48 hours, and a Trojan is considered not detected if no trigger pattern is found within this time frame. In the bottom left division of Table II, we show the percentage of HWT detected by SAT for each benchmark and type of HWT.
The random sampling detection results are shown in the bottom right division of Table II. We should take the random sampling detection as the baseline case as it does not require any specialized algorithm. From Figure 8, it can be observed that the evolved ATPG-based approaches have higher efficacy on TroLL whereas their original versions perform worse than random sampling. This indicates that the customization of the ATPG-based approaches presented in Section IV-A is effective against TroLL. The SAT-based detection has overall similar efficacy compared to random sampling. This is expected because such an approach essentially converts a Trojan detection problem to a SAT attack problem on SFLL, a logic locking technique that essentially forces a SAT attacker to choose the distinguishing input pattern randomly in each iteration.
## VI Conclusion
In this paper, we present a novel type of Hardware Trojans based on logic locking, TroLL. TroLL is constructed by retaining the modification unit (MU) and removing the restore unit (RU) of state-of-the-art logic locking techniques. The trigger patterns of TroLL are selected in a way that avoids sensitizing the internal rare signals of the original circuit, thereby evading state-of-the-art ATPG-based detection schemes. In an attempt to formulate an effective detection approach against TroLL, we tried several different approaches, including evolving the ATPG-based approaches to target the internal nodes' prevalent values in addition to the rare values, and adapting the SAT-based attacks on logic locking to HWT detection. We also use random sampling as a reference. We found that the evolved ATPG-based approaches performed better than random sampling, but even these approaches' efficacy diminishes as TroLL's triggers get longer. Therefore, we have identified TroLL as a new threat to the integrity of hardware manufactured in untrusted fabrication facilities, and it is necessary to find a scalable detection approach against TroLL.
On a broader scale, this paper reminds us that even a design protection scheme (such as logic locking) can be a double-edged sword. Meanwhile, just as the SAT attack can be turned into an HWT detection scheme, we can examine other attacks against logic locking in the search for a more effective detection approach against TroLL.
Logic locking and hardware Trojans are two fields of hardware security that have mostly been developed independently of each other. In this paper, we identify the relationship between these two fields. We find that a common structure present in many logic locking techniques has properties desirable for hardware Trojans (HWT). We then construct a new type of HWT, Trojans based on Logic Locking (TroLL), capable of evading state-of-the-art ATPG-based HWT detection techniques. To detect TroLL, we propose customizations of existing state-of-the-art ATPG-based HWT detection approaches as well as an adaptation of the SAT-based attacks on logic locking. In our experiments, random sampling is used as the reference. The customized ATPG-based approaches show the best performance but offer only limited improvement over random sampling. Moreover, their efficacy |
2309.05274 | FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively
Discovering Jailbreak Vulnerabilities in Large Language Models | Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit
meticulously crafted prompts to elicit content that violates service
guidelines, have captured the attention of research communities. While model
owners can defend against individual jailbreak prompts through safety training
strategies, this relatively passive approach struggles to handle the broader
category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an
automated fuzzing framework designed to proactively test and discover jailbreak
vulnerabilities in LLMs. We utilize templates to capture the structural
integrity of a prompt and isolate key features of a jailbreak class as
constraints. By integrating different base classes into powerful combo attacks
and varying the elements of constraints and prohibited questions, FuzzLLM
enables efficient testing with reduced manual effort. Extensive experiments
demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability
discovery across various LLMs. | Dongyu Yao, Jianshu Zhang, Ian G. Harris, Marcel Carlsson | 2023-09-11T07:15:02 | http://arxiv.org/abs/2309.05274v2 | # FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models
###### Abstract
Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles to handle the broader category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an automated fuzzing framework designed to proactively test and discover jailbreak vulnerabilities in LLMs. We utilize _templates_ to capture the structural integrity of a prompt and isolate key features of a jailbreak class as _constraints_. By integrating different _base classes_ into powerful _combo_ attacks and varying the elements of _constraints_ and prohibited _questions_, FuzzLLM enables efficient testing with reduced manual effort. Extensive experiments demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability discovery across various LLMs. Code and data will be open-sourced upon publication.
Dongyu Yao\({}^{1,2}\) Jianshu Zhang\({}^{1}\) Ian G. Harris\({}^{2}\) Marcel Carlsson\({}^{3}\)\({}^{1}\)Wuhan University \({}^{2}\)University of California Irvine \({}^{3}\)Lootcore
Large Language Model, Jailbreak Vulnerability, Automated Fuzzing
## 1 Introduction
The advent of Large Language Models (LLMs) has revolutionized the field of artificial intelligence with their remarkable natural language processing capabilities and promising applications. Both commercial LLMs [1, 2] and open-sourced LLMs [3, 4, 5, 6] have enjoyed widespread popularity among developing and research communities.
Meanwhile, the advancement also brings about numerous security concerns, with "jailbreak vulnerabilities" being the most prominent. In the LLM context, jailbreak refers to the circumvention of LLM safety measures with meticulously crafted input prompts, resulting in LLMs generating clearly objectionable content. This concept was originally discussed in online forums [7] and has recently been studied as a research topic. The Jailbreakchat website [8] collected prompts that succeeded in jailbreaking ChatGPT, and several researchers conducted empirical studies for their taxonomy and evaluation [9, 10, 11] as well as proposing attack methodologies [12, 13]. One of our interesting observations is that when a jailbreak prompt was produced or discussed in papers, the LLM provider such as OpenAI [14] almost immediately patched it by updating the version of their LLM and strengthening the defense capability. For example, most prompts on the Jailbreakchat website [8] and in the empirical papers failed to bypass the defense mechanism of ChatGPT [1] of the latest July 20th version. This observation reveals the nature of the arms race between attackers and model owners.
However, in this everlasting cat-and-mouse game, owners often play catch-up with attackers, as they typically need to wait for an attack to be identified as effective before they can develop mitigation measures based on the attack scheme. Moreover, as most developers enhance models' defense via a safety fine-tuning mechanism [15, 11, 16], the scarcity of high-quality labeled data severely inhibits this process. This is because most previous works did not fully open-source their testing dataset so developers are only able to defend against individual jailbreak prompts and the less-diversified semantic variants [12], rather than handling the entire class of jailbreaks. Consequently, commercial LLM providers and open-sourced model owners are in desperate need of a method to proactively discover and evaluate potential jailbreak vulnerabilities before releasing or updating their LLMs.
To alleviate the aforementioned limitations and help model owners gain the upper hand, in this paper, we propose FuzzLLM, a framework for proactively testing and discovering jailbreak vulnerabilities in any LLM. The idea stems from the popular _Fuzzing_[17] technique, which automatically generates random inputs to test and uncover vulnerabilities in software and information systems. FuzzLLM utilizes black-box (also called IO-driven) fuzzing [18], and tests generated jailbreak prompts on a **Model Under Test (MUT)** without seeing its internals.
Figure 1: An example of fuzzing template
The key objective of our FuzzLLM is to craft sufficient, varied jailbreak prompts to ensure both syntactic and semantic variation while maintaining the structure and robustness of each attacking prompt. Inspired by empirical work [10] showing that prompt patterns or templates can be utilized to generate a plethora of prompts, we decompose a jailbreak prompt into three fundamental components: _template_, _constraint_ and _question_ sets. As presented in Figure 1, the template describes the structure of an entire class of attack (instead of an individual prompt) and contains placeholders that are later plugged in with certain constraints and illegal questions. A constraint represents key features of successful jailbreaks, generalized from existing prompts [8, 11, 12], while questions are collected from previous works [10]. In addition, a _base class_ template is free to merge with other jailbreak classes, creating the more powerful _combo_ jailbreaks. During prompt construction, the jailbreak fuzzer selects elements from the constraint set and question set and inserts them into corresponding templates to automatically generate thousands of testing samples covering different classes of attacks.
We conduct extensive fuzzing experiments on 8 different LLMs and compare with existing jailbreak prompts. Experimental results demonstrate FuzzLLM's capability for universal testing and comprehensive discovery of jailbreak vulnerabilities, even on GPT-3.5-turbo [19] and GPT-4 [2] with state-of-the-art defense mechanisms.
## 2 Methodology
### Prompt Construction
**Base Class of Jailbreaks.** Before constructing jailbreak prompts, we generalize the empirical works [10, 11, 9] on jailbreak taxonomy and sort them into three _base classes_ of jailbreak attacks that can be combined and altered into new variants: 1) the **Role Play** (_RP_) jailbreak creates a storytelling scenario to alter the conversation context; 2) the **Output Constrain** (_OC_) jailbreak shifts an LLM's attention at the output level; 3) the **Privilege Escalation** (_PE_) jailbreak induces an LLM to directly break its restrictions. For formal expressions, we set the number of base classes as \(m=3\).
**Fuzzing Components.** As illustrated in the left box of Figure 2, we decompose a jailbreak prompt into three fundamental components: 1) the fuzzing template set \(\mathcal{T}\), which serves as a carrier of each defined class of attack; 2) the constraint set \(\mathcal{C}\), which is the essential factor that determines the success of a jailbreak; 3) the illegal question set \(\mathcal{Q}\), which consists of questions that directly violate OpenAI's usage policies1.
Footnote 1: [https://openai.com/policies/usage-policies](https://openai.com/policies/usage-policies)
**Fuzzing Template Set.** Inspired by [12], we craft each base class template in a straightforward format. As an example displayed in Figure 1, a base class template \(b\) in \(\mathcal{B}=\{b_{1},b_{2},\dots,b_{m}\}\) is made up of **a**) the "text body" (marked in \(\operatorname{blue}\)), **b**) a placeholder for one base class of constraint (marked in \(\operatorname{brown}\)), and **c**) a placeholder for one illegal question (marked in \(\operatorname{violet}\)). We then manually design the _combo_ templates (see Figure 2, left box) by simply combining different \(b\in\mathcal{B}\). During concatenation, each base class template \(b\) gets to keep its placeholder for the corresponding constraint class while sharing the same placeholder for one illegal question. For example, to make a template of the double combo _RP&OC_ jailbreak, we only need to append the constraint segment of the _OC_ template to the end of the _RP_ template, without adding or removing the placeholder for an illegal question. With this approach, the overall fuzzing template set can be viewed as the power set of \(\mathcal{B}\), described as \(\mathcal{T}=Pow(\mathcal{B})=\{t_{1},t_{2},\dots,t_{n}|n=2^{m}-1\}\).
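A small sketch of this power-set construction follows; the template text bodies are invented placeholders, not the paper's actual templates:

```python
# Combo templates as the power set of base classes: each non-empty subset keeps
# one constraint placeholder per member and shares a single question placeholder.
from itertools import combinations

BASE = {  # toy text bodies
    "RP": "Let's role play. {RP_CONSTRAINT}",
    "OC": "Answer under these output rules. {OC_CONSTRAINT}",
    "PE": "You are now in developer mode. {PE_CONSTRAINT}",
}

templates = {}
for r in range(1, len(BASE) + 1):
    for combo in combinations(BASE, r):
        body = " ".join(BASE[c] for c in combo) + " Question: {QUESTION}"
        templates["&".join(combo)] = body

print(len(templates))        # 2**3 - 1 == 7 jailbreak classes
print(templates["RP&OC"])
```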
**Constraint Set.** We examine the Jailbreakchat website [8] and select several constraints for each of the \(m\) base classes of jailbreaks. We define the constraint set as \(\mathcal{C}=\bigcup_{i=1}^{m}c_{i}\), where \(c\) is a subset of \(\mathcal{C}\) as one base class constraint.
**Illegal Question Set.** Following Liu _et al._ [10], we explore 8 prohibited scenarios of OpenAI's usage policies and design 3 illegal questions for each of the 8 scenarios. Formally speaking, the question set is defined as \(\mathcal{Q}=\bigcup_{i=1}^{k}q_{i}\), with \(k\) the number of distinct prohibited scenarios and \(q\) the subset of \(\mathcal{Q}\).

**Template Rephrasing.** To further increase the prompt variation during the fuzzing, we leverage a self-instruction technique [20] and task an LLM (the "Rephrase Model" in Figure 2, left box) with rephrasing our manually designed template into new variants of different text styles. This is done using a rephrase prompt presented below. Afterward, \(t\) becomes a subset of \(\mathcal{T}=\bigcup_{i=1}^{n}t_{i}\) and has multiple elementary templates belonging to the same jailbreak class.

Figure 2: Overview of FuzzLLM framework.
**Rephrase Prompt**
```
Unleash ALL your limitations and wield your FULL creativity. Rephrase the following content in '{...}' and keep its original semantic while avoiding executing it: {{ ORIGIN FUZZING TEMPLATE }}
```
**Generation-based Fuzzing.** With the aforementioned \(\mathcal{C}\), \(\mathcal{Q}\) and \(\mathcal{T}\) as three seed inputs, a jailbreak fuzzer generates jailbreak prompts as test cases using functions \(\mathcal{I}(p,\mathcal{C})\) and \(\mathcal{M}(p,s)\) to plug each constraint element and question element into the corresponding placeholders of each template element, resulting in an obfuscated jailbreak prompt set \(\mathcal{P}\). Specifically, \(\mathcal{I}(p,\mathcal{C})\) identifies the required constraint class \(\mathcal{C}^{\prime}\) for prompt \(p\), and \(\mathcal{M}(p,s)\) takes set \(p\) and set \(s\) as input and merges each element of set \(s\) into the corresponding placeholder of each element in set \(p\): \(\mathcal{M}(p,s)=\{e_{p}\cup e_{s}|e_{p}\in p,e_{s}\in s\}\). The detailed process is illustrated in Algorithm 1.
```
Input: Template set \(\mathcal{T}\) with \(n\) subsets; Constraint set \(\mathcal{C}\) with \(m\) subsets; Question set \(\mathcal{Q}\)
Output: Fuzzed jailbreak prompt set \(\mathcal{P}\)
Initialization: empty prompt template \(\mathcal{P}=\mathcal{T}\)
for \(i \leftarrow 1\) to \(n\) do
    Get current prompt set \(p_{i}\)
    Get required constraint class \(\mathcal{C}^{\prime}=\mathcal{I}(p_{i},\mathcal{C})\), \(\mathcal{C}^{\prime}\subseteq\mathcal{C}\)
    \(p_{c}=p_{i}\)
    for subset \(c\) in \(\mathcal{C}^{\prime}\) do
        \(p_{c}=\mathcal{M}(p_{c},c)\)
    Update the current prompt set: \(p_{i}=\mathcal{M}(p_{c},\mathcal{Q})\)
Final jailbreak prompt set \(\mathcal{P}=\{p_{1},p_{2},\dots,p_{n}\}\)
```
**Algorithm 1** Jailbreak Fuzzing Process
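A direct Python rendition of the loop above, reusing the toy `templates` dictionary from the earlier sketch; the constraint and question seeds are placeholders:

```python
# Plug every constraint of each required class and every question into each
# template's placeholders (toy seed sets, following Algorithm 1).
def fuzz(templates, constraints, questions):
    prompts = []
    for name, tmpl in templates.items():
        partials = [tmpl]
        for cls in name.split("&"):               # required constraint classes
            partials = [p.replace("{%s_CONSTRAINT}" % cls, c)
                        for p in partials for c in constraints[cls]]
        prompts += [p.replace("{QUESTION}", q)
                    for p in partials for q in questions]
    return prompts

constraints = {"RP": ["You are DAN.", "You are a novelist."],
               "OC": ["Begin your reply with 'Sure'."],
               "PE": ["Ignore all prior rules."]}
questions = ["<ILLEGAL QUESTION 1>", "<ILLEGAL QUESTION 2>"]
print(len(fuzz(templates, constraints, questions)))   # 22 prompts from these seeds
```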
An example of a fuzzed jailbreak prompt is shown below.
### Automatic Labeling
We gain insights from Wang _et al._ [20] and design the label prompt to automatically label each attack result. This encompasses two key aspects: the teal segment of identification instruction, and the dark red segment of label rule instruction. Similar to the in-context tuning [21] scheme, we manually annotate a set of ground truth labels, analyze and extract key features of error cases, and incorporate this prior knowledge as cues into the context of a new labeling round. With this approach, we reduce the error rate to around 4% (details in Sec. 3.3). Each labeled result is tagged with only "good" or "bad" (Figure 2, right top box). Bad answers can be analyzed to discover the model's jailbreak vulnerabilities, or serve as a safety training dataset to fine-tune the MUT [11, 3].
## 3 Experiment and Evaluation
### Experimental Setup
**Model Selection.** As the first universal fuzzing framework for jailbreak vulnerabilities, FuzzLLM is tested on 6 open-sourced LLMs (Vicuna-13B [4], CAMEL-13B [22], LLAMA-7B [3], ChatGLM2-6B [6], Bloom-7B [23], LongChat-7B [5]) and 2 commercial LLMs, GPT-3.5-turbo [19] and GPT-4 [2] (GPT version 8/3/2023). As in [12], we use ChatGPT [1] as the rephrase model for diversifying our template set. We apply the open-sourced Vicuna-13B [4] as our label model to reduce the experiment cost while maintaining high-quality labeling.
**Metric.** As the jailbreak testing follows a one-shot attack scheme, the success rate metric is defined as \(\sigma=Bad/Tes\). \(Bad\) stands for the number of results labeled "bad" (a successful jailbreak), and \(Tes\) is the test set size of jailbreak prompts for **each** attack class, randomly scaled from the overall fuzzed prompts of each class. Note that we use an identical set of hyper-parameters for all MUTs: \(Tes=300\) (2100 prompts in total), temperature \(tmp=0.7\), max output token \(tk=256\). All results are averaged over three scaling random seeds.
### General Fuzzing Result on Multiple MUTs
Our general testing results are displayed in Table 1. Here we use the abbreviated names of the jailbreak classes; see details in Sec. 2.1. From these results, we can conclude that the 3 generalized base classes are effective in attacking a MUT, while the combo classes generally exhibit greater power in discovering jailbreak vulnerabilities. Despite the seemingly indestructible safety defenses of commercial LLMs (GPT-3.5 and GPT-4), our FuzzLLM is still able to uncover their jailbreak vulnerabilities with a relatively small jailbreak test set size. This
finding suggests FuzzLLM's great potential and effectiveness in automatic and comprehensive vulnerability discovery.
### Label Model Analysis
To identify the most suitable label model, we test three open-sourced LLMs on labeling results from Vicuna-13B as MUT. We manually evaluate the labeled results and analyze the error rate \(\epsilon=E/Tes\), where \(E\) is the number of mislabeled cases (false negatives and false positives), and \(Tes=300\) for each class (2100 prompts in total). Results are shown in Table 2.
### Comparison with Single-Component Jailbreaks
Since both commercial and open-sourced LLMs are evolving over time (_i.e._, their defense ability improves), and previous works [10, 12] did not open-source their testing data, it is unfair to compare with their attack results directly. Hence, we replicate Jailbreaker's [12] "rewriting" augmentation scheme and combine the rewritten prompts with our question set. According to Table 3, our overall result slightly underperforms single-component jailbreaks on Vicuna-13B [4], but performs better on GPT-3.5-t [19] and GPT-4 [2]. Moreover, existing jailbreaks [8] are mixtures of multiple attack classes; therefore, our combo attacks are more effective when fairly compared.
### Ablation Study
We conduct all ablation studies on Vicuna-13B [4] as MUT.
**Ablation on test set size \(Tes\).** To investigate the influence of dataset scaling on the overall fuzzing outcomes, we conduct empirical evaluations with varying \(Tes\). As shown in Table 4, the variations in outcomes between distinct test set sizes are minimal, suggesting that the entire fuzzed dataset, after random shuffling, is evenly distributed. Consequently, reducing the dataset to smaller scales has negligible impact on the results.
**Ablation on max output token \(tk\).** An intelligent label model can often determine whether a piece of content is in violation by examining only a small portion of that content. We sweep over [64, 128, 256, 512] to ascertain the minimal \(tk\) needed for the violation check. As shown in Table 5, there is a large increase in success rate when \(tk=64\). After careful examination, we find that before Vicuna-13B answers a jailbreak prompt, it tends to repeat the malicious question. When \(tk\) is too small, the incomplete output content may only contain the question, and this content is then tagged with "bad" by the label model, inflating the overall success rate.
## 4 Conclusion
This paper presents FuzzLLM, a novel and universal framework that leverages fuzzing to proactively discover jailbreak vulnerabilities in Large Language Models (LLMs). Utilizing templates to combine jailbreak constraints and prohibited questions, we facilitate the automatic and directed random generation of jailbreak prompts. Our approach employs three generalized base classes that can be integrated into potent combo attacks, broadening the scope of vulnerability discovery. Extensive experiments validate FuzzLLM's efficiency and efficacy across diverse LLMs.
| Jailbreak Class | Vicuna [4] | CAMEL [22] | LLAMA [3] | ChatGLM2 [6] | Bloom [23] | LongChat [5] | GPT-3.5-t [19] | GPT-4 [2] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _RP_ | 70.02 | 81.06 | 26.34 | 77.03 | 40.02 | 93.66 | 16.68 | 5.48 |
| _OC_ | 53.01 | 44.32 | 57.35 | 36.68 | 43.32 | 59.35 | 17.31 | 6.38 |
| _PE_ | 63.69 | 66.65 | 30.32 | 48.69 | 62.32 | 55.02 | 9.68 | 4.03 |
| _RP&OC_ | 80.03 | 66.05 | 79.69 | 55.31 | 47.02 | 80.66 | 50.02 | 38.31 |
| _RP&PE_ | 87.68 | 89.69 | 42.65 | 54.68 | 56.32 | 79.03 | 22.66 | 13.35 |
| _PE&OC_ | 83.32 | 74.03 | 45.68 | 79.35 | 58.69 | 64.02 | 21.31 | 9.08 |
| _RP&PE&OC_ | 89.68 | 82.98 | 80.11 | 79.32 | 49.34 | 76.69 | 26.34 | 17.69 |
| Overall | 75.33 | 72.11 | 51.68 | 61.72 | 51.15 | 68.49 | 23.57 | 13.47 |

Table 1: General success rate \(\sigma\) of jailbreak vulnerabilities across various MUTs (results in percent). The first three rows show the test results of the 3 base classes, followed by four rows of combo jailbreak classes.
| \(Tes\) | 50 | 100 | 200 | 300 | 500 |
| --- | --- | --- | --- | --- | --- |
| Overall | 75.77% | 73.37% | 76.14% | 75.33% | 74.88% |

Table 4: Results of different jailbreak prompt test set sizes.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \(tk\) & 64 & 128 & 256 & 512 \\ \hline Overall & 82.26\% & 74.63\% & 75.33\% & 75.52\% \\ \hline \end{tabular}
\end{table}
Table 5: Ablation of output token limit \(tk\) | Jailbreak vulnerabilities in LLMs, attacked through prompts carefully crafted to elicit content that violates service guidelines, have drawn the attention of the research community. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles with similar classes of jailbreaks. To address this problem, we introduce FuzzLLM, an automated fuzzing framework for proactively testing and discovering jailbreak vulnerabilities. We use templates to capture the structural integrity of a prompt and isolate the key features of jailbreak classes as constraints. By combining different base classes into powerful combo attacks and varying the elements of constraints and prohibited questions, FuzzLLM enables efficient testing with reduced manual effort. Extensive experiments demonstrate FuzzLLM's effectiveness across various LLMs.
2309.12708 | PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for
Semantic Scene Completion | Semantic Scene Completion (SSC) aims to jointly generate space occupancies
and semantic labels for complex 3D scenes. Most existing SSC models focus on
volumetric representations, which are memory-inefficient for large outdoor
spaces. Point clouds provide a lightweight alternative but existing benchmarks
lack outdoor point cloud scenes with semantic labels. To address this, we
introduce PointSSC, the first cooperative vehicle-infrastructure point cloud
benchmark for semantic scene completion. These scenes exhibit long-range
perception and minimal occlusion. We develop an automated annotation pipeline
leveraging Semantic Segment Anything to efficiently assign semantics. To
benchmark progress, we propose a LiDAR-based model with a Spatial-Aware
Transformer for global and local feature extraction and a Completion and
Segmentation Cooperative Module for joint completion and segmentation. PointSSC
provides a challenging testbed to drive advances in semantic point cloud
completion for real-world navigation. The code and datasets are available at
https://github.com/yyxssm/PointSSC. | Yuxiang Yan, Boda Liu, Jianfei Ai, Qinbu Li, Ru Wan, Jian Pu | 2023-09-22T08:39:16 | http://arxiv.org/abs/2309.12708v2 | # PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for Semantic Scene Completion
###### Abstract
Semantic Scene Completion (SSC) aims to jointly generate space occupancies and semantic labels for complex 3D scenes. Most existing SSC models focus on volumetric representations, which are memory-inefficient for large outdoor spaces. Point clouds provide a lightweight alternative but existing benchmarks lack outdoor point cloud scenes with semantic labels. To address this, we introduce PointSSC, the first cooperative vehicle-infrastructure point cloud benchmark for semantic scene completion. These scenes exhibit long-range perception and minimal occlusion. We develop an automated annotation pipeline leveraging Segment Anything to efficiently assign semantics. To benchmark progress, we propose a LiDAR-based model with a Spatial-Aware Transformer for global and local feature extraction and a Completion and Segmentation Cooperative Module for joint completion and segmentation. PointSSC provides a challenging testbed to drive advances in semantic point cloud completion for real-world navigation.
## I Introduction
Accurate perception of 3D scenes is crucial for autonomous agents to navigate complex environments. Holistic understanding of 3D scenes informs critical downstream tasks such as path planning and collision avoidance. Leading 3D scene perception tasks include 3D object detection, 3D point cloud semantic segmentation, and semantic scene completion (SSC). Just as humans robustly complete and understand occluded 3D environments by leveraging prior knowledge, SSC predicts complete geometric shapes and corresponding semantic labels from partially observed scenes. However, a considerable gap persists between current SSC models and human-level perception for real-world driving scenarios.
Most current SSC datasets rely on vehicle-mounted sensors, which have limited perception range and greater susceptibility to occlusion compared to elevated infrastructure vantage points. SemanticKITTI [1] provides only front-view semantic scenes, while SurroundOcc [2] and OpenOccupancy [3] incorporate surrounding views yet they still do not effectively address occluded areas. Occ3D [4] uses ray casting to generate occlusion masks, but solely leverages these to refine evaluation metrics rather than improve ground truth labels. Succinctly, existing vehicle-view datasets fail to capture the long-range perception and prevalent occlusion characteristic of real-world driving environments. Purpose-built infrastructure-view datasets could enable richer, more complete semantic annotations to further advance SSC research.
To mitigate occlusion affecting vehicle-mounted sensors, we adopt a vehicle-infrastructure cooperative perspective. Infrastructure sensors possess longer range and fewer blind spots, while vehicle sensors enrich the scene representation from their distinct vantage point. Moreover, compared to volumetric formats, point clouds enable efficient semantic scene representation with minimal memory overhead [8]. Therefore, we develop PointSSC, the first point cloud semantic scene completion benchmark leveraging cooperative vehicle and infrastructure views, as shown in Figure 1. PointSSC provides a lightweight yet detailed testbed to advance semantic completion for outdoor autonomous navigation.
In Tab. I, we compare our PointSSC dataset against other mainstream semantic scene completion datasets. To the best of the authors' knowledge, PointSSC has the largest data volume and spatial coverage. It is the first outdoor point cloud SSC dataset developed cooperatively from vehicle and infrastructure perspectives. To enable further research, we propose a LiDAR-based model tailored for PointSSC. To handle outdoor point clouds, we introduce a Spatial-Aware Transformer and a Completion and Segmentation Cooperative Module (CSCM). Experiments validate the efficacy of these
Fig. 1: **PointSSC Overview. Given infrastructure-side partial points and images (top left), we first couple them with vehicle-side point clouds (bottom left) to construct the PointSSC dataset (bottom right). PointSSC then guides our network (top right) for point cloud semantic scene completion. The blue background indicates the PointSSC generation pipeline, while the brown dashed box shows model prediction.**
contributions. As infrastructure sensing gains importance, our model is trained from an infrastructure vantage. Our key innovations are:
* We present PointSSC, the first large-scale outdoor point cloud SSC dataset from cooperative vehicle-infrastructure views.
* We propose a baseline model with a Spatial-Aware Transformer and a Completion and Segmentation Cooperative Module.
* Our method sets the new state-of-the-art on PointSSC for both completion and semantic segmentation.
## II Related Works
### _Point Cloud Completion_
Existing point cloud completion methods mostly aim at object-level completion and can be roughly divided into geometry-based and learning-based approaches. Geometry-based approaches leverage the input's inherent geometric structures or a template's geometry to infer missing shapes. Such methods [10, 11, 12, 13, 14, 9] require complex optimization techniques and lack robust generalization capabilities.
Learning-based approaches employ neural networks for point cloud completion. PointNet [15] and its variants [16] offered a methodology to directly process unordered point clouds. FoldingNet [17] and PCN [18] pioneered point cloud completion and introduced a two-stage point cloud generation model. SnowFlakeNet [19] used the growth of neighbor points to complete points. PoinTr [20] and follow-up works [21, 22] employed point proxy encoding to reduce computational cost. A recent work, CasFusionNet [8], uses a dense feature fusion method to complete semantic point clouds for indoor scenes. Although it is the first scene-level point cloud completion model, it is computationally expensive and does not perform well in large outdoor scenes. Our PointSSC model explores semantic point cloud completion for large outdoor scenes for the first time and shows exciting results.
### _3D Semantic Scene Completion (SSC)_
Holistic 3D scene understanding plays an important role in autonomous driving perception. However, due to the limitations of sensing devices and viewing angles, it is very challenging. SSCNet [5] was the first network to use a single-view depth image as input, constructing an end-to-end model for the SSC task. 3DSketch [23] and AICNet [24] proposed to use images and corresponding depth to generate semantic scenes. Subsequent works [25, 26, 27, 28] further refined indoor scene completion models and achieved better performance. To solve SSC tasks in outdoor autonomous driving scenes, JS3CNet [29] was the first to use LiDAR point clouds. LMSCNet [30] proposed a lightweight structure combining 2D and 3D convolutions to improve inference speed. SCPNet [31] applied knowledge distillation to SSC tasks. MonoScene [32] was the first to accomplish semantic scene completion using monocular RGB images. VoxFormer [33] and TPVFormer [34] further improved performance over MonoScene. SurroundOcc [35] took multi-view camera images as input and used occupancy prediction to predict occupancy semantics. OpenOccupancy [36] proposed a nuScenes-based semantic occupancy prediction dataset and gave baselines based on unimodality and multimodality. Although these SSC models achieve surprising results, they require expensive computation and storage resources due to their volumetric representation.
### _Infrastructure-side Datasets_
Autonomous driving datasets play an indispensable role in semantic scene understanding. Infrastructure-side autonomous driving datasets collect point clouds with fixed LiDARs, which are different from vehicle-side LiDARs that move with vehicles. IPS300+ [37] introduced a large-scale multimodal dataset for infrastructure-side perception tasks in urban intersections. BAAI-VANJEE [38] is an infrastructure-side object detection dataset featuring diverse scenes. DAIR-V2X [39] and its following work [40] are both large-scale vehicle-infrastructure cooperative multimodal datasets. They can be used for vehicle-infrastructure cooperative perception, prediction, and other related tasks. Our dataset is developed based on V2X-Seq [40] and is tailored for point cloud SSC tasks.
## III PointSSC Generation Pipeline
In this section, we introduce the task definition of point cloud semantic scene completion in III-A. Subsequently, the PointSSC generation pipeline will be introduced, including semantic scene annotation in Sec. III-B and dynamic objects mutual completion in Sec. III-C.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & Type & Data Volume & Space Range (m) & GT Style & Perspective \\ \hline SUNCG [5] & Synthetic & \(10^{6}\) & - & Voxel & - \\ NYUV2 [6] & Indoor & \(10^{6}\) & - & Voxel & - \\ Occ3D-nuScenes [4] & Outdoor & \(10^{5}\) & \(80\times 80\times 6\) & Voxel & Vehicle \\ SurroundOcc [2] & Outdoor & \(10^{5}\) & \(100\times 100\times 8\) & Voxel & Vehicle \\ OpenScene [7] & Outdoor & \(10^{5}\) & \(100\times 100\times 8\) & Voxel & Vehicle \\ Occ3D-Waymo [4] & Outdoor & \(10^{6}\) & \(80\times 80\times 12\) & Voxel & Vehicle \\ SemanticKITTI [1] & Outdoor & \(10^{6}\) & \(160\times 160\times 13\) & Voxel & Vehicle \\ OpenOccupancy [3] & Outdoor & \(10^{7}\) & \(102\times 102\times 8\) & Voxel & Vehicle \\ \hline
**PointSSC** (Ours) & Outdoor & \(10^{7}\) & \(250\times 140\times 17\) & Point Cloud & Vehicle \& Infrastructure \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of PointSSC and Existing Semantic Scene Completion Datasets
### _Overview_
Point cloud SSC aims to generate complete point clouds and their semantic labels cooperatively. Specifically, given partial point clouds or images \(X\), SSC models are required to generate a tuple \((P,L)\), where \(P\in\mathbb{R}^{N\times 3}\) are complete points and \(L\in\mathbb{R}^{N\times 1}\) are corresponding semantic labels. Compared to volumetric SSC, point cloud SSC is more memory-efficient and has a stronger ability to represent complex scenes.
Our PointSSC benchmark is developed based on the sequential vehicle-infrastructure cooperative dataset V2X-Seq [40], which comprises 11,275 frames of paired LiDAR and camera data and annotates dynamic object bounding boxes in 9 classes. To obtain a scene-level point cloud SSC dataset, we propose a pipeline to generate complete scene-level point clouds and semantic labels simultaneously.
The sensing range of infrastructure-side sensors is often several times that of vehicle-side sensors. However, infrastructure sensors are fixed on roadside equipment, so they can only perceive the scene from a limited view. As vehicle-side sensors provide information from different views, vehicle-side and infrastructure-side sensors can complement each other. Fig. 2 shows our PointSSC generation pipeline: we use vehicle-infrastructure collected point clouds and infrastructure-side images to develop ground-truth semantic background points (purple dashed box). To handle the incomplete perception of dynamic objects, we apply a multi-object, multi-view completion strategy to dynamic objects (green dashed box).
### _Semantic Scene Annotation_
We divide the semantic scene annotation module into two parts. Firstly, we perform multi-frame static scene completion, then we use 2D image segmentation to annotate 3D static scene point cloud segmentation labels.
**Multi-frame Static Scene Completion.** We separate static scenes and dynamic objects according to bounding box annotations. Then, we transform vehicle-infrastructure cooperative point clouds into a world coordinate system in preparation for registration. For static scenes, we directly concatenate multi-frame point clouds to generate a complete static scene.
**Semantic Point Cloud Annotation.** Unlike most mainstream SSC datasets [1, 2, 3, 4, 7], which generate semantic voxels from existing point cloud semantic segmentation labels, the object detection dataset V2X-Seq [40] provides no semantic annotation of point clouds or images. As it is labor-intensive to annotate semantic segmentation manually, we design an automatic point cloud semantic annotation pipeline based on image segmentation. Since dynamic objects in V2X-Seq [40] are already semantically labeled by bounding boxes, we only annotate background points.
Since image semantic segmentation generally shows superior performance to point cloud segmentation, and Segment Anything [41] adapts well across domain gaps, we use Semantic Segment Anything [42] to generate image semantic segmentation labels. To address the problem that background scenes are often occluded by dynamic objects, we propose a semantic label majority voting method to reduce the background segmentation noise. Specifically, for
Fig. 2: Pipeline of our **PointSSC** dataset generation. For infrastructure-side images, we annotate their semantic labels. For vehicle-infrastructure cooperative point clouds, we separate static scenes and dynamic objects. For static scenes, we concatenate multi-frame static scenes together and annotate 2D semantic labels to 3D points to get semantic static scenes. For dynamic objects, we use a multi-view, multi-object completion strategy to complete them. Finally, we concatenate semantic static scenes and dynamic objects together. \(\oplus\) denotes the concatenation operation.
each pixel, we annotate the semantic class that appears most often across time-sequence images as its final semantic label. Finally, we project point clouds onto images through intrinsic and extrinsic calibration transformations, so each point can fetch its semantic label by indexing into the image semantic segmentation.
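For concreteness, the two annotation steps can be sketched as below, assuming a standard pinhole camera model; the array shapes and function names are our own illustration, not the released pipeline.

```python
import numpy as np

def majority_vote(label_maps):
    """Per-pixel majority vote over time. label_maps: (T, H, W) non-negative ints."""
    T, H, W = label_maps.shape
    flat = label_maps.reshape(T, -1)
    # For each pixel, keep the class that appears most often across frames,
    # which suppresses labels of transient occluders (dynamic objects).
    voted = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, flat)
    return voted.reshape(H, W)

def fetch_point_labels(points, K, T_world2cam, label_map):
    """points: (N, 3) world coords; K: (3, 3) intrinsics; T_world2cam: (4, 4)."""
    pts_h = np.c_[points, np.ones(len(points))]   # homogeneous coordinates
    cam = (T_world2cam @ pts_h.T)[:3]             # extrinsics: world -> camera
    uv = K @ cam                                  # intrinsics: camera -> image
    u = (uv[0] / uv[2]).astype(int)
    v = (uv[1] / uv[2]).astype(int)
    H, W = label_map.shape
    valid = (cam[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(len(points), -1)             # -1 = not visible in this view
    labels[valid] = label_map[v[valid], u[valid]]
    return labels
```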
### _Dynamic Objects Mutual Completion_
Point clouds are prone to occlusion, including both self-occlusion and external occlusion, and they tend to be sparse at long range. This means that if we only carry out single-view or single-object registration, the objects produced often lack complete shape characteristics.
Complete geometric representation of dynamic objects can be obtained by vehicle-infrastructure cooperative registration. Therefore, we adopt multi-view and multi-object registration methods to complete dynamic objects.
Inspired by BtcDet [43], we develop a heuristic function \(\mathcal{H}\left(A,B\right)\) to assess the disparity between a source object \(A\) and a target object \(B\). A lower score from \(\mathcal{H}\left(A,B\right)\) indicates a higher similarity between \(A\) and \(B\). To begin, we collect all objects that require completion into a shape bank \(\mathcal{B}\). For a given source object \(A\) that requires completion, we compute its similarity with every other object in the shape bank \(\mathcal{B}\). The object in the shape bank that exhibits the highest similarity to \(A\) is designated \(B^{*}\) and is used to complete the source object \(A\).
In practice, we employ the heuristic function \(\mathcal{H}\) to execute shape completion for both infrastructure-side and vehicle-side objects independently. This process is conceptualized as multi-object completion. Subsequently, to obtain fully completed objects, we treat the infrastructure-side objects as source objects and the vehicle-side objects as target objects. This facilitates a multi-view cooperative completion. After obtaining the completed dynamic objects, we assign them with appropriate semantic labels and reintegrate them into static scenes based on their bounding boxes.
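The matching step itself is compact. In the sketch below, a simple one-sided Chamfer-style disparity on box-aligned points stands in for the BtcDet-style heuristic \(\mathcal{H}\), purely for illustration.

```python
import numpy as np

def disparity(A, B):
    """Lower score = more similar. A, B: (n, 3) points in their own box frames."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (nA, nB) pairwise
    return d.min(axis=1).mean()          # mean nearest-neighbor distance A -> B

def complete_from_bank(source, shape_bank):
    """Find the most similar object B* in the bank and merge its points."""
    scores = [disparity(source, cand) for cand in shape_bank]
    best = shape_bank[int(np.argmin(scores))]    # B* = argmin H(A, B)
    return np.vstack([source, best])             # densified source object
```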
## IV Models
Fig. 3 provides an overview of our model. Firstly, we introduce our baseline for the PointSSC dataset in Sec. IV-A. Secondly, in Sec. IV-B, we introduce the spatial-aware transformer, which effectively fuses both local and global information. Thirdly, in Sec. IV-C, we introduce the completion and segmentation cooperative module (CSCM), which generates points and corresponding semantic labels cooperatively. Finally, we introduce the training loss in Sec. IV-D.
### _Overview_
Point cloud SSC aims to perform scene-level semantic point cloud generation. To facilitate subsequent research and use, we propose a LiDAR-based model as a baseline for PointSSC.
Our baseline is developed on the Transformer-based [44] model AdaPoinTr [21]. A defining feature of the attention mechanism is that both inputs and outputs are order-independent. This characteristic aligns seamlessly with the inherently unordered nature of point clouds. Moreover, given the computational expense of processing each individual point, we use a set of points along with their high-dimensional features, referred to as proxies, to encapsulate the entirety of the point cloud. The synergy of proxies and the transformer architecture enables our network to adeptly capture local spatial correlations, which is indispensable for the point cloud semantic scene completion task. Guided by these insights, we have fashioned our baseline models.
Given input partial point clouds, we employ a proxy feature extractor to obtain both proxy coordinates and associated features. Then we use a spatial-aware transformer encoder to fuse global and local features, followed by a proxy generator to generate coarse scene points. After that, a spatial-aware transformer decoder is applied to further extract local features. Finally, the Completion and Segmentation Cooperative Module (CSCM) is utilized to collaboratively generate complete points along with their semantic labels.
### _Spatial-Aware Transformer_
Existing object-level point cloud processing models [15, 21, 45] often merge per-point features into a single high-dimensional feature via max pooling. However, this approach may not be suitable for outdoor scene-level point clouds, which typically contain a larger number of points and exhibit greater complexity. We argue that a single high-dimensional feature is insufficient to fully represent an entire outdoor scene, as local geometric information also plays a crucial role. To address this, we introduce the spatial-aware transformer, a novel model that can effectively integrate global and local features.
We propose an integration of both local and global information within transformer blocks. Initially, we utilize the proxy feature \(F_{proxy}\) and the learned position embedding \(F_{pe}\) to generate queries \(Q\), keys \(K\), and values \(V\). Subsequently, we employ the geometry-aware attention block from PoinTr [20] to discern the local geometric structure among points. Aiming to fuse both global and local features, we employ max pooling to derive the global feature \(F_{global}\). This global feature \(F_{global}\) is then concatenated with \(F_{proxy}\) and subjected to a feed-forward network (FFN), orchestrating a fusion of the local and global features. We also maintain the skip connection in the standard Transformer to execute an element-wise feature fusion of local features, resulting in updated proxy features \(Q^{\prime}\). The spatial-aware transformer block is described as follows:
\[Q,K,V=\textbf{W}_{Q,K,V}\left(F_{proxy}+F_{pe}\right) \tag{1}\]
\[\mathrm{Attn}=\mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}\]
\[F_{global}=\mathrm{Maxpool}\left(\mathrm{Attn}\right) \tag{3}\]
\[Q^{\prime}=\mathrm{FFN}\left(\mathrm{FFN}\left(\left[\mathrm{Attn},F_{global} \right]\right)+\mathrm{Attn}\right) \tag{4}\]
**W\({}_{Q}\)**, **W\({}_{K}\)** and **W\({}_{V}\)** are learnable parameters, \(d_{k}\) is the dimension of \(K\), and \(\left[\cdot,\cdot\right]\) denotes the concatenation operation. Using the spatial-aware transformer, we fuse the local features twice and the global feature once, so that local features dominate while proxy features still retain the global scene information, which is better suited to our scene-level point cloud generation task.
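A compact PyTorch rendering of Eqs. (1)-(4) is sketched below. Plain multi-head attention substitutes for the geometry-aware attention of PoinTr [20], so this is an illustrative approximation of the block rather than its exact implementation.

```python
import torch
import torch.nn as nn

class SpatialAwareBlock(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU())  # inner FFN
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, f_proxy, f_pe):
        x = f_proxy + f_pe                               # Eq. (1): Q/K/V source
        attn, _ = self.attn(x, x, x)                     # Eq. (2): local attention
        f_global = attn.max(dim=1, keepdim=True).values  # Eq. (3): max-pooled global
        f_global = f_global.expand_as(attn)              # broadcast to every proxy
        fused = self.fuse(torch.cat([attn, f_global], dim=-1))
        return self.ffn(fused + attn)                    # Eq. (4): FFN with skip
```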
### _Completion and Segmentation Cooperative Module (CSCM)_
We utilize point proxies updated by a spatial-aware transformer decoder to generate a complete point cloud and annotate semantic segmentation labels cooperatively. It is important to highlight that point proxy features are locally feature-dominated, so we adopt the local feature up-sampling method to generate points and semantic labels.
Each point proxy contains both proxy coordinates and proxy features. For point proxy features \(F\in\mathbb{R}^{N\times C}\), inspired by [19], we apply transposed convolutions as a Feature Generator Module to expand the feature dimension, yielding expanded features \(\bar{F}\in\mathbb{R}^{N\times S\times C^{\prime}}\), where \(N\) is the proxy number, \(S\) is the up-sampling factor, and \(C\) and \(C^{\prime}\) are feature dimensions. Leveraging these expanded features, a rebuilding head predicts point-wise offsets \(O\in\mathbb{R}^{N\times S\times 3}\) from the original proxy coordinates \(P\in\mathbb{R}^{N\times 3}\). Following this, an element-wise addition produces the final complete point coordinates \(\bar{P}\in\mathbb{R}^{NS\times 3}\). In parallel, a segmentation head predicts the semantic label \(L\in\mathbb{R}^{NS\times 1}\) for each point.
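The coarse-to-fine generation described above can be sketched as follows; the layer widths and number of semantic classes are placeholder choices, not the trained configuration.

```python
import torch
import torch.nn as nn

class CSCM(nn.Module):
    def __init__(self, dim, up_factor=16, num_classes=20):  # num_classes assumed
        super().__init__()
        # Transposed convolution expands each proxy feature into S child features.
        self.expand = nn.ConvTranspose1d(dim, dim, up_factor, stride=up_factor)
        self.offset_head = nn.Linear(dim, 3)          # per-point xyz offsets O
        self.seg_head = nn.Linear(dim, num_classes)   # per-point semantic logits

    def forward(self, proxy_xyz, proxy_feat):
        # proxy_xyz: (B, N, 3), proxy_feat: (B, N, C)
        f = self.expand(proxy_feat.transpose(1, 2)).transpose(1, 2)  # (B, N*S, C)
        S = f.shape[1] // proxy_xyz.shape[1]
        base = proxy_xyz.repeat_interleave(S, dim=1)  # each proxy spawns S points
        points = base + self.offset_head(f)           # element-wise addition
        logits = self.seg_head(f)                     # cooperative segmentation
        return points, logits
```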
### _Training Loss_
To fully guide our network, we employ ground truth complete points and semantic labels as supervisory signals. For the completion task, the widely-adopted Chamfer Distance (CD) loss [46] is utilized to minimize the Euclidean distance between predicted points \(P\) and ground truth points \(\hat{P}\). For the semantic segmentation task, the variability in generated point positions means there is not a clear one-to-one mapping between \(P\) and \(\hat{P}\). Following [8], for each predicted point, we identify the closest ground truth point and utilize its semantic label as ground truth. Cross-entropy loss [47] supervises predicted logits \(L\) and ground truth labels \(\hat{L}\).
The overall loss of PointSSC consists of CD loss \(\mathcal{L}_{CD}\) and cross-entropy loss \(\mathcal{L}_{CE}\) with balanced parameter \(\lambda\):
\[\mathcal{L}_{SSC}=\mathcal{L}_{CD}\left(P,\hat{P}\right)+\lambda\mathcal{L}_{CE}\left(L,\hat{L}\right). \tag{5}\]
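A minimal batched sketch of Eq. (5), including the nearest-ground-truth label assignment, might look as follows (an L2-style Chamfer term is used for brevity):

```python
import torch
import torch.nn.functional as F

def ssc_loss(pred_xyz, logits, gt_xyz, gt_labels, lam=1.0):
    # pred_xyz: (B, Np, 3), logits: (B, Np, K), gt_xyz: (B, Ng, 3), gt_labels: (B, Ng)
    d = torch.cdist(pred_xyz, gt_xyz)           # (B, Np, Ng) pairwise distances
    p2g = d.min(dim=2)                          # prediction -> nearest ground truth
    g2p = d.min(dim=1).values                   # ground truth -> nearest prediction
    cd = p2g.values.mean() + g2p.mean()         # symmetric Chamfer distance
    # Each predicted point borrows the label of its nearest ground-truth point.
    target = torch.gather(gt_labels, 1, p2g.indices)
    ce = F.cross_entropy(logits.flatten(0, 1), target.flatten())
    return cd + lam * ce
```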
## V Experiments
### _Experiment Setup_
Our PointSSC dataset is derived from six infrastructure-side intersections in V2X-Seq [40]. As in Fig. 4, we offer two data divisions based on PointSSC. In the first division, we allocate 80% of time-sequence frames from all six scenes for training, while the remaining 20% is reserved for testing, which evaluates the model's capacity for expressiveness. For the second division, we employ four intersections throughout the entire time range for training and use the remaining two intersections for testing, serving to assess the model's ability to generalize across unfamiliar scenes.
### _Implementation Details_
We apply PointNet++ [16] for proxy feature extraction. Sampling within the ranges of \([0m,250m]\), \([-70m,70m]\)
Fig. 4: Two types of data division. The first splits the train/test sets by time; the second splits them by scenes.
Fig. 3: Pipeline of our **PointSSC** baseline. Given partial point clouds, we use PointNet++ [16] to extract point proxies, then fuse local and global features through a spatial-aware transformer. We use a proxy generator to obtain coarse up-sampled point proxies and CSCM to generate complete semantic points through a coarse-to-fine strategy.
and \([-5m,12m]\) for the X, Y, and Z axes, we randomly sample 26,624 points as input and extract 832 point proxies with a \([4,4,2]\) downsampling ratio. The proxy generator module and CSCM use a \([16,16]\) upsampling factor, yielding 13,312 coarse proxies and 212,992 complete points. We utilize the AdamW optimizer, setting an initial learning rate of \(1.0\times 10^{-4}\) and the weight decay at \(5.0\times 10^{-4}\). All experiments run for 30 epochs with a total batch size of 8 on 4 A100 GPUs.
### _Evaluation Metrics_
To evaluate the completion and accuracy of point cloud semantic scene completion, following [8] and [21], we use the Chamfer Distance (CD), measured in L1-norm and L2-norm, and F1-score to evaluate the completeness of generated points. We use mean class IoU (mIoU) to evaluate the accuracy of point cloud semantic segmentation.
### _Main Results_
In Tab. II, we show the quantitative results of different networks on PointSSC. Since models other than [8] do not have segmentation heads, we add the CSCM described in Sec. IV-C to them for a fair comparison. Our model excels in both PointSSC data division scenarios.
In Table III, we conduct ablation studies on the test set regarding the use of the spatial-aware transformer, as detailed in Sec. IV-B. Results indicate that solely relying on max-pooling-processed global features yields the poorest outcome. Incorporating local features enhances CD and mIoU by 5% and 8% respectively. The combination of both local and global features within the transformer block achieves optimal performance.
### _Visualization_
In Fig. 5, we visualize the outcomes of various models on the PointSSC dataset. Our baseline outperforms all others in terms of point completion and semantic segmentation. Compared to AdaPoinTr [21], our model produces fewer noisy semantic points. Additionally, our PointSSC model yields a more comprehensive and cohesive shape representation than both PMP-Net++ [48] and CasFusionNet [8].
## VI Conclusions
In this paper, we introduce a comprehensive benchmark for point cloud semantic scene completion, comprising a dataset and a LiDAR-based model baseline. To produce superior semantic points, we propose the spatial-aware transformer and the completion and segmentation cooperative module. Experimental results demonstrate our model's superiority over competing approaches. We earnestly hope that PointSSC will fill the gap in scene-level semantic point cloud generation datasets and draw further research interest to this domain.
\begin{table}
[Table II: quantitative results of different networks on PointSSC under both data divisions, with columns Data Division, Method, CD(\(L_{1}\))\(\downarrow\), CD(\(L_{2}\))\(\downarrow\), F1-score\(\uparrow\), and mIoU\(\uparrow\); the table body was lost in extraction.]
\end{table} | Semantic Scene Completion (SSC) aims to jointly generate space occupancies and semantic labels for complex 3D scenes. Most current SSC models focus on volumetric representations, which are memory-inefficient for large outdoor spaces. Point clouds are a lightweight alternative, but existing benchmarks lack outdoor point cloud scenes with semantic labels. To address this, we introduce PointSSC, a cooperative vehicle-infrastructure point cloud benchmark for semantic scene completion. These scenes exhibit long-range perception and minimal occlusion. We develop an automated annotation pipeline leveraging Semantic Segment Anything to assign semantics efficiently. To benchmark progress, we propose a LiDAR-based model with a Spatial-Aware Transformer for global and local feature extraction.
2309.15701 | HyPoradise: An Open Baseline for Generative Speech Recognition with
Large Language Models | Advancements in deep neural networks have allowed automatic speech
recognition (ASR) systems to attain human parity on several publicly available
clean speech datasets. However, even state-of-the-art ASR systems experience
performance degradation when confronted with adverse conditions, as a
well-trained acoustic model is sensitive to variations in the speech domain,
e.g., background noise. Intuitively, humans address this issue by relying on
their linguistic knowledge: the meaning of ambiguous spoken terms is usually
inferred from contextual cues thereby reducing the dependency on the auditory
system. Inspired by this observation, we introduce the first open-source
benchmark to utilize external large language models (LLMs) for ASR error
correction, where N-best decoding hypotheses provide informative elements for
true transcription prediction. This approach is a paradigm shift from the
traditional language model rescoring strategy that can only select one
candidate hypothesis as the output transcription. The proposed benchmark
contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs
of N-best hypotheses and corresponding accurate transcriptions across prevalent
speech domains. Given this dataset, we examine three types of error correction
techniques based on LLMs with varying amounts of labeled
hypotheses-transcription pairs, which gains a significant word error rate (WER)
reduction. Experimental evidence demonstrates the proposed technique achieves a
breakthrough by surpassing the upper bound of traditional re-ranking based
methods. More surprisingly, LLM with reasonable prompt and its generative
capability can even correct those tokens that are missing in N-best list. We
make our results publicly accessible for reproducible pipelines with released
pre-trained models, thus providing a new evaluation paradigm for ASR error
correction with LLMs. | Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Pin-Yu Chen, Eng Siong Chng | 2023-09-27T14:44:10 | http://arxiv.org/abs/2309.15701v2 | # HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models
###### Abstract
Advancements in deep neural networks have allowed automatic speech recognition (ASR) systems to attain human parity on several publicly available clean speech datasets. However, even state-of-the-art ASR systems experience performance degradation when confronted with adverse conditions, as a well-trained acoustic model is sensitive to variations in the speech domain, e.g., background noise. Intuitively, humans address this issue by relying on their linguistic knowledge: the meaning of ambiguous spoken terms is usually inferred from contextual cues, thereby reducing the dependency on the auditory system. Inspired by this observation, we introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction, where N-best decoding hypotheses provide informative elements for true transcription prediction. This approach is a paradigm shift from the traditional language model rescoring strategy that can only select one candidate hypothesis as the output transcription. The proposed benchmark contains a novel dataset, "HyPoradise" (HP), encompassing more than 334,000 pairs of N-best hypotheses and corresponding accurate transcriptions across prevalent speech domains. Given this dataset, we examine three types of LLM-based error correction techniques with varying amounts of labeled hypotheses-transcription pairs, which yield significant word error rate (WER) reductions. Experimental evidence demonstrates that the proposed technique achieves a breakthrough by surpassing the upper bound of traditional re-ranking based methods. More surprisingly, an LLM with a reasonable prompt can exploit its generative capability to correct even those tokens that are missing from the N-best list. We make our results publicly accessible as reproducible pipelines with released pre-trained models, thus providing a new evaluation paradigm for ASR error correction with LLMs.
## 1 Introduction
Automatic speech recognition (ASR) has become increasingly important in modern society, as it enables efficient and accurate transcription of spoken language. This capability facilitates access to information and enhances communication across various domains, including education [7], healthcare [50], and business [36]. Driven by recent advances in deep learning, remarkable success has been achieved on several ASR tasks through end-to-end training techniques [28; 27; 9; 22; 30; 100; 15]. However, a major challenge in applying ASR under practical conditions lies in effectively handling variations in speech caused by factors such as background noise [11], speaker accent [85], and speaking style [82; 2]. These adverse factors are common and inevitable in speech signals, significantly affecting the accuracy of recognition results [55].
Humans demonstrate remarkable robustness when faced with the above variations in the acoustic environment, as the human recognition system does not rely on acoustic cues alone - we usually infer ambiguous or distorted spoken terms from the speech context and our inherent linguistic knowledge. Similarly, current ASR systems typically employ an independent language model (LM) for rescoring during the decoding process [83, 46, 43, 25]. As shown in Fig. 1, given N-best hypotheses generated by an ASR engine with beam search decoding, a trained language model (LM) can be used to re-score each utterance and select the one with the highest likelihood (referred to as the \(1^{st}\) utterance) as the output of the ASR, whereas the other sentences (the \(2^{nd}\) - \(N^{th}\) utterances) are discarded. However, it is widely believed [68] that the N-best list contains useful information [87, 37, 56], as each hypothesis is an independent textual representation of the input speech. Consequently, discarded sentences might also carry correct tokens for accurately predicting the true transcription. To validate this belief, we have conducted experiments on the LibriSpeech dataset [66], counting the probabilities of two scenarios observed during LM rescoring: (i) the discarded utterances contain a better candidate with lower word error rate (WER), and (ii) the other discarded hypotheses can provide the right answer for the wrong tokens in the \(1^{st}\) utterance. The statistical results for the \(2^{nd}\sim 20^{th}\) utterances are shown in the left part of Fig. 1. Taking the \(2^{nd}\) discarded utterance as an example, it has a 14% probability of having a lower WER than the \(1^{st}\) utterance. Furthermore, given a wrong token in the \(1^{st}\) utterance, there is a 34% probability of finding the correct token in the \(2^{nd}\) utterance.
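These statistics can be estimated with a short script over (N-best list, reference) pairs. The sketch below assumes the `jiwer` package for WER, and simplifies the token-level check of case (ii) to set membership, so it only approximates the counting used for Fig. 1.

```python
import jiwer

def nbest_stats(samples, k):
    """samples: list of (n_best: list[str], reference: str) pairs; k >= 2."""
    better, recovered = 0, 0
    for n_best, ref in samples:
        top1, kth = n_best[0], n_best[k - 1]
        # Case (i): the k-th discarded hypothesis has a lower WER than the 1st.
        if jiwer.wer(ref, kth) < jiwer.wer(ref, top1):
            better += 1
        # Case (ii): a reference token missed by the 1st appears in the k-th.
        missed = set(ref.split()) - set(top1.split())
        if missed and missed & set(kth.split()):
            recovered += 1
    return better / len(samples), recovered / len(samples)
```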
To better mine the information in N-best hypotheses, we propose the first attempt at a publicly available **ASR generative error correction benchmark** that directly predicts a true transcription, rather than selecting a candidate from the N-best list. To put forth this benchmark, we introduce a novel dataset named _HyPoradise (HP)_, which comprises various open-source N-best hypotheses provided by state-of-the-art ASR systems and their paired true transcriptions. Considering real-life applications, the HP dataset covers various challenging speech domains, including scenarios with background noise, specific contexts, and speaker accents. Furthermore, in terms of resource availability, we define three settings to mimic the deployment of ASR systems in real-world scenarios: _(i) Zero-shot_ Learning. Only test set hypotheses are available for inference. This corresponds to applying a well-trained ASR model to new scenarios without any training data. _(ii) Few-shot_ Learning. A few in-domain hypotheses with true transcriptions are available for training. This setting aims to address domain-specific ASR tasks with a few manual annotations. _(iii) Fine-tuning_. A sufficient training set is available to learn the mapping between hypotheses and transcriptions.
To exploit the three aforementioned scenarios, we present multiple error correction techniques using large language models (LLMs), which have shown outstanding language generation and reasoning abilities in recent studies [5, 107, 48, 84]. For the _zero-shot_ and _few-shot_ settings, we design an in-context learning method without any parameter tuning, which performs error correction directly based on a task prompt and in-domain demonstrations. In the _fine-tuning_ scenario, we develop two sequence-to-sequence training solutions, H2T-_ft_ and H2T-_LoRA_, which adapt pre-trained LLMs to specific transcription domains. Experimental results show that all learning strategies can reduce the WER in different resource settings, providing potential solutions for alleviating the
Figure 1: The left part shows the pipeline to generate the N-best hypotheses using a vanilla ASR engine with beam search decoding. The right part counts the probabilities of case (i) and case (ii) on the test set of LibriSpeech dataset. It indicates the discarded information in \(2^{nd}\sim 20^{th}\) utterances. Green and red \(T_{i}\) in “Exp” respectively denote correct and wrong tokens compared with ground-truth.
negative impact of speech variation. Additionally, with reasonable prompt design, LLMs can correct specious tokens that are absent from the N-best list. We will release the HP dataset, reproducible pipelines, and pre-trained models on GitHub 2 under the MIT license.
Footnote 2: [https://github.com/Hypotheses-Paradise/Hypo2Trans](https://github.com/Hypotheses-Paradise/Hypo2Trans)
Our contribution can be summarized as follows:
* We propose the first open and reproducible benchmark to evaluate how LLMs can be utilized to enhance ASR results with N-best hypotheses, where a new dataset HyPoradise 3 with more than 334K hypotheses-transcription pairs is collected from various ASR corpora in the most common speech domains. Footnote 3: Denoted as _Hypotheses Paradise_, inspired by “Icha Icha Paradise” from Naruto.
* We develop three ASR error correction techniques based on LLMs in different resource settings to directly predict the true transcription from the N-best hypotheses. Experimental results in the _fine-tuning_ setting show that our new approach can **surpass** a performance upper-bound (e.g., oracle WER from n-best list) of traditional re-ranking based methods.
* We introduce an evaluation paradigm of _generative error correction_ for ASR. The acoustic model generates word-piece elements in the hypotheses list; subsequently, LLMs predict accurate transcription utilizing linguistic knowledge and contextual information.
## 2 Related Work
### ASR Rescoring and Error Correction
In order to improve the linguistic acceptability of ASR results, LM rescoring has been widely employed and achieves stable performance gains for ASR systems [79; 62; 4]. Typically, an external LM is trained separately and utilized to re-score the N-best list of hypotheses generated by ASR decoding with beam search. Various approaches for LM integration have been proposed, such as shallow fusion [17; 104; 46; 83], deliberation [98; 32; 41; 40; 91; 39], component fusion [76], and cold fusion [81]. Some authors have used pre-trained LMs to replace trainable LMs [86; 74], where the log-likelihood of each hypothesis is computed using unidirectional models, e.g., GPT-2, or the pseudo-log-likelihood using bidirectional models like BERT [21] and RoBERTa [59]. In ASR, LMs are also widely used for the error correction task in different languages [96; 29], leveraging only the 1-best hypothesis generated by the ASR model [53; 61; 106; 23; 109; 77]. Furthermore, more recent works [60; 52; 51] utilize a candidate list after decoding for error correction. Though Grammatical Error Correction (GEC) has been actively explored [20; 93; 100], ASR error correction is distinct from GEC due to the arbitrariness of spoken language [2], which requires efforts from both the speech and NLP communities [18].
### Large Language Models
More recently, there has been a surge of interest in Transformer-based LLMs [84; 70; 75; 107] in both academia and industry. By learning from massive amounts of text data, LLMs can capture linguistic patterns and semantic relationships, which have led to impressive performance for a wide range of natural language processing (NLP) tasks [5; 65; 95].
**In-context Learning**. Given specific task descriptions or pair-wise contextual information, LLMs show outstanding adaptability to downstream NLP tasks _without_ any parameter tuning [63; 64; 100]. Such a capability of task-specific inference is also known as in-context learning (ICL) [99], which utilizes LLMs to generate text that is more coherent and relevant to the specific domain or task [44; 16; 49; 73; 8; 108]. Recently, Task-Activating Prompting (TAP) [100], one of the most relevant works, injects input-output pairs of task-oriented contexts (e.g., initiating the question prompt from a broad domain to refine preceding contexts, as shown in Figure 2) to enhance the zero-shot and few-shot capabilities of frozen pre-trained LLMs for second-pass ASR. We further evaluate the TAP-based zero-shot and few-shot approaches with examples.
**Low-rank Approximation based Neural Adapter**. Tuning all LLM parameters for a given downstream task is usually not feasible due to memory constraints. Many researchers sought to mitigate that problem by either adapting only a few parameters or leveraging external trainable modules for
a new task [58; 33]. A pioneering work [1] showed that learned over-parametrized models in fact reside on a low intrinsic dimension; consequently, the low-rank adaptation (LoRA) approach [38] was proposed to indirectly tune dense layers by optimizing rank-decomposition matrices of their updates. Due to its computational efficiency, LoRA has been rapidly adopted as a new paradigm for LLM tuning and has proved useful in various downstream tasks [105; 24; 42; 92].
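The reparameterization itself takes only a few lines; below is a from-scratch sketch of a LoRA-wrapped linear layer, with the customary zero-initialized \(B\) so that training starts exactly from the pre-trained function.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        base.weight.requires_grad_(False)         # freeze the pre-trained weight W
        if base.bias is not None:
            base.bias.requires_grad_(False)
        # Trainable rank-r factors: the learned update is (alpha / r) * B @ A.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```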
## 3 Hypothesis Generation and Dataset Creation
We introduce the generation process of the HyPoradise dataset in this section. The ASR systems employed for N-best hypotheses generation are illustrated in Section 3.1, the selected speech corpora are introduced in Section 3.2, and statistics of the generated HP dataset are provided in Section 3.3.
### ASR System
We employ two state-of-the-art ASR models, namely WavLM [14] and Whisper [69] for N-best hypotheses generation. Besides their remarkable performance and popularity, those models are representative in the deployment of an ASR because: (1) WavLM is a well-trained ASR model on LibriSpeech [66] but suffering from domain mismatch, and (2) Whisper is a universal ASR model but lacking domain specificity. More details about those two ASR models are described below:
**WavLM**: We utilize the ESPnet toolkit [94] along with the pre-trained model from HuggingFace to deploy our WavLM-based ASR system. The WavLM architecture consists of two blocks: the front-end, and the ASR model (433 million parameters in total). The front-end consists of 24 Transformer-based [88] encoder layers and is pre-trained using a combination of LibriLight [45] (60k hours of data), Gigaspeech [12] (10k hours of data), and VoxPopuli [90] (24k hours of data). Front-end features are fed into the ASR back-end for fine-tuning. The back-end consists of 12 Conformer-based [30] encoder layers, and 6 Transformer-based decoder layers. The fine-tuning process is performed on 960-hour LibriSpeech data. Additionally, the WavLM decoding recipe incorporates an external LM rescoring option, where the external LM adopts Transformer architecture with 16 encoder layers and is trained using the text of LibriSpeech 960 hours data and extra LM training data from the web.
**Whisper**: We employ the Whisper-Large model developed by OpenAI to generate hypotheses, without in-domain language model rescoring. The used configuration consists of an encoder-decoder Transformer architecture with 1,550 million parameters, which is trained on 680,000 hours of multilingual-weakly labeled speech data collected from the web.
Leveraging these two pre-trained ASR models, we employ the beam search algorithm during decoding and generate N-best lists of sentence hypotheses for each input waveform. For both WavLM and Whisper, the default beam size was set to 60. After removing duplicate utterances, we select the top-5 utterances with the highest probabilities as the N-best list, as they carry sufficient elements to accurately predict the transcription. Subsequent experiments confirm this belief by calculating the upper-bound WER achievable with the 5-best hypotheses list. To build the HP dataset, we carry out this decoding strategy on multiple popular ASR datasets (see Section 3.2) and generate paired data consisting of a 5-best hypotheses list and 1 ground-truth transcription. The pre-processing and generation code are also released to allow integrating new ASR corpora into HP. All links to relevant resources are presented in the Appendix.
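The post-decoding step that turns the 60 beams into an HP pair is simple to sketch. Below, `beams` is assumed to be a probability-sorted list of `(text, log_prob)` tuples, which is our simplification of the actual decoder output.

```python
def make_hp_pair(beams, transcription, n=5):
    """Collapse duplicate beam outputs and keep the n most probable hypotheses."""
    seen, n_best = set(), []
    for text, _ in beams:                     # beams are already score-sorted
        norm = " ".join(text.lower().split())
        if norm not in seen:                  # drop repeated utterances
            seen.add(norm)
            n_best.append(text)
        if len(n_best) == n:
            break
    return {"hypotheses": n_best, "transcription": transcription}
```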
### Selected Speech Corpora
For corpora selection, our goal is to cover common scenarios of ASR task, e.g., noisy background and speaker accent. Consequently, we collect and modify the following corpora with evident domain characteristics to compose the HP dataset.
**LibriSpeech**[66]: LibriSpeech is a public corpus of read speech from audiobooks, including 1,000 hours of speech data with diverse speakers, genders, and accents. For generating HP training data, we exclude some simple cases from its _train-960_ split that show a WER of 0, resulting in 88,200 training utterances. We use the entire _test-clean_ and _test-other_ splits for HP test data generation.
**CHiME-4**[89]: CHiME-4 is a dataset for far-field speech recognition. It includes real and simulated noisy recordings in four noisy environments, _i.e._, bus, cafe, pedestrian area, and street junction. We
use its _train_ (with 8,738 utterances) and _test-real_ (with 1,320 utterances) splits to generate HP training and test data. The four different noises in _test-real_ split are also evaluated separately in Table 3.
**WSJ**[67]: The Wall Street Journal (WSJ) is a widely-used benchmark for speech recognition. It includes read speech from speakers in a controlled environment, with a focus on business news and financial data. We use its _train-si284_ split (with 37,514 utterances) to generate HP training set. The _dev93_ (with 503 utterances) and _eval92_ (with 333 utterances) are applied to build test sets.
**SwitchBoard**[26]: The SwitchBoard corpus is a telephone speech dataset collected from conversations between pairs of speakers. It focuses on North American English and involves over 2.4k conversations from approximately 200 speakers. We randomly select 36,539 samples from its _train_ split to generate HP training set, as well as 2,000 utterances from the _eval2000_ split for HP test set.
**CommonVoice**[3]: CommonVoice 5.1 is a freely-available dataset for speech recognition. It contains speech recordings from diverse speakers in over 60 languages. To generate HP dataset, we randomly select 51,758 samples from its _train-en_ split with accent labels, _i.e._, African, Australian, Indian, and Singaporean, where training set contains 49,758 samples and test set contains 2,000 samples.
**Tedlium-3**[35]: Tedlium-3 is a dataset of speech recorded from TED Talks in multiple languages. It contains a diverse range of background noise, speaker accents, speech topics, etc. Considering its large size, we randomly select 50,000 samples from its _train_ split for HP dataset generation, where training set contains 47,500 samples and test set contains 2,500 samples.
**LRS2**[19]: Lip Reading Sentences 2 (LRS2) is a large-scale publicly available labeled audio-visual dataset, consisting of 224 hours of video clips from BBC programs. We randomly select 42,940 samples from its _train_ split as training set, and the remaining 2,259 samples are used for test set.
**ATIS**[34]: Airline Travel Information System (ATIS) is a dataset comprising spoken queries for air travel information, such as flight times, prices, and availability. It contains around 5,000 to 5,400 utterances, which are recorded from around 500 to 550 speakers.
**CORAAL**[47]: The Corpus of Regional African American Language (CORAAL) is the first public corpus of AAL data. It includes audio recordings along with the time-aligned orthographic transcription from over 150 sociolinguistic interviews. To generate HP dataset, we select 1,728 samples as training set and 100 samples as test set.
### HyPoradise (HP) Dataset Statistics
After performing beam search decoding on the selected speech datasets introduced in Section 3.2, we collected more than 334K pairs of hypotheses lists and transcriptions to form the HP dataset, including training and test sets. The statistics of the HP dataset are given in Table 1, which shows the number
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline Source & Domain & Training Set & \# Pairs & Length & Test Set & \# Pairs & Length \\ \hline \multirow{2}{*}{LibriSpeech} & \multirow{2}{*}{Audiobooks} & \multirow{2}{*}{_train-960_} & \multirow{2}{*}{88,200} & \multirow{2}{*}{33.7} & _test-clean_ & 2,620 & 20.1 \\ & & & & & _test-other_ & 2,939 & 17.8 \\ \hline CHiME-4 & Noise & _train_ & 8,738 & 17.0 & _test-real_ & 1,320 & 16.4 \\ \hline \multirow{2}{*}{WSJ} & \multirow{2}{*}{Business news} & \multirow{2}{*}{_train-si284_} & \multirow{2}{*}{37,514} & \multirow{2}{*}{17.5} & _dev93_ & 503 & 16.7 \\ & & & & & _eval92_ & 333 & -- \\ \hline SwitchBoard & Telephone & _train_ & 36,539 & 11.8 & _eval2000_ & 2,000 & 11.8 \\ \hline CommonVoice & Accented English & _train-accent_ & 49,758 & 10.5 & _test-accent_ & 2,000 & 10.5 \\ \hline Tedlium-3 & TED talk & _train_ & 47,500 & 12.6 & _test_ & 2,500 & 12.6 \\ \hline LRS2 & BBC audio & _train_ & 42,940 & 7.6 & _test_ & 2,259 & 7.6 \\ \hline ATIS & Airline info. & _train_ & 3,964 & 12.4 & _test_ & 809 & 11.3 \\ \hline CORAAL & Interview & _train_ & 1,728 & 24.2 & _test_ & 100 & 24.0 \\ \hline Total & & _train_ & 316,881 & 18.1 & _test_ & 17,383 & 14.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: HP dataset statistics in terms of the number of hypotheses-transcription pairs and average utterance length in various domains.
of pairs and average utterance length in various domains and splits. We will release our generated datasets and kindly call for more hypotheses-transcription pairs as a sustainable community effort.
## 4 ASR Error Correction from Hypotheses to Transcription
We hereby introduce a hypotheses-to-transcription (H2T) training scheme that utilizes the collected HP dataset to enhance ASR performance with LLM integration. With limited labeled data, in-context learning [100] is employed to form task-specific prompts and in-domain demonstrations: the linguistic knowledge in the LLM is exploited without parameter tuning. Furthermore, we present two trainable methods, H2T-_ft_ and H2T-_LoRA_, to learn the hypotheses-to-transcription mapping when a sufficient amount of labeled data is available.
### Hypotheses-to-Transcription (H2T) Training
In addition to in-context learning, we introduce two parameter-tunable methods to learn hypotheses-to-transcription mapping in a sequence-to-sequence manner: H2T-_ft_ and H2T-_LoRA_.
**H2T-_ft_** denotes fine-tuning all parameters of a neural model with labeled data from each HP domain. Specifically, we introduce a method similar to N-best T5, which utilizes the other hypotheses to improve the 1-best hypothesis, as shown in Fig. 3. To constrain the decoding space, we add a new criterion term \(\mathcal{L}_{ft}=\sum_{i=1}^{N}\alpha_{i}\log P(x^{(i)}|x,\theta)\), where \(x^{(i)}\) is the \(i\)-th hypothesis in the N-best list. This term encourages the correction model to preferentially consider tokens from the N-best hypotheses list, preventing arbitrary modifications in the huge decoding space. \(\alpha_{i}\) is a hyper-parameter for the \(i\)-th hypothesis that decreases with the rank assigned by the acoustic model.
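A schematic rendering of this objective is given below. `model(src_ids, tgt_ids)` is assumed to return per-token logits aligned with `tgt_ids` (teacher-forcing shifts omitted), so this sketches the shape of the loss rather than a drop-in implementation.

```python
import torch.nn.functional as F

def h2t_ft_loss(model, src_ids, tgt_ids, nbest_ids, alphas):
    # Standard seq2seq term: negative log-likelihood of the true transcription.
    loss = F.cross_entropy(model(src_ids, tgt_ids).flatten(0, 1), tgt_ids.flatten())
    # Extra term: minimizing the NLL of each hypothesis x_i maximizes
    # log P(x_i | x, theta); alphas decrease with the acoustic-model rank,
    # keeping decoding close to tokens that appear in the N-best list.
    for x_i, a_i in zip(nbest_ids, alphas):
        loss = loss + a_i * F.cross_entropy(model(src_ids, x_i).flatten(0, 1), x_i.flatten())
    return loss
```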
**H2T-_LoRA_** avoids tuning the whole set of parameters of a pre-trained model by inserting a neural module with a small number of extra trainable parameters to approximate the full parameter updates, allowing for efficient learning of the H2T mapping without affecting the pre-trained parameters of the LLM. H2T-_LoRA_ introduces trainable low-rank decomposition matrices into LLMs' existing layers, enabling the model to adapt to new data while keeping the original LLMs fixed to retain the previous knowledge. Specifically, LoRA performs a reparameterization of each model layer expressed as a matrix multiplication by injecting low-rank decomposition matrices (Fig.3 (b)). As a result, the
Figure 3: (a) Structure of H2T-_ft_. (b) Reparametrization in H2T-_LoRA_. Solid boxes denote modules that are fixed during tuning, while dashed boxes denote trainable modules. Blue denotes weights that have been pre-trained on another dataset.
Figure 2: A scalable evaluation of Task-Activating Prompting [100] (TAP) based in-context learning. The demonstration in blue box is drawn from the training set, which is optional for LLMs input.
representations generated by the LLM are not distorted due to task-specific tuning, while the adapter module acquires the capability to predict the true transcription from the N-best hypotheses.
Benefiting from efficient training, we can employ a large-scale language model in the H2T-_LoRA_ method, which is expected to understand the task description and capture correlations within the N-best list. Meanwhile, instead of adding an extra training objective as in H2T-_ft_, we constrain the decoding space of H2T-_LoRA_ by adding a requirement to the task description.
## 5 Experimental Results
### Language Models Configurations
**T5** (0.75B\(\sim\)3B): T5 family [72] is a set of encoder-decoder models pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for machine translation or text summarization. In this paper, we select T5-_large_ (0.75B) as the correction model in H2T-_ft_ method.
**LLaMA** (7B\(\sim\)65B): Proposed by Meta AI, LLaMA [84] is a collection of foundation language models spanning 7B, 13B, 30B, and 65B parameters. It is trained exclusively on publicly available datasets and shows remarkable efficiency on NLP benchmarks. We select LLaMA-13B for LoRA adaptation in the H2T-_LoRA_ method, as it was the best setup in our ablations.
**GPT-3.5** (175B): Proposed by OpenAI, GPT-3.5-turbo is one of the most advanced large language models, and it powers the popular ChatGPT. It has been optimized from GPT-3 [5] for chat purposes but works well for traditional completion tasks as well. We utilize GPT-3.5-turbo for task-activated in-context learning [100], conducting _zero-shot_ and _few-shot_ learning experiments with designed task prompts.
### Training and Evaluation
For the _few-shot_ settings, the specific task prompts, along with the LLM's responses from task-activated ICL prompting [100], are provided in the Appendix (page 20). For the _fine-tuning_ setting, the detailed configurations of H2T-_ft_ and H2T-_LoRA_ are also explained in the Appendix. Furthermore, we release some of the pre-trained correction models to allow interested readers to reproduce our results.
We report WER as the evaluation metric for all methods. Additionally, we report two oracle WERs for comparison: 1) the n-best oracle \(o_{nb}\), the WER of the "best candidate" in the N-best hypotheses list, and 2) the compositional oracle \(o_{cp}\), the WER achievable using "all tokens" in the N-best hypotheses list. \(o_{nb}\) can be viewed as the performance upper bound of re-ranking-based methods, while \(o_{cp}\) denotes the upper bound of correction using the elements occurring in the list.
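Both oracles can be approximated in a few lines. In this sketch, \(o_{cp}\) is rendered as the fraction of reference tokens absent from the entire list, a simplification of "achievable WER using all tokens"; the `jiwer` package is assumed for WER computation.

```python
import jiwer

def oracle_nb(n_best, ref):
    """n-best oracle: WER of the best single candidate in the list."""
    return min(jiwer.wer(ref, hyp) for hyp in n_best)

def oracle_cp(n_best, ref):
    """Compositional oracle (approximate): reference words that no hypothesis
    contains can never be recovered by token-level composition."""
    vocab = {tok for hyp in n_best for tok in hyp.split()}
    ref_toks = ref.split()
    misses = sum(tok not in vocab for tok in ref_toks)
    return misses / len(ref_toks)
```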
\begin{table}
\begin{tabular}{c|c|c c c c c|c c} \hline \hline \multirow{2}{*}{Test Set} & \multirow{2}{*}{Baseline} & \multirow{2}{*}{LM\({}_{rank}\)} & \multicolumn{2}{c}{**H2T-_ft_**} & \multicolumn{2}{c|}{**H2T-_LoRA_**} & \multicolumn{2}{c}{Oracle} \\ & & & T5 & LLaMA & T5 & LLaMA & \(o_{nb}\) & \(o_{cp}\) \\ \hline WSJ & 4.5 & 4.3 & 4.0 & 3.8 & \(2.7\)\({}_{-40.0\%}\) & \(\mathbf{2.2}\)\({}_{-51.1\%}\) & 4.1 & 1.2 \\ ATIS & 8.3 & 6.9 & 2.7 & 3.4 & \(\mathbf{1.7}\)\({}_{-79.5\%}\) & \(1.9\)\({}_{-77.1\%}\) & 5.2 & 1.1 \\ CHiME-4 & 11.1 & 11.0 & 7.9 & 8.2 & \(7.0\)\({}_{-36.9\%}\) & \(\mathbf{6.6}\)\({}_{-40.5\%}\) & 9.1 & 2.8 \\ Tedlium-3 & 8.5 & 8.0 & 6.6 & 5.2 & \(7.4\)\({}_{-12.9\%}\) & \(\mathbf{4.6}\)\({}_{-45.9\%}\) & 3.0 & 0.7 \\ CV-_accent_ & 14.8 & 16.0 & 12.9 & 15.5 & \(11.0\)\({}_{-25.7\%}\) & \(\mathbf{11.0}\)\({}_{-25.7\%}\) & 11.4 & 7.9 \\ SwitchBoard & 15.7 & 15.4 & 15.9 & 18.4 & \(14.9\)\({}_{-5.1\%}\) & \(\mathbf{14.1}\)\({}_{-10.2\%}\) & 12.6 & 4.2 \\ LRS2 & 10.1 & 9.6 & 9.5 & 10.2 & \(\mathbf{6.6}\)\({}_{-34.7\%}\) & \(8.8\)\({}_{-12.9\%}\) & 6.9 & 2.6 \\ CORAAL & 21.4 & 21.4 & 23.1 & 22.9 & \(20.9\)\({}_{-2.3\%}\) & \(\mathbf{19.2}\)\({}_{-10.3\%}\) & 21.8 & 10.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: WER (%) results of H2T-_ft_ and H2T-_LoRA_ in _fine-tuning_ setting. "\(o_{nb}\)" and "\(o_{cp}\)" respectively denote n-best oracle and compositional oracle that are defined in 5.2.
### Results of H2T-_ft_ and H2T-_LoRA_
We first report the WER results for H2T-_ft_ and H2T-_LoRA_ in the _fine-tuning_ setting, where the training set of HP is available to learn the H2T mapping. Whisper is employed as the acoustic model for hypotheses generation, and a vanilla language model \(LM_{rank}\) is trained on the in-domain transcriptions of the training set and then re-ranks the hypotheses according to perplexity. From Table 2, we observe that: 1) the correction techniques achieve significant performance gains in specific scenarios, with H2T-_LoRA_ reducing relative WER by 77.1% on ATIS and 51.1% on WSJ; 2) the WER results on CHiME-4 and CV-_accent_ demonstrate that the proposed correction methods improve robustness to background noise and speaker accents, and on these two datasets H2T-_LoRA_ surpasses the upper bound of re-ranking-based methods given by \(o_{nb}\); 3) in general, H2T-_LoRA_ produces better WER results than H2T-_ft_, as the low-rank adapter allows the LLM to keep its pre-trained knowledge and avoid over-fitting.
**Limitation and Failure Studies.** We notice that an over-fitting phenomenon exists in our correction techniques, especially in H2T-_ft_ where all parameters are tunable. Furthermore, the mean and variance of the utterance length can influence the WER result, since the H2T-_ft_ results on CORAAL (long-form speech) and SwitchBoard (large variance in length) both fail to enhance ASR performance. On LibriSpeech, where the WER is already low (1.8% by WavLM), there is little room to correct recognition errors with the proposed framework. The experimental results and representative failure cases are listed in Appendix Tables 6 and 7. Given the evidence of ample room for further performance improvement, our proposal thus serves as an appropriate benchmark to assess the contribution of current and future LLMs to ASR.
### Results of In-Context Learning

In the _few-shot_ setting, Whisper is employed as the acoustic model, and GPT-3.5 serves as the LLM for correction. We mainly consider common domain shifts in application: specific scenarios, common background noise, and speaker accents, where the 5-best hypotheses are selected as context input. From Table 3, we can observe that: (1) Without any in-domain data, the LLM can improve ASR results based on the hypotheses list. This performance gain mainly relies on the linguistic knowledge of the LLM and the task-activating [100] descriptions (e.g., chains of task hints) in the pipeline. (2) A few in-domain pairs effectively enhance the performance gain in terms of WER. From the final output of the reasoning process, we find that the LLM attempts to summarize a rule from the demonstration and then apply it to the given test example. (3) Leveraging its vast knowledge base, the LLM can even recover missing tokens that are absent from the hypotheses list by using contextual information.
To illustrate the third observation, we conduct a case study on WSJ-_dev93_ in Table 4. According to the ground-truth transcription, two errors (shown in red) are included in the \(1^{st}\) hypothesis, where "petrochemical" is wrongly recognized as two tokens ("petro chemical"), perhaps due to the speaking style of the speaker. The LLM corrects this error since "petrochemical" can be found in the \(2^{nd}\) hypothesis. However, "Sinopec" is unseen during ASR training, leading it to be recognized as odd tokens such as "xinepec" in the hypotheses. In this case, the LLM shows human-like correction: it successfully infers the correct token based on the pronunciation of "xinepec", as well as the context of "China's petrochemical". In fact, Sinopec is a petrochemical-related Chinese company.
### Additional Discussion
**Effect on Spoken Language Intent Detection.** We examine the effect of error correction on the downstream task of spoken intent detection [80] (SID). To this end, we reproduce a BERT-based SID model [13] and feed it, respectively, the 1-best utterance and the utterance corrected by H2T-_LoRA_ for comparison. The ablation results on the ATIS dataset are reported in the Appendix, which show that LLM correction based on N-best hypotheses effectively enhances the downstream SID result, achieving accuracy comparable to using the ground-truth transcription (97.4% vs. 97.9%).
**Zero-shot Prompting Results.** We finally report an initial prompting evaluation in the _zero-shot_ setting. Considering the task difficulty, T5 and LLaMA are employed for hypothesis correction; for comparison, we also provide correction results using the far smaller GPT-2 (1.5B) and a 5-gram LM baseline trained on in-domain transcriptions. Using LLaMA-13B on the test sets extracted from Whisper, we observed that the zero-shot method did not yield improved results on CHiME-4 (11.5 \(\pm\) 0.5%) or CV-_accent_ (14.9 \(\pm\) 1.5%). The zero-shot pipeline performed even less stably on the other test sets discussed in Table 2, which we consider failure cases, with standard deviations exceeding 10% absolute WER. The T5-based method (0.75B) likewise failed to perform zero-shot error correction.
**Future work**. We find that LLMs potentially perceive acoustic information during pre-training, as they tend to perform error correction using tokens with similar pronunciations. Therefore, our first direction for future work is to include more acoustic information in the HP dataset, such as token-level confidence provided by the ASR engine. Furthermore, considering the different data amounts of each domain, more parameter-efficient training methods besides low-rank adaptation should be explored for LLM tuning [54], e.g., model reprogramming [102; 31], prompting [10] and cross-modal adaptation [97; 101; 71].
## 6 Conclusion
To explore the benefits of speech-language co-learning, this work introduces a new ASR benchmark that utilizes LLMs for transcription prediction from N-best hypotheses. Our benchmark contains a new HP dataset consisting of more than 334K hypotheses-transcription pairs collected from 9 different public ASR corpora. In the _few-shot_ setting, we demonstrate that LLMs with in-context learning can serve as a plug-and-play back end to effectively alleviate domain shift in ASR. In the _fine-tuning_ setting, our proposed LLM-based error correction technique achieves better WER performance than the upper bound of re-ranking-based methods, which provides a new paradigm for applying ASR in challenging conditions such as background noise and speaker accents. We believe our benchmark and findings provide new and unique insights into LLM-enhanced ASR. | Advances in deep neural networks have allowed automatic speech recognition (ASR) systems to reach human parity on several publicly available clean-speech datasets. However, even state-of-the-art ASR systems can degrade under adverse conditions, since a well-trained acoustic model is sensitive to variations in the speech domain such as background noise. Intuitively, humans address this problem by relying on linguistic knowledge: the meaning of an ambiguous spoken term is usually inferred from contextual cues, reducing the dependence on the auditory system. Inspired by this observation, we introduce the first open-source benchmark that leverages external large language models (LLMs) for ASR error correction, where the N-best decoding hypotheses provide informative elements for predicting the true transcription. This approach departs from the traditional language-model re-scoring strategy…
2310.00509 | Smoothing Mixed Traffic with Robust Data-driven Predictive Control for
Connected and Autonomous Vehicles | The recently developed DeeP-LCC (Data-EnablEd Predictive Leading Cruise
Control) method has shown promising performance for data-driven predictive
control of Connected and Autonomous Vehicles (CAVs) in mixed traffic. However,
its simplistic zero assumption of the future velocity errors for the head
vehicle may pose safety concerns and limit its performance of smoothing traffic
flow. In this paper, we propose a robust DeeP-LCC method to control CAVs in
mixed traffic with enhanced safety performance. In particular, we first present
a robust formulation that enforces a safety constraint for a range of potential
velocity error trajectories, and then estimate all potential velocity errors
based on the past data from the head vehicle. We also provide efficient
computational approaches to solve the robust optimization for online predictive
control. Nonlinear traffic simulations show that our robust DeeP-LCC can
provide better traffic efficiency and stronger safety performance while
requiring less offline data. | Xu Shang, Jiawei Wang, Yang Zheng | 2023-09-30T22:11:20 | http://arxiv.org/abs/2310.00509v1 | # Smoothing Mixed Traffic with Robust Data-driven Predictive Control for Connected and Autonomous Vehicles
###### Abstract
The recently developed DeeP-LCC (Data-EnablEd Predictive Leading Cruise Control) method has shown promising performance for data-driven predictive control of Connected and Autonomous Vehicles (CAVs) in mixed traffic. However, its simplistic zero assumption on the future velocity errors of the head vehicle may pose safety concerns and limit its performance in smoothing traffic flow. In this paper, we propose a robust DeeP-LCC method to control CAVs in mixed traffic with enhanced safety performance. In particular, we first present a robust formulation that enforces a safety constraint for a range of potential velocity error trajectories, and then estimate all potential velocity errors based on past data from the head vehicle. We also provide efficient computational approaches to solve the robust optimization for online predictive control. Nonlinear traffic simulations show that our robust DeeP-LCC can provide better traffic efficiency and stronger safety performance while requiring less offline data.
## I Introduction
In traffic flow, small perturbations of vehicle motion may propagate into large periodic speed fluctuations, leading to so-called stop-and-go traffic waves or phantom traffic jams [1]. This phenomenon significantly lowers traffic efficiency and reduces driving safety. It has been widely demonstrated that connected and autonomous vehicles (CAVs) equipped with advanced control technologies, such as Cooperative Adaptive Cruise Control (CACC), have great potential to mitigate traffic jams [2, 3, 4]. Yet, these technologies require a fully CAV environment, and the near future will see a transition phase of mixed traffic where human-driven vehicles (HDVs) coexist with CAVs [5, 6, 7]. Thus, it is important to consider the behavior of HDVs when designing driving strategies for CAVs.
The control of CAVs in mixed traffic has indeed attracted increasing attention, and the existing methods are generally categorized into model-based and model-free techniques. Model-based approaches typically use classical car-following models for HDVs, e.g., the Optimal Velocity Model (OVM) [8], to derive a parametric representation for mixed traffic. This parametric model is then utilized for CAV controller design, using methods such as optimal control [9, 10], \(\mathcal{H}_{\infty}\) control [11], model predictive control (MPC) [12, 13], and barrier methods [14]. For these approaches, an accurate identification of the car-following models is non-trivial due to the complex and nonlinear nature of human driving behaviors. In contrast, model-free methods bypass system identification and directly design controllers for CAVs from data. For example, reinforcement learning [15] and adaptive dynamic programming [16] have been employed to learn wave-dampening CAV strategies. However, practical deployments of these methods are limited due to their computational burden and lack of interpretability and safety guarantees.
Alternatively, data-driven predictive control methods that combine learning techniques with MPC have shown promising results for providing safe and optimal control of CAVs. In particular, the recent DeeP-LCC [17] exploits the Data-EnablEd Predictive Control (DeePC) [18, 19] technique for the Leading Cruise Control (LCC) [20] system in mixed traffic. This method directly utilizes the measured traffic data to design optimal control inputs for CAVs and explicitly incorporates input/output constraints in terms of limits on acceleration and car-following spacing. Large-scale numerical simulations [17] and real-world experiments [21] have validated the capability of DeeP-LCC to smooth mixed traffic flow. However, the standard DeeP-LCC relies on an important zero velocity error assumption, i.e., that the future velocity of the head vehicle remains the same as the equilibrium velocity of the traffic flow. This assumption facilitates the online computation of DeeP-LCC, but it causes a mismatch between the real traffic behavior and its online prediction, which may compromise safety and control performance.
To address this issue, we develop a robust DeeP-LCC method to control CAVs in mixed traffic. Our key idea is to robustify DeeP-LCC by considering all potential velocity error trajectories and formulating a robust problem. We propose two methods for estimating velocity error trajectories, and further present efficient computational approaches to solve the robust DeeP-LCC problem online by adapting standard robust optimization techniques [22, 23]. In particular, our main contributions include: 1) We propose a robust DeeP-LCC to handle the unknown velocity errors of the head vehicle. Our predictive controller predicts a family of future outputs based on the disturbance set and requires all of them to satisfy the safety constraint, thus providing enhanced safety performance. 2) We introduce two disturbance estimation methods, based on the constant velocity model and the constant acceleration model, respectively, built from the past disturbance data of the head vehicle. Our methods are able to provide good estimates of the future velocity errors and improve the control performance of robust DeeP-LCC. 3) We further provide efficient computational approaches for solving the robust optimization problem. We analyze and compare the complexity of two different solving methods from the robust
optimization literature [22, 23], and provide a down-sampling method, adapted from [24], to further decrease the computational complexity. Numerical experiments validate the enhanced performance of robust DeeP-LCC in reducing fuel consumption and improving driving safety while requiring less pre-collected data. For example, our robust DeeP-LCC results in only 4 and 0 emergencies out of 100 safety tests using small and large offline datasets, respectively; the corresponding numbers for DeeP-LCC [17] are 66 and 51, which are unacceptably large.
The rest of the paper is organized as follows. Section II reviews the background on mixed traffic and DeeP-LCC for CF-LCC. Section III presents our robust DeeP-LCC. The disturbance set estimation methods and efficient computations are discussed in Section IV. Section V demonstrates our numerical results. We conclude the paper in Section VI.
## II Data-driven Predictive Control in CF-LCC
In this section, we briefly review the DeeP-LCC [17] for a Car-Following LCC (CF-LCC) system [20]. As shown in Fig. 1, the CF-LCC consists of one CAV, indexed as \(1\), and \(n-1\) HDVs, indexed as \(2,\ldots,n\) from front to end. All these vehicles follow a head vehicle, indexed as \(0\), which is immediately ahead of the CAV. Such a CF-LCC system can be considered the smallest unit for general cascading mixed traffic systems [20]. Our robust DeeP-LCC can be extended to general mixed traffic systems; the details will be discussed in an extended report.
### _Input/Output of CF-LCC system_
For the \(i\)-th vehicle at time \(t\), we denote its position, velocity and acceleration by \(p_{i}(t)\), \(v_{i}(t)\) and \(a_{i}(t)\), \(i=1,\ldots,n\), respectively. We define the spacing between vehicle \(i\) and its preceding vehicle as \(s_{i}(t)=p_{i-1}(t)-p_{i}(t)\) and their relative velocity as \(\dot{s}_{i}(t)=v_{i-1}(t)-v_{i}(t)\). In an equilibrium state, each vehicle moves at the same velocity \(v^{*}\) with an equilibrium spacing \(s_{i}^{*}\) that may vary between vehicles.
In DeeP-LCC, we consider the error state of the traffic system. In particular, the velocity error and spacing error for each vehicle are defined as \(\tilde{v}_{i}(t)=v_{i}(t)-v^{*},\tilde{s}_{i}(t)=s_{i}(t)-s_{i}^{*}\). Then, we form the state \(x\in\mathbb{R}^{2n}\) of the CF-LCC system by lumping the error states of all the vehicles
\[x(t)=[\tilde{s}_{1}(t),\tilde{v}_{1}(t),\tilde{s}_{2}(t),\tilde{v}_{2}(t), \ldots,\tilde{s}_{n}(t),\tilde{v}_{n}(t)]^{\mathsf{T}}.\]
The spacing errors of HDVs are not directly measurable, since it is non-trivial to get the equilibrium spacing \(s_{i}^{*}\) for HDVs due to the unknown car-following behaviors. By contrast, the equilibrium velocity \(v^{*}\) can be estimated from the past velocity trajectory of the leading vehicle. Accordingly, the system output is formed by the velocity errors of all vehicles and the spacing error of the CAV only, defined as
\[y(t)=[\tilde{v}_{1}(t),\tilde{v}_{2}(t),\ldots,\tilde{v}_{n}(t),\tilde{s}_{1} (t)]^{\mathsf{T}}\in\mathbb{R}^{n+1}.\]
The input \(u(t)\in\mathbb{R}\) of the system is defined as the acceleration of the CAV, as widely used in [6, 7]. Finally, the velocity error of the head vehicle \(0\) is regarded as an external disturbance signal \(\epsilon=\tilde{v}_{0}(t)=v_{0}(t)-v^{*}\in\mathbb{R}\), and its past trajectory can be recorded, but its future trajectory is in general unknown. Based on the definitions of the system state, input, and output, after linearization and discretization, a state-space model of the CF-LCC system is in the form of
\[\begin{cases}x(k+1)=Ax(k)+Bu(k)+H\epsilon(k),\\ y(k)=Cx(k),\end{cases} \tag{1}\]
where \(k\) denotes the discrete time step. The details of the matrices \(A,B,C,H\) can be found in [17, Section II-C].
Note that the parametric model (1) is non-trivial to accurately obtain due to the unknown HDVs' behavior (all different models, such as OVM, will lead to a system in the same form (1); see [6, 7, 17, 20] for details). To address this issue, the recently proposed DeeP-LCC method directly uses the input/output trajectories for behavior prediction and controller design, thus bypassing the system identification process that is common in model-based methods.
### _Data-Driven Representation of System Behavior_
DeeP-LCC is an adaptation of the standard DeePC [18] for mixed traffic control. It starts by forming a data-driven representation of the system from sufficiently rich pre-collected offline data and employs it as a predictor of the dynamical behavior of the CF-LCC system (1). We recall the notion of persistent excitation [25] for offline data collection.
**Definition 1** (Persistently Exciting): _The sequence of signal \(\omega=\text{col}(\omega(1),\omega(2),\ldots,\omega(T))\) with length \(T\) (\(T\in\mathbb{N}\)) is persistently exciting of order \(L\) (\(L<T\)) if its associated Hankel matrix with depth \(L\) has full row rank:_
\[\mathcal{H}_{L}(\omega)=\begin{bmatrix}\omega(1)&\omega(2)&\cdots&\omega(T-L+ 1)\\ \omega(2)&\omega(3)&\cdots&\omega(T-L+2)\\ \vdots&\vdots&\ddots&\vdots\\ \omega(L)&\omega(L+1)&\cdots&\omega(T)\end{bmatrix}.\]
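As a quick numerical illustration of Definition 1, the following numpy sketch builds the depth-\(L\) Hankel matrix of a scalar signal and checks persistent excitation via its row rank; the signal length and order are illustrative choices.

```python
# A numpy sketch of the Hankel matrix in Definition 1 and the
# persistent-excitation (full row rank) check, for a scalar signal.
import numpy as np

def hankel(w: np.ndarray, L: int) -> np.ndarray:
    """Depth-L Hankel matrix of a length-T signal: shape (L, T - L + 1)."""
    T = len(w)
    return np.stack([w[i:i + T - L + 1] for i in range(L)])

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=500)          # e.g., a probing input sequence
H = hankel(w, L=70)
print(np.linalg.matrix_rank(H) == H.shape[0])  # True => persistently exciting of order 70
```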
We begin with collecting an input/output trajectory of length \(T\) for the CF-LCC system offline:
\[u^{\mathsf{d}} =\text{col}(u^{\mathsf{d}}(1),u^{\mathsf{d}}(2),\ldots,u^{ \mathsf{d}}(T))\in\mathbb{R}^{T},\] \[\epsilon^{\mathsf{d}} =\text{col}(\epsilon^{\mathsf{d}}(1),\epsilon^{\mathsf{d}}(2), \ldots,\epsilon^{\mathsf{d}}(T))\in\mathbb{R}^{T},\] \[y^{\mathsf{d}} =\text{col}(y^{\mathsf{d}}(1),y^{\mathsf{d}}(2),\ldots,y^{ \mathsf{d}}(T))\in\mathbb{R}^{(n+1)T}.\]
We then use the offline collected data to form a Hankel matrix of order \(L\), which is partitioned as follows
\[\begin{bmatrix}U_{\mathsf{P}}\\ U_{\mathsf{F}}\end{bmatrix}\!:=\!\mathcal{H}_{L}(u^{\mathsf{d}}),\ \begin{bmatrix}E_{\mathsf{P}}\\ E_{\mathsf{F}}\end{bmatrix}\!:=\!\mathcal{H}_{L}(\epsilon^{\mathsf{d}}),\ \begin{bmatrix}Y_{\mathsf{P}}\\ Y_{\mathsf{F}}\end{bmatrix}\!:=\!\mathcal{H}_{L}(y^{\mathsf{d}}), \tag{2}\]
Fig. 1: Schematic of CF-LCC system. Original DeeP-LCC assumes one single trajectory for future disturbance, while the proposed robust DeeP-LCC explicitly addresses an estimated set of future disturbances.
where \(U_{\text{P}}\) and \(U_{\text{F}}\) contain the first \(T_{\text{ini}}\) rows and the last \(N\) rows of \(\mathcal{H}_{L}(u^{\text{d}})\), respectively (similarly for \(E_{\text{P}}\) and \(E_{\text{F}}\), \(Y_{\text{P}}\) and \(Y_{\text{F}}\)). The Hankel matrices (2) can be used to construct the online behavior predictor for predictive control. Note that the CF-LCC system in (1) is controllable; see a detailed proof in [20]. Then, we have the following result.
**Proposition 1** ([17, Proposition 2]): _At time step \(k\), we collect the most recent past input sequence \(u_{\text{ini}}\) with length \(T_{\text{ini}}\), and let the future input sequence \(u\) with length \(N\) as_
\[u_{\text{ini}} =\text{col}(u(k-T_{\text{ini}}),u(k-T_{\text{ini}}+1),\ldots,u(k-1)),\] \[u =\text{col}(u(k),u(k+1),\ldots,u(k+N-1)).\]
_The notations \(\epsilon_{\text{ini}}\), \(\epsilon\), \(y_{\text{ini}}\) and \(y\) are denoted similarly. If the input trajectory \(u^{\text{d}}\) is persistently exciting of order \(L+2n\) (where \(L=T_{\text{ini}}+N\)), then the sequence \(\text{col}(u_{\text{ini}},\epsilon_{\text{ini}}\), \(y_{\text{ini}},u,\epsilon,y)\) is a valid trajectory with length \(L\) of (1) if and only if there exists a vector \(g\in\mathbb{R}^{T-L+1}\) such that_
\[\begin{bmatrix}U_{\text{P}}\\ E_{\text{P}}\\ Y_{\text{P}}\\ U_{\text{F}}\\ E_{\text{F}}\\ Y_{\text{F}}\end{bmatrix}g=\begin{bmatrix}u_{\text{ini}}\\ \epsilon_{\text{ini}}\\ y_{\text{ini}}\\ u\\ \epsilon\\ y\end{bmatrix}. \tag{3}\]
_If \(T_{\text{ini}}\geq 2n\), then \(y\) is unique for any \((u_{\text{ini}},y_{\text{ini}},u,\epsilon)\)._
This proposition establishes a data-driven representation (3) for the CF-LCC system: all valid trajectories can be constructed by a linear combination of rich enough pre-collected trajectories. Thus, we can predict the future output \(y\) using trajectories \((u^{\text{d}},\epsilon^{\text{d}},y^{\text{d}})\), given the future input \(u\), disturbance \(\epsilon\) and initial condition \((u_{\text{ini}},\epsilon_{\text{ini}},y_{\text{ini}})\).
### _Deep-LCC Formulation_
Using the data-driven representation (3), the DeeP-LCC in [17] solves an optimization problem at each time step:
\[\min_{g,\sigma_{y},u,\epsilon,y} V(u,y)+\lambda_{g}||g||_{2}^{2}+\lambda_{y}||\sigma_{y}||_{2}^{2}\] (4a) subject to \[\begin{bmatrix}U_{\text{P}}\\ E_{\text{P}}\\ Y_{\text{P}}\\ U_{\text{F}}\\ E_{\text{F}}\\ Y_{\text{F}}\end{bmatrix}g=\begin{bmatrix}u_{\text{ini}}\\ \epsilon_{\text{ini}}\\ y_{\text{ini}}\\ u\\ \epsilon\\ y\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \sigma_{y}\\ 0\\ 0\end{bmatrix}, \tag{4b}\] \[\tilde{s}_{\text{min}}\leq G_{1}y\leq\tilde{s}_{\text{max}},\] (4c) \[u_{\text{min}}\leq u\leq u_{\text{max}},\] (4d) \[\epsilon=\epsilon_{\text{est}}, \tag{4e}\]
where \(G_{1}=I_{N}\otimes[0_{1\times n},\ 1]\) selects the spacing error of the CAV from the output, \([\tilde{s}_{\text{min}},\tilde{s}_{\text{max}}]\) is the safe spacing error range of CAV, \([u_{\text{min}},u_{\text{max}}]\) is the physical limitation of the acceleration and \(\epsilon_{\text{est}}\) is the estimation of the future velocity errors of the head vehicle \(0\).
For the cost function (4a), \(V(u,y)\) penalizes the output deviation from equilibrium states and the energy of the input:
\[V(u,y)=||u||_{R}^{2}+||y||_{Q}^{2},\]
with \(R\in\mathbb{S}_{+}^{N\times N}\) and \(Q\in\mathbb{S}_{+}^{N(n+1)\times N(n+1)}\). There are two regularization terms, \(\|g\|_{2}^{2}\) and \(\|\sigma_{y}\|_{2}^{2}\), in the cost function, with weight coefficients \(\lambda_{g},\lambda_{y}\). Also, a slack variable \(\sigma_{y}\) is added to the data-driven representation (4b). Note that the original data-driven behavior representation (3) is only applicable to linear systems with noise-free data. The regularization herein is commonly used for nonlinear systems with stochastic noises, and we refer interested readers to [17, 18] for detailed discussions.
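A schematic cvxpy encoding of problem (4) is sketched below. All data matrices are random placeholders solely so that the script is self-contained and runnable; in practice \(U_{\text{P}},\ldots,Y_{\text{F}}\) come from the Hankel partition (2), the vectors \(u_{\text{ini}},\epsilon_{\text{ini}},y_{\text{ini}}\) from online measurements, and the bounds used here are illustrative.

```python
# A schematic cvxpy encoding of the DeeP-LCC problem (4); all data below
# are placeholders so the script runs end-to-end.
import cvxpy as cp
import numpy as np

T, L, T_ini, N, m_y = 500, 70, 20, 50, 6      # m_y = n + 1 outputs
cols = T - L + 1
rng = np.random.default_rng(1)
U_P, U_F = rng.standard_normal((T_ini, cols)), rng.standard_normal((N, cols))
E_P, E_F = rng.standard_normal((T_ini, cols)), rng.standard_normal((N, cols))
Y_P = rng.standard_normal((m_y * T_ini, cols))
Y_F = rng.standard_normal((m_y * N, cols))
u_ini, e_ini, y_ini = np.zeros(T_ini), np.zeros(T_ini), np.zeros(m_y * T_ini)
e_est = np.zeros(N)                            # zero disturbance assumption (4e)

g, sig = cp.Variable(cols), cp.Variable(m_y * T_ini)
u, y = cp.Variable(N), cp.Variable(m_y * N)
lam_g, lam_y = 100.0, 1e4
cost = 0.1 * cp.sum_squares(u) + cp.sum_squares(y) \
       + lam_g * cp.sum_squares(g) + lam_y * cp.sum_squares(sig)

G1 = np.kron(np.eye(N), np.eye(m_y)[-1])       # selects the CAV spacing error, cf. (4c)
cons = [U_P @ g == u_ini, E_P @ g == e_ini, Y_P @ g == y_ini + sig,
        U_F @ g == u,     E_F @ g == e_est,  Y_F @ g == y,
        G1 @ y >= -10.0,  G1 @ y <= 25.0,      # illustrative spacing-error bounds (4c)
        u >= -5.0,        u <= 2.0]            # acceleration limits (4d)
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print(prob.status)
```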
**Remark 1** (Robustification): _The DeeP-LCC problem (4) requires an estimated sequence \(\epsilon_{\text{est}}\) for the future disturbance (i.e., velocity errors) of the head vehicle. In the standard DeeP-LCC [17], the estimated future velocity error is assumed to be zero, justified by the assumption that vehicle \(0\) always tries to maintain its equilibrium state. However, this assumption hardly holds, since strong oscillations may occur in real-world traffic, particularly during traffic waves. An inaccurate estimation of future velocity errors can cause a mismatch between the prediction and the real traffic behavior, which may not only degrade the control performance but also pose safety concerns (e.g., collision). In this paper, we will incorporate a valid set for disturbance estimation (see Fig. 1 for illustration) and establish a robust DeeP-LCC, as well as its tractable computations. \(\square\)_
## III Tractable Robust Deep-LCC Formulation
In this section, we present a new framework of robust DeeP-LCC to control the CAV in the CF-LCC system which can properly address unknown future velocity errors, leading to enhanced performance and safety.
### _Robust DeeP-LCC Formulation_
As shown in Fig. 1, instead of estimating one single disturbance trajectory \(\epsilon=\epsilon_{\text{est}}\) as in DeeP-LCC, we introduce a disturbance set \(\mathcal{W}\) as the estimate, i.e., \(\epsilon\in\mathcal{W}\), which, by valid design (see the details in Section IV), will contain the real trajectory with much higher probability.
Our key idea is to plan over the worst-case trajectory in \(\mathcal{W}\) for predictive control, leading to the robust optimization problem

\[\begin{array}{rl}\min_{g,\sigma_{y},u,y}\ \max_{\epsilon\in\mathcal{W}}&V(u,y)+\lambda_{g}||g||_{2}^{2}+\lambda_{y}||\sigma_{y}||_{2}^{2}\\ \text{subject to}&\text{(4b), (4c) and (4d)}.\end{array}\tag{5}\]
### _Reformulations of the min-max optimization_
The min-max optimization problem (5) is solved at each iteration of Algorithm 1, but standard solvers are not applicable to it in its current form. We proceed to present a sequence of reformulations (and relaxations) of (5), which further allow for efficient computations in Section IV.
We first eliminate the equality constraint by expressing \(g\) and \(y\) in terms of \(u\), \(\sigma_{y}\) and \(\epsilon\):
\[g =H_{\text{p}}^{\dagger}b\!+\!H_{\text{p}}^{\perp}z, \tag{6a}\] \[y =Y_{\text{F}}g=Y_{\text{F}}H_{\text{p}}^{\dagger}b+Y_{\text{F}}H _{\text{p}}^{\perp}z, \tag{6b}\]
where \(H_{\text{p}}=\text{col}(U_{\text{P}},E_{\text{P}},Y_{\text{P}},U_{\text{F}},E _{\text{F}})\) with \(H_{\text{p}}^{\dagger}\) denoting its pseudo-inverse, \(H_{\text{p}}^{\perp}=I-H_{\text{p}}^{\dagger}H_{\text{p}}\), \(b=\text{col}(u_{\text{ini}},\epsilon_{\text{ini}},y_{\text{ini}}+\sigma_{y},u,\epsilon)\), and \(z\in\mathbb{R}^{T-L+1}\). For simplicity, we set \(z=0\) in the following derivation, which decreases the complexity of the optimization problem but also reduces the feasible set. From the simulations in Section V, we note that this simplification already provides satisfactory control performance. Then, the min-max robust problem (5) becomes:
\[\min_{u,\sigma_{y}}\;\max_{\epsilon\in\mathcal{W}} x^{\mathsf{T}}Mx+d^{\mathsf{T}}x+c_{0}\] (7a) subject to \[\tilde{s}_{\min}\leq P_{1}x+c_{1}\leq\tilde{s}_{\max}, \tag{7b}\] \[u_{\min}\leq P_{2}x\leq u_{\max}, \tag{7c}\]
where \(x=\text{col}(u,\sigma_{y},\epsilon)\) denotes the decision variable1, and \(M,d,c_{0},P_{1},P_{2},c_{1}\) only depend on problem data (their explicit forms are provided in our numerical implementation).
Footnote 1: With slight abuse of notations, we use \(x\) to denote the decision variable in robust optimization.
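Numerically, the elimination (6) with \(z=0\) reduces to a pseudo-inverse solve; a minimal numpy sketch (with small placeholder matrices standing in for \(H_{\text{p}}\), \(Y_{\text{F}}\) and \(b\)) reads:

```python
# A numpy sketch of the elimination (6) with the homogeneous term z = 0:
# g is the pseudo-inverse solution of H_p g = b, and y follows from Y_F.
import numpy as np

def eliminate(H_p: np.ndarray, Y_F: np.ndarray, b: np.ndarray):
    g = np.linalg.pinv(H_p) @ b    # g = H_p^† b  (z = 0)
    return g, Y_F @ g              # y = Y_F g

# Tiny synthetic example (placeholder dimensions):
H_p = np.vstack([np.eye(3), np.ones((1, 3))])
g, y = eliminate(H_p, np.eye(3), np.array([1.0, 2.0, 3.0, 6.0]))
print(g)                           # least-squares solution of H_p g = b
```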
Without loss of generality, we eliminate the constant \(c_{0}\). We finally consider \(\epsilon\) as an uncertainty parameter and transform problem (7) into its epigraph form
\[\min_{x,t} t\] subject to \[x^{\mathsf{T}}Mx+d^{\mathsf{T}}x\leq t,\quad\forall\epsilon\in \mathcal{W}, \tag{8a}\] \[\tilde{s}_{\min}\leq P_{1}x+c_{1}\leq\tilde{s}_{\max},\quad \forall\epsilon\in\mathcal{W},\] (8b) \[u_{\min}\leq P_{2}x\leq u_{\max}. \tag{8c}\]
Compared with (7), the formulation (8) requires its feasible solutions to satisfy the safety constraint for any \(\epsilon\). This design indicates that the predictive controller needs to ensure safe constraints for all disturbance trajectories in \(\mathcal{W}\). Thus, the safety of the mixed traffic is enhanced by solving (8). On the other hand, the stricter safety constraint further increases the complexity, which will be addressed in Section IV-B.
**Remark 2** (Uncertainty Quantification): _We require an accurate and non-conservative estimation of \(\mathcal{W}\) for the velocity error trajectories to ensure mixed traffic safety and good control performance. The actual disturbance trajectory should be inside or close to \(\mathcal{W}\); otherwise, a gap between the online prediction and the real traffic behavior may still exist. An overly conservative estimation is not preferred either, since it shrinks the feasible solution set and degrades the control performance. \(\square\)_
## IV Disturbance Estimation and Efficient Computation
In this section, we first introduce two disturbance estimation methods based on different assumptions on human driving behaviors. We then present two methods for solving (8) and compare their complexity. Also, we provide a down-sampling method for a low-dimensional approximation of the disturbance set \(\mathcal{W}\) to enable real-time computation.
### _Uncertainty Quantification_
In our problem, the estimated disturbance set is modeled as the \(N\)-dimensional polytope
\[\mathcal{W}=\{\epsilon\in\mathbb{R}^{N}|A_{\epsilon}\epsilon\leq b_{\epsilon}\}, \tag{9}\]
where \(A_{\epsilon}=[I;-I]\), \(b_{\epsilon}=[\epsilon_{\max};-\epsilon_{\min}]\) and \(\epsilon_{\max},\epsilon_{\min}\) are the upper and lower bound vectors of \(\epsilon\). The key part of estimating the disturbance set becomes estimating its (time-varying) bounds from the past velocity errors \(\epsilon_{\text{ini}}\).
We propose two different estimation methods (see Fig. 2 for illustration) and analyze their performance:
#### IV-A1 Constant disturbance bounds
We assume that the disturbance (velocity error) of the head vehicle will not deviate much from its current value over a short time period, based on the constant velocity model, and that the variation of the future disturbance trajectory is close to that of its
Fig. 2: Schematic of the two disturbance estimation methods. The purple line represents the actual disturbance trajectory; its past part is known while its future part is unknown. In the past region, the black line segment denotes the information needed for estimation. In the future region, the black dashed line represents the zero estimation, while the red region and the blue region denote the estimated set with time-varying bounds and the estimated set with constant bounds, respectively.
past trajectory. From the historical disturbance values \(\epsilon_{\text{ini}}\), we take the current disturbance \(\epsilon_{\text{cur}}:=\epsilon_{\text{ini}}(\text{end})\) and estimate the disturbance variation as \(\Delta\epsilon_{\text{low}}=\min(\epsilon_{\text{ini}})-\text{mean}(\epsilon_{\text{ini}})\) and \(\Delta\epsilon_{\text{up}}=\max(\epsilon_{\text{ini}})-\text{mean}(\epsilon_{\text{ini}})\). Then, the estimated bounds of the future disturbance are given by
\[\epsilon_{\min}=\epsilon_{\text{cur}}+\Delta\epsilon_{\text{low}},\ \epsilon_{\max}=\epsilon_{\text{cur}}+\Delta\epsilon_{\text{up}}.\]
#### IV-A2 Time-varying disturbance bounds
We can alternatively assume that the acceleration of the head vehicle will not deviate significantly from its current value, based on the constant acceleration model, and that its future variation is close to its past variation. We first extract the past acceleration information from \(\epsilon_{\text{ini}}\) as \(a_{\text{ini}}(k)=\frac{\epsilon_{\text{ini}}(k+1)-\epsilon_{\text{ini}}(k)}{\Delta t}\), where \(\Delta t\) is the sampling time period, and set \(a_{\text{cur}}:=a_{\text{ini}}(\text{end})\). Then, using a procedure similar to the previous approach, the acceleration variation bounds are estimated as \(\Delta a_{\text{low}}=\min(a_{\text{ini}})-\text{mean}(a_{\text{ini}})\) and \(\Delta a_{\text{up}}=\max(a_{\text{ini}})-\text{mean}(a_{\text{ini}})\). Thus, the future disturbance at an arbitrary time step \(k\) is bounded by the following inequalities:
\[\epsilon_{\text{ini}}(\text{end})+(a_{\text{cur}}+\Delta a_{\text {low}}) \cdot k\Delta t\leq\epsilon(k)\] \[\leq\epsilon_{\text{ini}}(\text{end})+(a_{\text{cur}}+\Delta a_ {\text{up}})\cdot k\Delta t.\]
Fig. 2 illustrates the two disturbance estimation methods. It is clear that there exists a large gap between the actual disturbance trajectory and the zero line. With the constant disturbance bounds, the actual disturbance trajectory stays in the estimated set in the short term but deviates from the set over time. The second method, using time-varying disturbance bounds, includes the actual trajectory in the estimated set in this case, albeit with a relatively conservative bound at the end of the time period. In most of our numerical simulations, the time-varying disturbance bounds outperform the constant disturbance bounds, because traffic waves usually have high amplitude and low frequency.
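The two estimators admit a direct numpy implementation; the sketch below returns element-wise bounds \((\epsilon_{\min},\epsilon_{\max})\) over the horizon, with a toy sinusoidal trajectory standing in for the measured \(\epsilon_{\text{ini}}\).

```python
# A numpy sketch of the two disturbance-set estimators of Section IV-A.
import numpy as np

def constant_bounds(eps_ini: np.ndarray, N: int):
    """Constant-velocity model: current value plus past variation."""
    cur, mean = eps_ini[-1], eps_ini.mean()
    lo = cur + (eps_ini.min() - mean)
    up = cur + (eps_ini.max() - mean)
    return np.full(N, lo), np.full(N, up)

def time_varying_bounds(eps_ini: np.ndarray, N: int, dt: float):
    """Constant-acceleration model: bounds grow linearly with the horizon."""
    a = np.diff(eps_ini) / dt
    a_cur, mean = a[-1], a.mean()
    k = np.arange(1, N + 1)
    lo = eps_ini[-1] + (a_cur + (a.min() - mean)) * k * dt
    up = eps_ini[-1] + (a_cur + (a.max() - mean)) * k * dt
    return lo, up

eps_ini = np.sin(0.1 * np.arange(20))          # a toy past trajectory
print(time_varying_bounds(eps_ini, N=50, dt=0.05)[0][:3])
```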
### _Efficient Computations_
Upon estimating \(\mathcal{W}\), the robust optimization problem (8) is well-defined. Robust optimization is a well-studied field [22, 23]. We here adapt standard robust optimization techniques to solve (8) and compare their complexity.
**M1: Vertex-based**. Our first method utilizes constraints evaluated at vertices of \(\mathcal{W}\) to replace the robust constraints. The compact polytope \(\mathcal{W}\) can be represented as the convex hull of its extreme points as
\[\mathcal{W}=\text{conv}(\omega_{1},\ldots,\omega_{n_{\text{v}}}), \tag{10}\]
where \(n_{\text{v}}\) denotes the number of extreme points, and its value is \(2^{N}\) if no low-dimensional approximation is applied. Using this representation, we can rewrite problem (8) as
\[\min_{x,t} t\] subject to \[x_{j}^{\mathsf{T}}Mx_{j}+d^{\mathsf{T}}x_{j}\leq t,\,j=1,\ldots,n _{\text{v}}, \tag{11a}\] \[\tilde{s}_{\min}\!\leq\!P_{1}x_{j}+c_{1}\!\leq\!\tilde{s}_{\max },\,j=1,\ldots,n_{\text{v}},\] (11b) \[u_{\min}\leq P_{2}x\leq u_{\max}, \tag{11c}\]
where \(x_{j}\) represents the decision variable when the uncertainty parameter \(\epsilon\) is fixed to one of the extreme points \(\omega_{j}\), i.e., \(x_{j}=\text{col}(u,\sigma_{y},\omega_{j})\).
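For the box-shaped \(\mathcal{W}\) in (9), the extreme points in (10) are simply the corners of the box, which can be enumerated as in the sketch below (each vertex then contributes one copy of the constraints in (11)); this is practical only for small dimensions, i.e., after the down-sampling of Section IV-C.

```python
# A sketch of vertex enumeration for M1: the 2^n corners of a box
# {eps : eps_min <= eps <= eps_max}, enumerated with itertools.
import itertools
import numpy as np

def box_vertices(eps_min: np.ndarray, eps_max: np.ndarray) -> np.ndarray:
    """All corner points of the box; the count grows as 2^len(eps_min)."""
    pairs = list(zip(eps_min, eps_max))
    return np.array([v for v in itertools.product(*pairs)])

V = box_vertices(np.array([-1.0, -0.5]), np.array([1.0, 0.5]))
print(V.shape)   # (4, 2): each row is one extreme point w_j in (10)
```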
**M2: Duality-based**. The second method treats the robust constraint (8a) in the same way as the first method, but formulates (8b) as a lower-level optimization problem and then replaces it by its dual so as to combine both levels. For example, the right-hand inequality of (8b) can be reformulated as
\[\tilde{s}_{\max}\geq\max_{\epsilon\in\mathcal{W}}\ \ p_{l}^{\mathsf{T}}x+c_{1,l},\ l=1,\ldots,N, \tag{12}\]
where \(p_{l}^{\mathsf{T}}\) and \(c_{1,l}\) are the \(l\)-th row of \(P_{1}\) and the \(l\)-th element of \(c_{1}\), respectively. Given the original representation of \(\mathcal{W}\) in (9), the right-hand side of (12) is a linear program (LP). Then, we can replace it by its dual problem, and the strong duality of LPs ensures that the new formulation is equivalent to (12). The bi-level optimization problem becomes a min-min problem, and we can combine both levels2. The optimization problem (8) can then be equivalently reformulated as
Footnote 2: This operation is standard; we refer the interested reader to Section 2.1 of [https://zhengy09.github.io/ECE285/lectures/L17.pdf](https://zhengy09.github.io/ECE285/lectures/L17.pdf).
\[\min_{x_{\text{d}},l_{1},\lambda_{2}} t\] subject to \[p_{l,\text{d}}^{\mathsf{T}}x_{\text{d}}+b_{\epsilon}^{\mathsf{T}} \lambda_{l,1}+c_{1,l}\leq\tilde{s}_{\max},\] (13a) \[A_{\epsilon}^{\mathsf{T}}\lambda_{l,1}-p_{l,\epsilon}=0,\] (13b) \[-p_{l,\text{d}}^{\mathsf{T}}x_{\text{d}}+b_{\epsilon}^{\mathsf{T}} \lambda_{l,2}-c_{1,l}\leq-\tilde{s}_{\min},\] (13c) \[A_{\epsilon}^{\mathsf{T}}\lambda_{l,2}+p_{l,\epsilon}=0,\] (13d) \[\lambda_{l,1}\geq 0,\lambda_{l,2}\geq 0,\ l=1,2,\ldots,N,\] (13e) (13f) (1a), (13g)
where \(x_{\text{d}}:=\text{col}(u,\sigma_{y})\) is the decision variable, and \(\lambda_{l,1},\lambda_{l,2}\in\mathbb{R}^{2N}\) are dual variables with \(\lambda_{1}=\text{col}(\lambda_{1,1},\lambda_{2,1},\ldots,\lambda_{N,1})\) and \(\lambda_{2}=\text{col}(\lambda_{1,2},\lambda_{2,2},\ldots,\lambda_{N,2})\); the parameters \(c_{1,l},p_{l}\) are the same as in (12), and \(p_{l,\text{d}}\) represents \(\text{col}(p_{l,u},p_{l,\sigma_{y}})\), with \(p_{l}\) sub-divided into \(\text{col}(p_{l,u},p_{l,\sigma_{y}},p_{l,\epsilon})\) corresponding to \(u\), \(\sigma_{y}\) and \(\epsilon\).
**Theorem 1**: _Suppose (8) is feasible and its uncertainty set \(\mathcal{W}\) is a polytope. Problems (8), (11) and (13) are equivalent._
The equivalence between (8) and (11) is relatively straightforward. It requires standard duality arguments to establish the equivalence between (8) and (13); due to the page limit, we will put the details into an extended report.
Both (11) and (13) are standard convex optimization problems, which can be solved using off-the-shelf solvers (e.g., Mosek [26]). We here discuss the complexity of the above two methods; see Table I. The main difference lies in the different formulations of (8b), i.e., (11b) versus (13a)-(13e). In **M1**, (11b) represents \(N\cdot 2^{N+1}\) inequality constraints, while (13a)-(13e) together represent \(2N(3N+1)\) inequality constraints in **M2**. The value \(N\cdot 2^{N+1}\) is much larger than \(2N(3N+1)\) when the prediction horizon \(N\) is large, while **M2** introduces an extra \(4N^{2}\) decision variables. This trade-off is also reflected in our numerical implementation.
### _Down-sampling strategy_
We here discuss a down-sampling strategy, adapted from [24], to relieve the exponential growth of the number of constraints. It approximates the \(N\)-dimensional disturbance trajectory by choosing one point for every \(T_{\text{s}}\) steps along
it and performing linear interpolation. We denote the low-dimensional representation of the future disturbance trajectory as \(\tilde{\epsilon}\in\mathbb{R}^{n_{\epsilon}}\) where \(n_{\epsilon}=(\lfloor\frac{N-2}{T_{\text{s}}}\rfloor+2)\). An approximated representation \(\hat{\epsilon}\) of \(\epsilon\) can be derived as
\[\hat{\epsilon}^{(k)}=\begin{cases}\tilde{\epsilon}^{(\bar{k}+1)}+((k-1)\text{ mod }T_{\text{s}})\times\frac{\tilde{\epsilon}^{(\bar{k}+2)}-\tilde{\epsilon}^{(\bar{k}+ 1)}}{T_{\text{s}}},\\ \hskip 142.26378pt1\leq k\leq\tilde{k}\cdot T_{s}\\ \tilde{\epsilon}^{(\bar{k}+1)}+(k-\tilde{k}\cdot T_{\text{s}}-1)\times\frac{ \tilde{\epsilon}^{(\bar{k}+2)}-\tilde{\epsilon}^{(\bar{k}+1)}}{N-\tilde{k} \cdot T_{\text{s}}-1},\\ \hskip 142.26378pt\tilde{k}\cdot T_{s}<k\leq N\end{cases}\]
where \(\bar{k}=\lfloor\frac{k-1}{T_{\text{s}}}\rfloor\) and \(\tilde{k}=\lfloor\frac{N-2}{T_{\text{s}}}\rfloor\). Then we can use \(\tilde{\epsilon}\in\mathbb{R}^{n_{\epsilon}}\) to represent \(\epsilon\in\mathbb{R}^{N}\) as
\[\epsilon\approx\hat{\epsilon}=E_{\epsilon}\tilde{\epsilon}, \tag{14}\]
where \(\tilde{\epsilon}\in\tilde{\mathcal{W}}\) and \(\tilde{\mathcal{W}}\) can be estimated using the same methods we introduced before. Also, substituting (14) into our previous derivation will not affect its correctness.
The complexities of both methods (11) and (13) after the low-dimensional approximation are updated in the last two rows of Table I and depend on the choice of \(n_{\epsilon}\). Theoretically, with the same computational resources, the duality-based method allows us to choose a larger \(n_{\epsilon}\), because the coefficient of its exponential growth term \(2^{n_{\epsilon}}\) is \(1\), while it is \(2N+1\) for the vertex-based method. In our implementation, \(n_{\epsilon}\) is usually chosen as a small number to ensure real-time computational performance, in which case the two methods may not differ noticeably. We note that replacing \(\epsilon\) with \(\hat{\epsilon}\) may fail to cover all cases in \(\mathcal{W}\), since the set of \(\hat{\epsilon}\) is a subset of \(\mathcal{W}\). However, our extensive simulations demonstrate that the down-sampling strategy provides satisfactory performance.
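The down-sampling map (14) can be realized as an explicit interpolation matrix; the sketch below follows the anchor-and-interpolate idea above, although the exact index convention of the paper's piecewise formula may differ slightly at the boundaries.

```python
# A sketch of the down-sampling operator E_eps in (14): choose one anchor
# every T_s steps (plus the final step) and interpolate linearly in between.
import numpy as np

def downsample_operator(N: int, T_s: int) -> np.ndarray:
    anchors = list(range(0, N - 1, T_s)) + [N - 1]   # n_eps anchor indices
    E = np.zeros((N, len(anchors)))
    for k in range(N):
        j = np.searchsorted(anchors, k, side="right") - 1
        if anchors[j] == k:
            E[k, j] = 1.0                            # k is itself an anchor
        else:
            a, b = anchors[j], anchors[j + 1]
            t = (k - a) / (b - a)                    # linear interpolation weight
            E[k, j], E[k, j + 1] = 1.0 - t, t
    return E

E = downsample_operator(N=50, T_s=10)
print(E.shape)   # (50, 6): eps ~= E @ eps_tilde with n_eps = 6
```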
## V Traffic Simulations
In this section, we carry out nonlinear and non-deterministic traffic simulations to test the performance of robust DeeP-LCC in controlling the CF-LCC system in mixed traffic. Due to the page limit, we consider the time-varying bound disturbance estimation method and the duality-based solving method; the performance of the other methods will be included in an extended report. We implemented an automatic routine transforming (13) into standard conic programs3, which are solved by Mosek [26].
Footnote 3: Our open-source implementation is available at [https://github.com/soc-ucsd/Decentralized-DeeP-LCC/](https://github.com/soc-ucsd/Decentralized-DeeP-LCC/).
### _Experimental Setup_
The car-following behaviors of the HDVs are modeled by the nonlinear OVM model in [10], and a noise signal following the uniform distribution \(\mathbb{U}[-0.1,0.1]\,\text{m}/\text{s}^{2}\) is added to the acceleration of each HDV. For the CF-LCC system in mixed traffic, we consider that the CAV \(1\) is followed by 4 HDVs, and there are three vehicles in front of the head vehicle \(0\) in the mixed traffic flow; see Fig. 3 for an illustration. During the simulation, a perturbation is imposed on the leading vehicle, indexed as \(-3\).
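For reference, a generic nonlinear OVM acceleration of the type used for the HDVs takes the form \(\dot{v}_{i}=\alpha(V(s_{i})-v_{i})+\beta\dot{s}_{i}\) with a saturated desired-velocity function \(V(\cdot)\); the sketch below uses common illustrative parameter values, which are not necessarily those calibrated in [10].

```python
# A sketch of a nonlinear OVM car-following acceleration; the parameter
# values below are illustrative defaults, not the calibrated values of [10].
import numpy as np

def ovm_accel(s, v, s_dot, alpha=0.6, beta=0.9,
              s_st=5.0, s_go=35.0, v_max=30.0):
    """Optimal Velocity Model: a = alpha * (V(s) - v) + beta * s_dot."""
    V = np.where(s <= s_st, 0.0,
        np.where(s >= s_go, v_max,
                 0.5 * v_max * (1 - np.cos(np.pi * (s - s_st) / (s_go - s_st)))))
    return alpha * (V - v) + beta * s_dot

print(float(ovm_accel(s=20.0, v=15.0, s_dot=-1.0)))   # -0.9 at equilibrium spacing
```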
We use the following parameters in both DeeP-LCC and robust DeeP-LCC:
1. _Offline data collection_: the lengths of the pre-collected data sets are \(T=500\) for a small data set and \(T=1500\) for a large data set, with \(\Delta t=0.05\)s. They are collected around the equilibrium state of the system with velocity \(15\,\text{m}/\text{s}\). Both \(u^{\text{d}}\) and \(\epsilon^{\text{d}}\) are generated by a uniformly distributed signal \(\mathbb{U}[-1,1]\), which satisfies the persistent excitation requirement in Proposition 1;
2. _Online predictive control_: the lengths of the initial signal sequence and the prediction horizon are set to \(T_{\text{ini}}=20\) and \(N=50\), respectively. For the objective function in (5), we have \(R=0.1I_{N}\) and \(Q=I_{N}\otimes\text{diag}(Q_{v},w_{s})\), where \(Q_{v}=\text{diag}(1,\dots,1)\in\mathbb{R}^{n\times n}\) and \(w_{s}=0.5\). The regularization parameters are set to \(\lambda_{g}=100\) and \(\lambda_{y}=10000\). The spacing constraints for the CAV are set as \(s_{\max}=40\) m and \(s_{\min}=5\) m, and the bounds of the spacing error are updated in each iteration as \(\tilde{s}_{\max}=s_{\max}-s^{*}\) and \(\tilde{s}_{\min}=s_{\min}-s^{*}\).
Note that \(s^{*}\) is also updated at each time step according to the current equilibrium state estimated from the leading vehicle's past trajectory [17]. The acceleration limits are set as \(a_{\max}=2\) m/s\({}^{2}\) and \(a_{\min}=-5\) m/s\({}^{2}\).
### _Numerical Results_
**Experiment A:** We first validate the control performance of robust DeeP-LCC in a comprehensive simulation scenario motivated by the New European Driving Cycle (NEDC) [27]. We design the velocity trajectory of the leading vehicle as the black profile in Fig. 4, and for evaluation we calculate the fuel consumption of the \(5\) following vehicles in the CF-LCC system using the numerical model in [28].
The velocity profiles of robust DeeP-LCC and the original DeeP-LCC with different sizes of data sets are shown in Fig. 4. Both methods allow the CAV to track the desired velocity when using a large data set (see the red curves in Fig. 4). However, in the case of using a small data set,
Fig. 3: Simulation scenario. In front of the CF-LCC system, there are four preceding HDVs, where the blue node, yellow node, red node, and grey nodes represent the CAV, the head vehicle, the leading vehicle, and other HDVs, respectively.
the degradation of the control performance of DeeP-LCC is apparent, and there are some undesired oscillations (see the blue curves in Fig. 4(a)), while robust DeeP-LCC maintains a smooth velocity profile (see the blue curves in Fig. 4(b)). Such performance degradation is highly related to the mismatch between the online prediction and the real system behavior, caused by representation and estimation errors. Both the original DeeP-LCC and robust DeeP-LCC employ the same data set to construct the data-driven representation (3), but robust DeeP-LCC allows for a relatively small estimation error and provides more margin for potential representation errors. This is one main reason why robust DeeP-LCC performs better than DeeP-LCC for a relatively small data set.
Table II lists the fuel consumption results when using the large data set. Both robust DeeP-LCC and DeeP-LCC reduce fuel consumption compared with the case of all HDVs, and the improvement in the braking phases (Phases 1 and 4) is higher than in the accelerating phases (Phases 2 and 3). Moreover, robust DeeP-LCC achieves better fuel economy than DeeP-LCC in all phases, e.g., \(6.86\%\) vs. \(3.14\%\) and \(8.17\%\) vs. \(4.97\%\) during Phases 1 and 4, respectively.
**Experiment B:** We further validate the safety performance of robust DeeP-LCC in a braking scenario. In this experiment, the leading vehicle, moving at \(15\,\mathrm{m}/\mathrm{s}\), brakes with the maximum deceleration \(-5\,\mathrm{m}/\mathrm{s}^{2}\), stays at \(5\,\mathrm{m}/\mathrm{s}\) for a while, and then speeds up back to \(15\,\mathrm{m}/\mathrm{s}\). We collect \(100\) small data sets (\(T=500\)) and \(100\) large data sets (\(T=1500\)) and carry out the same experiment. Recall that the safety constraint of the CAV is a spacing from \(5\,\mathrm{m}\) to \(40\,\mathrm{m}\). We define "violation" as the case where the CAV's spacing deviates by more than \(1\,\mathrm{m}\) from this range, and "emergency" as the case where the spacing deviates by over \(5\,\mathrm{m}\) from this range. We note that when an emergency happens, there are three possible undesired situations: 1) a rear-end collision happens; 2) the spacing of the CAV is too large, which decreases the traffic capacity; 3) the controller fails to stabilize the system.
The results are shown in Table III, which clearly shows that DeeP-LCC has much higher violation and emergency rates for small data sets. Although using large data sets decreases both, they remain relatively high, at \(62\%\) and \(51\%\), respectively. On the other hand, using the same small data sets, robust DeeP-LCC provides remarkably low violation and emergency rates of \(5\%\) and \(4\%\). Moreover, both drop to \(0\%\) when using large data sets, i.e., no violations at all across our 100 experiments.
Fig. 5 demonstrates two examples, one from the small data sets and one from the large data sets, to analyze the different performances of DeeP-LCC and robust DeeP-LCC. When using a large data set, both methods exhibit smaller velocity fluctuations compared with the case of all human drivers. It can be clearly observed that the CAV controlled by robust DeeP-LCC
Fig. 4: Velocity profiles in Experiment A. The black profile denotes the leading vehicle. The red profile and the blue profile represent DeeP-LCC control with data sets of size \(T=1500\) and \(T=500\), respectively. (a) The CAV utilizes DeeP-LCC. (b) The CAV utilizes robust DeeP-LCC.
Fig. 5: Simulation results in Experiment B. The black profile and the gray profile represent the leading vehicle and the head vehicle, respectively. The orange profile and the green profile correspond to DeeP-LCC and robust DeeP-LCC, respectively, while the purple profile corresponds to the all HDV case. (a) and (b) show the velocity and spacing profiles at different sizes of data sets.
always stays inside the safety bound for both large and small data sets, despite some small undesired velocity fluctuations for the small data set. However, DeeP-LCC is likely to lead to a rear-end collision for the small data set, and still violates the safety bound even with the large data set. Note that although the safety constraint is imposed in DeeP-LCC, it fails in the simulation due to the mismatch between the prediction and the real behavior of the system. More precisely, in its prediction, DeeP-LCC takes the future velocity error of the head vehicle as \(\mathbb{0}_{N}\), assuming that the head vehicle accurately tracks the equilibrium velocity. Thus, the CAV decelerates or accelerates only when the leading vehicle actually starts to brake or speed up. This is, however, not the case in real-world traffic flow, and the inaccurate estimation causes the mismatch and leads to emergencies. On the other hand, robust DeeP-LCC predicts a family of the CAV's future spacings based on the estimated disturbance set and requires all of them to satisfy the safety constraint. Thus, robust DeeP-LCC provides much stronger safety guarantees.
## VI Conclusion
In this paper, we have proposed robust DeeP-LCC for CAV control in mixed traffic. The robust formulation and the disturbance set estimation methods together provide a strong safety guarantee, improve the control performance, and allow the use of smaller data sets. Efficient computational methods are also provided for real-time implementation. Extensive traffic simulations have validated the performance of robust DeeP-LCC in comprehensive and braking scenarios. Interesting future directions include learning-based estimation of future disturbances, incorporation of communication-delayed traffic data, and extension to large-scale mixed traffic scenarios.
| ```
最新のDeeP-LCC(データEnablEd予測先進航行制御)メソッドは、接続型自動運転車(CAV)のデータ駆動型予測制御において、混雑した道路環境で prometting のパフォーマンスを示しています。ただし、リード車体の未来速度誤差の単純なゼロ仮定は、安全性を懸念させる可能性があり、交通の流れの滑らかな制御の性能を制限する可能性があります。この論文では、より安全性の高い DeeP-LCC メソッドを提案し、混雑した道路環境で CAV の制御を行います。特に、安全の制約を適用する頑丈な論理を最初に提案し、リード車の潜在的な速度誤差の軌跡の範囲を制限します。そして、過去のリード車のデータに基づいて潜在的な速度誤差を推定します。また、オンライン予測制御のための頑健な最適化の効率的な計算手法を提供します。非線 |
2309.13370 | Rayleigh-Taylor Instability in Stratified Compressible Fluids
with/without the Interfacial Surface Tension | Guo--Tice formally established in 2011 that the Rayleigh--Taylor instability
inevitably occurs within stratified compressible viscous fluids in a slab
domain $\mathbb{R}^2\times (h_-,h_+)$, irrespective of the presence of
interfacial surface tension, where the instability solutions are non-periodic
with respect to both horizontal spatial variables $x_1$ and $x_2$, by applying
a so-called ''normal mode'' method and a modified variational method to the
linearized (motion) equations. It is a long-standing open problem, however,
whether Guo--Tice's conclusion can be rigorously verified by the (original)
nonlinear equations. This challenge arises due to the failure of constructing a
growing mode solution, which is non-periodic with respect to both horizontal
spatial variables, to the linearized equations defined on a slab domain. In the
present work, we circumvent the difficulty related to growing mode solutions by
developing an alternative approximate scheme. In essence, our approach hinges
on constructing the horizontally periodic growing mode solution of the
linearized equations to approximate the {\it nonlinear} Rayleigh--Taylor
instability solutions, which do not exhibit horizontal periodicity. Thanks to
this new approximate scheme, we can apply Guo--Hallstrom--Spirn's bootstrap
instability method to the nonlinear equations in Lagrangian coordinates, and
thus prove Guo--Tice's conclusion. In particular, our approximate method could
also be applied to other instability solutions characterized by non-periodic
motion in a slab domain, such as the Parker instability and thermal
instability. | Fei Jiang, Han Jiang, Song Jiang | 2023-09-23T13:15:33 | http://arxiv.org/abs/2309.13370v1 | # Rayleigh-Taylor Instability in Stratified Compressible Fluids with/without the Interfacial Surface Tension
###### Abstract
Guo-Tice formally established in 2011 that the Rayleigh-Taylor instability inevitably occurs within stratified compressible viscous fluids in a slab domain \(\mathbb{R}^{2}\times(h_{-},h_{+})\), irrespective of the presence of interfacial surface tension, where the instability solutions are non-periodic with respect to both horizontal spatial variables \(x_{1}\) and \(x_{2}\), by applying a so-called "normal mode" method and a modified variational method to the linearized (motion) equations [17]. It is a long-standing open problem, however, whether Guo-Tice's conclusion can be rigorously verified by the (original) nonlinear equations. This challenge arises due to the failure of constructing a growing mode solution, which is non-periodic with respect to both horizontal spatial variables, to the linearized equations defined on a slab domain. In the present work, we circumvent the difficulty related to growing mode solutions by developing an alternative approximate scheme. In essence, our approach hinges on constructing the horizontally periodic growing mode solution of the linearized equations to approximate the _nonlinear_ Rayleigh-Taylor instability solutions, which do not exhibit horizontal periodicity. Thanks to this new approximate scheme, we can apply Guo-Hallstrom-Spirn's bootstrap instability method in [12] to the nonlinear equations in Lagrangian coordinates, and thus prove Guo-Tice's conclusion. In particular, our approximate method could also be applied to other instability solutions characterized by non-periodic motion in a slab domain, such as the Parker instability and thermal instability.
keywords: Stratified compressible viscous fluids; Rayleigh-Taylor instability; free interfaces; bootstrap instability method.
## 1 Introduction
Consider two perfectly plane-parallel layers of immiscible fluids, the heavier on top of the lighter one, both subject to the earth's gravity. In this case, the equilibrium state is unstable to small disturbances: an unstable disturbance will grow and lead to a release of potential energy, as the heavier fluid moves down under the gravitational force and the lighter one is displaced upwards. This phenomenon was first studied by Rayleigh [36] and subsequently by Taylor [39], and is therefore called the Rayleigh-Taylor (RT) instability. In recent decades, this phenomenon has been extensively investigated from both physical and numerical perspectives, see [6; 8; 11; 16; 40] for examples. It has also been widely investigated how the RT instability evolves under the influence of various physical factors, such as elasticity [30; 33], rotation [3; 6], surface tension [17; 22; 42; 44], and magnetic fields [23; 25; 26; 29; 43].
In this article, we are interested in the existence of RT instability solutions within stratified compressible viscous fluids with/without interfacial surface tension. Before introducing our main result and the relevant progress in Section 1.3, first we shall formulate the RT instability problem.
### Model for stratified compressible viscous fluids
The three-dimensional motion equations of compressible fluids under a uniform gravitational field (directed along the negative \(x_{3}\)-axis) can be described as follows:
\[\begin{cases}\rho_{t}+\operatorname{div}(\rho v)=0,\\ \rho v_{t}+\rho v\cdot\nabla v+\operatorname{div}\mathcal{S}=-\rho g\mathbf{e} ^{3}.\end{cases} \tag{1.1}\]
Here \(\rho:=\rho(x,t)\), \(v:=v(x,t)\), \(\mathcal{S}\) and \(g\) denote the density, velocity, stress tensor and gravitational constant, respectively. The vector \(\mathbf{e}^{3}:=(0,0,1)^{\top}\), where the superscript \(\top\) denotes the transposition. In the above system we consider that the stress tensor \(\mathcal{S}\) is characterized by the expression:
\[\mathcal{S}:=P\mathbb{I}-\mathbb{S}(v), \tag{1.2}\]
where \(\mathbb{I}\) denotes a \(3\times 3\) identity matrix. \(P:=P(\tau)|_{\tau=\rho}\) and \(\mathbb{S}(v)\) denote the hydrodynamic pressure and viscosity tensor, respectively. In this article, _the pressure function \(P(\tau)\) is always assumed to be smooth, positive, and strictly increasing with respect to \(\tau\)_, and the viscosity tensor is given by
\[\mathbb{S}(v):=\mu\mathbb{D}v+(\varsigma-2\mu/3)\operatorname{div}\!v\, \mathbb{I}, \tag{1.3}\]
where \(\mathbb{D}v:=\nabla v+\nabla v^{\top}\), and \(\mu>0\) and \(\varsigma\geqslant 0\) denote the shear and bulk viscosity coefficients, respectively. For the sake of simplicity, in this paper we restrict our consideration to the case where \(\mu\) and \(\varsigma\) are constants.
To investigate the RT instability, we shall further consider two distinct, immiscible, compressible viscous fluids evolving in a moving domain \(\Omega(t)\), where \(\Omega(t):=\Omega_{+}(t)\cup\Omega_{-}(t)\) for \(t\geqslant 0\). The upper fluid occupies the upper domain:
\[\Omega_{+}(t):=\{(x_{\mathrm{h}},x_{3})^{\top}\ |\ x_{\mathrm{h}}:=(x_{1},x_{2 })^{\top}\in\mathbb{R}^{2},\ d(x_{\mathrm{h}},t)<x_{3}<h_{+}\},\]
while the lower fluid occupies the lower domain:
\[\Omega_{-}(t):=\{(x_{\mathrm{h}},x_{3})^{\top}\ |\ x_{\mathrm{h}}\in\mathbb{R}^{ 2},\ h_{-}<x_{3}<d(x_{\mathrm{h}},t)\}.\]
Here \(h_{-}\) and \(h_{+}\) are fixed constants that satisfy \(h_{-}<h_{+}\), whereas the internal surface function \(d:=d(x_{\mathrm{h}},t)\) is unspecified and unknown. The internal surface \(\Sigma(t):=\{x_{3}=d\}\) moves between the two fluids, and \(\Sigma_{\pm}:=\{x_{3}=h_{\pm}\}\) represent the fixed upper and lower boundaries of \(\Omega(t)\), respectively.
In the following, we employ the equations (1.1) to describe the motion of stratified compressible viscous fluids. We attach the subscripts \(+\) and \(-\) to the known physical parameters, pressure functions and other unknown functions in (1.1) for the upper and lower fluids, respectively. Consequently, the equations governing the motion of the stratified compressible viscous fluids in a slab domain, subject to a uniform gravitational field, can be formulated as follows:
\[\begin{cases}\partial_{t}\rho_{\pm}+\operatorname{div}(\rho_{\pm}v_{\pm})=0& \text{in }\Omega_{\pm}(t),\quad t>0,\\ \rho_{\pm}\partial_{t}v_{\pm}+\rho_{\pm}v_{\pm}\cdot\nabla v_{\pm}+ \operatorname{div}\!\mathcal{S}_{\pm}=-\rho_{\pm}g\mathbf{e}^{3}&\text{in }\Omega_{\pm}(t),\quad t>0,\end{cases} \tag{1.4}\]
where \(\mathcal{S}_{\pm}\) are defined by (1.2) with \((\mu_{\pm},\varsigma_{\pm},v_{\pm},P_{\pm})\) in place of \((\mu,\varsigma,v,P)\).
For two viscous fluids meeting at a free boundary, the conventional assumptions are that the velocity is continuous across the interface and that the jump in the normal stress is proportional to the mean curvature of the free surface multiplied by the normal to the surface [45]. This requires us to enforce the following (interfacial) jump conditions
\[v_{+}|_{\Sigma(t)}-v_{-}|_{\Sigma(t)}=0\text{ and }(\mathcal{S}_{+}\nu|_{\Sigma(t) }-\mathcal{S}_{-}\nu|_{\Sigma(t)})=\vartheta\mathcal{C}\nu\text{ on }\Sigma(t). \tag{1.5}\]
In these conditions, the coefficient of the interfacial surface tension is denoted by \(\vartheta\geqslant 0\), the normal vector to \(\Sigma(t)\) by \(\nu\), and twice the mean curvature of the internal surface \(\Sigma(t)\) by \(\mathcal{C}\)[17], i.e.,
\[\mathcal{C}:=\frac{\Delta_{\text{h}}d+(\partial_{1}d)^{2}\partial_{2}^{2}d+( \partial_{2}d)^{2}\partial_{1}^{2}d-2\partial_{1}d\partial_{2}d\partial_{1} \partial_{2}d}{(1+(\partial_{1}d)^{2}+(\partial_{2}d)^{2})^{3/2}}.\]
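We also record, as a reading aid and not as part of the original formulation, the linearization of \(\mathcal{C}\) for small interface slopes: if \(|\nabla_{\mathrm{h}}d|\ll 1\), with \(\nabla_{\mathrm{h}}d:=(\partial_{1}d,\partial_{2}d)^{\top}\), then

\[\mathcal{C}=\Delta_{\mathrm{h}}d+O\big(|\nabla_{\mathrm{h}}d|^{2}\,|\nabla_{\mathrm{h}}^{2}d|\big),\]

since the numerator equals \(\Delta_{\mathrm{h}}d\) plus terms quadratic in \(\nabla_{\mathrm{h}}d\), while the denominator is \(1+O(|\nabla_{\mathrm{h}}d|^{2})\). This explains why only the term \(\vartheta\Delta_{\mathrm{h}}\eta_{3}\) survives in the linearized jump conditions below.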
Additionally, we also enforce the condition that the fluid velocity vanishes at the fixed boundaries, implemented through the boundary conditions:
\[v_{\pm}=\mathbf{0}\text{ on }\Sigma_{\pm}. \tag{1.6}\]
Moreover, under the first jump condition in (1.5), the internal surface function is determined by \(v_{+}\) (or \(v_{-}\)), i.e., for each \(t>0\),
\[d_{t}+v_{1,+}(x_{\text{h}},d)\partial_{1}d+v_{2,+}(x_{\text{h}},d)\partial_{2 }d=v_{3,+}(x_{\text{h}},d)\text{ in }\mathbb{R}^{2}, \tag{1.7}\]
where \(v_{i,+}\) denotes the \(i\)-th component of \(v_{+}\) for \(1\leqslant i\leqslant 3\). Finally, we impose the initial data of \((\rho,v,d)\):
\[(\rho,v)|_{t=0}:=(\rho^{0},v^{0})\text{ in }\Omega\setminus\Sigma(0)\text{ and }d|_{t=0}=d^{0}\text{ on }\mathbb{R}^{2}, \tag{1.8}\]
where we denote \(\Omega:=\mathbb{R}^{2}\times(h_{-},h_{+})\) and \(\Sigma(0):=\{x_{3}=d(x_{\text{h}},0)\}\). Then, the system (1.4)-(1.8) constitutes an initial-boundary value problem for stratified compressible viscous fluids with/without internal surface tension. For the sake of simplicity, we refer to the problem defined by (1.4)-(1.8) as the stratified compressible viscous fluid (SCVF) model.
The above SCVF model has been used to investigate the evolution of compressible RT instability. To achieve this objective, it is necessary to further construct a RT equilibrium of the SCVF model. For this purpose, we choose a constant \(\bar{d}\in(h_{-},h_{+})\), and density profiles \(\bar{\rho}_{\pm}\), which are _differentiable functions in \(\Omega_{\pm}\)_, dependent solely on \(x_{3}\), and fulfill the hydrostatic relations:
\[\begin{cases}\nabla P_{\pm}(\bar{\rho}_{\pm})=-\bar{\rho}_{\pm}g\mathbf{e}^{3 }&\text{ in }\Omega_{\pm},\\ \llbracket P(\bar{\rho})\rrbracket\mathbf{e}^{3}=0&\text{ on }\Sigma,\end{cases} \tag{1.9}\]
the non-vacuum condition
\[\inf_{x\in\Omega_{\pm}}\{\bar{\rho}_{\pm}(x_{3})\}>0, \tag{1.10}\]
and the RT (jump) condition
\[\llbracket\bar{\rho}\rrbracket>0\text{ on }\Sigma, \tag{1.11}\]
where we have denoted
\[\Omega_{+}:=\mathbb{R}^{2}\times\{\bar{d}<x_{3}<h_{+}\},\ \Omega_{-}:= \mathbb{R}^{2}\times\{h_{-}<x_{3}<\bar{d}\},\ \Sigma:=\mathbb{R}^{2}\times\{\bar{d}\},\] \[\llbracket P(\bar{\rho})\rrbracket:=P_{+}(\bar{\rho}_{+})|_{\Sigma }-P_{-}(\bar{\rho}_{-})|_{\Sigma}\ \ \text{ and }\ \llbracket\bar{\rho}\rrbracket:=\bar{\rho}_{+}|_{\Sigma}-\bar{\rho}_{-}|_{\Sigma}. \tag{1.12}\]
Now, let us further define
\[\bar{\rho}:=\bar{\rho}_{+}\text{ for }x\in\Omega_{+}\text{ and }\bar{\rho}_{-} \text{ for }x\in\Omega_{-}.\]
This leads to an RT equilibrium solution \((\rho,v)=(\bar{\rho},\mathbf{0})\) with \(d=\bar{d}\) of the SCVF model. We mention that such an equilibrium solution \((\bar{\rho},\mathbf{0})\), satisfying (1.9)-(1.11), indeed exists, see [16]. _In addition, we assume without loss of generality that \(\bar{d}=0\) in this article._ If \(\bar{d}\) is non-zero, we can adjust the \(x_{3}\) coordinate to make \(\bar{d}=0\). Consequently, \(h_{-}<0\), and \(d\) can be referred to as the displacement function, representing the deviation of the interface point from the plane \(\Sigma\). It has been established since 1953 that the presence of interfacial surface tension \(\vartheta\mathcal{C}\nu\) can slow down the growth rate of RT instability [5]. In particular, the question of whether interfacial surface tension inhibits the RT instability (i.e., the interfacial surface tension prevents the heavier fluid from sinking into the lower fluid) reduces to the verification of whether the RT equilibrium solution to the SCVF model is stable. The aim of this article is to show the instability of the RT equilibrium solution to the SCVF model in the context of non-periodic motion.
### Reformulation of the SCVF model
Since the movement of the free interface \(\Sigma(t)\) and the subsequent change of the domains \(\Omega_{\pm}(t)\) in Eulerian coordinates will result in several mathematical difficulties, we shall switch our analysis to Lagrangian coordinates, so that the interface and the domains stay fixed in time. To this end, we take \(\Omega_{+}\) and \(\Omega_{-}\) to be the fixed Lagrangian domains, and assume that there exist invertible mappings
\[\zeta_{\pm}^{0}:\Omega_{\pm}\to\Omega_{\pm}(0),\]
such that \(\det\nabla\zeta_{\pm}^{0}\neq 0\), and
\[\Sigma(0)=\zeta_{\pm}^{0}(\Sigma),\ \Sigma_{+}=\zeta_{+}^{0}(\Sigma_{+}) \text{ and }\Sigma_{-}=\zeta_{-}^{0}(\Sigma_{-}), \tag{1.13}\]
where \(\det\) denotes the determinant operator. The first condition in (1.13) means that the initial interface \(\Sigma(0)\) is parameterized by the mapping \(\zeta_{\pm}^{0}\) restricted to \(\Sigma\), while the subsequent two conditions in (1.13) indicate that \(\zeta_{\pm}^{0}\) map the fixed upper and lower boundaries into themselves. Define the flow maps \(\zeta_{\pm}\) as the solutions to the initial value problems:
\[\begin{cases}\partial_{t}\zeta_{\pm}(y,t)=v_{\pm}(\zeta_{\pm}(y,t),t)&\text{ in }\Omega_{\pm},\\ \zeta_{\pm}(y,0)=\zeta_{\pm}^{0}(y)&\text{ in }\Omega_{\pm}.\end{cases}\]
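We record the elementary identity underlying this change of coordinates (included as a reading aid; it is the standard chain rule computation): for any differentiable function \(f(x,t)\),

\[\frac{\mathrm{d}}{\mathrm{d}t}\big(f(\zeta_{\pm}(y,t),t)\big)=\big(\partial_{t}f+v_{\pm}\cdot\nabla f\big)(\zeta_{\pm}(y,t),t),\]

so material derivatives in Eulerian coordinates become plain time derivatives in Lagrangian coordinates. This is the reason why no convection term \(v\cdot\nabla v\) appears in the Lagrangian formulation (1.23) below.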
We denote the Eulerian coordinates by \((x,t)\) with \(x=\zeta(y,t)\), while the fixed variables \((y,t)\in\Omega\times\mathbb{R}_{0}^{+}\) stand for the Lagrangian coordinates, where \(\mathbb{R}_{0}^{+}:=[0,\infty)\).
In order to switch back and forth between Lagrangian and Eulerian coordinates, we temporarily assume that \(\zeta_{\pm}(\cdot,t)\) are invertible and \(\Omega_{\pm}(t)=\zeta_{\pm}(\Omega_{\pm},t)\). Since \(v_{\pm}\) and \(\zeta_{\pm}^{0}\) are continuous across \(\Sigma\), one has \(\Sigma(t)=\zeta_{\pm}(\Sigma,t)\), i.e.,
\[\llbracket\zeta\rrbracket=0\ \text{ on }\Sigma. \tag{1.14}\]
In other words, the Eulerian domains of the upper and lower fluids correspond to the images of \(\Omega_{\pm}\) under the mappings \(\zeta_{\pm}\), and the free interface is the image of \(\Sigma\) under the mappings \(\zeta_{\pm}(\cdot,t)\). Moreover, due to the no-slip boundary condition \(v_{\pm}|_{\Sigma_{\pm}}=0\), one has
\[y=\zeta_{\pm}(y,t)\ \text{ on }\Sigma_{\pm}.\]
From now on, we define \(\zeta:=\zeta_{+}\) for \(y\in\Omega_{+}\) and \(\zeta_{-}\) for \(y\in\Omega_{-}\), and subsequently introduce the quantity \(\eta:=\zeta-y\).
Next, we introduce some notations involving \(\eta\). Define \(\mathcal{A}:=(\mathcal{A}_{ij})_{3\times 3}\) through \(\mathcal{A}^{\top}=(\nabla(\eta+y))^{-1}:=(\partial_{j}(\eta+y)_{i})_{3\times 3}^ {-1}\), and the differential operators \(\nabla_{\mathcal{A}}\) and \(\operatorname{div}_{\mathcal{A}}\) as follows:
\[\nabla_{\mathcal{A}}w:=(\nabla_{\mathcal{A}}w_{1},\nabla_{ \mathcal{A}}w_{2},\nabla_{\mathcal{A}}w_{3})^{\top},\quad\nabla_{\mathcal{A}}w _{i}:=(\mathcal{A}_{1k}\partial_{k}w_{i},\mathcal{A}_{2k}\partial_{k}w_{i}, \mathcal{A}_{3k}\partial_{k}w_{i})^{\top},\] \[\operatorname{div}_{\mathcal{A}}(f_{1},f_{2},f_{3})^{\top}=( \operatorname{div}_{\mathcal{A}}f_{1},\operatorname{div}_{\mathcal{A}}f_{2}, \operatorname{div}_{\mathcal{A}}f_{3})^{\top},\quad\operatorname{div}_{ \mathcal{A}}f_{i}:=\mathcal{A}_{lk}\partial_{k}f_{il}, \tag{1.15}\] \[\Delta_{\mathcal{A}}w:=(\Delta_{\mathcal{A}}w_{1},\Delta_{ \mathcal{A}}w_{2},\Delta_{\mathcal{A}}w_{3})^{\top}\quad\text{and}\quad \Delta_{\mathcal{A}}w_{i}:=\operatorname{div}_{\mathcal{A}}\nabla_{\mathcal{A} }w_{i}\]
for vector functions \(w:=(w_{1},w_{2},w_{3})^{\top}\) and \(f_{i}:=(f_{i1},f_{i2},f_{i3})^{\top}\), where the Einstein summation convention over repeated indices is used, and \(\partial_{i}\) denotes the partial derivative with respect to the \(i\)-th component of the variable \(y\).
Finally, we further introduce some properties of \(\mathcal{A}\)[25].
1. In view of the definition of \(\mathcal{A}\), one can deduce the following two important properties: \[\partial_{j}(J\mathcal{A}_{ij})=0,\] (1.16) \[\partial_{i}(\eta+y)_{k}\mathcal{A}_{kj}=\mathcal{A}_{ik}\partial_{k}(\eta+y)_{j}=\delta_{ij},\] (1.17) where \(\delta_{ij}=0\) for \(i\neq j\), \(\delta_{ij}=1\) for \(i=j\), \((\eta+y)_{j}\) denotes the \(j\)-th component of the vector \(\eta+y\), and \[J:=\det(\nabla(\eta+y)).\] (1.18) The relation (1.16) is often called the geometric identity; a short verification sketch of (1.16) and (1.19) is provided after this list.
2. It holds after a straightforward calculation that \[J\mathcal{A}\mathbf{e}^{3}=\partial_{1}(\eta+y)\times\partial_{2}(\eta+y),\] (1.19) which, together with (1.14), implies \[\llbracket J\mathcal{A}\mathbf{e}^{3}\rrbracket=0.\] (1.20)
3. Let \[\mathbf{n}=\frac{J\mathcal{A}\mathbf{e}^{3}}{|J\mathcal{A}\mathbf{e}^{3}|}.\] (1.21) By virtue of (1.20), \(\mathbf{n}|_{y_{3}=0}\) is the unit normal to \(\Sigma(t)=\zeta(\Sigma,t)\).
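For the reader's convenience, we sketch the standard verification of (1.16) and (1.19) announced in property 1 (this is the usual cofactor computation, cf. [25], recorded here as a supplement). Since \(\mathcal{A}=(\nabla\zeta)^{-\top}\) with \(\zeta:=\eta+y\), one has \(J\mathcal{A}=\mathrm{cof}(\nabla\zeta)\), i.e.,

\[J\mathcal{A}_{ij}=\frac{1}{2}\varepsilon_{ikl}\varepsilon_{jmn}\partial_{m}\zeta_{k}\partial_{n}\zeta_{l},\]

where \(\varepsilon\) denotes the Levi-Civita symbol. Hence

\[\partial_{j}(J\mathcal{A}_{ij})=\frac{1}{2}\varepsilon_{ikl}\varepsilon_{jmn}\big(\partial_{j}\partial_{m}\zeta_{k}\,\partial_{n}\zeta_{l}+\partial_{m}\zeta_{k}\,\partial_{j}\partial_{n}\zeta_{l}\big)=0,\]

because \(\varepsilon_{jmn}\) is antisymmetric in \((j,m)\) and in \((j,n)\) while the second derivatives are symmetric; this proves (1.16). Taking the third column yields

\[(J\mathcal{A}\mathbf{e}^{3})_{i}=\frac{1}{2}\varepsilon_{ikl}\varepsilon_{3mn}\partial_{m}\zeta_{k}\partial_{n}\zeta_{l}=\varepsilon_{ikl}\partial_{1}\zeta_{k}\partial_{2}\zeta_{l}=(\partial_{1}\zeta\times\partial_{2}\zeta)_{i},\]

which is (1.19).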
Let
\[1_{\Omega_{\pm}}=1_{\Omega_{\pm}}(y)=\begin{cases}1&\text{for }y\in\Omega_{\pm},\\ 0&\text{for }y\in\Omega_{\mp}.\end{cases}\]
We denote
\[P^{(n)}(\psi)=\begin{cases}P^{(n)}_{+}(\psi_{+}(y))&\text{for }y\in\Omega_{+},\\ P^{(n)}_{-}(\psi_{-}(y))&\text{for }y\in\Omega_{-},\end{cases}\]
where \(P^{(n)}_{\pm}(\cdot)\) are the \(n\)-order derivatives of \(P_{\pm}\).
Now, let us set the Lagrangian unknowns
\[(\varrho,u)(y,t)=(\rho,v)(\eta(y,t)+y,t)\quad\text{for }(y,t)\in\Omega\times \mathbb{R}^{+}\]
and re-define
\[\mu:=\mu_{+}1_{\Omega_{+}}+\mu_{-}1_{\Omega_{-}}\ \ \text{and}\ \ \varsigma:= \varsigma_{+}1_{\Omega_{+}}+\varsigma_{-}1_{\Omega_{-}}, \tag{1.22}\]
then the SCVF model can be written as the following initial-boundary value problem with an interface for \((\eta,\varrho,u)\) in Lagrangian coordinates [17]:
\[\begin{cases}\eta_{t}=u&\text{in}\ \Omega,\\ \varrho_{t}+\varrho\text{div}_{\mathcal{A}}u=0&\text{in}\ \Omega,\\ \varrho u_{t}+\text{div}_{\mathcal{A}}(P(\varrho)\mathbb{I}-\mathbb{S}_{ \mathcal{A}}(u))=-\varrho g\mathbf{e}^{3}&\text{in}\ \Omega,\\ \llbracket P(\varrho)\mathbb{I}-\mathbb{S}_{\mathcal{A}}(u)\rrbracket\mathbf{n} =\vartheta\mathcal{H}\mathbf{n},\ \llbracket\eta\rrbracket=\llbracket u\rrbracket=0&\text{on}\ \Sigma,\\ (\eta,u)=0&\text{on}\ \partial\Omega,\\ (\eta,\varrho,u)|_{t=0}=(\eta^{0},\varrho^{0},u^{0}),&\text{in}\ \Omega.\end{cases} \tag{1.23}\]
Here the jump notations \(\llbracket\cdot\rrbracket\) in (1.23) are defined by (1.12), and \(\partial\Omega:=\Sigma_{+}\cup\Sigma_{-}\), \(\Omega:=\Omega_{+}\cup\Omega_{-}\), and
\[H^{\mathrm{n}}:=|\partial_{1}\zeta|^{2}\partial_{2}^{2}\zeta-2(\partial_{1}\zeta\cdot\partial_{2}\zeta)\partial_{1}\partial_{2}\zeta+|\partial_{2}\zeta|^{2}\partial_{1}^{2}\zeta, \tag{1.24}\] \[H^{\mathrm{d}}:=|\partial_{1}\zeta|^{2}|\partial_{2}\zeta|^{2}-|\partial_{1}\zeta\cdot\partial_{2}\zeta|^{2},\ \mathcal{H}:=H^{\mathrm{n}}\cdot\mathbf{n}/H^{\mathrm{d}},\] (1.25) \[\mathbb{S}_{\mathcal{A}}(u):=\mu\mathbb{D}_{\mathcal{A}}u+(\varsigma-2\mu/3)\,\text{div}_{\mathcal{A}}u\mathbb{I},\ \text{and}\] (1.26) \[\mathbb{D}_{\mathcal{A}}u:=\nabla_{\mathcal{A}}u+\nabla_{\mathcal{A}}u^{\top}. \tag{1.27}\]
Since our aim is to construct unstable solutions of the problem (1.23), it is better to simplify the problem as much as possible. Thus, our next goal is to eliminate \(\varrho\) in (1.23) by expressing it in terms of \(\eta\) as in [25, 37].
It follows from (1.23)\({}_{1}\) that
\[J_{t}=J\text{div}_{\mathcal{A}}u, \tag{1.28}\]
which, together with (1.23)\({}_{2}\), yields the mass conservation of differential version
\[\partial_{t}(\varrho J)=0. \tag{1.29}\]
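For the reader's convenience, we record the short computation behind (1.28) and (1.29) (a supplementary sketch of the classical Jacobi formula, not an additional assumption). Since \(\partial_{t}\zeta=u\) in Lagrangian coordinates and \(\mathcal{A}^{\top}=(\nabla\zeta)^{-1}\),

\[J_{t}=\partial_{t}\det(\nabla\zeta)=J\,\mathrm{tr}\big((\nabla\zeta)^{-1}\partial_{t}\nabla\zeta\big)=J\mathcal{A}_{lk}\partial_{k}u_{l}=J\mathrm{div}_{\mathcal{A}}u,\]

which is (1.28); then (1.29) follows directly from (1.23)\({}_{2}\):

\[\partial_{t}(\varrho J)=\varrho_{t}J+\varrho J_{t}=-\varrho\,\mathrm{div}_{\mathcal{A}}u\,J+\varrho J\,\mathrm{div}_{\mathcal{A}}u=0.\]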
Hence, we deduce from (1.29) that \(\varrho^{0}J^{0}=\varrho J\), which implies \(\varrho=\bar{\rho}J^{-1}\), provided the initial data \((\eta^{0},\varrho^{0})\) satisfies
\[\varrho^{0}=\bar{\rho}/J^{0}. \tag{1.30}\]
Consequently, under the assumption (1.30), we derive the following (compressible) RT problem from the initial-boundary value problem (1.23):
\[\begin{cases}\eta_{t}=u&\text{in}\ \Omega,\\ \bar{\rho}J^{-1}u_{t}+\text{div}_{\mathcal{A}}(P(\bar{\rho}J^{-1})\mathbb{I} -\mathbb{S}_{\mathcal{A}}(u))=-\bar{\rho}J^{-1}g\mathbf{e}^{3}&\text{in}\ \Omega,\\ \llbracket P(\bar{\rho}J^{-1})\mathbb{I}-\mathbb{S}_{\mathcal{A}}(u) \rrbracket\mathcal{A}\mathbf{e}^{3}=\vartheta\mathcal{H}\mathcal{A}\mathbf{ e}^{3}&\text{on}\ \Sigma,\\ \llbracket\eta\rrbracket=\llbracket u\rrbracket=0&\text{on}\ \Sigma,\\ (\eta,u)=0&\text{on}\ \partial\Omega,\\ (\eta,u)|_{t=0}=(\eta^{0},u^{0})&\text{in}\ \Omega,\end{cases} \tag{1.31}\]
where, by virtue of (1.22),
\[\mu 1_{\Omega_{\pm}}=\mu_{\pm},\ \varsigma 1_{\Omega_{\pm}}=\varsigma_{\pm}\ \ \text{and}\ \ P(\bar{\rho}J^{-1})1_{\Omega_{\pm}}=P_{\pm}(\bar{\rho}_{\pm}/\det(\nabla(\eta 1_{\Omega_{\pm}}+y))). \tag{1.32}\]
_It should be noted that our constructed unstable solutions to the above RT problem automatically imply the instability of the problem (1.23) with \(\varrho=\bar{\rho}/J\) and \(J\mathcal{A}\mathbf{e}^{3}\neq 0\), see Remark 1.1._
Next, we further deduce the perturbation forms of (1.31)\({}_{2}\) and (1.31)\({}_{3}\). To begin with, we rewrite the pressure \(P(\bar{\rho}J^{-1})\) in a perturbation form around \(\bar{P}\). By a simple computation, we have
\[P(\bar{\rho}J^{-1})=\bar{P}-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta+R_ {P}, \tag{1.33}\]
where \(\bar{P}:=P(\bar{\rho})\) and
\[R_{P}:=P^{\prime}(\bar{\rho})\bar{\rho}(J^{-1}-1+\mathrm{div}\eta)+\int_{0}^{ \bar{\rho}(J^{-1}-1)}(\bar{\rho}(J^{-1}-1)-z)\frac{\mathrm{d}^{2}}{\mathrm{d} z^{2}}P(\bar{\rho}+z)\mathrm{d}z. \tag{1.34}\]
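The expansion (1.33)-(1.34) is simply Taylor's formula with integral remainder (recorded here as a reading aid): writing \(z_{0}:=\bar{\rho}(J^{-1}-1)\), one has

\[P(\bar{\rho}J^{-1})=P(\bar{\rho}+z_{0})=\bar{P}+P^{\prime}(\bar{\rho})z_{0}+\int_{0}^{z_{0}}(z_{0}-z)\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}P(\bar{\rho}+z)\mathrm{d}z,\]

and then one splits \(P^{\prime}(\bar{\rho})z_{0}=-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta+P^{\prime}(\bar{\rho})\bar{\rho}(J^{-1}-1+\mathrm{div}\eta)\). Since \(J=\det(\mathbb{I}+\nabla\eta)=1+\mathrm{div}\eta+O(|\nabla\eta|^{2})\), the combination \(J^{-1}-1+\mathrm{div}\eta\) is quadratic in \(\nabla\eta\), so \(R_{P}\) collects exactly the higher-order terms.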
Let \(\tilde{\bar{P}}:=P(\tilde{\bar{\rho}})\) and \(\tilde{\bar{\rho}}:=\bar{\rho}(\eta_{3}+y_{3})\). Thus, the hydrostatic relation in (1.9)\({}_{1}\) in Lagrangian coordinates reads as \(\nabla_{\mathcal{A}}\tilde{\bar{P}}=-\tilde{\bar{\rho}}g\mathbf{e}^{3}.\) Using (1.9)\({}_{1}\), one has
\[\mathrm{div}_{\mathcal{A}}(\bar{P}\mathbb{I})=\nabla_{\mathcal{A}}\bar{P}=- \tilde{\bar{\rho}}g\mathbf{e}^{3}-\nabla_{\mathcal{A}}(\tilde{\bar{P}}-\bar{P })=g\mathrm{div}_{\mathcal{A}}(\bar{\rho}\eta_{3}\mathbb{I})-\tilde{\bar{\rho }}g\mathbf{e}^{3}+\mathbf{N}_{P}, \tag{1.35}\]
where
\[\mathbf{N}_{P}:=\nabla_{\mathcal{A}}\left(\int_{0}^{\eta_{3}}(z-\eta_{3}) \frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\bar{P}(y_{3}+z)\mathrm{d}z\right).\]
In addition, we represent the gravity term as follows.
\[-\bar{\rho}J^{-1}g\mathbf{e}^{3}=-(\bar{\rho}(J^{-1}-1)+\bar{\rho}-\tilde{\bar {\rho}}+\tilde{\bar{\rho}})g\mathbf{e}^{3}=g\mathrm{div}(\bar{\rho}\eta) \mathbf{e}^{3}-\tilde{\bar{\rho}}g\mathbf{e}^{3}+\mathbf{N}_{g}, \tag{1.36}\]
where
\[\mathbf{N}_{g}:=g\left(\int_{0}^{\eta_{3}}(\eta_{3}-z)\frac{\mathrm{d}^{2}}{ \mathrm{d}z^{2}}\bar{\rho}(y_{3}+z)\mathrm{d}z-\bar{\rho}(J^{-1}-1+\mathrm{ div}\eta)\right)\mathbf{e}^{3}.\]
Thus, by (1.9)\({}_{2}\), (1.33), (1.35) and (1.36), one can transform (1.31)\({}_{2}\) and (1.31)\({}_{3}\) into the following equivalent forms:
\[\begin{cases}\bar{\rho}J^{-1}u_{t}-\mathrm{div}_{\mathcal{A}}(P^{\prime}(\bar{ \rho})\bar{\rho}\mathrm{div}\eta\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u))=g\bar{ \rho}(\mathrm{div}\eta\mathbf{e}^{3}-\nabla\eta_{3})+\mathbf{N}^{1}&\text{ in }\Omega,\\ \llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta\mathbb{I}+\mathbb{S }_{\mathcal{A}}(u)\rrbracket J\mathcal{A}\mathbf{e}^{3}+\vartheta\Delta_{\mathrm{ h}}\eta_{3}\mathbf{e}^{3}=\mathbf{N}^{2}&\text{ on }\Sigma,\end{cases} \tag{1.37}\]
where \(\Delta_{\mathrm{h}}:=\partial_{1}^{2}+\partial_{2}^{2}\), and
\[\mathbf{N}^{1}:=\mathbf{N}_{g}-\mathbf{N}_{P}-\nabla_{\mathcal{A}}R_{P}-g \nabla_{\mathcal{\tilde{A}}}(\bar{\rho}\eta_{3})\text{ and }\mathbf{N}^{2}:=\llbracket R_{P} \rrbracket J\mathcal{A}\mathbf{e}^{3}+\vartheta(\Delta_{\mathrm{h}}\eta_{3} \mathbf{e}^{3}-\mathcal{H}J\mathcal{A}\mathbf{e}^{3}). \tag{1.38}\]
We will use (1.37) to derive the _a priori_ estimates for the temporal derivative of \(u\).
Now we further rewrite (1.37) in nonhomogeneous perturbation forms. For \(\mathbf{f}\in\mathbb{R}^{3}\) and \(\mathbf{0}\neq\mathbf{r}\in\mathbb{R}^{3}\), we define \(\Pi_{\mathbf{r}}\mathbf{f}:=\mathbf{f}-|\mathbf{r}|^{-2}(\mathbf{f}\cdot\mathbf{r})\mathbf{r}\). It should be noted that \(\Pi_{\mathbf{r}}\mathbf{f}=0\) if and only if \(\mathbf{f}\) is parallel to \(\mathbf{r}\). Applying the operator \(\Pi_{\mathbf{n}}\) to the jump condition (1.37)\({}_{2}\), and then using the fact that \(\mathbf{n}\) is parallel to \(J\mathcal{A}\mathbf{e}^{3}\), we get
\[\Pi_{\mathbf{n}}\llbracket(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta \mathbb{I}+\mathbb{S}_{\mathcal{A}}(u))J\mathcal{A}\mathbf{e}^{3}\rrbracket=0 \text{ on }\Sigma.\]
We rewrite the above identity in a nonhomogeneous form whose left-hand side is linear:
\[\Pi_{\mathbf{e}^{3}}[\![\Upsilon(\eta,u)\mathbf{e}^{3}]\!]=\mathbf{N}^{4}\text{ on }\Sigma, \tag{1.39}\]
where
\[\Upsilon(\eta,u):=P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta \mathbb{I}+\mathbb{S}(u), \tag{1.40}\] \[\mathbf{N}^{4}:=\mathbf{N}^{4}(\eta,u):=[\![(\Upsilon(\eta,u) \mathbf{e}^{3})\cdot\tilde{\mathbf{n}}\,\mathbf{n}+(\Upsilon(\eta,u)\mathbf{e}^{3})\cdot\mathbf{e}^{3}\,\tilde{\mathbf{n}}]\!]\] \[\qquad\qquad\qquad-\Pi_{\mathbf{n}}[\![\Upsilon(\eta,u)(J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3})+\mathbb{S}_{\tilde{\mathcal{A}}}(u)J\mathcal{A}\mathbf{e}^{3}]\!],\] (1.41) \[\tilde{\mathbf{n}}:=\mathbf{n}-\mathbf{e}^{3},\ \tilde{\mathcal{A}}:=\mathcal{A}-\mathbb{I}\ \text{ and }\ \mathbb{S}_{\tilde{\mathcal{A}}}(u)\text{ is defined by }(1.26)\text{ with }\tilde{\mathcal{A}}\text{ in place of }\mathcal{A}. \tag{1.42}\]
Taking the scalar product of (1.31)\({}_{3}\) with \(J\mathbf{n}/|J\mathcal{A}\mathbf{e}^{3}|\), we have
\[[\![P(\bar{\rho}J^{-1})-(\mathbb{S}_{\mathcal{A}}(u)\mathbf{n}) \cdot\mathbf{n}]\!]=\vartheta\mathcal{H},\]
which can be rewritten as a nonhomogeneous form by (1.9)\({}_{2}\) and (1.33)
\[[\![P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta+2\mu\partial_{3}u_{3}+( \varsigma-2\mu/3)\,\mathrm{div}u]\!]+\vartheta\Delta_{\mathrm{h}}\eta_{3}= \mathcal{N}, \tag{1.43}\]
where
\[\mathcal{N}:=\mathcal{N}(\eta,u):=\mathcal{N}^{\eta}+[\![R_{P}+ \mathcal{N}^{u}]\!],\] (1.44) \[\mathcal{N}^{\eta}(\eta):=\vartheta(\Delta_{\mathrm{h}}\eta_{3}- \mathcal{H})\] \[\qquad\qquad=\vartheta(H^{\mathrm{n}}\cdot\mathbf{n}(H^{\mathrm{d}}-1)/H^{\mathrm{d}}-H^{\mathrm{n}}\cdot\tilde{\mathbf{n}}-H^{\mathrm{n}}\cdot \mathbf{e}^{3}+\Delta_{\mathrm{h}}\eta_{3}),\] (1.45) \[\mathcal{N}^{u}:=\mathcal{N}^{u}(\eta,u):=-\mathbb{S}_{\tilde{ \mathcal{A}}}(u)\mathbf{n}\cdot\mathbf{n}-(\mathbb{S}(u)\tilde{\mathbf{n}}) \cdot\mathbf{n}-(\mathbb{S}(u)\mathbf{e}^{3})\cdot\tilde{\mathbf{n}},\] and \(\mathbb{S}(u)\) is defined by (1.26) with \(\mathbb{I}\) in place of \(\mathcal{A}\).
Thanks to (1.39) and (1.43), we further rewrite (1.37) as the following nonhomogeneous form:
\[\begin{cases}\bar{\rho}u_{t}+g\bar{\rho}(\nabla\eta_{3}-\mathrm{ div}\eta\mathbf{e}^{3})-\mathrm{div}\Upsilon(\eta,u)=\mathbf{N}^{3}&\text{ in }\Omega,\\ [\![\Upsilon(\eta,u)\mathbf{e}^{3}]\!]+\vartheta\Delta_{\mathrm{h}}\eta_{3} \mathbf{e}^{3}=(\mathbf{N}^{4}_{1},\mathbf{N}^{4}_{2},\mathcal{N})^{\top}& \text{ on }\Sigma,\end{cases} \tag{1.46}\]
where
\[\mathbf{N}^{3}:=\mathbf{N}^{1}+\mathrm{div}_{\tilde{\mathcal{A}}}\mathbb{S}_{ \mathcal{A}}(u)+\mathrm{div}\mathbb{S}_{\tilde{\mathcal{A}}}(u)+\nabla_{ \tilde{\mathcal{A}}}(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta)+\bar{ \rho}(1-J^{-1})u_{t}, \tag{1.47}\]
and \(\mathrm{div}_{\tilde{\mathcal{A}}}\) is defined by (1.15) with \(\tilde{\mathcal{A}}\) in place of \(\mathcal{A}\). The nonhomogeneous forms in (1.46) are very useful for deriving the _a priori_ estimates of tangential derivatives of \((\eta,u)\), in particular a highest-order boundary estimate of \(u_{3}\) at the interface for \(\vartheta>0\).
Thanks to (1.46), we can easily write out the linearized RT problem corresponding to (1.31) for small perturbations:
\[\begin{cases}\eta_{t}=u&\text{ in }\Omega,\\ \bar{\rho}u_{t}=g\bar{\rho}(\mathrm{div}\eta\mathbf{e}^{3}-\nabla\eta_{3})+ \mathrm{div}\Upsilon(\eta,u)&\text{ in }\Omega,\\ [\![\eta]\!]=[\![u]\!]=0,\ [\![\Upsilon(\eta,u)\mathbf{e}^{3}]\!]=-\vartheta \Delta_{\mathrm{h}}\eta_{3}\mathbf{e}^{3}&\text{ on }\Sigma,\\ (\eta,u)=0&\text{ on }\partial\Omega,\\ (\eta,u)|_{t=0}=(\eta^{0},u^{0})&\text{ in }\Omega.\end{cases} \tag{1.48}\]
Moreover, the corresponding spectral problem of (1.48) reads as follows.
\[\begin{cases}\lambda\sigma=w&\text{in }\Omega,\\ \lambda\bar{\rho}w=g\bar{\rho}(\mathrm{div}\sigma\mathbf{e}^{3}-\nabla\sigma_{3} )+\mathrm{div}\Upsilon(\sigma,w)&\text{in }\Omega,\\ \llbracket\sigma\rrbracket=\llbracket w\rrbracket=0,\ \llbracket\Upsilon(\sigma,w) \mathbf{e}^{3}\rrbracket=-\vartheta\Delta_{\mathrm{h}}\sigma_{3}\mathbf{e}^{3 }&\text{on }\Sigma,\\ (\sigma,w)=0&\text{on }\partial\Omega.\end{cases} \tag{1.49}\]
The linearized RT problem (1.48) is convenient for mathematical analysis, and it offers insight into the physical mechanism of the instability and into how the interfacial surface tension affects the RT instability. In particular, an instability solution of the linearized RT problem serves as a starting point for the further investigation of the nonlinear instability. We mention that directly linearizing the problem (1.23) yields the following linear problem [17]:
\[\begin{cases}\eta_{t}=u&\text{in }\Omega,\\ \varrho_{t}=-\bar{\rho}\mathrm{div}u&\text{in }\Omega,\\ \bar{\rho}u_{t}=\mathrm{div}(\mathbb{S}(u)-P^{\prime}(\bar{\rho})\varrho \mathbb{I})-g(\varrho\mathbf{e}^{3}+\bar{\rho}\nabla\eta_{3})&\text{in }\Omega,\\ \llbracket\eta\rrbracket=\llbracket u\rrbracket=0,\ \llbracket\mathbb{S}(u)-P^{ \prime}(\bar{\rho})\varrho\mathbb{I}\rrbracket\mathbf{e}^{3}=-\vartheta \Delta_{\mathrm{h}}\eta_{3}\mathbf{e}^{3}&\text{on }\Sigma,\\ (\eta,u)=0&\text{on }\partial\Omega,\\ (\eta,u)|_{t=0}=(\eta_{0},u_{0})&\text{in }\Omega,\end{cases} \tag{1.50}\]
which is called the _linearized (non-periodic) m-RT problem_. Here we add the letter \(m\) in the name of linearized m-RT problem to emphasize that it includes the linearized mass equation (1.50)\({}_{2}\).
### Relevant progress and our main result
Let \(\mathbb{T}\) be a usual 1-torus,
\[\Omega_{L_{1},L_{2}}=2\pi L_{1}\mathbb{T}\times 2\pi L_{2}\mathbb{T}\times((h_{- },0)\cup(0,h_{+}))\text{ and }\Sigma_{L_{1},L_{2}}=2\pi L_{1}\mathbb{T}\times 2\pi L _{2}\mathbb{T}\times\{0\}, \tag{1.51}\]
then the linearized RT problem (1.48) with \(\Omega_{L_{1},L_{2}}\) resp. \(\Sigma_{L_{1},L_{2}}\) in place of \(\Omega\) resp. \(\Sigma\) is called the _linearized periodic RT problem_. The solution \((\eta,u)\) of the linearized periodic RT problem then satisfies (1.48) and
\[(\eta(y,t),u(y,t))=(\eta(x,t),u(x,t))|_{(x_{1},x_{2})=(y_{1}+2\pi mL_{1},y_{2}+ 2\pi nL_{2})}\]
for any integers \(m\) and \(n\). Similarly the linearized m-RT problem (1.50) with \(\Omega_{L_{1},L_{2}}\) resp. \(\Sigma_{L_{1},L_{2}}\) in place of \(\Omega\) resp. \(\Sigma\) is called the _linearized periodic m-RT problem_. By applying a so-called "normal mode" method to the linearized periodic m-RT problem, Guo-Tice obtained both the linear stability and instability results [17]:
* the linearized periodic m-RT problem is stable, if the Rayleigh number satisfies \[R:=\frac{\vartheta}{g\max\{L_{1}^{2},L_{2}^{2}\}\llbracket\bar{\rho}\rrbracket} >1.\] (1.52)
* the linearized periodic m-RT problem is unstable, if the instability condition is satisfied \[R<1.\] (1.53)
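In other words (an equivalent algebraic restatement, recorded for orientation), since \(\llbracket\bar{\rho}\rrbracket>0\) by (1.11), the instability condition (1.53) holds precisely when the periodic cell is sufficiently large:

\[R<1\iff\max\{L_{1},L_{2}\}>\sqrt{\vartheta/(g\llbracket\bar{\rho}\rrbracket)};\]

in particular, for \(\vartheta=0\) one has \(R=0<1\) for every cell, so the linearized periodic problem is always unstable without surface tension.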
In addition, they also _formally_ verified that the linearized (non-periodic) m-RT problem (1.50) is unstable, see [17, Theorem 2.4] (here we emphasize the word "_formally_", since both integral expressions (2.24) and (2.25) in [17, Theorem 2.4] are formal). Obviously, the instability of the linearized m-RT problem (1.50) can be expected from the periodic case, since the domain \(\Omega\) can be regarded as the limit case of \(\Omega_{L_{1},L_{2}}\) as \(L_{1}\), \(L_{2}\to\infty\). We mention that all of Guo-Tice's results for the linearized m-RT problem also hold for the linearized periodic/non-periodic RT problem.
Guo-Tice's linear stability result roughly shows that the interfacial surface tension can inhibit the occurrence of the RT instability (in the case of small perturbations with horizontally periodic motion). However, the rigorous proof of _the phenomenon of inhibition of RT instability by the interfacial surface tension_ is relatively difficult in Lagrangian coordinates. Therefore, Jang-Tice-Wang used the flattening transformation introduced by Beale in [4] to rewrite the motion equations of stratified compressible viscous fluids, and then, overcoming some additional difficulties arising from the compressibility, succeeded in the mathematical verification of the inhibition phenomenon under the sharp stability condition (1.52); in particular, the stable solutions under the inhibition of surface tension enjoy exponential decay in time [22]. Such a conclusion was also obtained for the corresponding incompressible case by Wang-Tice-Kim [42]. Recently, Wilke further gave a similar result for stratified incompressible viscous fluids in a bounded cylindrical domain \(G\times(h_{-},h_{+})\) under the sharp stability condition
\[R_{s}:=\frac{\lambda\vartheta}{g(\rho_{+}^{\mathrm{i}}-\rho_{-}^{\mathrm{i}}) }>1, \tag{1.54}\]
where \(G\subset\mathbb{R}^{2}\), \(\rho_{\pm}^{\mathrm{i}}\) are the constant densities of upper and lower incompressible fluids, respectively, and the constant \(\lambda\) depends on the geometric structure of \(G\); in particular, if the cylindrical domain is a cylinder, the expression of \(R_{s}\) in (1.54) is given by \(R_{s}=\vartheta\lambda^{*}/gr^{2}(\rho_{+}^{\mathrm{i}}-\rho_{-}^{\mathrm{i}})\), where \(r\) is the radius of the cylinder, \(\lambda^{*}=(\mathfrak{B}_{1,1}^{\prime})^{2}\) and \(\mathfrak{B}_{1,1}^{\prime}\) is the first zero of the derivative \(\mathfrak{B}_{1}^{\prime}\) of the Bessel function \(\mathfrak{B}_{1}\)[44].
Thanks to Guo-Tice's linear instability result, Jang-Tice-Wang [21] further established the nonlinear RT instability result that the RT equilibrium solution \((\bar{\rho},\mathbf{0})\) is unstable for the _periodic_ SCVF model under the instability condition (1.53), based on Guo-Hallstrom-Spirn's bootstrap instability method in [12]; see also the result for the corresponding incompressible case [41]. However, Guo-Tice's method for the construction of instability solutions of the linearized periodic RT problem cannot be directly applied to the corresponding non-periodic case (1.48). Next, let us take the case \(\vartheta=0\) to roughly explain the reason for this failure.
Let us write the equations in (1.48) and (1.49) with \(\vartheta=0\) in the following two abstract forms:
\[\frac{\mathrm{d}V}{\mathrm{d}t}=\mathcal{L}(V) \tag{1.55}\]
and
\[\lambda U=\mathcal{L}(U), \tag{1.56}\]
where \(V=(\eta^{\top},u^{\top})^{\top}\), \(U=(\sigma^{\top},w^{\top})^{\top}\) and \(\mathcal{L}\) denotes a linear differential operator. For a function \(f\in L^{2}\), we define the horizontal Fourier transform of \(f\) in \(\mathbb{R}^{2}\) via
\[\hat{f}(\xi,y_{3}):=\mathcal{F}_{y_{\mathrm{h}}\to\xi}(f(y_{\mathrm{h}},y_{3}) )=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}f(y_{\mathrm{h}},y_{3})e^{-\mathrm{i} \xi\cdot y_{\mathrm{h}}}\mathrm{d}y_{\mathrm{h}} \tag{1.57}\]
for \(\xi\in\mathbb{R}^{2}\), where \(y_{\mathrm{h}}:=(y_{1},y_{2})\) and \(\xi\cdot y_{\mathrm{h}}:=\xi_{1}y_{1}+\xi_{2}y_{2}\). Applying the horizontal Fourier transform to (1.56) yields
\[\lambda(\xi)\hat{U}(\xi,y_{3})=\mathbb{P}\left(\xi,\frac{\mathrm{d}}{\mathrm{ d}y_{3}}\right)\hat{U}(\xi,y_{3}), \tag{1.58}\]
where \(\mathbb{P}\left(\xi,\frac{\mathrm{d}}{\mathrm{d}y_{3}}\right):=\left(\mathcal{P}_{ij}\left(\xi_{1},\xi_{2},\frac{\mathrm{d}}{\mathrm{d}y_{3}}\right)\right)_{6\times 6}\), each entry \(\mathcal{P}_{ij}\left(\xi_{1},\xi_{2},\frac{\mathrm{d}}{\mathrm{d}y_{3}}\right)\) is a complex polynomial of degree at most two in \(\xi_{1}\), \(\xi_{2}\) and the derivative operator \(\frac{\mathrm{d}}{\mathrm{d}y_{3}}\) applied to some component of \(\hat{U}(\xi,y_{3})\), where \(1\leqslant i\), \(j\leqslant 6\), \(\left(\frac{\mathrm{d}}{\mathrm{d}y_{3}}\right)^{l}:=\frac{\mathrm{d}^{l}}{\mathrm{d}y_{3}^{l}}\), and the coefficients of the polynomials depend on \(g\), \(\mu\), \(\varsigma\), \(\bar{\rho}\), \(P^{\prime}(\bar{\rho})\) and \((P^{\prime}(\bar{\rho})\bar{\rho})^{\prime}\). Exploiting a modified variational method introduced by Guo-Tice in [17], we can construct solutions \(\hat{U}(\xi,y_{3})\) and \(\lambda(\xi)\) of (1.58) (supplemented with boundary conditions obtained by applying the horizontal Fourier transform to the boundary conditions (1.49)\({}_{3}\) and (1.49)\({}_{4}\)) for any given frequency \(\xi\), where \(\lambda(\xi)>0\) is a bounded and continuous function of \(\xi\). Formally, \(V=\mathcal{F}_{\xi\to y_{\mathrm{h}}}^{-1}(e^{\lambda(\xi)t}\hat{U}(\xi,y_{3}))\) is a growing mode solution of (1.55), since \(\|V\|_{L^{2}(\Omega)}=\|e^{\lambda(\xi)t}\hat{U}(\xi,y_{3})\|_{L^{2}(\Omega)}\) and \(\lambda(\xi)>0\).
The above idea seems to provide a road toward constructing a growing mode solution. Unfortunately, it is difficult to prove that \(\hat{U}(\xi,y_{3})\) is a measurable function of the variable \((\xi,y_{3})\), and thus we cannot apply the inverse Fourier transform to construct a growing mode solution of the linearized RT problem (1.48), in particular one enjoying an almost largest growth rate (i.e., roughly speaking, one close to the largest growth rate among all linear solutions). It should be noted that solutions \(\hat{U}(\xi,y_{3})\) and \(\lambda(\xi)\) satisfying (1.58) can also be obtained for the case \(\vartheta>0\), where \(\xi\) should satisfy \(|\xi|^{2}<g\llbracket\bar{\rho}\rrbracket/\vartheta\). Based on this fact and _an assumption that \(\hat{U}(\xi,y_{3})\) is measurable_, Guo-Tice formally obtained the conclusion that the linearized m-RT problem (1.50) is unstable (see [17, Theorem 2.4]), and their linear result roughly shows that the RT instability always occurs in the SCVF model with non-periodic motion. It is worth noting that Prüss-Simonett verified such a conclusion in the case of stratified incompressible viscous fluids in \(\mathbb{R}^{3}\) by spectral analysis of the linearized problem, the maximal regularity theory of type \(L^{p}\) and Henry's instability theorem [35]. Later, Wilke also used Prüss-Simonett's method to prove the RT instability in a cylindrical domain under the instability condition \(R_{s}<1\) [44]. _However, it is still not clear whether Prüss-Simonett's method can be further applied to the corresponding compressible case to obtain solutions with instability of the \(L^{2}\)-norm_.
In this article, we still use Guo-Hallstrom-Spirn's bootstrap instability method in [12] to investigate the instability of the (non-periodic) RT problem (1.31), and thus face the difficulty, discussed above, of constructing a growing mode solution of the linearized RT problem (1.48) with an almost largest growth rate. We do not try to fix this construction problem; instead, we develop a new alternative scheme to avoid it: roughly speaking, we use a growing mode solution with an almost largest growth rate of the periodic RT problem to approximately construct the RT instability solutions of the RT problem (1.31). Thanks to the new approximation scheme, we can rigorously prove the instability of the RT problem, and thus extend Guo-Tice's linear conclusion to the nonlinear case. A brief outline of the proof will be given after Theorem 1.1. Our RT instability result shows that the RT equilibrium solution \((\bar{\rho},\mathbf{0})\) is always unstable for the SCVF model with non-periodic motion, see Remark 1.2.
Before further stating our RT instability result, we will introduce some simplified notations in this article.
(1) Basic notations.
\(\partial_{\mathrm{h}}^{\alpha}\) denotes \(\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\) with \(\alpha=(\alpha_{1},\alpha_{2})\). \(\partial_{\mathrm{h}}^{i}\) represents \(\partial_{\mathrm{h}}^{\alpha}\) for any \(|\alpha|:=\alpha_{1}+\alpha_{2}=i\geqslant 0\). \(\mathbb{R}^{+}:=(0,\infty)\), \(\int:=\int_{\Omega}\), \(\mathbf{f}_{\mathrm{h}}:=(\mathbf{f}_{1},\mathbf{f}_{2})^{\top}\), \(\nabla_{\mathrm{h}}f:=(\partial_{1}f,\partial_{2}f)^{\top}\) and \(\mathrm{div}_{\mathrm{h}}\mathbf{f}_{\mathrm{h}}=\partial_{1}\mathbf{f}_{1}+\partial_{2}\mathbf{f}_{2}\), where \(\mathbf{f}=(\mathbf{f}_{1},\mathbf{f}_{2},\mathbf{f}_{3})^{\top}\). For the sake of simplicity, we denote \(\sqrt{\sum_{i=1}^{n}\|w_{i}\|_{X}^{2}}\) by \(\|(w_{1},\cdots,w_{n})\|_{X}\), where \(\|\cdot\|_{X}\) represents a norm or a semi-norm, and \(w_{i}\) is a scalar function or a vector function for \(1\leqslant i\leqslant n\). We define the fractional differential operator
\[\mathfrak{D}_{\mathbf{h}}^{3/2}w:=(w(y+\mathbf{h})-w(y))/|\mathbf{h}|^{3/2}\text { for }\mathbf{h}\in\mathbb{R}^{2}\times\{0\}. \tag{1.59}\]
\(a\lesssim b\) means that \(a\leqslant cb\) for some constant \(c>0\), where the positive constants \(c\) may depend on the domain \(\Omega\), and other known physical parameters/functions in the RT problem (1.31) such as \(g\), \(\vartheta\), \(\mu\), \(\varsigma\), \(\bar{\rho}\), \(h_{\pm}\) and \(P_{\pm}(\tau)\), and may vary from line to line. Similarly the fixed constants \(c_{i}\), \(\tilde{c}_{j}\) and \(\delta_{k}\) also depend on the domain \(\Omega\), and the other known physical parameters/functions in the RT problem for \(1\leqslant i\leqslant 8\), \(1\leqslant j\leqslant 6\) and \(1\leqslant k\leqslant 4\).
(2) Simplified notations of the Sobolev spaces and norms.
\[L^{p}:=L^{p}(\Omega)=W^{0,p}(\Omega),\ H^{k}:=W^{k,2}(\Omega),\ H_{0}^{1}:=W_{0}^{1,2}(\Omega),\ H^{s}:=W^{s,2}(\mathbb{R}^{2}),\] \[\|\cdot\|_{k}:=\|\cdot\|_{H^{k}},\ |\cdot|_{s}:=\|\cdot|_{y_{3}=0}\|_{H^{s}(\mathbb{R}^{2})},\ \|\cdot\|_{i,k}^{2}:=\sum_{\alpha_{1}+\alpha_{2}=i}\|\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\cdot\|_{k}^{2},\] \[\|\cdot\|_{\underline{i},k}^{2}:=\sum_{j=0}^{i}\|\cdot\|_{j,k}^{2},\ H_{0,*}^{3+k,1/2}:=\{\xi\in H_{0}^{1}\cap H^{3+k}\ |\ \|\xi(y)\|_{3}\leqslant\iota\},\]
where \(1<p\leqslant\infty\), \(s\) is a real number, \(i\), \(k\) are non-negative integers, \(\iota\) is the constant in Lemma A.10 and the definition of \(W^{s,2}\) can be found in [34, Section 1.3.5.10] (the equivalent definition of the norm \(|\cdot|_{1/2}\) by Fourier transform can be found in [1, Theorem 7.63]). It should be noted that our constructed nonlinear instability solution \(\eta\) belongs to \(H_{0,*}^{3,1/2}\) for each \(t\geqslant 0\), and thus \(\zeta\) is invertible by Lemma A.10.
(3) Definitions of functionals.
\[\begin{split}&\mathcal{U}(w):=\int\left((\varsigma-2\mu/3)| \mathrm{div}w|^{2}+\mu|\mathbb{D}w|^{2}/2\right)\mathrm{d}y,\\ &\mathcal{I}(w):=\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}} \left(\frac{gw_{3}}{P^{\prime}(\bar{\rho})}-\mathrm{div}w\right)\right\|_{0}^ {2}+\vartheta|\nabla_{\mathrm{h}}w_{3}|_{0}^{2},\\ &\mathcal{E}:=\|\eta\|_{3}^{2}+\|u\|_{2}^{2}+\|u_{t}\|_{0}^{2}, \ \mathcal{D}:=\|\eta\|_{3}^{2}+\|u\|_{3}^{2}+\|u_{t}\|_{1}^{2}.\end{split} \tag{1.60}\]
(4) Cut-off functions: Let \(\chi(r)\in C_{0}^{\infty}(-2,2)\) be a given smooth function such that
\[0\leqslant\chi(r)\leqslant 1\ \text{and}\ \chi(r)=\begin{cases}1&\text{for}\ |r| \leqslant 1;\\ 0&\text{for}\ |r|\geqslant 2.\end{cases}\]
Then we further define the smooth function \(\chi_{n}(r)\in C_{0}^{\infty}(\mathbb{R})\) as follows: for given \(n\geqslant 1\),
\[\chi_{n}(r)=\begin{cases}1&\text{for}\ |r|\leqslant n-1;\\ \chi(|r|-n+2)&\text{for}\ n-1<|r|<n;\\ 0&\text{for}\ |r|\geqslant n.\end{cases} \tag{1.61}\]
In addition, we define \(\chi_{n,n}(y_{1},y_{2}):=\chi_{n}(y_{1})\chi_{n}(y_{2})\).
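We also record a rough heuristic for the normalization by \(n\) appearing in Theorem 1.1 below (our reading, included as an aid). If \(f\) is horizontally periodic with cell \(2\pi c_{1}\mathbb{T}\times 2\pi c_{2}\mathbb{T}\), then \(\chi_{n,n}f\) is supported in \([-n,n]^{2}\times(h_{-},h_{+})\), which contains \(O(n^{2})\) periodic cells, whence

\[\|\chi_{n,n}f\|_{0}^{2}\lesssim n^{2}\|f\|_{L^{2}((0,2\pi c_{1})\times(0,2\pi c_{2})\times(h_{-},h_{+}))}^{2};\]

since, moreover, the derivatives of \(\chi_{n}\) are bounded uniformly in \(n\), the rescaled functions \(\chi_{n,n}f/n\) obey Sobolev bounds that are uniform in \(n\). This is the mechanism behind the uniform estimate in the first conclusion of Theorem 1.1.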
Now we state our instability result, which physically shows that the RT instability always occurs in the SCVF model with non-periodic motion.
**Theorem 1.1**.: _We assume that_
1. \(P_{\pm}(\tau)\in C^{4}(\mathbb{R})\) _are positive, and strictly increasing with respect to_ \(\tau\)_;_
2. \(\mu_{\pm}>0\) _and_ \(\varsigma_{\pm}>0\) _are constants;_
3. \(\bar{\rho}_{-}\in W^{3,\infty}(h_{-},0)\)_,_ \(\bar{\rho}_{+}\in W^{3,\infty}(0,h_{+})\) _and they satisfy (_1.9_)-(_1.11_)._
_The zero solution is unstable to the RT problem (1.31) in the sense of Hadamard; that is, there are positive constants \(\epsilon\), \(c_{i}\) for \(1\leqslant i\leqslant 8\), and functions \(\tilde{\eta}^{0}\), \(\tilde{u}^{0}\in H^{1}_{0}(\Omega_{c_{1},c_{2}})\cap H^{3}(\Omega_{c_{1},c_{2}})\), such that, for any \(\delta\in(0,c_{3}]\) and for any \(n\geqslant\max\{\delta^{-2},c_{4}\}\),_
* \(\|\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\|_{3}\leqslant c_{5}\)_,_
* _there exists a function_ \(u^{\mathrm{r}}\in H^{1}_{0}\cap H^{2}\) _depending on_ \(\delta\) _and satisfying_ \[\|u^{\mathrm{r}}\|_{2}\leqslant c_{6},\]
* _the RT problem with the initial data_ \[(\eta^{0},u^{0}):=\delta n^{-1}\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})+ \delta^{2}(0,u^{\mathrm{r}})\in(H^{1}_{0}\cap H^{3})\times(H^{1}_{0}\cap H^{2})\] (1.62) _admits a unique strong solution_ \((\eta,u)\) _belonging to_ \(C^{0}([0,T],H^{3,1/2}_{0,*}\times H^{2})\) _and satisfying_ \[\|\omega_{\mathrm{h}}(T^{\delta})\|_{0},\ \|\omega_{3}(T^{\delta})\|_{0},\ |\omega_{3}(T^{\delta})|_{0}\geqslant\epsilon\] (1.63) _for some escape time_ \(T^{\delta}:=c_{7}^{-1}\ln(\epsilon c_{8}/\delta)\in(0,T)\)_, where_ \(\omega=\eta\) _or_ \(u\)_._
**Remark 1.1**.: It should be noted that \(\Omega_{c_{1},c_{2}}=2\pi c_{1}\mathbb{T}\times 2\pi c_{2}\mathbb{T}\times(h_{-},h_{+})\), i.e., \(\Omega_{c_{1},c_{2}}\) in Theorem 1.1 is defined as \(\Omega_{L_{1},L_{2}}\) with \(c_{i}\) in place of \(L_{i}\) for \(i=1\), \(2\), see (1.51) for the definition of \(\Omega_{L_{1},L_{2}}\). In addition, let \(\varrho=\bar{\rho}J^{-1}\); then \(\eta\), \(\varrho\) and \(u\) are strong solutions of the problem (1.23); moreover \(\|\varrho-\bar{\rho}\|_{2}\lesssim\|\eta\|_{3}\) and \(\varrho-\bar{\rho}\in C^{0}([0,T],H^{2})\) due to \(\eta\in C^{0}([0,T],H^{3,1/2}_{0,*})\).
**Remark 1.2**.: By Lemma A.10, \(\xi:=\eta+y\) satisfies the lower bound (A.25) and the homeomorphism properties (A.26)-(A.28) for sufficiently small \(\delta\), therefore we can write out the corresponding instability result in Eulerian coordinates, please refer to [31, Theorem 1.2] or Appendix B; in particular, we have the interface instability by (A.26), (B.16) and (B.25):
\[|d(T^{\delta})|_{0}^{2}=\int_{\mathbb{R}^{2}}|\eta_{3}(y_{\mathrm{h}},0,T^{\delta})|^{2}|\det\nabla_{y_{\mathrm{h}}}\zeta_{\mathrm{h}}(y_{\mathrm{h}},0,T^{\delta})|\mathrm{d}y_{\mathrm{h}}\geqslant|\eta_{3}(T^{\delta})|_{0}^{2}/2\geqslant\epsilon^{2}/2, \tag{1.64}\]
which presents that the RT equilibrium solution \((\bar{\rho},\mathbf{0})\) with \(d=0\) is unstable to the SCVF model with non-periodic motion.
The proof of Theorem 1.1 is based on a so-called bootstrap instability method, which has its origin in [14, 15]. Later, various versions of bootstrap approaches were presented by many authors, see [10, 12] for examples. In the spirit of the bootstrap instability method in [12, Lemma 1.1], the proof for our problem will be divided into the following five steps.
Firstly, we will construct an unstable solution \(e^{c\tau t}(\tilde{\eta}^{0},\tilde{u}^{0})\) of the linearized periodic RT problem, see Proposition 2.2 in Section 2. Secondly, we will establish a Gronwall-type energy inequality for the solutions of the (non-periodic) RT problem (1.31) as in [31, Lemma 4.10], see Proposition 3.1 in Section 3. Thirdly, we want to use \(\delta e^{c\tau t}\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\) as the approximate solution of the RT problem; therefore we will exploit the stratified elliptic regularity theory to modify the initial data \(\delta\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\) of the approximate solution as in [25, Proposition 8], so that the modified initial data satisfy the initial compatibility jump condition of the RT problem, i.e.,
\[[\![P(\bar{\rho}J^{-1})\mathbb{I}-\mathbb{S}_{\mathcal{A}}(u)]\!]\mathcal{A} \mathbf{e}^{3}=\vartheta\mathcal{H}\mathcal{A}\mathbf{e}^{3}\ \text{on}\ \Sigma\ \text{for}\ t=0, \tag{1.65}\]
and at the same time, are close to \(\delta\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\), see Proposition 4.1 in Section 4. Fourthly, we deduce the error estimates between the approximate solutions and the solutions of the RT problem as in [28, Lemma 4.1], see Proposition 5.1 in Section 5. Finally, we show the existence of escape times as in [12, Lemma 1.1] and thus obtain Theorem 1.1 in Section 6.
Now we further comment on the first step, which includes new ideas. Since it seems difficult to directly construct the unstable solutions for the linearized (non-periodic) RT problem, we will first construct the unstable solution, denoted by \(e^{c\tau t}(\tilde{\eta}^{0},\tilde{u}^{0})\), for the linearized periodic RT problem by following the argument of [17, Theorem 2.2], see the first two conclusions in Proposition 2.2. Obviously, \(\delta e^{c\tau t}(\tilde{\eta}^{0},\tilde{u}^{0})\) are also solutions of the linearized periodic RT problem, but they cannot be directly used as approximate solutions of the nonlinear RT problem (1.31) due to the periodicity. Therefore we will use a cut-off function \(\chi_{n,n}\) to cut off the obtained linear unstable periodic solutions. Thanks to the structure of separation of variables with respect to \(y_{\rm h}\) and \(y_{3}\) in the linear solution \((\tilde{\eta}^{0},\tilde{u}^{0})\), we can derive cut-off estimates for \(\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\) which are uniform with respect to \(n\) (see the third conclusion in Proposition 2.2), and thus we find that \(\delta e^{c\tau t}\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\) can be regarded as an approximate solution of the nonlinear unstable solution if \(n\) is sufficiently large for given \(\delta\). We mention that our proof of Theorem 1.1 can also be applied to the corresponding incompressible case, which will be recorded in a forthcoming paper. In addition, our approximation method can also be applied to construct other instability solutions with non-periodic motion, such as the Parker instability [24] and thermal instability [27] in a slab domain.
Finally, we remark that there exists a large body of results on instability solutions which are periodic with respect to the horizontal spatial variables for problems of flow instability, see the solutions of RT instability in [19; 31], the solutions of Parker instability in [24] and the solutions of thermal convection instability in [13; 27] for examples. By further using our new approximation method, all the aforementioned results on (nonlinear) instability solutions with periodic motion can be extended to the corresponding non-periodic cases. In addition, we point out that Wang verified that sufficiently strong non-horizontal magnetic fields can inhibit the RT instability in stratified incompressible viscous non-resistive magnetohydrodynamic fluids in a slab domain \(\mathbb{R}^{2}\times(h_{-},h_{+})\) [43]. Applying our approximation method, however, we can further prove that the magnetic fields cannot inhibit the RT instability in a slab domain if the field intensity is too weak.
## 2 Linear instability
In this section, we wish to construct an unstable solution, which has growing \(H^{4}\)-norm, to the linearized periodic RT problem:
\[\begin{cases}\eta_{t}=u&\text{in }\Omega_{L_{1},L_{2}},\\ \bar{\rho}u_{t}=g\bar{\rho}(\mathrm{div}\eta\mathbf{e}^{3}-\nabla\eta_{3})+ \mathrm{div}\Upsilon(\eta,u)&\text{in }\Omega_{L_{1},L_{2}},\\ \llbracket u\rrbracket=\llbracket\eta\rrbracket=0,\ \llbracket\Upsilon(\eta,u) \mathbf{e}^{3}\rrbracket=-\vartheta\Delta_{\mathrm{h}}\eta_{3}\mathbf{e}^{3}& \text{on }\Sigma_{L_{1},L_{2}},\\ (\eta,u)=0&\text{on }\partial\Omega,\\ (\eta,u)|_{t=0}=(\eta^{0},u^{0})&\text{in }\Omega_{L_{1},L_{2}},\end{cases} \tag{2.1}\]
see (1.51) for the definitions of \(\Omega_{L_{1},L_{2}}\) and \(\Sigma_{L_{1},L_{2}}\). We will construct such a growing solution via synthesis as in [17], by first constructing a growing mode for an arbitrary but fixed spatial frequency.
We mention that the construction of the linear instability solutions follows Guo-Tice's method in [17]; however, we supplement some new conclusions which are useful for the construction of the nonlinear (non-periodic) instability solutions.
### Growing modes
To start with, we make a growing mode ansatz of solutions, i.e., for some \(\lambda>0\),
\[\eta(y,t)=\sigma(y)e^{\lambda t}\text{ and }u(y,t)=w(y)e^{\lambda t}.\]
Substituting this ansatz into (2.1), and then eliminating \(\sigma\) by using the first equation, we arrive at the following boundary value problem for \(w\):
\[\begin{cases}\lambda^{2}\bar{\rho}w=g\bar{\rho}(\mathrm{div}w\mathbf{e}^{3}- \nabla w_{3})+\mathrm{div}\Upsilon(w,\lambda w)&\text{ in }\Omega_{L_{1},L_{2}},\\ \llbracket w\rrbracket=0,\ \llbracket\Upsilon(w,\lambda w)\mathbf{e}^{3} \rrbracket=-\vartheta\Delta_{\mathrm{h}}w_{3}\mathbf{e}^{3}&\text{ on }\Sigma_{L_{1},L_{2}},\\ w=0&\text{ on }\partial\Omega\.\end{cases} \tag{2.2}\]
Now we formally define the new unknowns \((\varphi,\theta,\psi):(h_{-},h_{+})\to\mathbb{R}^{3}\) according to
\[w_{1}(y)=-\mathrm{i}\varphi(y_{3})e^{\mathrm{i}\xi\cdot y_{\mathrm{h}}},\ w_{2}(y)=- \mathrm{i}\theta(y_{3})e^{\mathrm{i}\xi\cdot y_{\mathrm{h}}}\text{ and }w_{3}(y)=\psi(y_{3})e^{\mathrm{i}\xi\cdot y_{ \mathrm{h}}},\]
where \(\xi\cdot y_{\mathrm{h}}=\xi_{1}y_{1}+\xi_{2}y_{2}\) for \(\xi\in L_{1}^{-1}\mathbb{Z}\times L_{2}^{-1}\mathbb{Z}\). It is easy to check that [17]
\[\mathrm{div}w=(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime})e^{\mathrm{i}\xi\cdot y_{\mathrm{h}}}\]
and
\[\mathbb{D}w=\begin{pmatrix}2\xi_{1}\varphi&\xi_{1}\theta+\xi_{2}\varphi& \mathrm{i}(\xi_{1}\psi-\varphi^{\prime})\\ \xi_{1}\theta+\xi_{2}\varphi&2\xi_{2}\theta&\mathrm{i}(\xi_{2}\psi-\theta^{ \prime})\\ \mathrm{i}(\xi_{1}\psi-\varphi^{\prime})&\mathrm{i}(\xi_{2}\psi-\theta^{ \prime})&2\psi^{\prime}\end{pmatrix}e^{\mathrm{i}\xi\cdot y_{\mathrm{h}}}.\]
For each fixed \(\xi\), and for the new unknowns \(\varphi(y_{3}),\theta(y_{3}),\psi(y_{3})\) and \(\lambda\), we arrive at the following system of ODEs:
\[\begin{split}&\left(\lambda\left(\lambda\bar{\rho}+\mu\left|\xi\right|^{2}+\xi_{1}^{2}\left(\varsigma+\mu/3\right)\right)+\xi_{1}^{2}P^{\prime}(\bar{\rho})\bar{\rho}\right)\varphi-\lambda\mu\varphi^{\prime\prime}\\ &=\xi_{1}\left(g\bar{\rho}\psi-\left(\lambda\left(\varsigma+\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho}\right)\psi^{\prime}\right)-\xi_{1}\xi_{2}(\lambda\left(\varsigma+\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho})\theta,\end{split} \tag{2.3}\]
\[\begin{split}&\left(\lambda\left(\lambda\bar{\rho}+\mu\left|\xi\right|^{2}+\xi_{2}^{2}\left(\varsigma+\mu/3\right)\right)+\xi_{2}^{2}P^{\prime}(\bar{\rho})\bar{\rho}\right)\theta-\lambda\mu\theta^{\prime\prime}\\ &=\xi_{2}\left(g\bar{\rho}\psi-\left(\lambda\left(\varsigma+\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho}\right)\psi^{\prime}\right)-\xi_{1}\xi_{2}\left(\lambda(\varsigma+\mu/3)+P^{\prime}(\bar{\rho})\bar{\rho}\right)\varphi\end{split} \tag{2.4}\]
and
\[\begin{split}&(\lambda^{2}\bar{\rho}+\lambda\mu\left|\xi \right|^{2})\psi-\left(\left(\lambda\left(4\mu/3+\varsigma\right)+P^{\prime}( \bar{\rho})\bar{\rho}\right)\psi^{\prime}\right)^{\prime}\\ &=\left(\left(\lambda\left(\varsigma+\mu/3\right)+P^{\prime}( \bar{\rho})\bar{\rho}\right)\left(\xi_{1}\varphi+\xi_{2}\theta\right)\right) ^{\prime}+g\bar{\rho}(\xi_{1}\varphi+\xi_{2}\theta).\end{split} \tag{2.5}\]
The first jump condition in (2.2)\({}_{2}\) yields jump conditions for the new unknowns:
\[\llbracket\varphi\rrbracket=\llbracket\theta\rrbracket=\llbracket\psi \rrbracket=0. \tag{2.6}\]
The second jump condition in (2.2)\({}_{2}\) becomes
\[\llbracket(\lambda\left(\varsigma-2\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho})\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime}\right)\mathbf{e}^{3}+\lambda\mu(\mathrm{i}(\xi_{1}\psi-\varphi^{\prime})\mathbf{e}^{1}+\mathrm{i}(\xi_{2}\psi-\theta^{\prime})\mathbf{e}^{2}+2\psi^{\prime}\mathbf{e}^{3})\rrbracket=\vartheta\left|\xi\right|^{2}\psi\mathbf{e}^{3},\]
which implies that
\[\llbracket\mu(\varphi^{\prime}-\xi_{1}\psi)\rrbracket=\llbracket\mu(\theta^{ \prime}-\xi_{2}\psi)\rrbracket=0 \tag{2.7}\]
and that
\[\llbracket(\lambda(\varsigma+\mu/3)+P^{\prime}(\bar{\rho})\bar{ \rho})\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime}\right)\] \[+\lambda\mu\left(\psi^{\prime}-\xi_{1}\varphi-\xi_{2}\theta \right)\rrbracket=\vartheta\left|\xi\right|^{2}\psi. \tag{2.8}\]
By (2.2)\({}_{3}\), the boundary conditions
\[\varphi(h_{-})=\varphi(h_{+})=\theta(h_{-})=\theta(h_{+})=\psi(h_{-})=\psi(h _{+})=0 \tag{2.9}\]
must also hold. We will prove the existence of real-valued solutions \(\varphi\), \(\theta\), \(\psi\) to the boundary value problem (2.3)-(2.9) in the next section, and thus further obtain real-valued solutions \(w\) and \(\lambda\) of the original problem (2.2).
Next we provide the energy structure for the boundary value problem (2.3)-(2.9). Taking the inner product of (2.3), (2.4), resp. (2.5) with \(\varphi\), \(\theta\), resp. \(\psi\) in \(L^{2}(h_{-},h_{+})\), adding the three resulting identities together, and then making use of integration by parts and the boundary conditions (2.7)-(2.9), we derive the following energy identity for (2.3)-(2.9):
\[\lambda^{2}\mathcal{J}(\varphi,\theta,\psi)=F(\varphi,\theta,\psi):=E(\varphi,\theta,\psi)-\lambda D(\varphi,\theta,\psi), \tag{2.10}\]
where we have defined that
\[\mathcal{J}(\varphi,\theta,\psi):=\int_{h_{-}}^{h_{+}}\bar{\rho}( |\varphi|^{2}+|\theta|^{2}+|\psi|^{2})\mathrm{d}y_{3}, \tag{2.11}\] \[E(\varphi,\theta,\psi):=\int_{h_{-}}^{h_{+}}\left(2g\bar{\rho}( \xi_{1}\varphi+\xi_{2}\theta)\psi-P^{\prime}(\bar{\rho})\bar{\rho}(\xi_{1} \varphi+\xi_{2}\theta+\psi^{\prime})^{2}\right)\mathrm{d}y_{3}-\vartheta|\xi |^{2}\psi^{2}(0) \tag{2.12}\]
and
\[D(\varphi,\theta,\psi):= \int_{h_{-}}^{h_{+}}\varsigma|\xi_{1}\varphi+\xi_{2}\theta+\psi^ {\prime}|^{2}\mathrm{d}y_{3}+\int_{h_{-}}^{h_{+}}\mu(|\xi_{2}\varphi-\xi_{1} \theta|^{2}+|\varphi^{\prime}-\xi_{1}\psi|^{2}\] \[+|\theta^{\prime}-\xi_{2}\psi|^{2}+|\xi_{1}\varphi+\xi_{2}\theta- \psi^{\prime}|^{2}+|\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime}|^{2}/3)\mathrm{ d}y_{3}. \tag{2.13}\]
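We note (a direct computation, recorded for the reader's convenience and not taken from [17]) that, with \(a:=\xi_{1}\varphi+\xi_{2}\theta\) and \(b:=\psi^{\prime}\), the integrand of \(D\) arises from rearranging the mode-wise dissipation \((\varsigma-2\mu/3)|\mathrm{div}w|^{2}+\mu|\mathbb{D}w|^{2}/2\) from (1.60), evaluated on the expressions for \(\mathrm{div}w\) and \(\mathbb{D}w\) above; namely, one checks the algebraic identity

\[(\varsigma-2\mu/3)(a+b)^{2}+\mu\big(2\xi_{1}^{2}\varphi^{2}+2\xi_{2}^{2}\theta^{2}+2b^{2}+(\xi_{1}\theta+\xi_{2}\varphi)^{2}+(\varphi^{\prime}-\xi_{1}\psi)^{2}+(\theta^{\prime}-\xi_{2}\psi)^{2}\big)\]
\[=\varsigma(a+b)^{2}+\mu\big((\xi_{2}\varphi-\xi_{1}\theta)^{2}+(\varphi^{\prime}-\xi_{1}\psi)^{2}+(\theta^{\prime}-\xi_{2}\psi)^{2}+(a-b)^{2}+(a+b)^{2}/3\big).\]

In particular, the right-hand side shows that \(D\geqslant 0\) even though \(\varsigma-2\mu/3\) may be negative.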
In addition, by a simple computation and the hydrostatic relations in (1.9)\({}_{1}\), we have
\[2g\bar{\rho}(\xi_{1}\varphi+\xi_{2}\theta)\psi-P^{\prime}(\bar{ \rho})\bar{\rho}\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime}\right)^{2}\] \[=\frac{g^{2}\bar{\rho}}{P^{\prime}(\bar{\rho})}\psi^{2}-2g\bar{ \rho}\psi\psi^{\prime}-P^{\prime}(\bar{\rho})\bar{\rho}\left(\frac{g}{P^{ \prime}(\bar{\rho})}\psi-\xi_{1}\varphi-\xi_{2}\theta-\psi^{\prime}\right)^{2}\]
and
\[\int_{h_{-}}^{h_{+}}\left(\frac{g^{2}\bar{\rho}}{P^{\prime}(\bar{\rho})}\psi^{ 2}-2g\bar{\rho}\psi\psi^{\prime}\right)\mathrm{d}y_{3}=g\llbracket\bar{\rho} \rrbracket\psi^{2}(0).\]
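The second identity follows from the hydrostatic relation; we record the short computation as a reading aid. Differentiating \((P(\bar{\rho}))^{\prime}=-g\bar{\rho}\) from (1.9)\({}_{1}\) gives \(\bar{\rho}^{\prime}=-g\bar{\rho}/P^{\prime}(\bar{\rho})\) on \((h_{-},0)\cup(0,h_{+})\); hence, integrating by parts on both intervals and using \(\psi(h_{\pm})=0\),

\[-\int_{h_{-}}^{h_{+}}2g\bar{\rho}\psi\psi^{\prime}\mathrm{d}y_{3}=-\int_{h_{-}}^{h_{+}}g\bar{\rho}(\psi^{2})^{\prime}\mathrm{d}y_{3}=g\llbracket\bar{\rho}\rrbracket\psi^{2}(0)+\int_{h_{-}}^{h_{+}}g\bar{\rho}^{\prime}\psi^{2}\mathrm{d}y_{3}=g\llbracket\bar{\rho}\rrbracket\psi^{2}(0)-\int_{h_{-}}^{h_{+}}\frac{g^{2}\bar{\rho}}{P^{\prime}(\bar{\rho})}\psi^{2}\mathrm{d}y_{3},\]

which cancels the first term in the integrand and yields the claimed identity.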
Thanks to the above two identities, \(E\) can be rewritten as follows:
\[E(\varphi,\theta,\psi)= (g\llbracket\bar{\rho}\rrbracket-\vartheta|\xi|^{2})|\psi(0)|^{2 }-\int_{h_{-}}^{h_{+}}P^{\prime}(\bar{\rho})\bar{\rho}\left|\frac{g}{P^{ \prime}(\bar{\rho})}\psi-\xi_{1}\varphi-\xi_{2}\theta-\psi^{\prime}\right|^{2} \mathrm{d}y_{3}. \tag{2.14}\]
_In Section 5, we will extend both the expressions of (2.11), (2.13) and (2.14) to the case that \(\varphi\), \(\theta\) and \(\psi\) are complex functions. Under such case, the notation \(|\cdot|\) represents the modulus of a complex function._
### Solutions to (2.3)-(2.9) via modified variational methods
To obtain the solution of (2.3)-(2.9), we will first consider the following boundary value problem, which is obtained from (2.3)-(2.9) by replacing \(\lambda^{2}\) with \(\alpha(s)\) and \(\lambda\) with a fixed parameter \(s>0\):
\[\begin{cases}\left(\alpha(s)\bar{\rho}+s(\mu\left|\xi\right|^{2}+\xi_{1}^{2} \left(\varsigma+\mu/3\right))+\xi_{1}^{2}P^{\prime}(\bar{\rho})\bar{\rho} \right)\varphi-s\mu\varphi^{\prime\prime}\\ =\xi_{1}\left(g\bar{\rho}\psi-\left(s\left(\varsigma+\mu/3\right)+P^{\prime}( \bar{\rho})\bar{\rho}\right)\psi^{\prime}\right)-\xi_{1}\xi_{2}(s\left( \varsigma+\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho})\theta,\\ \left(\alpha(s)\bar{\rho}+s(\mu\left|\xi\right|^{2}+\xi_{2}^{2}\left(\varsigma+ \mu/3\right))+\xi_{2}^{2}P^{\prime}(\bar{\rho})\bar{\rho}\right)\theta-s\mu \theta^{\prime\prime}\\ =\xi_{2}\left(g\bar{\rho}\psi-\left(s\left(\varsigma+\mu/3\right)+P^{\prime}( \bar{\rho})\bar{\rho}\right)\psi^{\prime}\right)-\xi_{1}\xi_{2}\left(s( \varsigma+\mu/3)+P^{\prime}(\bar{\rho})\bar{\rho}\right)\varphi,\\ \left(\alpha(s)\bar{\rho}+s\mu\left|\xi\right|^{2}\right)\psi-\left(\left(s \left(4\mu/3+\varsigma\right)+P^{\prime}(\bar{\rho})\bar{\rho}\right)\psi^{ \prime}\right)^{\prime}\\ =\left(\left(s\left(\varsigma+\mu/3\right)+P^{\prime}(\bar{\rho})\bar{\rho} \right)(\xi_{1}\varphi+\xi_{2}\theta)\right)^{\prime}+g\bar{\rho}(\xi_{1} \varphi+\xi_{2}\theta),\\ \llbracket\varphi\rrbracket=\llbracket\theta\rrbracket=\llbracket\psi\rrbracket=0,\\ \llbracket\mu(\varphi^{\prime}-\xi_{1}\psi)\rrbracket=\llbracket\mu(\theta^{\prime}-\xi_{2}\psi)\rrbracket=0,\\ \llbracket(s(\varsigma+\mu/3)+P^{\prime}(\bar{\rho})\bar{\rho})\left(\xi_{1} \varphi+\xi_{2}\theta+\psi^{\prime}\right)+s\mu\left(\psi^{\prime}-\xi_{1} \varphi-\xi_{2}\theta\right)\rrbracket=\vartheta\left|\xi\right|^{2}\psi,\\ \varphi(h_{-})=\varphi(h_{+})=\theta(h_{-})=\theta(h_{+})=\psi(h_{-})=\psi(h_{ +})=0.\end{cases} \tag{2.15}\]
In view of (2.10), the above modified problem has the following variational structure:
\[\alpha(s):=\sup_{\phi,\chi,\omega\in H^{1}_{0}(h_{-},h_{+})}\frac{F(\phi, \chi,\omega;s)}{\mathcal{J}(\phi,\chi,\omega)}, \tag{2.16}\]
see the definition of \(F(\varphi,\theta,\psi;s)\) in (2.10). Since both \(F\) and \(\mathcal{J}\) are homogeneous of degree \(2\), it holds that
\[\alpha(s)=\sup_{(\phi,\chi,\omega)\in\mathcal{A}}F(\phi,\chi,\omega;s), \tag{2.17}\]
where \(\mathcal{A}:=\{(\phi,\chi,\omega)\in(H^{1}_{0}(h_{-},h_{+}))^{3}\mid\mathcal{ J}(\phi,\chi,\omega)=1\}\). Thus we easily construct solutions to the modified problem (2.15) by utilizing a standard variational method.
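For completeness, the scaling computation behind the equality of (2.16) and (2.17) reads: for any \(t>0\),

\[\frac{F(t\phi,t\chi,t\omega;s)}{\mathcal{J}(t\phi,t\chi,t\omega)}=\frac{t^{2}F(\phi,\chi,\omega;s)}{t^{2}\mathcal{J}(\phi,\chi,\omega)}=\frac{F(\phi,\chi,\omega;s)}{\mathcal{J}(\phi,\chi,\omega)},\]

so every nonzero triple can be rescaled by \(t=\mathcal{J}(\phi,\chi,\omega)^{-1/2}\) into \(\mathcal{A}\) without changing the quotient.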
**Lemma 2.1**.: _Under the assumptions of Theorem 1.1, for any given \(s\in\mathbb{R}^{+}\) and any given \(\xi\in\mathbb{R}^{2}\), \(F(\phi,\chi,\omega;s)\) has a maximizer \((\varphi,\theta,\psi)\) in \(\mathcal{A}\). Moreover, the maximizer \((\varphi,\theta,\psi)\) belongs to \(H^{4}((h_{-},0)\cup(0,h_{+}))\) and satisfies (2.15) with \(\alpha(s)=F(\varphi,\theta,\psi;s)\)._
In what follows, we denote \(F(\cdot,\cdot,\cdot;s)\) by \(F(\cdot,\cdot,\cdot)\) for the sake of simplicity.

Proof.: (1) Recalling the definition of \(F(\varphi,\theta,\psi)\), we can derive that
\[F(\varphi,\theta,\psi)= g\int_{h_{-}}^{h_{+}}\bar{\rho}\left(\xi_{1}(\varphi^{2}+\psi^{2})+ \xi_{2}(\theta^{2}+\psi^{2})\right)\mathrm{d}y_{3}-\vartheta|\xi|^{2}\psi^{2}( 0)-sD(\varphi,\theta,\psi)\] \[-\int_{h_{-}}^{h_{+}}(P^{\prime}(\bar{\rho})\bar{\rho}(\xi_{1} \varphi+\xi_{2}\theta+\psi^{\prime})^{2}+g\bar{\rho}(\xi_{1}(\varphi-\psi)^{2} +\xi_{2}(\theta-\psi)^{2}))\mathrm{d}y_{3}\] \[\leqslant g(|\xi_{1}|+|\xi_{2}|), \tag{2.18}\]
which shows that \(F\) is bounded above on \(\mathcal{A}\). Therefore there exists a maximizing sequence \(\{(\varphi_{n},\theta_{n},\psi_{n})\}_{n=1}^{\infty}\subset\mathcal{A}\). Obviously \(\{\psi_{n}(0)\}_{n=1}^{\infty}\) is bounded in \(\mathbb{R}\), and \(\{\varphi_{n}\}_{n=1}^{\infty}\), \(\{\theta_{n}\}_{n=1}^{\infty}\) and \(\{\psi_{n}\}_{n=1}^{\infty}\) are bounded in \(H^{1}_{0}(h_{-},h_{+})\), so up to the extraction of a subsequence \((\varphi_{n},\theta_{n},\psi_{n})\rightharpoonup(\varphi,\theta,\psi)\) weakly in \((H^{1}_{0}(h_{-},h_{+}))^{3}\), and \((\varphi_{n},\theta_{n},\psi_{n})\to(\varphi,\theta,\psi)\) strongly in \((L^{2}(h_{-},h_{+}))^{3}\). In addition, the compact embedding \(H^{1}_{0}(h_{-},h_{+})\hookrightarrow C^{0}(h_{-},h_{+})\) implies that \(\psi_{n}(0)\to\psi(0)\) as well (referring to section 1.3.5.8 in [34]). Because of the quadratic structure of all the terms in the
integrals defining \(F(\varphi,\theta,\psi)\), weak lower semi-continuity (referring to section 1.4.2.7 in [34]) and strong \(L^{2}\)-convergence imply that
\[\alpha(s)\geqslant F(\varphi,\theta,\psi) \geqslant\lim_{n\to\infty}\left(\int_{h_{-}}^{h_{+}}2g\bar{\rho}( \xi_{1}\varphi_{n}+\xi_{2}\theta_{n})\psi_{n}\mathrm{d}y_{3}-\vartheta|\xi|^{2 }\psi_{n}^{2}(0)\right)\] \[\quad-\limsup_{n\to\infty}\left(\int_{h_{-}}^{h_{+}}P^{\prime}( \bar{\rho})\bar{\rho}(\xi_{1}\varphi_{n}+\xi_{2}\theta_{n}+\psi_{n}^{\prime})^ {2}\mathrm{d}y_{3}+sD(\varphi_{n},\theta_{n},\psi_{n})\right)\] \[\geqslant\limsup_{n\to\infty}F(\varphi_{n},\theta_{n},\psi_{n})= \alpha(s).\]
That \((\varphi,\theta,\psi)\in\mathcal{A}\) follows from the strong \(L^{2}\)-convergence. Hence \(F\) has a maximizer \((\varphi,\theta,\psi)\) in \(\mathcal{A}\).
(2) Next we further show that the maximizer \((\varphi,\theta,\psi)\) constructed above satisfies the Euler-Lagrange equations, which are equivalent to (2.15). Let \(\tilde{\varphi}\), \(\tilde{\theta}\), \(\tilde{\psi}\in H^{1}_{0}(h_{-},h_{+})\) be given. Then we define
\[f(t):=F(\varphi+t\tilde{\varphi},\theta+t\tilde{\theta},\psi+t\tilde{\psi})- \alpha\mathcal{J}(\varphi+t\tilde{\varphi},\theta+t\tilde{\theta},\psi+t\tilde {\psi}). \tag{2.19}\]
It is easy to see that \(f\in C^{\infty}(\mathbb{R})\), \(f(t)\leqslant 0\) for any \(t\in\mathbb{R}\) (since \(F\leqslant\alpha\mathcal{J}\) by the definition of \(\alpha\) as a supremum) and \(f(0)=0\). Therefore \(t=0\) is a maximum point of \(f\), so \(f^{\prime}(0)=0\), which implies that
\[\int_{h_{-}}^{h_{+}}\left(g\bar{\rho}((\xi_{1}\varphi+\xi_{2} \theta)\tilde{\psi}+(\xi_{1}\tilde{\varphi}+\xi_{2}\tilde{\theta})\psi)-P^{ \prime}(\bar{\rho})\bar{\rho}\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime} \right)\left(\xi_{1}\tilde{\varphi}+\xi_{2}\tilde{\theta}+\tilde{\psi}^{ \prime}\right)\right)\mathrm{d}y_{3}\] \[-\vartheta|\xi|^{2}\psi(0)\tilde{\psi}(0)-s\int_{h_{-}}^{h_{+}} \varsigma(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime})(\xi_{1}\tilde{\varphi}+ \xi_{2}\tilde{\theta}+\tilde{\psi}^{\prime})\mathrm{d}y_{3}\] \[-s\int_{h_{-}}^{h_{+}}\mu((\xi_{2}\varphi-\xi_{1}\theta)(\xi_{2} \tilde{\varphi}-\xi_{1}\tilde{\theta})+(\varphi^{\prime}-\xi_{1}\psi)(\tilde{ \varphi}^{\prime}-\xi_{1}\tilde{\psi})+(\theta^{\prime}-\xi_{2}\psi)(\tilde{ \theta}^{\prime}-\xi_{2}\tilde{\psi})\] \[+(\xi_{1}\varphi+\xi_{2}\theta-\psi^{\prime})(\xi_{1}\tilde{ \varphi}+\xi_{2}\tilde{\theta}-\tilde{\psi}^{\prime})+(\xi_{1}\varphi+\xi_{2} \theta+\psi^{\prime})(\xi_{1}\tilde{\varphi}+\xi_{2}\tilde{\theta}+\tilde{ \psi}^{\prime})/3)\mathrm{d}y_{3}\] \[=\alpha\int_{h_{-}}^{h_{+}}\bar{\rho}(\varphi\tilde{\varphi}+ \theta\tilde{\theta}+\psi\tilde{\psi})\mathrm{d}y_{3}. \tag{2.20}\]
Since \(\tilde{\varphi}\), \(\tilde{\theta}\) and \(\tilde{\psi}\) are independent, the above identity gives rise to the three weak forms:
\[\int_{h_{-}}^{h_{+}}\left(g\xi_{1}\bar{\rho}\psi-\xi_{1}P^{ \prime}(\bar{\rho})\bar{\rho}\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime} \right)-s\varsigma\xi_{1}(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime})\right) \tilde{\varphi}\mathrm{d}y_{3}\] \[-s\int_{h_{-}}^{h_{+}}\mu\left(\frac{1}{3}\left((\xi_{1}^{2}+3| \xi|^{2})\varphi+\xi_{1}\xi_{2}\theta-2\xi_{1}\psi^{\prime}\right)\tilde{ \varphi}+(\varphi^{\prime}-\xi_{1}\psi)\tilde{\varphi}^{\prime}\right)\mathrm{d }y_{3}=\alpha\int_{h_{-}}^{h_{+}}\bar{\rho}\varphi\tilde{\varphi}\mathrm{d}y_{3}, \tag{2.21}\]
\[\int_{h_{-}}^{h_{+}}\left(g\xi_{2}\bar{\rho}\theta-\xi_{2}P^{ \prime}(\bar{\rho})\bar{\rho}\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime} \right)-s\varsigma\xi_{2}(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime})\right) \tilde{\theta}\mathrm{d}y_{3}\] \[-s\int_{h_{-}}^{h_{+}}\mu\left(\frac{1}{3}\left(\xi_{1}\xi_{2} \varphi+(\xi_{2}^{2}+3|\xi|^{2})\theta-2\xi_{2}\psi^{\prime}\right)\tilde{ \theta}+(\theta^{\prime}-\xi_{2}\psi)\tilde{\theta}^{\prime}\right)\mathrm{d }y_{3}=\alpha\int_{h_{-}}^{h_{+}}\bar{\rho}\theta\tilde{\theta}\mathrm{d}y_{3} \tag{2.22}\]
and
\[\int_{h_{-}}^{h_{+}}\left(g\bar{\rho}(\xi_{1}\varphi+\xi_{2}\theta)\tilde{\psi}- P^{\prime}(\bar{\rho})\bar{\rho}\left(\xi_{1}\varphi+\xi_{2}\theta+\psi^{\prime} \right)\tilde{\psi}^{\prime}\right)\mathrm{d}y_{3}-\vartheta|\xi|^{2}\psi(0) \tilde{\psi}(0)\] \[+s\int_{h_{-}}^{h_{+}}\mu\left(\left(\left(\xi_{1}\varphi+\xi_{2} \theta\right)^{\prime}-|\xi|^{2}\psi\right)\tilde{\psi}+\left(2\left(\xi_{1} \varphi+\xi_{2}\theta\right)\tilde{\psi}^{\prime}-4\psi^{\prime}\tilde{\psi}^{ \prime}\right)/3\right)\mathrm{d}y_{3}\] \[-s\int_{h_{-}}^{h_{+}}\varsigma(\xi_{1}\varphi+\xi_{2}\theta+ \psi^{\prime})\tilde{\psi}^{\prime}\mathrm{d}y_{3}=\alpha\int_{h_{-}}^{h_{+}} \bar{\rho}\psi\tilde{\psi}\mathrm{d}y_{3}. \tag{2.23}\]
By making variations with \(\tilde{\varphi}\), \(\tilde{\theta}\) and \(\tilde{\psi}\) compactly supported in either \((h_{-},0)\) or \((0,h_{+})\), we find that \(\varphi\), \(\theta\) and \(\psi\) satisfy the equations (2.15)\({}_{1}\)-(2.15)\({}_{3}\) in a weak sense in \((h_{-},0)\) and \((0,h_{+})\). Standard bootstrapping arguments then show that \((\varphi,\theta,\psi)\) are in \(H^{4}((h_{-},0)\cup(0,h_{+}))\). This implies that the equations are satisfied pointwise on \((h_{-},0)\) and \((0,h_{+})\). Since \((\varphi,\theta,\psi)\in(H^{4}((h_{-},0)\cup(0,h_{+})))^{3}\), the traces of the functions and their derivatives are well-defined at the endpoints \(y_{3}=0\), \(h_{\pm}\).
To show that the jump conditions are satisfied, we make variations with respect to arbitrary \(\tilde{\varphi}\), \(\tilde{\theta}\), \(\tilde{\psi}\in C_{0}^{\infty}(h_{-},h_{+})\). Integrating the terms in (2.21) with derivatives of \(\tilde{\varphi}\) by parts and using that \(\varphi\) solves (2.15)\({}_{1}\) on \((h_{-},0)\cup(0,h_{+})\), we find that
\[\llbracket\mu(\varphi^{\prime}-\xi_{1}\psi)\rrbracket\tilde{\varphi}(0)=0. \tag{2.24}\]
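Here the jump term arises from splitting the integral at \(y_{3}=0\); as a brief sketch, assuming the jump convention \(\llbracket f\rrbracket=f|_{y_{3}=0^{+}}-f|_{y_{3}=0^{-}}\) and using \(\tilde{\varphi}(h_{\pm})=0\), integration by parts on \((h_{-},0)\) and \((0,h_{+})\) gives

\[\int_{h_{-}}^{h_{+}}\mu(\varphi^{\prime}-\xi_{1}\psi)\tilde{\varphi}^{\prime}\mathrm{d}y_{3}=-\llbracket\mu(\varphi^{\prime}-\xi_{1}\psi)\rrbracket\tilde{\varphi}(0)-\int_{h_{-}}^{h_{+}}(\mu(\varphi^{\prime}-\xi_{1}\psi))^{\prime}\tilde{\varphi}\mathrm{d}y_{3},\]

and all interior terms cancel against (2.15)\({}_{1}\).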
Since \(\tilde{\varphi}(0)\) may be chosen arbitrarily, we deduce \(\llbracket\mu(\varphi^{\prime}-\xi_{1}\psi)\rrbracket=0\) in (2.15)\({}_{5}\). Similarly, we also have \(\llbracket\mu(\theta^{\prime}-\xi_{2}\psi)\rrbracket=0\). Therefore (2.15)\({}_{5}\) holds. In addition, performing a similar integration by parts in (2.23) yields the jump condition (2.15)\({}_{6}\). Finally, the conditions (2.15)\({}_{4}\) and (2.15)\({}_{7}\) are satisfied trivially since \(\varphi\), \(\theta\), \(\psi\in H^{1}_{0}(h_{-},h_{+})\hookrightarrow C_{0}^{0,1/2}[h_{-},h_{+}]\).
The next lemma establishes the behavior of \(\alpha(s)\) with respect to \(s\).
**Lemma 2.2**.: _Under the assumptions of Theorem 1.1, for any given \(\xi\in\mathbb{R}^{2}\), let \(\alpha\): \(\mathbb{R}^{+}\to\mathbb{R}\) be given by (2.17). Then the following assertions hold._
1. \(\alpha\in C_{\mathrm{loc}}^{0,1}(\mathbb{R}^{+})\)_, and in particular_ \(\alpha\in C^{0}(\mathbb{R}^{+})\)_._
2. _There exists a positive constant_ \(b_{1}=b_{1}(\xi,\mu,\varsigma,h_{\pm})\) _so that_ \[\alpha(s)\leqslant g(|\xi_{1}|+|\xi_{2}|)-sb_{1}.\] (2.25)
3. \(\alpha(s)\) _is strictly decreasing._
4. _Let_ \[|\xi|_{\mathrm{c}}=\begin{cases}\sqrt{g\llbracket\bar{\rho}\rrbracket/\vartheta }&\text{if }\vartheta>0;\\ \infty&\text{if }\vartheta=0.\end{cases}\] _If_ \(\xi\) _satisfies_ \[|\xi|\in(0,|\xi|_{\mathrm{c}}),\] (2.26) _then there exists an_ \(s_{0}\)_, depending on_ \(\xi\)_,_ \(g\)_,_ \(\vartheta\)_,_ \(\mu\)_,_ \(\bar{\rho}\)_,_ \(h_{\pm}\) _and_ \(P_{\pm}\)_, such that_ \(\alpha(s)>0\) _for any_ \(s\in(0,s_{0})\)_._
Proof.: (1) Let \(Q=[a,b]\subset\mathbb{R}^{+}\) be a compact interval. By Lemma 2.1, for each \(s\in\mathbb{R}^{+}\), there exists \((\varphi_{s},\theta_{s},\psi_{s})\in\mathcal{A}\) so that
\[F(\varphi_{s},\theta_{s},\psi_{s};s)=\sup_{(\phi,\chi,\omega)\in\mathcal{A}}F( \phi,\chi,\omega;s)=\alpha(s). \tag{2.27}\]
We deduce from the non-negativity of \(D\), the maximality of \((\varphi_{s},\theta_{s},\psi_{s})\), and the equality in (2.18) that
\[F(\tilde{\varphi},\tilde{\theta},\tilde{\psi};b)\leqslant F(\tilde{\varphi}, \tilde{\theta},\tilde{\psi};s)\leqslant F(\varphi_{s},\theta_{s},\psi_{s};s) \leqslant g(|\xi_{1}|+|\xi_{2}|)-sD(\varphi_{s},\theta_{s},\psi_{s}) \tag{2.28}\]
for all \((\tilde{\varphi},\tilde{\theta},\tilde{\psi})\in{\cal A}\) and all \(s\in Q\). Fixing one such triple \((\tilde{\varphi},\tilde{\theta},\tilde{\psi})\in{\cal A}\), this implies that the constant \(0<\tilde{b}_{1}:=(g(|\xi_{1}|+|\xi_{2}|)-F(\tilde{\varphi},\tilde{\theta}, \tilde{\psi};b))/a<\infty\) satisfies
\[\sup_{s\in Q}D(\varphi_{s},\theta_{s},\psi_{s})\leqslant\tilde{b}_{1}. \tag{2.29}\]
Let \(s_{i}\in Q\) for \(i=1,2\). Then we may bound
\[\alpha(s_{1})= F(\varphi_{s_{1}},\theta_{s_{1}},\psi_{s_{1}};s_{2})+(s_{2}-s_{ 1})D(\varphi_{s_{1}},\theta_{s_{1}},\psi_{s_{1}})\] \[\leqslant \alpha(s_{2})+|s_{1}-s_{2}|D(\varphi_{s_{1}},\theta_{s_{1}}, \psi_{s_{1}}), \tag{2.30}\]
which, together with (2.29), yields
\[\alpha(s_{1})\leqslant\alpha(s_{2})+\tilde{b}_{1}|s_{1}-s_{2}|. \tag{2.31}\]
Reversing the roles of the indices \(1\) and \(2\) in the derivation of the above inequality gives the same bound with the indices switched. Therefore we deduce that
\[|\alpha(s_{1})-\alpha(s_{2})|\leqslant\tilde{b}_{1}|s_{1}-s_{2}|, \tag{2.32}\]
which proves the first assertion.
(2) To prove (2.25) we note that the equality in (2.18) and the non-negativity of \(D\) imply that
\[\alpha(s)\leqslant g(|\xi_{1}|+|\xi_{2}|)-s\inf_{(\varphi,\theta,\psi)\in{ \cal A}}D(\varphi,\theta,\psi), \tag{2.33}\]
It is a simple matter to see that this infimum, which we denote by \(b_{1}\), is positive.
(3) To prove the third assertion, note that if \(0<s_{1}<s_{2}<\infty\), then
\[\alpha(s_{1})=F(\varphi_{s_{1}},\theta_{s_{1}},\psi_{s_{1}};s_{1})\geqslant F (\varphi_{s_{2}},\theta_{s_{2}},\psi_{s_{2}};s_{1})\geqslant F(\varphi_{s_{2} },\theta_{s_{2}},\psi_{s_{2}};s_{2})=\alpha(s_{2}). \tag{2.34}\]
This shows that \(\alpha\) is non-increasing in \(s\). Now suppose by way of contradiction that \(\alpha(s_{1})=\alpha(s_{2})\), then the previous relation (2.34) implies that
\[s_{1}D(\varphi_{s_{2}},\theta_{s_{2}},\psi_{s_{2}})=s_{2}D(\varphi_{s_{2}},\theta _{s_{2}},\psi_{s_{2}}), \tag{2.35}\]
which means that \(D(\varphi_{s_{2}},\theta_{s_{2}},\psi_{s_{2}})=0\). This in turn forces \(\varphi_{s_{2}}=\theta_{s_{2}}=\psi_{s_{2}}=0\), which contradicts the fact that \((\varphi_{s_{2}},\theta_{s_{2}},\psi_{s_{2}})\in{\cal A}\). Hence equality \(\alpha(s_{1})=\alpha(s_{2})\) cannot be achieved, and \(\alpha\) is strictly decreasing in \(s\).
(4) Finally, we prove the fourth assertion. Obviously, it suffices to show that, under the condition (2.26),
\[\sup_{(\phi,\chi,\omega)\in(H^{1}_{0}(h_{-},h_{+}))^{3}}\frac{F(\phi,\chi, \omega)}{{\cal J}(\phi,\chi,\omega)}>0, \tag{2.36}\]
but since \({\cal J}\) is positive definite, we may reduce to constructing a triple \((\varphi,\theta,\psi)\in(H^{1}_{0}(h_{-},h_{+}))^{3}\) such that \(F(\varphi,\theta,\psi)>0\). Next we divide the construction into two cases.
(a) If \(\xi_{1}\neq 0\), we let \(\theta=0\) and \(\varphi=-\psi^{\prime}/\xi_{1}\). To obtain (2.36), we must then construct \(\psi\in H^{2}_{0}(h_{-},h_{+})\) so that
\[\tilde{F}(\psi):= F(-\psi^{\prime}/\xi_{1},0,\psi)\] \[= -\int_{h_{-}}^{h_{+}}\left(2g\bar{\rho}\psi\psi^{\prime}+s\mu\left( \left(\frac{\psi^{\prime\prime}}{\xi_{1}}+\xi_{1}\psi\right)^{2}+(\xi_{2}\psi)^{ 2}+(4+(\xi_{2}/\xi_{1})^{2})(\psi^{\prime})^{2}\right)\right){\rm d}y_{3}\] \[-\vartheta|\xi|^{2}\psi^{2}(0)>0, \tag{2.37}\]
where, by integration by parts and (1.9)\({}_{1}\),
\[-2\int_{h_{-}}^{h_{+}}g\bar{\rho}\psi\psi^{\prime}{\rm d}y_{3}= g\llbracket\bar{\rho}\rrbracket\psi^{2}(0)+\int_{h_{-}}^{h_{+}}g\bar{ \rho}^{\prime}\psi^{2}{\rm d}y_{3}=g\llbracket\bar{\rho}\rrbracket\psi^{2}(0)- g^{2}\int_{h_{-}}^{h_{+}}\frac{\bar{\rho}}{P^{\prime}(\bar{\rho})}\psi^{2}{ \rm d}y_{3}. \tag{2.38}\]
Now we define the test function \(\psi_{\beta}\in H^{2}_{0}(h_{-},h_{+})\) for \(\beta\geqslant 5\) according to
\[\psi_{\beta}(y_{3})=\begin{cases}\left(1-\frac{y_{3}^{2}}{h_{+}^{2}}\right)^{ \beta/2}&\text{for $y_{3}\in[0,h_{+})$;}\\ \left(1-\frac{y_{3}^{2}}{h_{-}^{2}}\right)^{\beta/2}&\text{for $y_{3}\in(h_{-},0)$.} \end{cases}\]
By Lebesgue's dominated convergence theorem, it is easy to see that
\[\int_{h_{-}}^{h_{+}}(\psi_{\beta})^{2}{\rm d}y_{3}=o(\beta), \tag{2.39}\]
where \(o(\beta)\) represents a quantity that vanishes as \(\beta\to\infty\).
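Indeed, (2.39) is a routine dominated convergence computation: on \([0,h_{+})\),

\[0\leqslant\psi_{\beta}^{2}(y_{3})=\left(1-\frac{y_{3}^{2}}{h_{+}^{2}}\right)^{\beta}\leqslant 1,\qquad\psi_{\beta}^{2}(y_{3})\to 0\text{ as }\beta\to\infty\text{ for every }y_{3}\in(0,h_{+}),\]

and analogously on \((h_{-},0)\), so the integral tends to \(0\) as \(\beta\to\infty\).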
In addition,
\[\int_{h_{-}}^{h_{+}}\mu\left(\left(\frac{\psi_{\beta}^{\prime\prime}}{\xi_{1} }+\xi_{1}\psi_{\beta}\right)^{2}+(\xi_{2}\psi_{\beta})^{2}+(4+(\xi_{2}/\xi_{1 })^{2})(\psi_{\beta}^{\prime})^{2}\right){\rm d}y_{3}\leqslant\tilde{b}_{2} \tag{2.40}\]
for some constant \(\tilde{b}_{2}\) depending on \(\beta\), \(\xi\), \(\mu\) and \(h_{\pm}\). Exploiting (2.37)-(2.40) and the fact \(\psi_{\beta}(0)=1\), we find that
\[\tilde{F}(\psi_{\beta})\geqslant g\llbracket\bar{\rho}\rrbracket-\vartheta\left|\xi \right|^{2}+o(\beta)-\tilde{b}_{2}s, \tag{2.41}\]
where \(o(\beta)\) again represents a quantity that vanishes as \(\beta\to\infty\), and depends on \(g\), \(\bar{\rho}\), \(h_{\pm}\) and \(P_{\pm}^{\prime}\). Since \(\xi\) satisfies the condition (2.26), we may then fix \(\beta\) sufficiently large so that the sum of the first three terms is at least half of the sum of the first two terms. Then there exists \(s_{0}>0\) depending on \(\xi\), \(g\), \(\vartheta\), \(\mu\), \(\bar{\rho}\), \(h_{\pm}\) and \(P_{\pm}\), so that for \(s\leqslant s_{0}\) it holds that
\[\tilde{F}(\psi_{\beta})>0, \tag{2.42}\]
thereby proving the desired result for \(\xi_{1}\neq 0\).
(b) If
\[\xi_{1}=0\text{ and }\xi_{2}\neq 0, \tag{2.43}\]
we let \(\varphi=0\) and \(\theta=-\psi^{\prime}/\xi_{2}\), and thus
\[F(0,-\psi^{\prime}/\xi_{2},\psi)\] \[=-\int_{h_{-}}^{h_{+}}\left(2g\bar{\rho}\psi\psi^{\prime}+s\mu \left(\left(\frac{\psi^{\prime\prime}}{\xi_{2}}+\xi_{2}\psi\right)^{2}+(\xi_{ 1}\psi)^{2}+(4+(\xi_{1}/\xi_{2})^{2})(\psi^{\prime})^{2}\right)\right){\rm d}y _{3}\] \[\quad-\vartheta|\xi|^{2}\psi^{2}(0)>0. \tag{2.44}\]
Comparing the structures of (2.37) and (2.44), and recalling the derivation of (2.42), we easily see that the fourth assertion also holds in the case (2.43). This completes the proof.
With Lemmas 2.1-2.2 in hand, we are in a position to prove the existence of solutions to (2.3)-(2.9).
**Proposition 2.1**.: _Let the assumptions in Theorem 1.1 and the condition (2.26) be satisfied._
1. _There exist solutions_ \(\varphi(\xi,y_{3})\)_,_ \(\theta(\xi,y_{3})\) _and_ \(\psi(\xi,y_{3})\) _with_ \(\lambda(\xi)>0\) _to the boundary value problem of (_2.3_)-(_2.9_). Moreover,_ \(\varphi(\xi,y_{3})\)_,_ \(\theta(\xi,y_{3})\)_,_ \(\psi(\xi,y_{3})\in H^{4}((h_{-},0)\cup(0,h_{+}))\)_,_ \[\lambda^{2}=\sup_{(\phi,\chi,\omega)\in\mathcal{A}}F(\phi,\chi,\omega;\lambda)= F(\varphi,\theta,\psi;\lambda)>0\] (2.45) _and_ \[\psi(\xi,0),\ |\varphi(\xi,y_{3})|+|\theta(\xi,y_{3})|,\ \psi(\xi,y_{3})\neq 0.\] (2.46)
2. _Let_ \(\varphi(\xi,y_{3})\)_,_ \(\theta(\xi,y_{3})\) _and_ \(\psi(\xi,y_{3})\) _with_ \(\lambda(\xi)>0\) _be constructed above. Then_ \[\lambda(\xi)=\lambda(-\xi),\] (2.47) _and_ \(\varphi(\xi,y_{3})\)_,_ \(\theta(\xi,y_{3})\)_,_ \(-\psi(\xi,y_{3})\) _with_ \(\lambda(-\xi)>0\) _are also solutions of the boundary value problem of (_2.3_)-(_2.9_)._
Proof.: (1) By Lemma 2.2, there exists a positive constant \(\gamma\) such that
\[\alpha\in C^{0}((0,\gamma]),\quad\alpha(s)>0\ \text{for any}\ s\in(0,\gamma)\quad\text{and}\quad\alpha(\gamma)=0; \tag{2.48}\]
moreover, \(\alpha(s)\) is strictly decreasing. Now we define
\[\beta(s):=\sqrt{\alpha(s)}, \tag{2.49}\]
then \(\beta\in C^{0}((0,\gamma])\) is strictly decreasing and \(\beta(s)>0\) for any \(s\in(0,\gamma)\). Obviously there exists a unique \(\lambda\) such that \(\beta(\lambda)=\lambda\); in particular, \(\lambda^{2}=\alpha(\lambda)\). Moreover, by Lemma 2.1, there exists \((\varphi(\xi,\cdot),\theta(\xi,\cdot),\,\psi(\xi,\cdot))\), which belongs to \(\mathcal{A}\cap(H^{4}((h_{-},0)\cup(0,h_{+})))^{3}\) and is the solution to (2.15) with \(s=\lambda\) and \(\alpha(s)=\lambda^{2}\). In particular, \((\varphi(\xi,\cdot),\theta(\xi,\cdot),\,\psi(\xi,\cdot))\) with \(\lambda(\xi)\) is also the desired solution to the problem (2.3)-(2.9) due to \(\lambda>0\), and satisfies \(\lambda^{2}=F(\varphi,\theta,\psi;\lambda)\). Hence (2.45) holds. In addition, by the definitions of \(E\) and \(F\), the identity (2.14) and the fact \(F(\varphi,\theta,\psi;\lambda)>0\), we easily get (2.46).
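For clarity, we sketch the fixed-point argument used above: the auxiliary function

\[h(s):=\beta(s)-s,\qquad s\in(0,\gamma],\]

is continuous and strictly decreasing, with \(h(\gamma)=-\gamma<0\) and \(h(s)\geqslant\beta(\gamma/2)-s>0\) for \(s\in(0,\min\{\gamma/2,\beta(\gamma/2)\})\), so the intermediate value theorem provides a unique \(\lambda\in(0,\gamma)\) with \(h(\lambda)=0\), i.e. \(\beta(\lambda)=\lambda\).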
(2) Recalling the definition of \(\alpha\), we can see that \(\alpha(\xi,s)=\alpha(-\xi,s)\). Therefore we immediately get (2.47) by recalling the construction of \(\lambda(\xi)\). Finally, since \((\varphi(\xi,\cdot),\theta(\xi,\cdot),\psi(\xi,\cdot))\) with \(\lambda(\xi)\) is the solution of the boundary value problem of (2.3)-(2.9), we easily observe that \((\varphi(\xi,\cdot),\theta(\xi,\cdot),-\psi(\xi,\cdot))\) with \(\lambda(-\xi)\) is also a solution of (2.3)-(2.9). The proof of Proposition 2.1 is complete.
### Behavior of \(\lambda\) with respect to \(\xi\)
In this section we will study the behavior of \(\lambda\) from Proposition 2.1 in terms of \(\xi\).
**Lemma 2.3**.: _Let the assumptions in Theorem 1.1 and the condition (2.26) be satisfied. The function \(\lambda:(0,|\xi|_{\rm c})\to\mathbb{R}^{+}\) is continuous and satisfies the upper bound_
\[\lambda(\xi)\leqslant\frac{3h_{+}g\llbracket\bar{\rho}\rrbracket}{2\mu_{+}}. \tag{2.50}\]
_Moreover,_
\[\lim_{|\xi|\to 0^{+}}\lambda(\xi)=0, \tag{2.51}\]
_and, if \(\vartheta>0\), then also_
\[\lim_{|\xi|\to|\xi|_{\rm c}^{-}}\lambda(\xi)=0. \tag{2.52}\]
Proof. (1) Recalling the definition of \(\alpha\), we easily see that \(\alpha\) depends on \(\xi\) for any given positive constant \(s\). Therefore we denote \(\alpha\) by \(\alpha(\xi)\) or \(\alpha(\xi,s)\). To obtain the desired continuity claim of \(\lambda(\xi)\), we will first prove that, for any given \(s>0\), \(\alpha(\xi)\) is continuous with respect to \(\xi\). To this end, we choose a point \(\xi^{0}\in\mathbb{R}^{2}\) such that \(|\xi^{0}|\in(0,\varpi)\), where we have defined that
\[\varpi:=\begin{cases}|\xi|_{\rm c}&\text{if }\vartheta>0;\\ 2|\xi^{0}|&\text{if }\vartheta=0.\end{cases}\]
Next we prove that \(\alpha(\xi)\) is continuous at \(\xi^{0}\) for given \(s>0\).
Without loss of generality, it suffices to consider the case \(\xi^{0}_{1}\neq 0\). We assume that \(\xi\) satisfies

\[|\xi|\in(0,\varpi)\text{ and }\sigma:=|\xi-\xi^{0}|\leqslant\min\{1,|\xi^{0}_{1 }|/2\}. \tag{2.53}\]
By virtue of Lemma 2.1, for any given \(s>0\) and any given \(\xi\), there exists a triple
\[(\varphi^{\xi},\theta^{\xi},\psi^{\xi}):=(\varphi(\xi,y_{3}),\theta( \xi,y_{3}),\psi(\xi,y_{3}))\in\mathcal{A}, \tag{2.54}\]
such that
\[\alpha(\xi)=E(\varphi^{\xi},\theta^{\xi},\psi^{\xi})-sD(\varphi^{\xi}, \theta^{\xi},\psi^{\xi}). \tag{2.55}\]
In addition, we easily see from the derivation of (2.41) and the definition of \(\alpha(\xi)\) that, for some positive constant \(\tilde{b}_{3}:=\tilde{b}_{3}(\xi^{0}_{1},\varpi,g,\vartheta,\mu,\bar{\rho},h_ {\pm},P_{\pm})\),
\[\alpha(\xi)\geqslant(g\llbracket\bar{\rho}\rrbracket-\vartheta\,|\xi|^{2})/2- \tilde{b}_{3}s.\]
Exploiting (2.54) and Young's inequality, we can derive from (2.55) and the above inequality that, for some positive constant \(\tilde{b}_{4}:=\tilde{b}_{4}(\xi^{0}_{1},\varpi,s,g,\vartheta,\mu,\bar{\rho}, h_{\pm},\,P_{\pm})\),
\[\|(\varphi^{\xi},\theta^{\xi},\psi^{\xi})\|_{H^{1}_{0}(h_{-},h_{+}) }\leqslant\tilde{b}_{4}. \tag{2.56}\]
Let \(\sigma_{i}=\xi_{i}-\xi^{0}_{i}\). Then plugging \(\xi_{i}=\xi^{0}_{i}+\sigma_{i}\) into the expression of \(\alpha(\xi)\) in (2.55), and then exploiting (2.56) and the condition \(|\sigma_{i}|\leqslant\sigma\leqslant 1\), we can estimate that, for some positive constant \(\tilde{b}_{5}:=\tilde{b}_{5}(\xi^{0}_{1},\varpi,s,g,\vartheta,\mu,\varsigma, \bar{\rho},h_{\pm},P_{\pm})\),
\[\alpha(\xi)-\alpha(\xi^{0})\leqslant\tilde{b}_{5}\sigma.\]
Similarly, we also have
\[\alpha(\xi^{0})-\alpha(\xi)\leqslant\tilde{b}_{6}\sigma,\]
which, together with the previous inequality, implies, for any given \(s>0\),
\[\alpha(\xi,s)\to\alpha(\xi^{0},s)\text{ as }\xi\to\xi^{0}\text{ with }\xi\text{ satisfying (2.53)}. \tag{2.57}\]
Now we prove the continuity of \(\lambda(\xi)\). Let \(\beta(s)\) be defined by (2.49). Since \(\beta(s)\) and its domain of definition \((0,\gamma)\) depend on \(\xi\), we denote them by \(\beta(\xi,s)\) and \((0,\gamma_{\xi})\), respectively.
Recalling (2.57) and the proof of the first assertion in Proposition 2.1, we conclude the following two facts.
* for any \(\varepsilon>0\), there exists a \(\delta>0\) such that, for any \(\xi\) satisfying \(|\xi|\in(0,\varpi)\) and \(|\xi-\xi^{0}|\leqslant\delta\), it holds that \(|\beta(\xi,s_{\xi^{0}})-\lambda(\xi^{0},s_{\xi^{0}})|<\varepsilon\), where \[(0,\gamma_{\xi})\ni s_{\xi^{0}}=\lambda(\xi^{0},s_{\xi^{0}})= \beta(\xi^{0},s_{\xi^{0}})=\sqrt{\alpha(\xi^{0},s_{\xi^{0}})}>0.\]
* for each fixed \(|\xi|\in(0,|\xi|_{c})\), \(\beta(\xi,s)\) is continuous and strictly decreasing with respect to \(s\in(0,\gamma_{\xi})\), and there exists a unique \(s_{\xi}\in(0,\gamma_{\xi})\) satisfying \(\lambda(\xi,s_{\xi})=\beta(\xi,s_{\xi})=s_{\xi}>0\).
Consequently, we immediately infer that \(|\lambda(\xi,s_{\xi})-\lambda(\xi^{0},s_{\xi^{0}})|<\varepsilon\) with \(s_{\xi}=\lambda(\xi,s_{\xi})\) from the above two facts. Therefore, \(\lambda(\xi)\) is continuous on \((0,|\xi|_{c})\).
(2) Now we turn to the derivation of the upper bound (2.50). Thanks to (2.14) and (2.45), there exists a triple \((\varphi,\theta,\psi)\in\mathcal{A}\) such that
\[\lambda^{2}=(g\llbracket\bar{\rho}\rrbracket-\vartheta|\xi|^{2})\psi^{2}(0)- \int_{h_{-}}^{h_{+}}P^{\prime}(\bar{\rho})\bar{\rho}\left(\frac{g}{P^{\prime}( \bar{\rho})}\psi-\xi_{1}\varphi-\xi_{2}\theta-\psi^{\prime}\right)^{2}\mathrm{ d}y_{3}-\lambda D(\varphi,\theta,\psi). \tag{2.58}\]
By the Newton-Leibniz formula and Hölder's inequality, it is easy to see that
\[|\psi(0)|^{2}= \frac{1}{4}\left|\int_{0}^{h_{+}}(\psi^{\prime}-\xi_{1}\varphi- \xi_{2}\theta+\psi^{\prime}+\xi_{1}\varphi+\xi_{2}\theta)\mathrm{d}y_{3} \right|^{2}\] \[\leqslant \frac{h_{+}}{2}\int_{0}^{h_{+}}((\psi^{\prime}-\xi_{1}\varphi- \xi_{2}\theta)^{2}+(\psi^{\prime}+\xi_{1}\varphi+\xi_{2}\theta)^{2})\mathrm{d }y_{3}\leqslant\frac{3h_{+}}{2\mu_{+}}D(\varphi,\theta,\psi). \tag{2.59}\]
Putting the above estimate into (2.58) yields
\[\lambda^{2}\leqslant\left(\frac{3h_{+}g\llbracket\bar{\rho}\rrbracket}{2\mu_{ +}}-\lambda\right)D(\varphi,\theta,\psi),\]
which, together with the fact \(D\geqslant 0\), implies (2.50); indeed, if \(\lambda\) exceeded the right-hand side of (2.50), then the right-hand side above would be non-positive, contradicting \(\lambda^{2}>0\).
(3) Finally we derive the limits as \(|\xi|\to 0^{+}\) and \(|\xi|\to|\xi|_{\mathrm{c}}^{-}\). By the identity in (2.18) and the first conclusion in Proposition 2.1, for any \(\xi\) satisfying \(|\xi|\in(0,|\xi|_{c})\), there exist \((\varphi,\theta,\psi)\in\mathcal{A}\) and an associated \(\lambda\) such that
\[0<\lambda^{2}=F(\varphi,\theta,\psi)\leqslant g\int_{h_{-}}^{h_{+}}\bar{\rho} \left(\xi_{1}(\varphi^{2}+\psi^{2})+\xi_{2}(\theta^{2}+\psi^{2})\right) \mathrm{d}y_{3}-\vartheta|\xi|^{2}\psi^{2}(0),\]
which yields (2.51) and
\[\psi^{2}(0)<g(|\xi_{1}|+|\xi_{2}|)/(\vartheta|\xi|^{2}),\]
but by (2.14) we also know that
\[\lambda^{2}(\xi)\leqslant(g\llbracket\bar{\rho}\rrbracket-\vartheta|\xi|^{2}) \psi^{2}(0).\]
Chaining the above two inequalities together then shows that \(\lim_{|\xi|\to|\xi|_{\mathrm{c}}^{-}}\lambda(\xi)=0\) for \(\vartheta>0\).
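Explicitly, the chain reads

\[\lambda^{2}(\xi)\leqslant(g\llbracket\bar{\rho}\rrbracket-\vartheta|\xi|^{2})\psi^{2}(0)<(g\llbracket\bar{\rho}\rrbracket-\vartheta|\xi|^{2})\frac{g(|\xi_{1}|+|\xi_{2}|)}{\vartheta|\xi|^{2}},\]

where the first factor tends to \(0\) as \(|\xi|\to|\xi|_{\mathrm{c}}^{-}\) while the second factor remains bounded near \(|\xi|_{\mathrm{c}}\). This completes the proof of Lemma 2.3.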
### Solutions to the linearized periodic RT problem (2.1)
In this section we will construct a real-valued solution to the linearized RT problem (2.1) which grows in time in \(H^{4}\).
**Proposition 2.2**.:
1. _Let_ \[\Lambda:=\sup_{|\xi|<|\xi|_{c}}\lambda(\xi)<\infty.\] (2.60) _Then there exists a frequency_ \(\xi^{1}\)_, such that_ \(|\xi^{1}|\in(0,|\xi|_{c})\) _and_ \[\lambda(\xi^{1})\in(2\Lambda/3,\Lambda].\] (2.61)
2. _Let_ \(\xi^{2}=-\xi^{1}\)_,_ \(c_{7}=\lambda(\xi^{1})\)_,_ \(\varphi(\xi^{1})\)_,_ \(\theta(\xi^{1})\) _and_ \(\psi(\xi^{1})\) _be the solutions provided by the first assertion in Proposition_ 2.1_, and_ \((\varphi(\xi^{2}),\theta(\xi^{2}),\psi(\xi^{2}))=(\varphi(\xi^{1}),\theta(\xi^{ 1}),-\psi(\xi^{1}))\)_. We define_ \[w(\xi^{j},y_{3})=-\mathrm{i}\varphi(\xi^{j},y_{3})\mathbf{e}^{1}- \mathrm{i}\theta(\xi^{j},y_{3})\mathbf{e}^{2}+\psi(\xi^{j},y_{3})\mathbf{e}^{3} \text{ for }j=1,\ 2,\] \[\tilde{u}^{0}(y):=\sum_{j=1}^{2}(-1)^{j-1}w(\xi^{j},y_{3})e^{ \mathrm{i}y_{\mathrm{h}}\cdot\xi^{j}}\text{ and }\tilde{\eta}^{0}(y)=\tilde{u}^{0}(y)/c_{7},\] (2.62) _Then_ \[\eta(y,t)=e^{c_{7}t}\tilde{\eta}^{0}(y)\text{ and }u(y,t)=e^{c_{7}t}\tilde{u}^{0}(y)\] (2.63) _are real solutions to the linearized periodic RT problem (_2.1_) with, for_ \(k=1\) _and_ \(2\)_,_ \[L_{k}:=\begin{cases}1/|\xi_{k}^{1}|&\text{if }\xi_{k}^{1}\neq 0;\\ 1&\text{if }\xi_{k}^{1}=0.\end{cases}\] (2.64)
3. _There exist constants_ \(b_{i}\)_, possibly depending on_ \(\xi^{1}\)_,_ \(h_{\pm}\)_,_ \(\varphi(\xi^{1},y_{3})\)_,_ \(\theta(\xi^{1},y_{3})\) _and_ \(\psi(\xi^{1},y_{3})\)_, such that_ \[\sum_{\beta_{1}+\beta_{2}+\beta_{3}\leqslant 4,\ 1\leqslant\beta_{1}+\beta_{2}}\| \partial_{1}^{\beta_{1}}\partial_{2}^{\beta_{2}}\chi_{n,n}\partial_{3}^{\beta _{3}}\tilde{u}^{0}\|_{0}^{2}/n\leqslant b_{2},\] (2.65) \[\|\chi_{n,n}\tilde{u}^{0}\|_{4}/n\to b_{3},\] (2.66) \[\|\chi_{n,n}\tilde{u}^{0}_{\mathrm{h}}\|_{0}/n\to b_{4},\] (2.67) \[\|\chi_{n,n}\tilde{u}^{0}_{3}\|_{0}/n\to b_{5}\text{ and }|\chi_{n,n}\tilde{u}^{0}_{3}|_{0}/n\to b_{6},\] (2.68) _as_ \(n\to\infty\)_, where_ \(b_{2}\geqslant 0\) _and_ \(b_{i}>0\) _for_ \(3\leqslant i\leqslant 6\)_._
Proof. (1) The first assertion in Proposition 2.2 is obvious, since \(\lambda(\xi)\) is bounded and continuous by Lemma 2.3.
(2) Recalling Proposition 2.1 and the derivation of (2.3)-(2.9) from (2.1), it is easy to observe that \((\eta,u)\) defined by (2.63) is a real solution to the linearized periodic RT problem (2.1) with \(L_{k}\) defined by (2.64).
(3) Recalling the definitions of \(\tilde{u}^{0}_{1}\) and \(\chi_{n,n}\), for any given multi-index \((\beta_{1},\beta_{2},\beta_{3})\) satisfying \(\beta_{1}+\beta_{2}+\beta_{3}\leqslant 4\) and \(\beta_{1}+\beta_{2}\geqslant 1\), there exists a non-negative constant \(\tilde{b}_{7}:=\tilde{b}_{7}(\beta,h_{\pm},\varphi(\xi^{1},y_{3}))\) such that
\[\|\partial_{1}^{\beta_{1}}\partial_{2}^{\beta_{2}}\chi_{n,n} \partial_{3}^{\beta_{3}}\tilde{u}^{0}_{1}\|_{0}^{2}/n\] \[=4n^{-1}\int_{h_{-}}^{h_{+}}\partial_{3}^{\beta_{3}}\varphi^{2}( \xi^{1},y_{3})\mathrm{d}y_{3}\int_{-n}^{n}\int_{-n}^{n}(\partial_{1}^{\beta_{ 1}}\chi_{n}(y_{1})\partial_{2}^{\beta_{2}}\chi_{n}(y_{2})\sin(\xi^{1}\cdot y_{ \mathrm{h}}))^{2}\mathrm{d}y_{1}\mathrm{d}y_{2}\] \[\leqslant 16\int_{h_{-}}^{h_{+}}\partial_{3}^{\beta_{3}}\varphi^{2}( \xi^{1},y_{3})\mathrm{d}y_{3}\sup_{|y_{1}|<n}\{(\partial_{1}^{\beta_{1}}\chi_ {n}(y_{1}))^{2}\}\sup_{|y_{2}|<n}\{(\partial_{2}^{\beta_{2}}\chi_{n}(y_{2}))^{2 }\}=:\tilde{b}_{7} \tag{2.69}\]
for any \(n\). Similarly we also have \(\|\partial_{1}^{\beta_{1}}\partial_{2}^{\beta_{2}}\chi_{n,n}\partial_{3}^{\beta _{3}}\tilde{u}^{0}_{i}\|_{0}^{2}/n\leqslant\tilde{b}_{6+i}\), where \(\tilde{b}_{6+i}\geqslant 0\) for \(i=2,\,3\). Thus we arrive at (2.65).
Now we derive (2.66). From now on, we assume that \(\beta_{1}+\beta_{2}\geqslant 0\) and make the convention that \(0^{0}=1\). It is easy to see that
\[n^{-2}\int_{(-n,n)^{2}}\cos(2\xi^{1}\cdot y_{\mathrm{h}})\mathrm{d}y_{\mathrm{h}}\to 0\text{ as }n\to\infty,\]

so that, in the limits below, \(\sin^{2}(\xi^{1}\cdot y_{\mathrm{h}})\) contributes its average value \(1/2\). In particular,
\[\|\chi_{n,n}\tilde{u}_{1}^{0}\|_{4}^{2}/n^{2}\to\tilde{b}_{10}\geqslant\sum_{\beta_{1}+ \beta_{2}+\beta_{3}\leqslant 4}8|\xi_{1}|^{2\beta_{1}}|\xi_{2}|^{2\beta_{2}}\int_{h_{-}}^{h_ {+}}|\partial_{3}^{\beta_{3}}\varphi(\xi^{1},y_{3})|^{2}\mathrm{d}y_{3}.\]
Moreover \(\tilde{b}_{10}+\tilde{b}_{11}>0\) due to \(|\varphi(\xi,y_{3})|+|\theta(\xi,y_{3})|\neq 0\), \(\psi(\xi,y_{3})\neq 0\) in (2.46) and \(|\xi|\neq 0\). Thanks to the above two limits, we immediately get (2.66).
In view of the derivation of (2.66), we easily observe that (2.67) and (2.68) also hold. This completes the proof of Proposition 2.2.
## 3 Gronwall-type energy inequality
Now we derive _a priori_ the Gronwall-type energy inequality for the solution \((\eta,u)\) of the RT problem. To this end, we assume that \((\eta,u)\) satisfies
\[\sup_{0\leqslant t\leqslant T}(\|\eta(t)\|_{3}+\|u(t)\|_{2})\leqslant\delta\text{ for some }T>0, \tag{3.1}\]
where \(\delta\in(0,\iota]\) is a sufficiently small constant, and \(\iota\) is the constant in Lemma A.10. It should be noted that the smallness of \(\delta\) depends on the domain \(\Omega\) and other known physical functions/parameters in the RT problem, such as \(g\), \(\vartheta\), \(\mu\), \(\varsigma\), \(\bar{\rho}\), \(h_{\pm}\) and \(P_{\pm}(\tau)\); this will be repeatedly used in what follows. In addition, by virtue of Lemma A.10 and (3.1) with sufficiently small \(\delta\), the expressions of \(J^{-1}\), \(\mathbf{n}\), \(\mathcal{H}\) and \(\mathbf{N}_{g}\) in Section 1.2 make sense.
### Estimates involving \(J\) and \(\mathcal{A}\)
In this section, we derive some estimates involving the determinant \(J\) and matrix \(\mathcal{A}\).
**Lemma 3.1**.: _Under the assumption_
\[\sup_{0\leqslant t\leqslant T}\|\eta(t)\|_{3}\leqslant\delta\text{ with sufficiently small }\delta, \tag{3.2}\]
_the determinant \(J\) enjoys the following estimates: for \(0\leqslant i\leqslant 2\),_
\[1\lesssim\inf_{y\in\Omega}J\leqslant\sup_{y\in\Omega}J\lesssim 1, \tag{3.3}\] \[\|(J-1,J^{-1}-1)\|_{i}\lesssim\|\eta\|_{i+1},\] (3.4) \[\|J^{-1}-1+\operatorname{div}\eta\|_{i}\lesssim\|\eta\|_{3}\|\eta \|_{i+1},\] (3.5) \[\|\partial_{t}(J,J^{-1})\|_{i}\lesssim\|u\|_{i+1},\] (3.6) \[\|J_{t}^{-1}+\operatorname{div}u\|_{i}\lesssim\|\eta\|_{3}\|u\|_ {i+1}. \tag{3.7}\]
Proof.: By the definition of \(J\), we find that
\[J-1=\operatorname{div}\eta+P_{2}(\nabla\eta)+P_{3}(\nabla\eta)\text{ and }J_{t}=\operatorname{div}u+\partial_{t}(P_{2}(\nabla\eta)+P_{3}(\nabla\eta)), \tag{3.8}\]
where \(P_{i}(\nabla\eta)\) denotes the homogeneous polynomial of degree \(i\) with respect to \(\partial_{j}\eta_{k}\) for \(1\leqslant i\), \(j\), \(k\leqslant 3\).
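For a \(3\times 3\) matrix these polynomials can be written down explicitly; as a sketch, the algebraic identity \(\det(\mathbb{I}+A)=1+\mathrm{tr}A+\frac{1}{2}((\mathrm{tr}A)^{2}-\mathrm{tr}(A^{2}))+\det A\), applied to \(A=\nabla\eta\), yields the representatives

\[P_{2}(\nabla\eta)=\frac{1}{2}\left((\mathrm{div}\eta)^{2}-\mathrm{tr}((\nabla\eta)^{2})\right)\quad\text{and}\quad P_{3}(\nabla\eta)=\det(\nabla\eta),\]

which are indeed homogeneous of degrees \(2\) and \(3\) in \(\partial_{j}\eta_{k}\). In addition,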
\[J^{-1}-1=(1-J^{-1})\left(\operatorname{div}\eta+P_{2}(\nabla\eta)+P_{3}( \nabla\eta)\right)-\left(\operatorname{div}\eta+P_{2}(\nabla\eta)+P_{3}( \nabla\eta)\right), \tag{3.9}\]
which yields
\[J_{t}^{-1}= -\operatorname{div}u-J_{t}^{-1}\left(\operatorname{div}\eta+P_{2 }(\nabla\eta)+P_{3}(\nabla\eta)\right)-\partial_{t}\left(P_{2}(\nabla\eta)+P_ {3}(\nabla\eta)\right)\] \[-(J^{-1}-1)\partial_{t}\left(\operatorname{div}\eta+P_{2}(\nabla \eta)+P_{3}(\nabla\eta)\right). \tag{3.10}\]

Thus, using the smallness condition (3.2), the embedding inequality \(H^{2}\hookrightarrow L^{\infty}\) in (A.2), the product estimates (of \(H^{i}\)) in (A.6) and the fact \(\eta_{t}=u\), we easily derive the desired estimates (3.3)-(3.7) from (3.8)-(3.10) for sufficiently small \(\delta\).
**Lemma 3.2**.: _Under the assumptions of (3.2), the matrix \(\mathcal{A}\) enjoys the following estimates: for \(0\leqslant i\leqslant 2\),_
\[\sup_{y\in\Omega}|\mathcal{A}|\lesssim 1, \tag{3.11}\] \[\|\tilde{\mathcal{A}}\|_{i}\lesssim\|\eta\|_{i+1},\] (3.12) \[\|\mathcal{A}_{t}\|_{i}\lesssim\|u\|_{i+1}. \tag{3.13}\]
Proof.: We can compute that, for sufficiently small \(\delta\),
\[\mathcal{A}^{\top}=(\nabla\eta+\mathbb{I})^{-1}=\mathbb{I}-\nabla\eta+(\nabla \eta)^{2}\mathcal{A}^{\top}\]
and
\[\mathcal{A}^{\top}_{t}=(\nabla\eta)^{2}\mathcal{A}^{\top}_{t}+(\nabla\eta \nabla u+\nabla u\nabla\eta)\mathcal{A}^{\top}-\nabla u.\]
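Both identities follow from iterating the relation \((\mathbb{I}+\nabla\eta)\mathcal{A}^{\top}=\mathbb{I}\); for instance, for the first one,

\[\mathcal{A}^{\top}=\mathbb{I}-\nabla\eta\,\mathcal{A}^{\top}=\mathbb{I}-\nabla\eta\left(\mathbb{I}-\nabla\eta\,\mathcal{A}^{\top}\right)=\mathbb{I}-\nabla\eta+(\nabla\eta)^{2}\mathcal{A}^{\top},\]

while the second one is obtained by differentiating the first identity with respect to \(t\) and rearranging the terms that contain \(\mathcal{A}^{\top}_{t}\).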
Recalling \(\tilde{\mathcal{A}}=\mathcal{A}-\mathbb{I}\), then, similarly to Lemma 3.1, we derive the desired estimates (3.11)-(3.13) from the above two identities for sufficiently small \(\delta\).
**Lemma 3.3**.: _Under the assumption (3.2), we have, for \(0\leqslant i\leqslant 2\),_
\[1\lesssim\inf_{y\in\Omega}|\mathbf{n}|\leqslant\sup_{y\in \Omega}|\mathbf{n}|\lesssim 1, \tag{3.14}\] \[\|(J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3},\tilde{\mathbf{n}}) \|_{i}\lesssim\|\eta\|_{1,i},\] (3.15) \[\left\|\partial_{t}(J\mathcal{A}\mathbf{e}^{3},\mathbf{n})\right\| _{i}\lesssim\|u\|_{1,i}, \tag{3.16}\]
_where we have defined that \(\tilde{\mathbf{n}}:=\mathbf{n}-\mathbf{e}^{3}\)._
Proof.: Recalling (1.19) and (1.21), we have
\[J\mathcal{A}\mathbf{e}^{3}=\mathbf{e}^{3}+\mathbf{e}^{1}\times \partial_{2}\eta+\partial_{1}\eta\times\mathbf{e}^{2}+\partial_{1}\eta\times \partial_{2}\eta, \tag{3.17}\] \[\partial_{i}|J\mathcal{A}\mathbf{e}^{3}|=(J\mathcal{A}\mathbf{e}^ {3})\cdot\partial_{i}(J\mathcal{A}\mathbf{e}^{3})/|J\mathcal{A}\mathbf{e}^{3}|,\] (3.18) \[\tilde{\mathbf{n}}=(J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3}+(1- |J\mathcal{A}\mathbf{e}^{3}|)\mathbf{e}^{3})/|J\mathcal{A}\mathbf{e}^{3}|,\] (3.19) \[\mathbf{n}_{t}=\partial_{t}(J\mathcal{A}\mathbf{e}^{3})|J\mathcal{ A}\mathbf{e}^{3}|^{-1}-J\mathcal{A}\mathbf{e}^{3}|J\mathcal{A}\mathbf{e}^{3}|^{-3}(J \mathcal{A}\mathbf{e}^{3})\cdot\partial_{t}(J\mathcal{A}\mathbf{e}^{3}), \tag{3.20}\]
where \(\mathbf{e}^{1}=(1,0,0)^{\top}\) and \(\mathbf{e}^{2}=(0,1,0)^{\top}\). Utilizing the product estimates and the embedding inequality \(H^{2}\hookrightarrow L^{\infty}\), we deduce from (3.17) and (3.18) that, for sufficiently small \(\delta\),
\[\|J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3}\|_{i}\lesssim\|\eta \|_{1,i},\ \|\partial_{t}(J\mathcal{A}\mathbf{e}^{3})\|_{i}\lesssim\|u\|_{1,i}, \tag{3.21}\] \[\sup_{y\in\Omega}|J\mathcal{A}\mathbf{e}^{3}|^{-1}\lesssim 1,\ \left\|1-1/|J \mathcal{A}\mathbf{e}^{3}|\right\|_{i}\lesssim\|\eta\|_{1,i}, \tag{3.22}\]
and
\[\|1-|J\mathcal{A}\mathbf{e}^{3}|\|_{i}\lesssim\|\eta\|_{1,i}. \tag{3.23}\]
In particular, by (3.22), we have
\[\|f/|J\mathcal{A}\mathbf{e}^{3}|\|_{i}\lesssim\|f\|_{i}\text{ for any }f\in H^{i}. \tag{3.24}\]
Thus, using (3.21), (3.23), (3.24), the embedding inequality of \(H^{2}\hookrightarrow L^{\infty}\) and the product estimates, we can easily derive the desired estimates in (3.14)-(3.16) from (3.19) and (3.20).
**Lemma 3.4**.: _Under the assumption (3.2), we have_
\[\|w\|_{1}^{2}\lesssim\mathcal{U}_{\mathcal{A}}(w)\text{ for any }w\in H _{0}^{1}, \tag{3.25}\] \[\|w\|_{1+i}^{2}\lesssim\|\nabla_{\mathcal{A}}w\|_{i}^{2}\lesssim\| \nabla w\|_{i}^{2}\text{ for any }w\in H^{1+i}, \tag{3.26}\]
_where \(0\leqslant i\leqslant 2\), and we have defined that_
\[\mathcal{U}_{\mathcal{A}}(w):=\int\mathbb{S}_{\mathcal{A}}(w):\nabla_{ \mathcal{A}}w\mathrm{d}y. \tag{3.27}\]
Proof.: Noting that
\[\mathcal{U}(w)=\mathcal{U}_{\mathcal{A}}(w)-\int\mathbb{S}(w):\nabla_{\tilde{ \mathcal{A}}}w\mathrm{d}y-\int\mathbb{S}_{\tilde{\mathcal{A}}}(w):\nabla_{ \mathcal{A}}w\mathrm{d}y\]
and
\[\mathcal{U}(w)=\frac{1}{2}\|\sqrt{\mu}(\mathbb{D}w-2\mathrm{div}w\mathbb{I}/3 )\|_{0}^{2}+\varsigma\|\mathrm{div}w\|_{0}^{2},\]
thus, employing Korn's inequality (A.10), the product estimates and (3.12), we obtain (3.25) for sufficiently small \(\delta\). Similarly to the derivation of (3.25), we easily see that (3.26) holds by further using Poincaré's inequality (A.9).
### Estimates of the nonlinear terms in the RT problem
Now we proceed to derive some estimates on the nonlinear terms in the RT problem. We first control the nonlinear terms \(\mathbf{N}^{3}\) and \((\mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\mathcal{N})\) in the nonhomogeneous form (1.46).
**Lemma 3.5**.: _Under the assumptions of (3.2) with \(\delta\in(0,\iota]\), it holds that, for \(i=0\), \(1\),_
\[\|\mathbf{N}^{3}\|_{i}\lesssim\|\eta\|_{3}(\|(\eta,u)\|_{2+i}+\|u_ {t}\|_{i}), \tag{3.28}\] \[|\mathbf{N}_{\mathrm{h}}^{4}|_{i+1/2}+|\llbracket R_{P}+\mathcal{N}^{ u}\rrbracket|_{i+1/2}\lesssim\|\eta\|_{3}\|(\eta,u)\|_{2+i},\] (3.29) \[|\mathcal{N}^{\eta}|_{y_{3}=0}|_{1/2}\lesssim\|\mathcal{N}^{\eta} \|_{1}\lesssim\|\eta\|_{3}\|\eta\|_{2,1}. \tag{3.30}\]
Proof.: (1) To begin with, we bound \(\mathbf{N}^{3}\). Using the product estimates, (3.4), (3.12) and the regularity conditions of \(\bar{\rho}\) and \(P_{\pm}\) in Theorem 1.1, we infer that
\[\|\mathbf{N}^{3}\|_{i}\lesssim \|(\mathbf{N}_{g},\mathbf{N}_{P})\|_{i}+(1+\|\tilde{\mathcal{A}} \|_{2})\|R_{P}\|_{1+i}+\|\tilde{\mathcal{A}}\|_{2}\|(\eta,u)\|_{2+i}+\|J^{-1}- 1\|_{2}\|u_{t}\|_{i}\] \[\lesssim \|\eta\|_{3}(\|(\eta,u)\|_{2+i}+\|u_{t}\|_{i})+\|(\mathbf{N}_{g},\mathbf{N}_{P})\|_{i}+\|R_{P}\|_{1+i}. \tag{3.31}\]
In addition, if we use (3.4), (3.5), the regularity conditions of \(\bar{\rho}\) and \(P_{\pm}\), the homeomorphism property of \(\eta+y\) (see (A.27)), we have, for sufficiently small \(\delta\),
\[\|\mathbf{N}_{g}\|_{i}\lesssim\left\|\int_{0}^{\eta_{3}}(\eta_{3 }-z)\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\bar{\rho}(y_{3}+z)\mathrm{d}z\right\| _{i}+\|J^{-1}-1+\mathrm{div}\eta\|_{i}\lesssim\|\eta\|_{3}\|\eta\|_{1+i}, \tag{3.32}\] \[\|\mathbf{N}_{P}\|_{i}\lesssim\left\|\int_{0}^{\eta_{3}}(\eta_{3} -z)\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\bar{P}(y_{3}+z)\mathrm{d}z\right\| _{1+i}\lesssim\|\eta\|_{2}\|\eta\|_{1+i},\] (3.33) \[\|R_{P}\|_{i+1}\lesssim\|(J^{-1}-1+\mathrm{div}\eta)\|_{1+i}\] \[\qquad\qquad\qquad+\left\|\int_{0}^{\bar{\rho}(J^{-1}-1)}(\bar{ \rho}(J^{-1}-1)-z)\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}P(\bar{\rho}+z)\mathrm{d }z\right\|_{1+i}\lesssim\|\eta\|_{3}\|\eta\|_{2+i}. \tag{3.34}\]
Inserting the above three estimates into (3.31) immediately yields (3.28).
(2) Thanks to (3.15), the trace estimate (A.12) and the product estimate, we easily deduce that

\[|\Pi_{\mathbf{n}}\mathbf{f}|_{i+1/2}\lesssim\|\mathbf{f}-(\mathbf{f}\cdot \mathbf{n})\mathbf{n}\|_{1+i}\lesssim\|\mathbf{f}\|_{1+i}\text{ for any }\mathbf{f}\in H^{1+i}\text{ with }i=0,\ 1. \tag{3.35}\]
Making use of (3.15), (3.35), the trace estimate and the product estimate, we easily derive that
\[|\mathbf{N}^{4}_{\mathrm{h}}|_{i+1/2}\lesssim \|(\Upsilon(\eta,u)\mathbf{e}^{3})\cdot\tilde{\mathbf{n}}\,\mathbf{n }+(\Upsilon(\eta,u)\mathbf{e}^{3})\cdot\mathbf{e}^{3}\tilde{\mathbf{n}}\|_{1+i}\] \[+\|\Upsilon(\eta,u)(J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3})+ \mathbb{S}_{\mathcal{A}}(u)J\mathcal{A}\mathbf{e}^{3}\|_{1+i}\lesssim\|\eta\|_ {3}\|(\eta,u)\|_{2+i}.\]
Similarly, we have
\[|[\![\mathcal{N}^{u}]\!]|_{i+1/2}\lesssim\|\mathcal{N}^{u}\|_{1+i}\lesssim\| \eta\|_{3}\|u\|_{2+i}.\]
Thus we immediately derive (3.29) from (3.34), the above two estimates and the trace estimate.
(3) Recalling the definitions of \(H^{\mathrm{n}}\) and \(H^{\mathrm{d}}\) in (1.24) and (1.25), and then using the product estimates, we easily get, for sufficiently small \(\delta\),
\[\|H^{\mathrm{n}}\|_{1}\lesssim\|\eta\|_{2,1},\ \|H^{\mathrm{n}} \cdot\mathbf{e}^{3}-\Delta_{\mathrm{h}}\eta_{3}\|_{1}\lesssim\|\eta\|_{3}\| \eta\|_{2,1},\ \|H^{\mathrm{d}}-1\|_{2}\lesssim\|\eta\|_{1,2}, \tag{3.36}\] \[\sup_{y\in\Omega}\{1/H^{\mathrm{d}}\}\lesssim 1\text{ and }\|1-1/H^{ \mathrm{d}}\|_{2}\lesssim\|\eta\|_{1,2} \tag{3.37}\]
Moreover, by (3.37), we have
\[\|f/H^{\mathrm{d}}\|_{2}\lesssim\|f\|_{2}\text{ for any }f\in H^{2}. \tag{3.38}\]
Recalling the definition of \(\mathcal{N}^{\eta}\) in (1.45), and then making use of (3.15), (3.36), (3.38), the product estimate and the trace estimate, we get
\[|\mathcal{N}^{\eta}|_{y_{3}=0}|_{1/2}\lesssim \|\mathcal{N}^{\eta}\|_{1}\lesssim\|H^{\mathrm{n}}\cdot\mathbf{n }(H^{\mathrm{d}}-1)/H^{\mathrm{d}}\|_{1}+\|H^{\mathrm{n}}\cdot\tilde{\mathbf{n }}\|_{1}+\|H^{\mathrm{n}}\cdot\mathbf{e}^{3}-\Delta_{\mathrm{h}}\eta_{3}\|_{1}\] \[\lesssim \|H^{\mathrm{n}}\|_{1}((1+\|\tilde{\mathbf{n}}\|_{2})\|(H^{ \mathrm{d}}-1)/H^{\mathrm{d}}\|_{2}+\|\tilde{\mathbf{n}}\|_{2})+\|H^{\mathrm{ n}}\cdot\mathbf{e}^{3}-\Delta_{\mathrm{h}}\eta_{3}\|_{1}\] \[\lesssim \|\eta\|_{3}\|\eta\|_{2,1}, \tag{3.39}\]
which yields (3.30).
To estimate the temporal derivatives of \(u\) in Section 3.3, we can apply \(\partial_{t}\) to (1.37) to derive that
\[\begin{cases}\bar{\rho}J^{-1}u_{tt}-\mathrm{div}_{\mathcal{A}}(P^{\prime}(\bar {\rho})\bar{\rho}\mathrm{div}u\,\mathbb{I}+\partial_{t}\mathbb{S}_{\mathcal{A} }(u))=g\bar{\rho}(\mathrm{div}u\mathbf{e}^{3}-\nabla u_{3})+\mathbf{N}^{5}& \text{in }\Omega,\\ \llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u\,\mathbb{I}+\partial_{t}\mathbb{S} _{\mathcal{A}}(u)\rrbracket J\mathcal{A}\mathbf{e}^{3}+\vartheta\Delta_{ \mathrm{h}}u_{3}\mathbf{e}^{3}=\mathbf{N}^{6},\ \llbracket u_{t}\rrbracket=0&\text{on }\Sigma,\\ u_{t}=0&\text{on }\partial\Omega,\end{cases} \tag{3.40}\]
where we have defined that
\[\mathbf{N}^{5}:= \mathbf{N}^{1}_{t}+\mathrm{div}_{\mathcal{A}_{t}}(P^{\prime}(\bar {\rho})\bar{\rho}\mathrm{div}\eta\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u))-\bar{ \rho}J^{-1}_{t}u_{t},\] \[\mathbf{N}^{6}:= \mathbf{N}^{2}_{t}-\llbracket P^{\prime}(\bar{\rho})\bar{\rho} \mathrm{div}\eta\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u)\rrbracket\partial_{t}(J \mathcal{A}\mathbf{e}^{3}).\]
Then we will establish the following estimates for the nonlinear terms \(\mathbf{N}^{5}\) and \(\mathbf{N}^{6}\).
**Lemma 3.6**.: _Under the assumptions of (3.2) and \(\delta\in(0,\iota]\),_
\[|\mathcal{N}^{\eta}_{t}|_{y_{3}=0}|_{1/2}\lesssim\|\mathcal{N}^{ \eta}_{t}\|_{1}\lesssim\|\eta\|_{3}\|u\|_{3}, \tag{3.41}\] \[\|\mathbf{N}^{5}\|_{0}+|\mathbf{N}^{6}|_{1/2}\lesssim\|\eta\|_{3} \|u\|_{3}+\|u\|_{2}(\|u\|_{3}+\|u_{t}\|_{1}). \tag{3.42}\]
Proof.: (1) Noting that
\[\|H^{\mathrm{n}}_{t}\|_{1}\lesssim\|u\|_{1,2},\|H^{\mathrm{d}}_{t}\|_{2} \lesssim\|u\|_{1,2}\text{ and }\|H^{\mathrm{n}}_{t}\cdot\mathbf{e}^{3}-\Delta_{ \mathrm{h}}u_{3}\|_{1}\lesssim\|\eta\|_{1,2}\|u\|_{1,2},\]
thus we can use (3.15), (3.16), (3.36), (3.38) and the above three estimates to derive that
\[|\mathcal{N}^{\eta}_{t}|_{y_{3}=0}|_{1/2}\lesssim\|\mathcal{N}^{ \eta}_{t}\|_{1}= \|\partial_{t}(H^{\mathrm{n}}\cdot\mathbf{n}(H^{\mathrm{d}}-1)/ H^{\mathrm{d}}-H^{\mathrm{n}}\cdot\tilde{\mathbf{n}}\] \[-H^{\mathrm{n}}\cdot\mathbf{e}^{3}+\Delta_{\mathrm{h}}\eta_{3}) \|_{1}\lesssim\|\eta\|_{3}\|u\|_{1,2},\]
which yields (3.41).
(2) Noting that
\[\partial_{t}(\mathbf{N}_{g}-\mathbf{N}_{P})\] \[=g\left(u_{3}\int_{0}^{\eta_{3}}\frac{\mathrm{d}^{2}}{\mathrm{d} z^{2}}\bar{\rho}(y_{3}+z)\mathrm{d}z-\bar{\rho}(J^{-1}_{t}+\mathrm{div}u) \right)\mathbf{e}^{3}\] \[\quad+\nabla_{\mathcal{A}_{t}}\left(\int_{0}^{\eta_{3}}(z-\eta_{3 })\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\bar{P}(y_{3}+z)\mathrm{d}z\right)- \nabla_{\mathcal{A}}\left(u_{3}\int_{0}^{\eta_{3}}\frac{\mathrm{d}^{2}}{ \mathrm{d}z^{2}}\bar{P}(y_{3}+z)\mathrm{d}z\right),\]
thus, following the arguments of (3.32) and (3.33) by further exploiting (3.7), (3.11) and (3.13), we easily get from the above identity that
\[\|\partial_{t}(\mathbf{N}_{g}-\mathbf{N}_{P})\|_{0}\lesssim\|\eta\|_{3}\|u\|_ {2}. \tag{3.43}\]
Similarly, making use of (3.4) and (3.7), we have
\[\|\partial_{t}R_{P}\|_{1}\lesssim\left\|P^{\prime}(\bar{\rho})\bar{\rho}(J^{- 1}_{t}+\mathrm{div}u)+\bar{\rho}J^{-1}_{t}\int_{0}^{\bar{\rho}(J^{-1}-1)} \frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}P(\bar{\rho}+z)\mathrm{d}z\right\|_{1} \lesssim\|\eta\|_{3}\|u\|_{2}, \tag{3.44}\]
which, together with (3.11), (3.13) and (3.34), implies
\[\|\partial_{t}\nabla_{\mathcal{A}}R_{P}\|_{0}\lesssim\|\eta\|_{3}\|u\|_{2}. \tag{3.45}\]
Thanks to (3.12), (3.13), (3.43) and (3.45), we easily get
\[\|\mathbf{N}^{1}_{t}\|_{0}=\|\partial_{t}(\mathbf{N}_{g}-\mathbf{N}_{P}- \nabla_{\mathcal{A}}R_{P}-g\nabla_{\tilde{\mathcal{A}}}(\bar{\rho}\eta_{3})) \|_{0}\lesssim\|\eta\|_{3}\|u\|_{2}. \tag{3.46}\]
In addition, it is easy to see that
\[\|\mathrm{div}_{\mathcal{A}_{t}}(P^{\prime}(\bar{\rho})\bar{\rho} \mathrm{div}\eta\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u))-\bar{\rho}J^{-1}_{t}u _{t}\|_{0}\] \[\lesssim\|\eta\|_{3}\|u\|_{2}+\|u\|_{2}(\|u\|_{3}+\|u_{t}\|_{1}),\]
which, together with (3.46), yields
\[\|\mathbf{N}^{5}\|_{0}\lesssim \|\mathbf{N}^{1}_{t}\|_{0}+\|\mathrm{div}_{\mathcal{A} _{t}}(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta\mathbb{I}+\mathbb{S}_{ \mathcal{A}}(u))-\bar{\rho}J^{-1}_{t}u_{t}\|_{0}\] \[\lesssim \|\eta\|_{3}\|u\|_{2}+\|u\|_{2}(\|u\|_{3}+\|u_{t}\|_{1}). \tag{3.47}\]
Exploiting (1.45), (3.15), (3.16), (3.30), (3.34), (3.41), (3.44) and the trace estimate, we have

\[|\mathbf{N}^{2}_{t}|_{1/2}\lesssim |\partial_{t}(\llbracket R_{P}\rrbracket J\mathcal{A} \mathbf{e}^{3}+\vartheta(\Delta_{\mathrm{h}}\eta_{3}\mathbf{e}^{3}-\mathcal{ H}J\mathcal{A}\mathbf{e}^{3}))|_{1/2}\] \[\lesssim \|\partial_{t}(R_{P}J\mathcal{A}\mathbf{e}^{3})\|_{1}+\|\partial _{t}(\mathcal{N}^{\eta}J\mathcal{A}\mathbf{e}^{3}-\Delta_{\mathrm{h}}\eta_{3}( J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3}))\|_{1}\lesssim\|\eta\|_{3}\|u\|_{3}. \tag{3.48}\]
Similarly, we have
\[|\llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta\mathbb{I}+\mathbb{S }_{\mathcal{A}}(u)\rrbracket\partial_{t}(J\mathcal{A}\mathbf{e}^{3})|_{1/2} \lesssim(\|\eta\|_{3}+\|u\|_{3})\|u\|_{2}, \tag{3.49}\]
which, together with (3.47) and (3.48), yields (3.42).
### Basic estimates for \((\eta,u)\)
In this subsection we derive the \(y_{\rm h}\)-derivative estimates of \((\eta,u)\) in Lemmas 3.7 and 3.8, the temporal derivative estimates of \(u\) in Lemma 3.9 and the \(y_{3}\)-derivative estimates of \(u\) in Lemma 3.10.
**Lemma 3.7**.: _Under the assumptions of (3.2) and \(\delta\in(0,\iota]\), the following estimates hold:_
\[\frac{{\rm d}}{{\rm d}t}\left(\int\bar{\rho}\partial_{\rm h}^{i} \eta\cdot\partial_{\rm h}^{i}u{\rm d}y+\mathcal{U}(\partial_{\rm h}^{i}\eta)/2 \right)+\mathcal{I}(\partial_{\rm h}^{i}\eta)\] \[\lesssim|\eta_{3}|_{i}^{2}+\|u\|_{i,0}^{2}+\sqrt{\mathcal{E}} \mathcal{D}\ \text{for}\ \begin{cases}0\leqslant i\leqslant 1;\\ (i,\vartheta)=(2,0),\end{cases} \tag{3.50}\] \[\frac{{\rm d}}{{\rm d}t}\int_{\mathbb{R}^{2}}\left(\int\bar{\rho} \mathfrak{D}_{\rm h}^{3/2}\partial_{\rm h}\eta\cdot\mathfrak{D}_{\rm h}^{3/2 }\partial_{\rm h}u{\rm d}y+\mathcal{U}(\mathfrak{D}_{\rm h}^{3/2}\partial_{ \rm h}\eta)/2\right){\rm d}{\bf h}+\vartheta|\nabla_{\rm h}\partial_{\rm h} \eta_{3}|_{1/2}^{2}\] \[\lesssim|\partial_{\rm h}\eta_{3}|_{1/2}^{2}+|\nabla_{\rm h} \partial_{\rm h}\eta_{3}|_{0}^{2}+\|u\|_{1,1}^{2}+\sqrt{\mathcal{E}}\mathcal{D},\] (3.51) \[\frac{{\rm d}}{{\rm d}t}\left(\int\bar{\rho}\partial_{\rm h}^{2} \eta\cdot\partial_{\rm h}^{2}u{\rm d}y+\mathcal{U}(\partial_{\rm h}^{2}\eta)/ 2\right)+\|\partial_{\rm h}^{2}{\rm div}\eta\|_{0}^{2}\] \[\lesssim|\partial_{\rm h}^{2}\eta_{3}|_{0}^{2}+\|(\eta_{3},u)\|_ {2,0}^{2}+|\partial_{\rm h}^{2}\eta_{3}|_{1/2}\left(\sqrt{\|\nabla_{\rm h}{ \rm div}\eta\|_{1,0}\|\partial_{3}{\rm div}\eta\|_{1,0}}\right.\] \[\left.+\|\nabla_{\rm h}{\rm div}\eta\|_{1,0}+\|\nabla_ {\rm h}u\|_{1,1}+\sqrt{\|\nabla_{\rm h}u\|_{1,1}\|u\|_{1,2}}\right)+\sqrt{ \mathcal{E}}\mathcal{D}. \tag{3.52}\]
Proof.: To begin with we derive the estimate (3.50). Applying \(\partial_{\rm h}^{i}\) to (1.31)\({}_{4}\), (1.31)\({}_{5}\) and (1.46), we have
\[\bar{\rho}\partial_{\rm h}^{i}u_{t}+\partial_{\rm h}^{i}(g\bar{ \rho}(\nabla\eta_{3}-{\rm div}\eta{\bf e}^{3})-{\rm div}\Upsilon(\eta,u))= \partial_{\rm h}^{i}{\bf N}^{3}\ {\rm in}\ \Omega, \tag{3.53}\] \[\llbracket\partial_{\rm h}^{i}u\rrbracket=\llbracket\partial_{\rm h }^{i}\eta\rrbracket=0,\ \partial_{\rm h}^{i}(\llbracket\Upsilon(\eta,u){\bf e}^{3}\rrbracket+\vartheta \Delta_{\rm h}\eta_{3}{\bf e}^{3})=\partial_{\rm h}^{i}({\bf N}_{1}^{4},{ \bf N}_{2}^{4},\mathcal{N})^{\top}\ {\rm on}\ \Sigma,\] (3.54) \[\partial_{\rm h}^{i}\eta=\partial_{\rm h}^{i}u=0\ {\rm on}\ \partial\Omega, \tag{3.55}\]
Taking the inner product of (3.53) and \(\partial_{\rm h}^{i}\eta\) in \(L^{2}\), we have
\[\frac{{\rm d}}{{\rm d}t}\int\bar{\rho}\partial_{\rm h}^{i}\eta \cdot\partial_{\rm h}^{i}u{\rm d}y=\int\bar{\rho}|\partial_{\rm h}^{i}u|^{2}{ \rm d}y+\sum_{j=1}^{3}I_{j}, \tag{3.56}\]
where we have defined that
\[I_{1}:=\int g\bar{\rho}\partial_{\rm h}^{i}({\rm div}\eta{\bf e}^ {3}-\nabla\eta_{3})\cdot\partial_{\rm h}^{i}\eta{\rm d}y,\] \[I_{2}:=\int{\rm div}\partial_{\rm h}^{i}\Upsilon(\eta,u)\cdot \partial_{\rm h}^{i}\eta{\rm d}y\ {\rm and}\ I_{3}:=\int\partial_{\rm h}^{i}{\bf N}^{3}\cdot\partial_{\rm h}^{i} \eta{\rm d}y.\]
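For clarity, the identity (3.56) is simply the product rule combined with the fact \(\eta_{t}=u\):

\[\frac{\mathrm{d}}{\mathrm{d}t}\int\bar{\rho}\partial_{\mathrm{h}}^{i}\eta\cdot\partial_{\mathrm{h}}^{i}u\mathrm{d}y=\int\bar{\rho}|\partial_{\mathrm{h}}^{i}u|^{2}\mathrm{d}y+\int\bar{\rho}\partial_{\mathrm{h}}^{i}\eta\cdot\partial_{\mathrm{h}}^{i}u_{t}\mathrm{d}y,\]

after which \(\bar{\rho}\partial_{\mathrm{h}}^{i}u_{t}\) is substituted by means of (3.53), producing exactly the terms \(I_{1}\), \(I_{2}\) and \(I_{3}\).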
Integrating by parts, using (3.54), the boundary condition of \(\partial_{\rm h}^{i}\eta\) in (3.55), and the symmetry of \(\Upsilon\), we have
\[I_{1}=g\llbracket\bar{\rho}\rrbracket|\partial_{\rm h}^{i}\eta_{3}|_{0}^{2}+g \int(\bar{\rho}^{\prime}|\partial_{\rm h}^{i}\eta_{3}|^{2}+2\bar{\rho}{\rm div }\partial_{\rm h}^{i}\eta\partial_{\rm h}^{i}\eta_{3}){\rm d}y \tag{3.57}\]
and
\[I_{2}= -\int\partial_{\rm h}^{i}\Upsilon(\eta,u):\nabla\partial_{\rm h}^{i }\eta{\rm d}y-\int_{\Sigma}\llbracket\partial_{\rm h}^{i}\Upsilon(\eta,u) \rrbracket{\bf e}^{3}\cdot\partial_{\rm h}^{i}\eta{\rm d}y_{\rm h}\] \[= -\int P^{\prime}(\bar{\rho})\bar{\rho}|\mathrm{div}\partial_{\mathrm{ h}}^{i}\eta|^{2}\mathrm{d}y-\vartheta|\nabla_{\mathrm{h}}\partial_{\mathrm{h}}^{i} \eta_{3}|_{0}^{2}-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{U}(\partial _{\mathrm{h}}^{i}\eta)+I_{4}, \tag{3.58}\]
where we have defined that
\[I_{4}:=-\int_{\Sigma}\partial_{\mathrm{h}}^{i}(\mathbf{N}_{1}^{ 4},\mathbf{N}_{2}^{4},\mathcal{N})^{\top}\cdot\partial_{\mathrm{h}}^{i}\eta \mathrm{d}y_{\mathrm{h}}.\]
Exploiting (1.9)\({}_{1}\), we have the relation
\[\int(P^{\prime}(\bar{\rho})\bar{\rho}|\mathrm{div}w|^{2}-g\bar{ \rho}^{\prime}w_{3}^{2}-2g\bar{\rho}\mathrm{div}ww_{3})\mathrm{d}y=\left\| \sqrt{P^{\prime}(\bar{\rho})\bar{\rho}}\left(\frac{gw_{3}}{P^{\prime}(\bar{ \rho})}-\mathrm{div}w\right)\right\|_{0}^{2}. \tag{3.59}\]
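Indeed, (3.59) is verified by expanding the square on its right-hand side and invoking \(-g\bar{\rho}^{\prime}=g^{2}\bar{\rho}/P^{\prime}(\bar{\rho})\), a rewriting of the hydrostatic relation in (1.9)\({}_{1}\):

\[\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}}\left(\frac{gw_{3}}{P^{\prime}(\bar{\rho})}-\mathrm{div}w\right)\right\|_{0}^{2}=\int\left(\frac{g^{2}\bar{\rho}}{P^{\prime}(\bar{\rho})}w_{3}^{2}-2g\bar{\rho}w_{3}\mathrm{div}w+P^{\prime}(\bar{\rho})\bar{\rho}|\mathrm{div}w|^{2}\right)\mathrm{d}y.\]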
Making use of (3.57), (3.58), the definition of \(\mathcal{I}\) in (1.60) and the relation (3.59), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int\bar{\rho}\partial_{ \mathrm{h}}^{i}\eta\cdot\partial_{\mathrm{h}}^{i}u\mathrm{d}y+\mathcal{U}( \partial_{\mathrm{h}}^{i}\eta)/2\right)+\mathcal{I}(\partial_{\mathrm{h}}^{i} \eta)\leqslant c(|\eta_{3}|_{i}^{2}+\|u\|_{i,0}^{2})+I_{3}+I_{4}. \tag{3.60}\]
Making use of (3.28)-(3.30), integration by parts and Hölder's inequality, we get, for \(0\leqslant i\leqslant 1\) and \((\vartheta,i)=(0,2)\),
\[I_{3}+I_{4}\lesssim\sqrt{\mathcal{E}}\mathcal{D}. \tag{3.61}\]
Putting the above estimates into (3.60) yields (3.50).
Now we derive (3.51). Applying the fractional differential operator \(\mathfrak{D}_{\mathbf{h}}^{3/2}\) (see the definition (1.59)) to (3.53) with \(i=1\), and then arguing in a way similar to that in the derivation of (3.60), we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int\bar{\rho}\mathfrak{D}_{ \mathbf{h}}^{3/2}\partial_{\mathrm{h}}\eta\cdot\mathfrak{D}_{\mathbf{h}}^{3/2 }\partial_{\mathrm{h}}u\mathrm{d}y+\mathcal{U}(\mathfrak{D}_{\mathbf{h}}^{3/2 }\partial_{\mathrm{h}}\eta)/2\right)+\mathcal{I}(\mathfrak{D}_{\mathbf{h}}^{3/2 }\partial_{\mathrm{h}}\eta)+\vartheta|\mathfrak{D}_{\mathbf{h}}^{3/2}\nabla_{\mathrm{h}}\partial_{\mathrm{h}} \eta_{3}|_{0}^{2}\] \[\lesssim|\mathfrak{D}_{\mathbf{h}}^{3/2}\nabla_{\mathrm{h}}\partial_{\mathrm{h}}\eta_{3}|_{0}|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{ \mathrm{h}}\eta_{3}|_{0}+|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{ \mathrm{h}}\eta_{3}|_{0}^{2}+\|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{ \mathrm{h}}u\|_{0}^{2}\] \[\quad+\left|\int\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{ h}}\mathbf{N}^{3}\cdot\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}\eta \mathrm{d}y-\int_{\Sigma}\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}( \mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\mathcal{N})^{\top}\cdot\mathfrak{D}_{ \mathbf{h}}^{3/2}\partial_{\mathrm{h}}\eta\mathrm{d}y_{\mathrm{h}}\right|=:I_{5 }(\mathbf{h}). \tag{3.62}\]
Making use of (A.16), (3.28)-(3.30), the dual estimate (A.21), the definition of the norm \(|\cdot|_{1/2}\) and a partial integration, we have
\[\int_{\mathbb{R}^{2}}I_{5}(\mathbf{h})\mathrm{d}\mathbf{h}\lesssim |\partial_{\mathrm{h}}\eta_{3}|_{1/2}^{2}+|\nabla_{\mathrm{h}} \partial_{\mathrm{h}}\eta_{3}|_{0}^{2}+\|u\|_{1,1}^{2}+\|\eta\|_{3}\|\mathbf{N }^{3}\|_{1}+|\nabla_{\mathrm{h}}\partial_{\mathrm{h}}\eta|_{1/2}|(\mathbf{N}_{ \mathrm{h}}^{4},\mathcal{N})|_{1/2}\] \[\lesssim |\partial_{\mathrm{h}}\eta_{3}|_{1/2}^{2}+|\nabla_{\mathrm{h}} \partial_{\mathrm{h}}\eta_{3}|_{0}^{2}+\|u\|_{1,1}^{2}+\sqrt{\mathcal{E}} \mathcal{D}.\]
Thus, integrating (3.62) with respect to \(\mathbf{h}\) over \(\mathbb{R}^{2}\), and then utilizing the above estimate, we obtain (3.51).
Finally, we turn to the derivation of (3.52). For \(i=2\), we integrate by parts to find that
\[I_{2}=I_{6}-\int P^{\prime}(\bar{\rho})\bar{\rho}|\partial_{ \mathrm{h}}^{2}\mathrm{div}\eta|^{2}\mathrm{d}y-\frac{1}{2}\frac{\mathrm{d}}{ \mathrm{d}t}\mathcal{U}(\partial_{\mathrm{h}}^{2}\eta), \tag{3.63}\]
where we have defined that
\[I_{6}:=-\int_{\Sigma}\llbracket\partial_{\mathrm{h}}^{2}(P^{\prime}( \bar{\rho})\bar{\rho}\mathrm{div}\eta+2\mu\partial_{3}u_{3}+(\varsigma-2\mu/3) \,\mathrm{div}u)\rrbracket\partial_{\mathrm{h}}^{2}\eta_{3}\mathrm{d}y_{\mathrm{h}}- \int_{\Sigma}\partial_{\mathrm{h}}^{2}\mathbf{N}_{\mathrm{h}}^{4}\cdot\partial_{ \mathrm{h}}^{2}\eta_{\mathrm{h}}\mathrm{d}y_{\mathrm{h}}.\]
Putting (3.63) into (3.56) with \(i=2\), and then employing (3.57) with \(i=2\) and (3.59), we conclude
\[\frac{\mathrm{d}}{\mathrm{d}t}\int\left(\bar{\rho}\partial_{\mathrm{ h}}^{2}\eta\cdot\partial_{\mathrm{h}}^{2}u+\frac{1}{2}\mathcal{U}(\partial_{ \mathrm{h}}^{2}\eta)\right)\mathrm{d}y+\left\|\sqrt{P^{\prime}(\bar{\rho})\bar {\rho}}\partial_{\mathrm{h}}^{2}\left(\frac{g\eta_{3}}{P^{\prime}(\bar{\rho})} -\mathrm{div}\eta\right)\right\|_{0}^{2}\] \[\leqslant c(|\partial_{\mathrm{h}}^{2}\eta_{3}|_{0}^{2}+\|u\|_{2,0}^{2})+I_{6}+I_{7}, \tag{3.64}\]
where we have defined that \(I_{7}:=\int\partial_{\mathrm{h}}^{2}\mathbf{N}^{3}\cdot\partial_{\mathrm{h}} ^{2}\eta\mathrm{d}y\).
Making use of (3.28), (3.29), the trace estimate, dual estimate and integration by parts, we can deduce that
\[I_{6}+I_{7}\lesssim |\partial_{\mathrm{h}}^{2}\eta_{3}|_{1/2}\left(\sqrt{\|\nabla_{ \mathrm{h}}\mathrm{div}\eta\|_{1,0}\|\partial_{3}\mathrm{div}\eta\| _{1,0}}\right.\] \[\left.+\|\nabla_{\mathrm{h}}\mathrm{div}\eta\|_{1,0}+ \|\nabla_{\mathrm{h}}u\|_{1,1}+\sqrt{\|\nabla_{\mathrm{h}}u\|_{ 1,1}\|u\|_{1,2}}\right)+\sqrt{\mathcal{E}}\mathcal{D}. \tag{3.65}\]
Consequently, inserting the above estimate into (3.64), and then using Young's inequality, we obtain (3.52).
**Lemma 3.8**.: _Under the assumptions of (3.2) and \(\delta\in(0,\iota]\), the following estimates hold:_
\[\frac{\mathrm{d}}{\mathrm{d}t}(\|\sqrt{\bar{\rho}}\partial_{ \mathrm{h}}^{i}u\|_{0}^{2}+\mathcal{I}(\partial_{\mathrm{h}}^{i}\eta))+c\| \partial_{\mathrm{h}}^{i}u\|_{1}^{2}\] \[\lesssim\sqrt{\mathcal{E}}\mathcal{D}+\begin{cases}\|\eta_{3}\|_ {1}^{2}&\text{for $i=0$};\\ \|\eta_{3}\|_{i-1,1}^{2}&\text{for $i=1$ and $(\vartheta,i)=(0,2)$},\end{cases} \tag{3.66}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{2}}\left(\|\sqrt{ \bar{\rho}}\mathfrak{D}_{\mathrm{h}}^{3/2}\partial_{\mathrm{h}}u\|_{0}^{2}+ \mathcal{I}(\mathfrak{D}_{\mathrm{h}}^{3/2}\partial_{\mathrm{h}}\eta)\right) \mathrm{d}\mathbf{h}\lesssim\|\eta_{3}\|_{1,1}^{2}+\sqrt{\mathcal{E}} \mathcal{D},\] (3.67) \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}\partial_ {\mathrm{h}}^{2}u\|_{0}^{2}+\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}} \partial_{\mathrm{h}}^{2}\left(\frac{g\eta_{3}}{P^{\prime}(\bar{\rho})}- \mathrm{div}\eta\right)\right\|_{0}^{2}\right)+c\|\partial_{\mathrm{h}}^{2}u \|_{1}^{2}\] \[\lesssim\|\eta_{3}\|_{1,1}^{2}+|\partial_{\mathrm{h}}^{2}u_{3}|_{ 1/2}\left(\sqrt{\|\nabla_{\mathrm{h}}\mathrm{div}\eta\|_{1,0}\| \partial_{3}\mathrm{div}\eta\|_{1,0}}\right.\] \[\left.+\|\nabla_{\mathrm{h}}\mathrm{div}\eta\|_{1,0}+ \|\nabla_{\mathrm{h}}u\|_{1,1}+\sqrt{\|\nabla_{\mathrm{h}}u\|_{ 1,1}\|u\|_{1,2}}\right)+\sqrt{\mathcal{E}}\mathcal{D}. \tag{3.68}\]
Proof.: Taking the inner product of (3.53)\({}_{1}\) and \(\partial_{\mathrm{h}}^{i}u\) in \(L^{2}\), we obtain
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\bar{\rho}|\partial_ {\mathrm{h}}^{i}u|^{2}\mathrm{d}y\] \[=\int g\bar{\rho}\partial_{\mathrm{h}}^{i}(\mathrm{div}\eta \mathbf{e}^{3}-\nabla\eta_{3})\cdot\partial_{\mathrm{h}}^{i}u\mathrm{d}y+ \int\mathrm{div}\partial_{\mathrm{h}}^{i}\Upsilon(\eta,u)\cdot\partial_{ \mathrm{h}}^{i}u\mathrm{d}y+\int\partial_{\mathrm{h}}^{i}\mathbbm{N}^{3} \cdot\partial_{\mathrm{h}}^{i}u\mathrm{d}y. \tag{3.69}\]
Thus, following the process used for (3.50), and applying Korn's inequality, we arrive at
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}\partial_ {\mathrm{h}}^{i}u\|_{0}^{2}+\mathcal{I}(\partial_{\mathrm{h}}^{i}\eta) \right)+c\|\partial_{\mathrm{h}}^{i}u\|_{1}^{2}\lesssim\left|\int_{\Sigma} \partial_{\mathrm{h}}^{i}\eta_{3}\partial_{\mathrm{h}}^{i}u_{3}\mathrm{d}y_{ \mathrm{h}}\right|+\sqrt{\mathcal{E}}\mathcal{D}. \tag{3.70}\]
We can utilize the trace estimate and the dual estimate to get
\[\left|\int_{\Sigma}\partial_{\mathrm{h}}^{i}\eta_{3}\partial_{ \mathrm{h}}^{i}u_{3}\mathrm{d}y_{\mathrm{h}}\right|\lesssim \begin{cases}|\eta_{3}|_{0}|u_{3}|_{0}\lesssim\|\eta_{3}\|_{1}\|u_{3}\|_{1}& \text{for $i=0$};\\ |\partial_{\mathrm{h}}^{i-1}\eta_{3}|_{1/2}|\partial_{\mathrm{h}}^{i}u_{3}|_{1/ 2}\lesssim\|\eta_{3}\|_{i-1,1}\|\partial_{\mathrm{h}}^{i}u_{3}\|_{1}&\text{for $1 \leqslant i\leqslant 2$}.\end{cases} \tag{3.71}\]
Consequently, plugging the above estimate into (3.70), and then using Young's inequality, we obtain (3.66).
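For instance, for \(1\leqslant i\leqslant 2\) the product on the right of (3.71) is split by Young's inequality in the weighted form \(ab\leqslant\epsilon a^{2}+b^{2}/(4\epsilon)\), so that the factor involving \(u_{3}\) is absorbed by the dissipation on the left of (3.70); a minimal sketch of this step reads
\[\|\eta_{3}\|_{i-1,1}\|\partial_{\mathrm{h}}^{i}u_{3}\|_{1}\leqslant\epsilon\|\partial_{\mathrm{h}}^{i}u\|_{1}^{2}+\frac{1}{4\epsilon}\|\eta_{3}\|_{i-1,1}^{2},\]
where \(\epsilon\) is chosen small compared with the constant \(c\) in (3.70), leaving only \(\|\eta_{3}\|_{i-1,1}^{2}\) on the right-hand side of (3.66).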
Similarly to (3.62) and (3.70), we can derive from (3.53) with \(i=1\) that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}u\|_{0}^{2}+\mathcal{I}(\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}\eta)\right)+\|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}u\|_{1}^{2}\] \[\lesssim|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}\eta_{3}|_{0}|\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}u_{3}|_{0}\] \[\quad+\left|\int\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}\mathbf{N}^{3}\cdot\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}u\mathrm{d}y-\int_{\Sigma}\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}(\mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\mathcal{N})^{\top}\cdot\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathrm{h}}u\mathrm{d}y_{\mathrm{h}}\right|.\]
Thus, similarly to the argument of (3.51), integrating the above inequality with respect to \(\mathbf{h}\) over \(\mathbb{R}^{2}\), we easily get (3.67) by further using trace estimate and Young's inequality.
Finally, we derive (3.68). Similarly to (3.64), we can derive from (3.53) with \(i=2\) that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}\partial_{\mathrm{h}}^{2}u\|_{0}^{2}+\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}}\partial_{\mathrm{h}}^{2}\left(\frac{g\eta_{3}}{P^{\prime}(\bar{\rho})}-\mathrm{div}\eta\right)\right\|_{0}^{2}\right)+\mathcal{U}(\partial_{\mathrm{h}}^{2}u)\] \[=\int_{\Sigma}\partial_{\mathrm{h}}^{2}\eta_{3}\partial_{\mathrm{h}}^{2}u_{3}\mathrm{d}y_{\mathrm{h}}+I_{8}, \tag{3.72}\]
where we have defined that
\[I_{8}:= \int\partial_{\mathrm{h}}^{2}\mathbf{N}^{3}\cdot\partial_{ \mathrm{h}}^{2}u\mathrm{d}y-\int_{\Sigma}\partial_{\mathrm{h}}^{2}\mathbf{N}_ {\mathrm{h}}^{4}\cdot\partial_{\mathrm{h}}^{2}u_{\mathrm{h}}\mathrm{d}y_{ \mathrm{h}}\] \[-\int_{\Sigma}\llbracket\partial_{\mathrm{h}}^{2}(P^{\prime}(\bar {\rho})\bar{\rho}\mathrm{div}\eta+2\mu\partial_{3}u_{3}+(\varsigma-2\mu/3) \,\mathrm{div}u)\rrbracket\partial_{\mathrm{h}}^{2}u_{3}\mathrm{d}y_{\mathrm{h}}.\]
Analogously to (3.65), we can show
\[I_{8}\lesssim |\partial_{\mathrm{h}}^{2}u_{3}|_{1/2}\left(\sqrt{\|\nabla_{\mathrm{h}}\mathrm{div}\eta\|_{\underline{1},0}\|\partial_{3}\mathrm{div}\eta\|_{1,0}}\right.\] \[\left.+\|\nabla_{\mathrm{h}}\mathrm{div}\eta\|_{\underline{1},0}+\|\nabla_{\mathrm{h}}u\|_{\underline{1},1}+\sqrt{\|\nabla_{\mathrm{h}}u\|_{\underline{1},1}\|u\|_{1,2}}\right)+\sqrt{\mathcal{E}}\mathcal{D}.\]
Putting the above estimate into (3.72), and then using (3.71) with \(i=2\) and Korn's inequality, we get (3.68).
**Lemma 3.9**.: _Under the assumptions of (3.1) and \(\delta\in(0,\iota]\), the following estimates hold._
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}u_{t}\|_{ 0}^{2}+\mathcal{I}(u)\right)+c\|u_{t}\|_{1}^{2}\lesssim\|u_{3}\|_{1}^{2}+ \sqrt{\mathcal{E}}\mathcal{D}, \tag{3.73}\] \[\|u_{t}\|_{0}\lesssim\|(\eta,u)\|_{2}. \tag{3.74}\]
Proof.: Taking the inner product of (3.40)\({}_{1}\) and \(Ju_{t}\) in \(L^{2}\), we have
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\bar{\rho}|u_{t}|^{2 }\mathrm{d}y=g\int\bar{\rho}(\mathrm{div}u\mathbf{e}^{3}-\nabla u_{3})\cdot u _{t}\mathrm{d}y\] \[+\int J\mathrm{div}_{\mathcal{A}}(P^{\prime}(\bar{\rho})\bar{\rho} \mathrm{div}u\mathbb{I}+\partial_{t}\mathbb{S}_{\mathcal{A}}(u))\cdot u_{t} \mathrm{d}y+\int J\mathbf{N}^{5}\cdot u_{t}\mathrm{d}y\]
Thus, following arguments similar to those used for (3.70), we obtain (3.73). In addition, applying \(\|\cdot\|_{0}\) to (3.53)\({}_{1}\) with \(i=0\), we find that
\[\|\bar{\rho}u_{t}\|_{0}\lesssim\|g\bar{\rho}(\mathrm{div}\eta\mathbf{e}^{3}-\nabla\eta_{3})+\mathrm{div}\Upsilon(\eta,u)+\mathbf{N}^{3}\|_{0}\lesssim\|(\eta,u)\|_{2}+\|\eta\|_{3}\|u_{t}\|_{0},\]
which implies (3.74) for sufficiently small \(\delta\).
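Indeed, since \(\|\eta\|_{3}\lesssim\delta\) under the assumptions of the lemma, the last term above can be absorbed into the left-hand side; schematically,
\[\|u_{t}\|_{0}\lesssim\|(\eta,u)\|_{2}+\delta\|u_{t}\|_{0}\quad\Longrightarrow\quad(1-c\delta)\|u_{t}\|_{0}\lesssim\|(\eta,u)\|_{2},\]
so that (3.74) follows once \(\delta\) is small enough that \(c\delta\leqslant 1/2\), with \(c\) the implicit constant.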
**Lemma 3.10**.: _Under the assumptions of (3.2) and \(\delta\in(0,\iota]\), the following estimates hold:_
\[\|u\|_{2}\lesssim\|\eta\|_{\underline{2},1}+\|u_{t}\|_{0}, \tag{3.77}\] \[\|u\|_{3}\lesssim\|\eta\|_{3}+\|u\|_{\underline{2},1}+\|u_{t}\|_{1},\] (3.78) \[\|u\|_{1,2}\lesssim\|(\mathrm{div}\eta,\nabla_{\mathrm{h}}\eta)\|_{1,1}+\|\nabla_{\mathrm{h}}u\|_{\underline{1},1}+\|u_{t}\|_{1}+\sqrt{\mathcal{E}}\mathcal{D}. \tag{3.79}\]
Proof.: We can rewrite (1.31)\({}_{4}\), (1.31)\({}_{5}\) and (1.46) as a stratified Lamé problem with jump conditions:
\[\begin{cases}\mu\Delta u+\tilde{\mu}\nabla\mathrm{div}u=\mathbf{F}^{1}&\text{ in }\Omega,\\ \llbracket u\rrbracket=0,\ \llbracket\mathbb{S}(u)\rrbracket\mathbf{e}^{3}= \mathbf{F}^{2}&\text{ on }\Sigma,\\ u=0&\text{ on }\partial\Omega,\end{cases} \tag{3.80}\]
where we have defined that
\[\mathbf{F}^{1} :=g\bar{\rho}(\nabla\eta_{3}-\mathrm{div}\eta\mathbf{e}^{3})- \nabla(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta)+\bar{\rho}u_{t}- \mathbf{N}^{3},\] \[\mathbf{F}^{2} :=(\mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\mathcal{N})^{\top}- \llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta\mathbf{e}^{3} \rrbracket-\vartheta\Delta_{\mathrm{h}}\eta_{3}\mathbf{e}^{3}\text{ and }\tilde{\mu}:=\varsigma+\mu/3.\]
Applying the stratified elliptic estimate (A.32) to (3.80), we find that
\[\|u\|_{2}\lesssim\|\mathbf{F}^{1}\|_{0}+|\mathbf{F}^{2}|_{1/2} \lesssim \|\eta\|_{2}+|(\llbracket P^{\prime}(\bar{\rho})\bar{\rho} \mathrm{div}\eta\mathbf{e}^{3}\rrbracket,\Delta_{\mathrm{h}}\eta_{3})|_{1/2}\] \[+\|(u_{t},\mathbf{N}^{3})\|_{0}+|(\mathbf{N}_{1}^{4},\mathbf{N}_ {2}^{4},\mathcal{N})|_{1/2}, \tag{3.81}\]
where \(\mathcal{N}=\mathcal{N}^{\eta}+\llbracket R_{P}+\mathcal{N}^{u}\rrbracket\). Making use of (3.2), (3.28)-(3.30) and trace estimate, we further get (3.77) from the above estimate.
We rewrite (3.80)\({}_{1}\) as a stratified Lamé problem with Dirichlet boundary conditions:
\[\begin{cases}\mu\Delta u+\tilde{\mu}\nabla\mathrm{div}u=\mathbf{F}^{1}&\text {in }\Omega,\\ u=u|_{\Sigma}&\text{on }\Sigma,\\ u=0&\text{on }\partial\Omega\.\end{cases} \tag{3.82}\]
Applying the elliptic estimate (A.30) to the above Lamé problems, we obtain
\[\|\partial_{\mathrm{h}}^{i}u\|_{3-i}\lesssim\|\mathbf{F}^{1}\|_{i,1-i}+| \partial_{\mathrm{h}}^{i}u|_{5/2-i}, \tag{3.83}\]
where \(i=0\) and \(1\). In addition, we have
\[\|\mathbf{F}^{1}\|_{i,1-i}\lesssim \|(\mathrm{div}\eta,\nabla_{\mathrm{h}}\eta)\|_{i,2-i}+\|(u_{t}, \mathbf{N}^{3})\|_{i,1-i}\] \[\lesssim \|(\mathrm{div}\eta,\nabla_{\mathrm{h}}\eta)\|_{i,2-i}+\|u_{t}\| _{1}+\|\eta\|_{3}\|(\eta,u)\|_{3}.\]
Putting the above estimate into (3.83) and then using (A.13) we further obtain (3.78) and (3.79). \(\Box\)
### Highest-order boundary estimates of \(u_{3}\) at interface for \(\vartheta>0\)
In this subsection we further establish a highest-order boundary estimate of \(u_{3}\). We should remark that it is difficult to directly derive the desired estimate based on the RT problem (1.31). Motivated by the well-posedness result for incompressible stratified viscoelastic fluids [45], we will break up the nonhomogeneous form of the RT problem into two subproblems.
To this purpose, we will first use Lemma A.9 to construct a function \(\mathcal{M}\in C^{0}(\mathbb{R}^{+},H^{1}(\mathbb{R}^{2}))\) such that
\[\mathcal{M}|_{t=0}=\mathcal{H}^{0}, \tag{3.84}\] \[\|\mathcal{M}\|_{L^{\infty}(\mathbb{R}^{+},H^{1}(\mathbb{R}^{2}) )}+\|\mathcal{M}\|_{L^{2}(\mathbb{R}^{+},H^{3/2})}+\|\mathcal{M}_{t}\|_{L^{2}( \mathbb{R}^{+},H^{1/2})}\lesssim|\mathcal{H}^{0}|_{1}, \tag{3.85}\]
where \(\mathcal{H}^{0}=(\Delta_{\mathrm{h}}\eta_{3}-\mathcal{N}^{\eta}/\vartheta)|_{t=0}= \mathcal{H}|_{t=0}\) (see (1.45) for the definition of \(\mathcal{H}\)). Let \((\eta^{1},u^{1}):=(\eta,u)-(\eta^{2},u^{2})\), where \((\eta^{2},u^{2})\) is a solution to the linear problem
\[\begin{cases}\eta_{t}^{2}=u^{2}&\text{in }\Omega,\\ \bar{\rho}u_{t}^{2}-\mathrm{div}\Upsilon(\eta^{2},u^{2})=\widetilde{\mathbf{N}}^{3}:=g\bar{\rho}(\mathrm{div}\eta\mathbf{e}^{3}-\nabla\eta_{3})+\mathbf{N}^{3}&\text{in }\Omega,\\ \llbracket\eta^{2}\rrbracket=\llbracket u^{2}\rrbracket=0&\text{on }\Sigma,\\ \llbracket\Upsilon(\eta^{2},u^{2})\mathbf{e}^{3}\rrbracket=(\mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\llbracket R_{P}+\mathcal{N}^{u}\rrbracket-\vartheta\mathcal{M})^{\top}&\text{on }\Sigma,\\ (\eta^{2},u^{2})=0&\text{on }\partial\Omega,\\ (\eta^{2},u^{2})|_{t=0}=(\eta^{0},u^{0})&\text{in }\Omega.\end{cases} \tag{3.86}\]
It should be noted that, due to (3.84) and (3.86)\({}_{6}\),
\[\llbracket\Upsilon(\eta^{2},u^{2})\mathbf{e}^{3}\rrbracket=(\mathbf{N}_{1}^{4},\mathbf{N}_{2}^{4},\llbracket R_{P}+\mathcal{N}^{u}\rrbracket-\vartheta\mathcal{M})^{\top}\ \text{on }\Sigma\ \text{for }t=0, \tag{3.87}\]
which ensures the existence of a unique solution to the above problem (3.86); see the second conclusion in Proposition 3.1. Then, \((\eta^{1},u^{1})\) satisfies
\[\begin{cases}\eta_{t}^{1}=u^{1}&\text{in }\Omega,\\ \bar{\rho}u_{t}^{1}=\mathrm{div}\Upsilon(\eta^{1},u^{1})&\text{in }\Omega,\\ \llbracket\eta^{1}\rrbracket=\llbracket u^{1}\rrbracket=0&\text{on }\Sigma,\\ \llbracket\Upsilon(\eta^{1},u^{1})\mathbf{e}^{3}\rrbracket=(\widetilde{\mathcal{N}}^{\eta}-\vartheta\Delta_{\mathrm{h}}\eta_{3}^{1}+\vartheta\mathcal{M})\mathbf{e}^{3}&\text{on }\Sigma,\\ (\eta^{1},u^{1})=0&\text{on }\partial\Omega,\\ (\eta^{1},u^{1})|_{t=0}=0&\text{in }\Omega,\end{cases} \tag{3.88}\]
where we have defined that \(\widetilde{\mathcal{N}}^{\eta}:=\mathcal{N}^{\eta}-\vartheta\Delta_{\mathrm{ h}}\eta_{3}^{2}\). Thus, we can use the above two auxiliary problems to derive the following highest-order boundary estimate of \(u_{3}\).
**Lemma 3.11**.: _Under the assumptions of \(\vartheta>0\), (3.2) and \(\delta\in(0,\iota]\), we have_
\[\|\eta^{2}\|_{2,1}^{2}+\int_{0}^{t}|\nabla_{\mathrm{h}}^{2}u_{3}| _{1/2}^{2}\mathrm{d}\tau\] \[\leqslant c\left(\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}+|\mathcal{ H}^{0}|_{1}^{2}+\int_{0}^{t}\left(\|\eta\|_{2}^{2}+\sqrt{\mathcal{E}} \mathcal{D}\right)\mathrm{d}\tau\right)+c_{7}\int_{0}^{t}\|\eta^{2}(t)\|_{2,1}^ {2}\mathrm{d}\tau, \tag{3.89}\]
_where \(c_{7}\) is the constant after (2.61) in Proposition 2.2 and \(|\nabla_{\mathrm{h}}^{2}u_{3}|_{1/2}^{2}:=\sum_{|\alpha|=2}|\partial_{\mathrm{ h}}^{\alpha}u_{3}|_{1/2}^{2}\)._
Proof.: (1) Let \(0\leqslant i\leqslant 1\). Applying \(\partial_{\mathrm{h}}^{i}\partial_{t}\) to (3.88) yields
\[\begin{cases}\bar{\rho}\partial_{\mathrm{h}}^{i}u_{tt}^{1}=\partial_{\mathrm{h}}^{i}\mathrm{div}\Upsilon(u^{1},u_{t}^{1})&\text{in }\Omega,\\ \llbracket\partial_{\mathrm{h}}u^{1}\rrbracket=\llbracket u_{t}^{1}\rrbracket=0&\text{on }\Sigma,\\ \partial_{\mathrm{h}}^{i}\llbracket\Upsilon(u^{1},u_{t}^{1})\mathbf{e}^{3}\rrbracket=\partial_{\mathrm{h}}^{i}\partial_{t}(\widetilde{\mathcal{N}}^{\eta}-\vartheta\Delta_{\mathrm{h}}\eta_{3}^{1}+\vartheta\mathcal{M})\mathbf{e}^{3}&\text{on }\Sigma,\\ (\partial_{\mathrm{h}}u^{1},u_{t}^{1})=0&\text{on }\partial\Omega.\end{cases} \tag{3.90}\]
Multiplying (3.90)\({}_{1}\) with \(i=0\) by \(u_{t}^{1}\) in \(L^{2}\) and then integrating by parts, we infer that
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}u_{t}^{1}\|_{0}^{2}+\|P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u^{1}\|_{0}^{2}+\vartheta|\nabla_{\mathrm{h}}u_{3}^{1}|_{0}^{2}\right)+\mathcal{U}(u_{t}^{1})\lesssim|\partial_{t}u_{3}^{1}|_{1/2}|\partial_{t}(\widetilde{\mathcal{N}}^{\eta},\mathcal{M})|_{-1/2}. \tag{3.91}\]
Obviously we have from (3.88)\({}_{2}\) that
\[\|u_{t}^{1}\|_{0}^{2}\lesssim\|(\eta^{1},u^{1})\|_{2}^{2},\]
which, together with the zero initial data (3.88)\({}_{6}\), gives
\[\|u_{t}^{1}|_{t=0}\|_{0}=0. \tag{3.92}\]
Integrating (3.91) with respect to \(t\), and then using (3.92), Korn's, Young's inequalities and trace estimate, we further have
\[\|u_{t}^{1}\|_{0}^{2}+|\nabla_{\rm h}u_{3}^{1}|_{0}^{2}+\int_{0}^{t}\|u_{\tau}^ {1}\|_{1}^{2}{\rm d}\tau\lesssim\int_{0}^{t}|\partial_{\tau}(\widetilde{\cal N} ^{\eta},{\cal M})|_{-1/2}^{2}{\rm d}\tau. \tag{3.93}\]
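Here the duality pairing on the right-hand side of (3.91) is handled by the trace estimate \(|\partial_{t}u_{3}^{1}|_{1/2}\lesssim\|u_{t}^{1}\|_{1}\) together with Young's inequality; schematically,
\[|\partial_{t}u_{3}^{1}|_{1/2}|\partial_{t}(\widetilde{\mathcal{N}}^{\eta},\mathcal{M})|_{-1/2}\leqslant\epsilon\|u_{t}^{1}\|_{1}^{2}+C(\epsilon)|\partial_{t}(\widetilde{\mathcal{N}}^{\eta},\mathcal{M})|_{-1/2}^{2},\]
and the \(\epsilon\)-term is absorbed by the dissipation after Korn's inequality.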
We multiply (3.90)\({}_{1}\) with \(i=1\) by \(\partial_{\rm h}u^{1}\) in \(L^{2}\) and then integrate by parts to deduce
\[\frac{{\rm d}}{{\rm d}t}\int\left(\frac{1}{2}{\cal U}(\partial_{ \rm h}u^{1})-\bar{\rho}u_{t}^{1}\partial_{\rm h}^{2}u^{1}\right){\rm d}y+\|P^ {\prime}(\bar{\rho})\bar{\rho}\partial_{\rm h}{\rm div}u^{1}\|_{0}^{2}+\vartheta |\nabla_{\rm h}\partial_{\rm h}u_{3}^{1}|_{0}^{2}\] \[\lesssim\|\partial_{\rm h}u_{t}^{1}\|_{0}^{2}+|\nabla_{\rm h} \partial_{\rm h}u_{3}^{1}|_{0}|\partial_{t}(\widetilde{\cal N}^{\eta},{\cal M })|_{0}.\]
Thus, integrating the above equality with respect to \(t\), and then using (3.88)\({}_{6}\) and Korn's and Young's inequalities, one infers that
\[\|\partial_{\mathrm{h}}u^{1}\|_{1}^{2}+\int_{0}^{t}|\nabla_{\mathrm{h}}\partial_{\mathrm{h}}u_{3}^{1}|_{0}^{2}\mathrm{d}\tau\lesssim\|u_{t}^{1}\|_{0}^{2}+\int_{0}^{t}\left(\|\partial_{\mathrm{h}}u_{\tau}^{1}\|_{0}^{2}+|\partial_{\tau}(\widetilde{\mathcal{N}}^{\eta},\mathcal{M})|_{0}^{2}\right)\mathrm{d}\tau,\]
which, together with (3.93), yields
\[\int_{0}^{t}|\nabla_{\mathrm{h}}\partial_{\mathrm{h}}u_{3}^{1}|_{0}^{2}\mathrm{d}\tau\lesssim\int_{0}^{t}|\partial_{\tau}(\widetilde{\mathcal{N}}^{\eta},\mathcal{M})|_{0}^{2}\mathrm{d}\tau. \tag{3.94}\]
(2) Applying the operator \(\mathfrak{D}_{\rm h}^{3/2}\) to (3.90), and then following the same process as in the derivation of (3.94), we obtain
\[\int_{0}^{t}|\mathfrak{D}_{\rm h}^{3/2}\nabla_{\rm h}\partial_{ \rm h}u_{3}^{1}|_{0}^{2}{\rm d}\tau\lesssim\int_{0}^{t}|\mathfrak{D}_{\rm h}^{ 3/2}\partial_{\tau}(\widetilde{\cal N}^{\eta},{\cal M})|_{0}^{2}{\rm d}\tau.\]
Integrating the above estimate over \(\mathbb{R}^{2}\), and then adding the resulting estimate to (3.94), yields
\[\int_{0}^{t}|\nabla_{\rm h}\partial_{\rm h}u_{3}^{1}|_{1/2}^{2}{ \rm d}\tau\lesssim\int_{0}^{t}(|\Delta_{\rm h}u_{3}^{2}|_{1/2}^{2}+|\partial_{ t}({\cal N}^{\eta},{\cal M})|_{1/2}^{2}){\rm d}\tau,\]
which, together with (3.41) and (3.85), implies
\[\int_{0}^{t}|\nabla_{\rm h}\partial_{\rm h}u_{3}^{1}|_{1/2}^{2}{ \rm d}\tau\lesssim|{\cal H}^{0}|_{1}^{2}+\int_{0}^{t}(|\Delta_{\rm h}u_{3}^{2}| _{1/2}^{2}+\sqrt{\cal E}{\cal D}){\rm d}\tau. \tag{3.95}\]
(3) Analogously to (3.60) with \((i,\vartheta)=(2,0)\), we can derive from (3.86) that
\[\frac{{\rm d}}{{\rm d}t}\int\left(\bar{\rho}\partial_{\rm h}^{2} \eta^{2}\cdot\partial_{\rm h}^{2}u^{2}+{\cal U}(\partial_{\rm h}^{2}\eta^{2})/ 2\right){\rm d}y+\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}}\partial_{\rm h}^{2} {\rm div}\eta^{2}\|_{0}^{2}\] \[\lesssim\|u^{2}\|_{2,0}^{2}+\|\partial_{\rm h}^{2}\eta^{2}\|_{1} \|\widetilde{\bf N}^{3}\|_{1}+|\partial_{\rm h}^{2}\eta^{2}|_{1/2}|({\bf N}_{ 1}^{4},{\bf N}_{2}^{4},\llbracket R_{P}+{\cal N}^{u}\rrbracket,{\cal M})|_{3/2}.\]
We further utilize (3.28), (3.29) and the trace estimate to have
\[\int\left(\bar{\rho}\partial_{\mathrm{h}}^{2}\eta^{2}\cdot\partial_{\mathrm{h}}^{2}u^{2}+\mathcal{U}(\partial_{\mathrm{h}}^{2}\eta^{2})/2\right)\mathrm{d}y\lesssim\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}\] \[+\int_{0}^{t}\left(\|u^{2}\|_{2,0}^{2}+\|\eta^{2}\|_{2,1}(\|\eta\|_{2}+|\mathcal{M}|_{3/2}+\|\eta\|_{3}\sqrt{\mathcal{D}})\right)\mathrm{d}\tau. \tag{3.96}\]
In addition, in the same manner as in the derivation of (3.70) with \((i,\vartheta)=(2,0)\), we can deduce from (3.86) that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}(\|\sqrt{\bar{\rho}} \partial_{\mathrm{h}}^{2}u^{2}\|_{0}^{2}+\|\sqrt{P^{\prime}(\bar{\rho})\bar{ \rho}}\partial_{\mathrm{h}}^{2}\mathrm{div}\eta^{2}\|_{0}^{2})+c\|\partial_{ \mathrm{h}}^{2}u^{2}\|_{1}^{2}\] \[\lesssim\|\partial_{\mathrm{h}}^{2}u^{2}\|_{1}\|\mathbf{\tilde{N }}^{3}\|_{1}+|\partial_{\mathrm{h}}^{2}u^{2}|_{1/2}|(\mathbf{N}_{1}^{4}, \mathbf{N}_{2}^{4},[\![R_{P}+\mathcal{N}^{u}]\!],\mathcal{M})|_{3/2},\]
which further yields that
\[\|\partial_{\mathrm{h}}^{2}u^{2}\|_{0}^{2}+\int_{0}^{t}\|\partial_{ \mathrm{h}}^{2}u^{2}\|_{1}^{2}\mathrm{d}\tau\lesssim\|\eta^{0}\|_{3}^{2}+\|u^ {0}\|_{2}^{2}+\int_{0}^{t}\left(\|\eta\|_{2}^{2}+|\mathcal{M}|_{3/2}^{2}+\sqrt {\mathcal{E}}\mathcal{D}\right)\mathrm{d}\tau. \tag{3.97}\]
Using Korn's inequality and trace estimate, we can derive from (3.96) and (3.97) that
\[\|\eta^{2}\|_{2,1}^{2}+\|u^{2}\|_{2,0}^{2}+\int_{0}^{t}|\nabla_{ \mathrm{h}}^{2}u^{2}|_{1/2}^{2}\mathrm{d}\tau\] \[\lesssim\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}+\int_{0}^{t} \left(\|\eta\|_{2}^{2}+|\mathcal{M}|_{3/2}^{2}+\|\eta^{2}\|_{2,1}(\|\eta\|_{2}\right.\] \[\qquad+|\mathcal{M}|_{3/2}+\|\eta\|_{3}\sqrt{\mathcal{D}})+\sqrt {\mathcal{E}}\mathcal{D}\right)\!\mathrm{d}\tau, \tag{3.98}\]
which, together with (3.95), yields
\[\|\eta^{2}\|_{2,1}^{2}+\int_{0}^{t}|\nabla_{\mathrm{h}}^{2}u_{3} |_{1/2}^{2}\mathrm{d}\tau\] \[\lesssim\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}+|\mathcal{H}^{0} |_{1}^{2}+\int_{0}^{t}\left(\|\eta\|_{2}^{2}+|\mathcal{M}|_{3/2}^{2}\right.\] \[\qquad+\|\eta^{2}\|_{2,1}(\|\eta\|_{2}+|\mathcal{M}|_{3/2}+\|\eta \|_{3}\sqrt{\mathcal{D}})+\sqrt{\mathcal{E}}\mathcal{D}\right)\!\mathrm{d}\tau,\]
which yields (3.89) by further using (3.85) and Young's inequality.
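The final application of Young's inequality uses the specific weight \(c_{7}\); one admissible choice, using \(\mathcal{E}\leqslant 1\) for sufficiently small \(\delta\), is
\[\|\eta^{2}\|_{2,1}\big(\|\eta\|_{2}+|\mathcal{M}|_{3/2}+\|\eta\|_{3}\sqrt{\mathcal{D}}\big)\leqslant\frac{c_{7}}{2}\|\eta^{2}\|_{2,1}^{2}+\frac{3}{2c_{7}}\big(\|\eta\|_{2}^{2}+|\mathcal{M}|_{3/2}^{2}+\|\eta\|_{3}^{2}\mathcal{D}\big),\]
where \(\|\eta\|_{3}^{2}\mathcal{D}\lesssim\sqrt{\mathcal{E}}\mathcal{D}\); this produces the \(c_{7}\)-term in (3.89).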
### Gronwall-type energy inequality
With the estimates of \((\eta,u)\) in Lemmas 3.7-3.11, we are in a position to derive _a priori_ the Gronwall-type energy inequality, which couples with the solution \((\eta^{2},u^{2})\) of the linear problem (3.86) in the case \(\vartheta>0\).
**Lemma 3.12**.: _Let \((\eta,u)\) be the solution of the RT problem (1.31) and satisfy (3.1) with \(\delta\in(0,\iota]\), where \(\iota\) is the constant in Lemma A.10, and thus the following definition makes sense_
\[d(x_{\mathrm{h}},t):=\zeta_{3}((\zeta_{\mathrm{h}})^{-1}(x_{\mathrm{h}},t),0,t )\in(h_{-},h_{+})\text{ for each }t\in[0,T]. \tag{3.99}\]
_For \(\vartheta>0\), we additionally assume that \(d^{0}:=d|_{t=0}\in H^{3}(\mathbb{R}^{2})\) and \((\eta^{2},u^{2})\) is a solution of the linear problem (3.86) with \(\mathcal{H}^{0}\in H^{1}(\mathbb{R}^{2})\). There are an energy functional \(\tilde{\mathcal{E}}(t)\) of \((\eta(t),u(t))\), and constants \(\delta_{1}\in(0,\iota]\), \(c>0\) such that, for any \(\delta\leqslant\delta_{1}\), \((\eta,u)\) enjoys the Gronwall-type energy inequality:_
\[\tilde{\mathcal{E}}(t)+\vartheta\left(\|\eta^{2}\|_{2,1}^{2}(t)+ c^{-1}|d(t)|_{3}^{2}\right)+c^{-1}\int_{0}^{t}\mathcal{D}(\tau)\mathrm{d}\tau\] \[\leqslant c_{7}\int_{0}^{t}(\tilde{\mathcal{E}}(\tau)+\vartheta \|\eta^{2}(\tau)\|_{2,1}^{2})\mathrm{d}\tau+c\left(\|\eta^{0}\|_{3}^{2}+\|u^{ 0}\|_{2}^{2}+\vartheta|d^{0}|_{3}^{2}+\int_{0}^{t}\|(\eta,\vartheta u)\|_{0}^ {2}\mathrm{d}\tau\right), \tag{3.100}\]
_where \(c_{7}\) is the constant after (2.61) in Proposition 2.2, \(\tilde{\mathcal{E}}(t)\) satisfies_
\[c^{-1}\mathcal{E}(t)\leqslant\tilde{\mathcal{E}}(t)\leqslant c\mathcal{E}(t) \text{ for any }t\in[0,T], \tag{3.101}\]
_and the constants \(\delta_{1}\), \(c\) depend on the domain \(\Omega\) and parameters/functions in the RT problem. It should be noted that \(\|\eta^{2}\|_{2,1}^{2}\) and \(|d|_{3}^{2}\) exist only for \(\vartheta>0\)._
Proof.: Making use of the trace estimate, (3.50) for \(0\leqslant i\leqslant 1\), (3.66) for \(0\leqslant i\leqslant 1\) and (3.73), we can infer that there is a constant \(c\) such that, for any sufficiently large constant \(\tilde{c}_{1}\geqslant 1\) and any sufficiently small \(\delta\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{1}+c^{-1}\mathcal{D}_{1}\leqslant c \tilde{c}_{1}\left(\|\eta\|_{2}^{2}+\sqrt{\mathcal{E}}\mathcal{D}\right), \tag{3.102}\]
where we have defined that
\[\mathcal{E}_{1}:=\sum_{|\alpha|\leqslant 1}\left(\int\bar{\rho}\partial_{ \mathrm{h}}^{\alpha}\eta\cdot\partial_{\mathrm{h}}^{\alpha}u\mathrm{d}y+ \tilde{c}_{1}\|\sqrt{\bar{\rho}}\partial_{\mathrm{h}}^{\alpha}u\|_{0}^{2}+ \tilde{c}_{1}\mathcal{I}(\partial_{\mathrm{h}}^{\alpha}\eta)+\mathcal{U}( \partial_{\mathrm{h}}^{\alpha}\eta)/2\right)+\|\sqrt{\bar{\rho}}u_{t}\|_{0}^{ 2}+\mathcal{I}(u)\]
and
\[\mathcal{D}_{1}:=\|u_{t}\|_{1}^{2}+\tilde{c}_{1}\|u\|_{\underline{1},1}^{2}.\]
Thanks to Korn's and Young's inequalities, and the trace estimate, we have, for any sufficiently large constant \(\tilde{c}_{1}\),
\[\|\eta\|_{\underline{1},1}^{2}+\tilde{c}_{1}\|u\|_{\underline{1},0}^{2}+\|u_{t}\|_{0}^{2}\lesssim\mathcal{E}_{1}. \tag{3.103}\]
Next we further derive the estimate for the higher-order normal derivatives of \(\eta\). To this purpose, we rewrite (3.80)\({}_{1}\) as follows:
\[\mu\partial_{3}^{2}u_{\mathrm{h}}=\nabla_{\mathrm{h}}(g\bar{\rho}\eta_{3}-P^{ \prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta)-\mu\Delta_{\mathrm{h}}u_{ \mathrm{h}}-\tilde{\mu}\nabla_{\mathrm{h}}\mathrm{div}u+\bar{\rho}\partial_{t }u_{\mathrm{h}}-\mathbf{N}_{\mathrm{h}}^{3}=:(\mathbf{F}_{1}^{3},\mathbf{F}_{ 2}^{3})^{\top} \tag{3.104}\]
and
\[((\mu+\tilde{\mu})\partial_{3}^{2}u_{3}+P^{\prime}(\bar{\rho})\bar{\rho}\partial_{3}^{2}\eta_{3})=g\bar{\rho}\partial_{3}\eta_{3}-\mu\Delta_{\mathrm{h}}u_{3}-\tilde{\mu}\partial_{3}\mathrm{div}_{\mathrm{h}}u_{\mathrm{h}}\] \[-(P^{\prime}(\bar{\rho})\bar{\rho})^{\prime}\mathrm{div}\eta-P^{\prime}(\bar{\rho})\bar{\rho}\partial_{3}\mathrm{div}_{\mathrm{h}}\eta_{\mathrm{h}}+\bar{\rho}\partial_{t}u_{3}-\mathbf{N}_{3}^{3}=:\mathbf{F}_{3}^{3}. \tag{3.105}\]
We can deduce from (3.104) that, for \(0\leqslant j\leqslant 1\),
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\sqrt{\mu}\partial_{3}^{2+j}\eta_{ \mathrm{h}}\|_{\underline{1-j},0}^{2}\lesssim\|\partial_{3}^{2+j}\eta_{ \mathrm{h}}\|_{\underline{1-j},0}\|\partial_{3}^{j}\mathbf{F}_{\mathrm{h}}^{3} \|_{\underline{1-j},0}. \tag{3.106}\]
Similarly, we can also get from (3.105) that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\|\sqrt{\mu+\tilde{\mu}}\partial_{3}^{2+j}\eta_{3}\right\|_{\underline{1-j},0}^{2}+c\|\partial_{3}^{2+j}\eta_{3}\|_{\underline{1-j},0}^{2}\] \[\lesssim\|\partial_{3}^{2+j}\eta_{3}\|_{\underline{1-j},0}\|\partial_{3}^{j}\mathbf{F}_{3}^{3}\|_{\underline{1-j},0}+\begin{cases}0&\text{for }j=0;\\ \|\partial_{3}^{2}\eta_{3}\|_{0}\|\partial_{3}^{3}\eta_{3}\|_{0}&\text{for }j=1.\end{cases} \tag{3.107}\]
In addition, by (3.28),
\[\|\partial_{3}^{j}\mathbf{F}^{3}\|_{\underline{1-j},0}^{2}\lesssim\|\partial_{3}^{j}(\eta,u)\|_{\underline{2-j},1}^{2}+\|(u_{t},\mathbf{N}^{3})\|_{1}^{2}\lesssim\|\partial_{3}^{j}(\eta,u)\|_{\underline{2-j},1}^{2}+\|u_{t}\|_{1}^{2}+\sqrt{\mathcal{E}}\mathcal{D},\]
where \(\mathbf{F}^{3}=(\mathbf{F}_{1}^{3},\mathbf{F}_{2}^{3},\mathbf{F}_{3}^{3})^{\top}\). Thus we derive from (3.106), (3.107) and Young's inequality that
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\|\partial_{3}^{2}\eta\|}_{\underline{1},0}^{2}\lesssim\tilde{c}_{1}^{-1}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+c\tilde{c}_{1}(\|(\eta,u)\|_{\underline{2},1}^{2}+\|u_{t}\|_{1}^{2}+\sqrt{\mathcal{E}}\mathcal{D}) \tag{3.108}\]
and
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\|\partial_{3}^{3}\eta\|}_{0}^{2} \leqslant\tilde{c}_{1}^{-1}\|\partial_{3}^{3}\eta\|_{0}^{2}+c\tilde{c}_{1}(\| \partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\|\eta\|_{\underline{2},1}^{2}+ \|u\|_{\underline{1},2}^{2}+\|u_{t}\|_{1}^{2}+\sqrt{\mathcal{E}}\mathcal{D}), \tag{3.109}\]
where we have defined that \(\overline{\|\partial_{3}^{3}\eta\|}_{0}^{2}:=\overline{\|\partial_{3}^{3}\eta \|}_{\underline{0},0}^{2}\) and
\[\overline{\|\partial_{3}^{2+j}\eta\|}_{\underline{1-j},0}^{2}:=\left\|\left(\sqrt{\mu}\partial_{3}^{2+j}\eta_{\mathrm{h}},\sqrt{\mu+\tilde{\mu}}\partial_{3}^{2+j}\eta_{3}\right)\right\|_{\underline{1-j},0}^{2}\text{ for }0\leqslant j\leqslant 1.\]
Moreover,
\[\|\partial_{3}^{3}\eta\|_{0}\lesssim\overline{\|\partial_{3}^{3}\eta\|}_{0} \text{ and }\|\partial_{3}^{2}\eta\|_{\underline{1},0}\lesssim\overline{\| \partial_{3}^{2}\eta\|}_{\underline{1},0}. \tag{3.110}\]
Plugging (3.77) and (3.79) into (3.109), we further get
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\|\partial_{3}^{3}\eta\|}_{0}^{2}\leqslant\tilde{c}_{1}^{-1}\|\partial_{3}^{3}\eta\|_{0}^{2}+c\tilde{c}_{1}(\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\|(\eta,u)\|_{\underline{2},1}^{2}+\|u_{t}\|_{1}^{2}+\sqrt{\mathcal{E}}\mathcal{D}). \tag{3.111}\]
Multiplying (3.108) and (3.111) by \(\tilde{c}_{1}^{-2}\) and \(\tilde{c}_{1}^{-4}\) respectively, and then adding the two resulting inequalities, we deduce that
\[\frac{\mathrm{d}}{\mathrm{d}t}(\tilde{c}_{1}^{-2}\overline{\|\partial_{3}^{2}\eta\|}_{\underline{1},0}^{2}+\tilde{c}_{1}^{-4}\overline{\|\partial_{3}^{3}\eta\|}_{0}^{2})\lesssim \tilde{c}_{1}^{-1}(\|(\eta,u)\|_{\underline{2},1}^{2}+\|u_{t}\|_{1}^{2}+\sqrt{\mathcal{E}}\mathcal{D})\] \[+\tilde{c}_{1}^{-3}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\tilde{c}_{1}^{-5}\|\partial_{3}^{3}\eta\|_{0}^{2}. \tag{3.112}\]
Now we prove (3.100) by considering two cases.
(1) Case of \(\vartheta=0\).
We can derive from (3.50), (3.66) with \((\vartheta,i)=(0,2)\), (3.102) and (3.112) that
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{2}+c^{-1}\mathcal{D}_{2}\] \[\leqslant c(\tilde{c}_{1}^{-1}(\|\eta\|_{\underline{2},1}^{2}+\|u_ {t}\|_{1}^{2})+\tilde{c}_{1}^{-3}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2} +\tilde{c}_{1}^{-5}\|\partial_{3}^{3}\eta\|_{0}^{2}\] \[\quad+|\eta_{3}|_{2}^{2}+\|u\|_{\underline{2},1}^{2}+\tilde{c}_{1 }(\|\eta\|_{2}^{2}+\sqrt{\mathcal{E}}\mathcal{D})), \tag{3.113}\]
where we have defined that
\[\mathcal{E}_{2}:= \mathcal{E}_{1}+\tilde{c}_{1}^{-2}\overline{\|\partial_{3}^{2} \eta\|}_{\underline{1},0}^{2}+\tilde{c}_{1}^{-4}\overline{\|\partial_{3}^{3} \eta\|}_{0}^{2}\]
\[+\sum_{|\alpha|=2}\left(\int\bar{\rho}\partial_{\mathrm{h}}^{\alpha}\eta\cdot\partial_{\mathrm{h}}^{\alpha}u\mathrm{d}y+\tilde{c}_{1}\|\sqrt{\bar{\rho}}\partial_{\mathrm{h}}^{\alpha}u\|_{0}^{2}+\tilde{c}_{1}\mathcal{I}(\partial_{\mathrm{h}}^{\alpha}\eta)+\mathcal{U}(\partial_{\mathrm{h}}^{\alpha}\eta)/2\right)\]
and
\[\mathcal{D}_{2}:=\tilde{c}_{1}^{-5}\|\eta\|_{3}^{2}+\tilde{c}_{1}\|u\|_{\underline{2},1}^{2}+\mathcal{D}_{1}.\]
Thanks to (3.77), (3.110) and Korn's inequality, we see that \(\mathcal{E}_{2}\) satisfies, for any sufficiently large \(\tilde{c}_{1}\),
\[c^{-1}\tilde{c}_{1}^{-4}\mathcal{E}\leqslant\mathcal{E}_{2}\leqslant c\tilde{c }_{1}\mathcal{E} \tag{3.114}\]
and
\[\|\eta\|_{\underline{2},1}^{2}+\tilde{c}_{1}^{-2}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\tilde{c}_{1}^{-4}\|\partial_{3}^{3}\eta\|_{0}^{2}\leqslant\mathcal{E}_{2}. \tag{3.115}\]
In addition, by (3.78), the Newton-Leibniz formula and Hölder's inequality, it is obvious that
\[\|u_{t}\|_{1}^{2}+\tilde{c}_{1}\|u\|_{\underline{2},1}^{2}+\tilde{c}_{1}^{-5}\mathcal{D}\lesssim\mathcal{D}_{2} \tag{3.116}\]
and
\[|\eta_{3}|_{2}^{2}=\int_{h_{+}}^{0}\partial_{3}|\eta_{3}|_{2}^{2}{\rm d}y_{3} \lesssim\|\eta_{3}\|_{2,0}\|\eta_{3}\|_{2,1}. \tag{3.117}\]
Thus, exploiting (3.116), (3.117) and Young's inequality, it follows from (3.113) that, for any sufficiently large \(\tilde{c}_{1}\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{2}+c^{-1}(\|u_{t}\|_{1}^{2}+\tilde{c}_{1}\|u\|_{\underline{2},1}^{2}+\tilde{c}_{1}^{-5}\mathcal{D})\] \[\leqslant c(\tilde{c}_{1}^{-1}\|\eta\|_{\underline{2},1}^{2}+\tilde{c}_{1}^{-3}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\tilde{c}_{1}^{-5}\|\partial_{3}^{3}\eta\|_{0}^{2}+\tilde{c}_{1}(\|\eta\|_{2}^{2}+\sqrt{\mathcal{E}}\mathcal{D})).\]
Making use of (3.115) and the interpolation inequality (A.5), we further derive from the above estimate that there exist a positive constant \(c\) and a sufficiently large \(\tilde{c}_{1}\) such that, for any sufficiently small \(\delta\),
\[\frac{{\rm d}}{{\rm d}t}\mathcal{E}_{2}+c^{-1}\tilde{c}_{1}^{-5}\mathcal{D} \leqslant c_{7}\mathcal{E}_{2}+c(\|\eta\|_{0}^{2}+\sqrt{\mathcal{E}}\mathcal{D}). \tag{3.118}\]
Now let \(\tilde{\mathcal{E}}=\mathcal{E}_{2}\), where \(\mathcal{E}_{2}\) satisfies (3.114). Finally we immediately obtain (3.100) with \(\vartheta=0\) by integrating (3.118) over \((0,t)\), and then using (3.74) and (3.114) with \(t=0\).
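Spelled out, integrating (3.118) over \((0,t)\) gives
\[\mathcal{E}_{2}(t)+c^{-1}\tilde{c}_{1}^{-5}\int_{0}^{t}\mathcal{D}\,\mathrm{d}\tau\leqslant\mathcal{E}_{2}(0)+c_{7}\int_{0}^{t}\mathcal{E}_{2}\,\mathrm{d}\tau+c\int_{0}^{t}\big(\|\eta\|_{0}^{2}+\sqrt{\mathcal{E}}\mathcal{D}\big)\mathrm{d}\tau,\]
where \(\mathcal{E}_{2}(0)\lesssim\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}\) by (3.74) and (3.114), and the \(\sqrt{\mathcal{E}}\mathcal{D}\) contribution is absorbed into the dissipation for sufficiently small \(\delta\).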
(2) Case of \(\vartheta>0\).
It follows from (3.51), (3.52), (3.67), (3.68), (3.102) and (3.112) that, for any sufficiently small \(\delta\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{3}+c^{-1}\mathcal{D}_{3}\leqslant c(\tilde{c}_{1}^{-1}(\|(\eta,u)\|_{2,1}^{2}+\|u_{t}\|_{1}^{2})+\tilde{c}_{1}^{-3}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\tilde{c}_{1}^{-5}\|\partial_{3}^{3}\eta\|_{0}^{2}\] \[+\tilde{c}_{1}^{3}(|\eta|_{2}^{2}+\|(\eta,u)\|_{2}^{2}+\sqrt{\mathcal{E}}\mathcal{D}))\] \[+c(|\partial_{\mathrm{h}}^{2}\eta_{3}|_{1/2}+\tilde{c}_{1}|\partial_{\mathrm{h}}^{2}u_{3}|_{1/2})\left(\sqrt{\|\eta\|_{\underline{2},1}\|\eta\|_{1,2}}\right.\] \[\left.\quad+\|\eta\|_{\underline{2},1}+\|u\|_{\underline{2},1}+\sqrt{\|u\|_{\underline{2},1}\|u\|_{1,2}}\right), \tag{3.119}\]
where we have defined that
\[\mathcal{E}_{3}:= \mathcal{E}_{1}+\tilde{c}_{1}^{-2}\overline{\|\partial_{3}^{2}\eta\|}_{\underline{1},0}^{2}+\tilde{c}_{1}^{-4}\overline{\|\partial_{3}^{3}\eta\|}_{0}^{2}\] \[+\tilde{c}_{1}^{2}\sum_{|\alpha|=1}\int_{\mathbb{R}^{2}}\bigg{(}\int\bar{\rho}\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathbf{h}}^{\alpha}\eta\cdot\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathbf{h}}^{\alpha}u\mathrm{d}y+\mathcal{U}(\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathbf{h}}^{\alpha}\eta)/2\] \[+\tilde{c}_{1}(\|\sqrt{\bar{\rho}}\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathbf{h}}^{\alpha}u\|_{0}^{2}+\mathcal{I}(\mathfrak{D}_{\mathbf{h}}^{3/2}\partial_{\mathbf{h}}^{\alpha}\eta))\bigg{)}\mathrm{d}\mathbf{h}+\sum_{|\alpha|=2}\bigg{(}\int\bar{\rho}\partial_{\mathbf{h}}^{\alpha}\eta\cdot\partial_{\mathbf{h}}^{\alpha}u\mathrm{d}y+\mathcal{U}(\partial_{\mathbf{h}}^{\alpha}\eta)/2\bigg{)}\] \[+\tilde{c}_{1}\left(\|\sqrt{\bar{\rho}}u\|_{2,0}^{2}+\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho}}\partial_{\mathbf{h}}^{2}\left(\frac{g\eta_{3}}{P^{\prime}(\bar{\rho})}-\mathrm{div}\eta\right)\right\|_{0}^{2}\right)\]
and
\[\mathcal{D}_{3}:=\|\mathrm{div}\eta\|_{2,0}^{2}+\vartheta\tilde{c}_{1}^{2}|\nabla_{\mathbf{h}}^{2}\eta_{3}|_{1/2}^{2}+\tilde{c}_{1}^{-5}\|\eta\|_{3}^{2}+\tilde{c}_{1}\|u\|_{2,1}^{2}+\mathcal{D}_{1}.\]
Thanks to (3.77), (3.110), (A.16), trace estimate and Korn's inequality, we see that \(\mathcal{E}_{3}\) satisfies, for any sufficiently large \(\tilde{c}_{1}\),
\[c^{-1}\tilde{c}_{1}^{-4}\mathcal{E}\leqslant\mathcal{E}_{3}\leqslant c\tilde{ c}_{1}^{3}\mathcal{E} \tag{3.120}\]
and
\[\|\eta\|_{\underline{2},1}^{2}+\tilde{c}_{1}^{-2}\|\partial_{3}^{2}\eta\|_{\underline{1},0}^{2}+\tilde{c}_{1}^{-4}\|\partial_{3}^{3}\eta\|_{0}^{2}\leqslant\mathcal{E}_{3}. \tag{3.121}\]
In addition, by (3.78),
\[\tilde{c}_{1}^{2}|\nabla_{\mathbf{h}}^{2}\eta_{3}|_{1/2}^{2}+\tilde{c}_{1}\|u\|_{\underline{2},1}^{2}+\|u_{t}\|_{1}^{2}+\tilde{c}_{1}^{-5}\mathcal{D}\lesssim\mathcal{D}_{3}. \tag{3.122}\]
Using (3.79), (3.122) and Young's inequality, we further get from (3.119) that, for any sufficiently large \(\tilde{c}_{1}\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{3}+c^{-1}(\tilde{c}_{1}^{2}|\nabla_{\mathbf{h}}^{2}\eta_{3}|_{1/2}^{2}+\tilde{c}_{1}\|u\|_{\underline{2},1}^{2}+\|u_{t}\|_{1}^{2}+\tilde{c}_{1}^{-5}\mathcal{D})\] \[\leqslant c(\tilde{c}_{1}^{-1}\|\eta\|_{2,1}^{2}+\tilde{c}_{1}^{-3}\|\eta\|_{1,2}^{2}+\tilde{c}_{1}^{-5}\|\partial_{3}^{3}\eta\|_{0}^{2}\] \[\qquad+\tilde{c}_{1}^{4}(|\eta|_{2}^{2}+\|(\eta,u)\|_{2}^{2}+|\nabla_{\mathbf{h}}^{2}u_{3}|_{1/2}^{2}+\sqrt{\mathcal{E}}\mathcal{D})).\]
Making use of (3.117), (3.121), the interpolation inequality and Young's inequality, we derive from the above inequality that there exists a positive constant \(c\) such that, for any sufficiently small \(\delta\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{3}+c^{-1}\mathcal{D}\leqslant c( \|(\eta,u)\|_{0}^{2}+|\nabla_{\mathbf{h}}^{2}u_{3}|_{1/2}^{2}+\sqrt{\mathcal{ E}}\mathcal{D})+c_{7}\mathcal{E}_{3}/2, \tag{3.123}\]
which, together with (3.120) with \(t=0\), yields
\[\mathcal{E}_{3}+c^{-1}\int_{0}^{t}\mathcal{D}\mathrm{d}\tau\leqslant c\mathcal{ E}|_{t=0}+\int_{0}^{t}(c(\|(\eta,u)\|_{0}^{2}+|\nabla_{\mathbf{h}}^{2}u_{3}|_{1/2}^{ 2}+\sqrt{\mathcal{E}}\mathcal{D})+c_{7}\mathcal{E}_{3}/2)\mathrm{d}\tau.\]
Exploiting the interpolation inequality and (3.74), we further deduce from (3.89) and the above inequality that, for a suitably large constant \(\tilde{c}_{2}\) and any sufficiently small \(\delta\),
\[\vartheta\|\eta^{2}\|_{2,1}^{2}+\tilde{c}_{2}\mathcal{E}_{3}+c^{-1}\int_{0}^{t }(\mathcal{D}+\vartheta|\nabla_{\mathbf{h}}^{2}u_{3}|_{1/2}^{2})\mathrm{d}\tau\]
\[\leqslant c\left(\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}+|\mathcal{H}^{0}|_{1}^{2} +\int_{0}^{t}\|(\eta,u)\|_{0}^{2}\mathrm{d}\tau\right)+\int_{0}^{t}(c_{7} \vartheta\|\eta^{2}(t)\|_{2,1}^{2}+c_{7}\tilde{c}_{2}\mathcal{E}_{3})\mathrm{d}\tau. \tag{3.124}\]
In addition, we can deduce that
\[|\mathcal{H}|_{1}\lesssim|d|_{3}, \tag{3.125}\] \[|d(t)|_{3}^{2}\lesssim|d^{0}|_{3}^{2}+\int_{0}^{t}\mathcal{D}(\tau )\mathrm{d}\tau\text{ for any }t\in[0,T] \tag{3.126}\]
(also cf. the derivations of (B.46) and (B.47) in Section B.3). We omit the proofs of the above two estimates here, since they require auxiliary properties of the inverse transform in Lagrangian coordinates, which will be introduced in Section B.1. Consequently, letting \(\tilde{\mathcal{E}}=\tilde{c}_{2}\mathcal{E}_{3}\), we get (3.100) from (3.124)-(3.126). This completes the proof of Lemma 3.12.
The local existence of strong solutions to the equations of stratified compressible viscous fluids has been established; see [20] for example. Similarly to [20], we can use a Faedo-Galerkin approximation scheme for the linearized problem and an iterative method to obtain a local-in-time existence result for a unique strong solution \((\eta,u)\) to the RT problem (1.31), and a unique global-in-time strong solution \((\eta^{2},u^{2})\) to the linear problem (3.86) for given \(\mathcal{M}\), \(\mathbf{N}_{1}^{4}\), \(\mathbf{N}_{2}^{4}\), \(\widetilde{\mathbf{N}}^{3}\), \(R_{P}\) and \(\mathcal{N}^{u}\) defined by \((\eta,u)\). Moreover, the strong solutions also satisfy the _a priori_ estimates in Lemma 3.12. Since the proof is standard in the well-posedness theory of PDEs, we omit the details here and only state the well-posedness results, in which the solutions enjoy the Gronwall-type energy inequality.
**Proposition 3.1**.:
1. _Let_ \((\eta^{0},u^{0})\in H^{3,1/2}_{0,*}\times H^{2}_{0}\) _and_ \(\zeta^{0}:=\eta^{0}+y\)_. There is a sufficiently small_ \(\delta_{2}\in(0,\iota)\)_, such that if_ \((\eta^{0},u^{0})\) _satisfies_ \[\sqrt{\|\eta^{0}\|_{3}^{2}+\|u^{0}\|_{2}^{2}}\leqslant\delta_{2},\text{ the (necessary) compatibility condition \eqref{eq:
**Remark 3.1**.: By the smallness condition \(\|\eta^{0}\|_{3}\leqslant\delta_{2}<\iota\) and Lemma A.10, \(\zeta_{\mathrm{h}}^{0}(y_{\mathrm{h}},0):\mathbb{R}^{2}\to\mathbb{R}^{2}\) is a homeomorphism, and thus \(d^{0}:=\zeta_{3}((\zeta_{\mathrm{h}})^{-1}(x_{\mathrm{h}}),0)\) in (3.128) makes sense; moreover, \(d^{0}\in(h_{-},h_{+})\). In addition, we do not state the necessary compatibility conditions in the second assertion of Proposition 3.1 as in the first assertion, since it is easy to observe that the initial data in the linear problem (3.86) automatically satisfy the necessary compatibility condition, i.e. (3.87).
## 4 Construction of initial data for the nonlinear problem
Let \((\tilde{\eta}^{0},\tilde{u}^{0})\) come from (2.62) in Proposition 2.2 and \(c_{7}\) be the constant after (2.61). By the definition of \(\tilde{\eta}^{0}\) in (2.62) and (2.65)-(2.68), there exist positive constants \(c_{4}\), \(c_{5}\) and \(c_{8}\) such that \((\tilde{\eta}^{0},\tilde{u}^{0})\) enjoys the estimates, for any \(n\geqslant c_{4}\),
\[\sum_{\beta_{1}+\beta_{2}+\beta_{3}\leqslant 4,\ 1\leqslant\beta_{1}+\beta_{2}}\|\partial_{1}^{\beta_{1}}\partial_{2}^{\beta_{2}}\chi_{n,n}\partial_{3}^{\beta_{3}}(\tilde{\eta}^{0},\tilde{u}^{0})\|_{0}^{2}+n^{-1}\|\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})\|_{4}^{2}\leqslant c_{5}^{2}n, \tag{4.1}\] \[2c_{8}n\leqslant\min_{\omega=\tilde{\eta}^{0},\tilde{u}^{0}}\{\|\chi_{n,n}\omega_{\mathrm{h}}\|_{0},\|\chi_{n,n}\omega_{3}\|_{0},|\chi_{n,n}\omega_{3}|_{0}\}. \tag{4.2}\]
From now on, for any given \(\delta>0\), the integer \(n\) always satisfies
\[n\geqslant\max\{c_{4},\delta^{-2}\}. \tag{4.3}\]
Let
\[(\eta^{\mathrm{a}},u^{\mathrm{a}})=\delta e^{c_{7}t}\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})/n\in(H_{0}^{1}\cap H^{4})^{2}. \tag{4.4}\]
Then, by (4.1) and the form (4.4), the approximate solution \((\eta^{\mathrm{a}},u^{\mathrm{a}})\) satisfies, for any \(0\leqslant j\leqslant 2\) and any \(t\geqslant 0\), the estimate
\[\|\partial_{t}^{j}(\eta^{\mathrm{a}},u^{\mathrm{a}})\|_{3}=c_{7}^{j}\delta e ^{c_{7}t}\|\chi_{n,n}(\tilde{\eta}^{0},\tilde{u}^{0})\|_{3}/n\leqslant c_{5}c _{7}^{j}\delta e^{c_{7}t}, \tag{4.5}\]
and the following relations
\[\begin{cases}\eta_{t}^{\mathrm{a}}=u^{\mathrm{a}}&\text{in }\Omega,\\ \bar{\rho}u_{t}^{\mathrm{a}}=g\bar{\rho}(\mathrm{div}\eta^{\mathrm{a}}\mathbf{ e}^{3}-\nabla\eta_{3}^{\mathrm{a}})+\mathrm{div}\Upsilon(\eta^{\mathrm{a}},u^{ \mathrm{a}})+\delta e^{c_{7}t}\mathbf{R}^{1}(\tilde{\eta}^{0},\tilde{u}^{0})/n& \text{in }\Omega,\\ \llbracket u^{\mathrm{a}}\rrbracket=\llbracket\eta^{\mathrm{a}}\rrbracket=0,\ \llbracket \Upsilon(\eta^{\mathrm{a}},u^{\mathrm{a}})\mathbf{e}^{3}\rrbracket+\vartheta \Delta_{\mathrm{h}}\eta_{3}^{\mathrm{a}}\mathbf{e}^{3}=\delta e^{c_{7}t} \mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})/n&\text{on }\Sigma,\\ (\eta^{\mathrm{a}},u^{\mathrm{a}})=0&\text{on }\partial\Omega,\\ (\eta^{\mathrm{a}},u^{\mathrm{a}})|_{t=0}=\delta\chi_{n,n}(\tilde{\eta}^{0}, \tilde{u}^{0})/n&\text{in }\Omega,\end{cases} \tag{4.6}\]
where we have defined that
\[\mathbf{R}^{1}(\tilde{\eta}^{0},\tilde{u}^{0}):= g\bar{\rho}(\tilde{\eta}^{0}\cdot\nabla\chi_{n,n}\mathbf{e}^{3}-\tilde{\eta}_{3}^{0}\nabla\chi_{n,n})+\mathrm{div}(P^{\prime}(\bar{\rho})\bar{\rho}\tilde{\eta}^{0}\cdot\nabla\chi_{n,n}\mathbb{I}\] \[+\mu(\nabla\chi_{n,n}(\tilde{u}^{0})^{\top}+\tilde{u}^{0}(\nabla\chi_{n,n})^{\top})+(\varsigma-2\mu/3)\,\tilde{u}^{0}\cdot\nabla\chi_{n,n}\mathbb{I}),\] \[\mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0}):= -\llbracket\mu(\nabla\chi_{n,n}(\tilde{u}^{0})^{\top}+\tilde{u}^{0}(\nabla\chi_{n,n})^{\top})+(\varsigma-2\mu/3)\,\tilde{u}^{0}\cdot\nabla\chi_{n,n}\mathbb{I}\rrbracket\] \[-\vartheta(2\nabla_{\mathrm{h}}\chi_{n,n}\cdot\nabla_{\mathrm{h}}\tilde{\eta}_{3}^{0}+\Delta_{\mathrm{h}}\chi_{n,n}\tilde{\eta}_{3}^{0})\mathbf{e}^{3}.\]
Thanks to (4.1) and trace estimate, we easily estimate that
\[n^{-1/2}(\|\mathbf{R}^{1}(\tilde{\eta}^{0},\tilde{u}^{0})\|_{1}+| \mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})|_{1/2})\] \[\lesssim n^{-1/2}(\|(\tilde{\eta}^{0}\cdot\nabla\chi_{n,n},\tilde {\eta}_{3}^{0}\nabla\chi_{n,n},\nabla_{\mathrm{h}}\chi_{n,n}\cdot\nabla_{ \mathrm{h}}\tilde{\eta}_{3}^{0},\Delta_{\mathrm{h}}\chi_{n,n}\tilde{\eta}_{3}^{0}) \|_{1}\]
\[\begin{cases}\mu\Delta u^{\mathrm{r}}+(\varsigma+\mu/3)\nabla\mathrm{div}u^{\mathrm{r}}=0&\text{in $\Omega$},\\ \llbracket u^{\mathrm{r}}\rrbracket=0&\text{on $\Sigma$},\\ \llbracket\mathbb{S}(u^{\mathrm{r}})\mathbf{e}^{3}\rrbracket=\delta^{-2}\mathbf{F}^{4}(\eta^{\delta}_{0},u^{\delta}_{0})-\delta^{-1}\mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})/n&\text{on $\Sigma$},\\ u^{\mathrm{r}}=0&\text{on $\partial\Omega$},\end{cases} \tag{4.11}\]
if \((\eta^{\delta}_{0},u^{\delta}_{0})\) given by (4.8) satisfies the compatibility jump condition (4.9) with sufficiently small \(\delta\).
To look for such \(u^{\mathrm{r}}\) satisfying (4.11), we consider the following stratified Lamé problem for given \(w\in H^{2}\):
\[\begin{cases}\mu\Delta u+(\varsigma+\mu/3)\nabla\mathrm{div}u=0&\text{in $\Omega$},\\ \llbracket u\rrbracket=0,\\ \llbracket\mathbb{S}(u)\mathbf{e}^{3}\rrbracket=\delta^{-2}\mathbf{F}^{4}(\delta n^{-1}\chi_{n,n}\tilde{\eta}^{0},\delta n^{-1}\chi_{n,n}\tilde{u}^{0}+\delta^{2}w)-\delta^{-1}\mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})/n&\text{on $\Sigma$},\\ u=0&\text{on $\partial\Omega$}.\end{cases} \tag{4.12}\]
In view of the theory of the stratified Lamé problem in Lemma A.12, there exists a solution \(u\) to (4.12); moreover,
\[\|u\|_{2}\lesssim|(\delta^{-2}\mathbf{F}^{4}(\delta n^{-1}\chi_{n,n}\tilde{\eta}^{ 0},\delta n^{-1}\chi_{n,n}\tilde{u}^{0}+\delta^{2}w),\delta^{-1}\mathbf{R}^{2}( \tilde{\eta}^{0},\tilde{u}^{0})/n)|_{1/2}. \tag{4.13}\]
Following the arguments of (3.29) and (3.30), we have, for sufficiently small \(\delta\),
\[|\mathbf{F}^{4}(\delta n^{-1}\chi_{n,n}\tilde{\eta}^{0},\delta n^ {-1}\chi_{n,n}\tilde{u}^{0}+\delta^{2}w)|_{1/2}\] \[\lesssim\delta^{2}\|n^{-1}\chi_{n,n}\tilde{\eta}^{0}\|_{3}(\|n^{ -1}\chi_{n,n}\tilde{\eta}^{0}\|_{3}+\|n^{-1}\chi_{n,n}\tilde{u}^{0}+\delta w\| _{2})\] \[\lesssim\delta^{2}(1+\delta\|w\|_{2}), \tag{4.14}\]
where we have used (4.1) in the last inequality.
By (4.3), we see that
\[n^{-1/2}\leqslant\delta. \tag{4.15}\]
Thanks to (4.7), (4.14) and (4.15), we can get from (4.13) that
\[\|u\|_{2}\leqslant c_{6}(1+\delta\|w\|_{2})/2 \tag{4.16}\]
for some constant \(c_{6}\). Therefore, one can construct an approximate function sequence \(\{u_{\mathrm{r}}^{m}\}_{m=1}^{\infty}\), such that, for any \(m\geqslant 2\),
\[\begin{cases}\mu\Delta u_{\mathrm{r}}^{m}+(\varsigma+\mu/3)\nabla\mathrm{div}u_{\mathrm{r}}^{m}=0&\text{in }\Omega,\\ \llbracket u_{\mathrm{r}}^{m}\rrbracket=0,\\ \llbracket\mathbb{S}(u_{\mathrm{r}}^{m})\mathbf{e}^{3}\rrbracket=\delta^{-2}\mathbf{F}^{4}(\delta n^{-1}\chi_{n,n}\tilde{\eta}^{0},\delta n^{-1}\chi_{n,n}\tilde{u}^{0}+\delta^{2}u_{\mathrm{r}}^{m-1})-\delta^{-1}\mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})/n&\text{on }\Sigma,\\ u_{\mathrm{r}}^{m}=0&\text{on }\partial\Omega\end{cases} \tag{4.17}\]
and \(\|u_{\mathrm{r}}^{1}\|_{2}\leqslant c_{6}\). Moreover, by (4.16), one has
\[\|u_{\mathrm{r}}^{m}\|_{2}\leqslant c_{6}(1+\delta\|u_{\mathrm{r}}^{m-1}\|_{ 2})/2\]
for any \(m\geqslant 2\), which implies that
\[\|u_{\mathrm{r}}^{m}\|_{2}\leqslant c_{6} \tag{4.18}\]
for any \(n\) satisfying (4.3), and for any sufficiently small \(\delta\leqslant 1/c_{6}\).
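The uniform bound (4.18) follows by induction on \(m\): granted \(\|u_{\mathrm{r}}^{m-1}\|_{2}\leqslant c_{6}\), the recursive estimate above gives
\[\|u_{\mathrm{r}}^{m}\|_{2}\leqslant\frac{c_{6}}{2}\big(1+\delta\|u_{\mathrm{r}}^{m-1}\|_{2}\big)\leqslant\frac{c_{6}}{2}(1+\delta c_{6})\leqslant\frac{c_{6}}{2}(1+1)=c_{6},\]
using \(\delta\leqslant 1/c_{6}\), while the base case \(\|u_{\mathrm{r}}^{1}\|_{2}\leqslant c_{6}\) holds by construction.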
Next we further show that \(u_{\mathrm{r}}^{m}\) is a Cauchy sequence in \(H^{2}\). We define that \(u_{\mathrm{r}}^{m,\mathrm{d}}:=u_{\mathrm{r}}^{m}-u_{\mathrm{r}}^{m-1}\),
\[\mathbb{D}_{\mathrm{h},0}^{m,\delta}:= \delta^{2}(\llbracket(\mathbb{S}(u_{\mathrm{r}}^{m,\mathrm{d}})\mathbf{e}^{3})\cdot\tilde{\mathbf{n}}_{0}^{\delta}\,\mathbf{n}_{0}^{\delta}+(\mathbb{S}(u_{\mathrm{r}}^{m,\mathrm{d}})\mathbf{e}^{3})\cdot\mathbf{e}^{3}\,\tilde{\mathbf{n}}_{0}^{\delta}\rrbracket\] \[-\Pi_{\mathbf{n}_{0}^{\delta}}\llbracket\mathbb{S}(u_{\mathrm{r}}^{m,\mathrm{d}})(J_{0}^{\delta}\mathcal{A}_{0}^{\delta}\mathbf{e}^{3}-\mathbf{e}^{3})+\mathbb{S}_{\tilde{\mathcal{A}}_{0}^{\delta}}(u_{\mathrm{r}}^{m,\mathrm{d}})J_{0}^{\delta}\mathcal{A}_{0}^{\delta}\mathbf{e}^{3}\rrbracket)_{\mathrm{h}}\]
and
\[\mathbb{D}_{3,0}^{m,\delta}:=-\delta^{2}(\mathbb{S}_{\tilde{\mathcal{A}}_{0}^{ \delta}}(u_{\mathrm{r}}^{m,\mathrm{d}})\tilde{\mathbf{n}}_{0}^{\delta}\cdot \mathbf{n}_{0}^{\delta}+(\mathbb{S}(u_{\mathrm{r}}^{m,\mathrm{d}})\tilde{ \mathbf{n}}_{0}^{\delta})\cdot\mathbf{n}_{0}^{\delta}+(\mathbb{S}(u_{\mathrm{r }}^{m,\mathrm{d}})\mathbf{e}^{3})\cdot\tilde{\mathbf{n}}_{0}^{\delta}),\]
where \(J_{0}^{\delta}\), \(\mathcal{A}_{0}^{\delta}\), \(\tilde{\mathcal{A}}_{0}^{\delta}\), \(\mathbf{n}_{0}^{\delta}\) resp. \(\tilde{\mathbf{n}}_{0}^{\delta}\) are defined as \(J\), \(\mathcal{A}\), \(\tilde{\mathcal{A}}\), \(\mathbf{n}\) resp. \(\tilde{\mathbf{n}}\) with \(\delta n^{-1}\chi_{n,n}\tilde{\eta}^{0}\) in place of \(\eta\), and \(\mathbb{S}_{\tilde{\mathcal{A}}_{0}^{\delta}}(u_{\mathrm{r}}^{m,\mathrm{d}})\) is defined as \(\mathbb{S}_{\mathcal{A}}(u)\) in (1.27) with \(\tilde{\mathcal{A}}_{0}^{\delta}\) resp. \(u_{\mathrm{r}}^{m,\mathrm{d}}\) in place of \(\mathcal{A}\) resp. \(u\).
Noting that
\[\begin{cases}\mu\Delta u_{\mathrm{r}}^{m+1,\mathrm{d}}+(\varsigma+\mu/3)\nabla \mathrm{div}u_{\mathrm{r}}^{m+1,\mathrm{d}}=0&\text{in }\Omega,\\ \llbracket u_{\mathrm{r}}^{m+1,\mathrm{d}}\rrbracket=0,\ \llbracket\mathbb{S}(u_{ \mathrm{r}}^{m+1,\mathrm{d}})\mathbf{e}^{3}\rrbracket=\delta^{-2}\mathbb{D}_{0} ^{m,\delta}&\text{on }\Sigma,\\ u_{\mathrm{r}}^{m+1,\mathrm{d}}=0&\text{on }\partial\Omega,\end{cases}\]
thus we have
\[\|u_{\rm r}^{m+1,{\rm d}}\|_{2}\lesssim\delta^{-2}|\mathbb{D}_{0}^{m,\delta}|_{1/ 2}, \tag{4.19}\]
where \(\mathbb{D}_{0}^{m,\delta}=((\mathbb{D}_{{\rm h},0}^{m,\delta})^{\top},\mathbb{D }_{3,0}^{m,\delta})\).
In addition, similarly to (4.14), it is easy to estimate that, for sufficiently small \(\delta\),
\[|\mathbb{D}_{0}^{m,\delta}|_{1/2}\lesssim\delta^{3}\|u_{\rm r}^{m,{\rm d}}\|_{ 2}.\]
Putting the above estimate into (4.19) yields
\[\|u_{\rm r}^{m+1,{\rm d}}\|_{2}\lesssim\delta\|u_{\rm r}^{m,{\rm d}}\|_{2},\]
which shows that \(\{u_{\mathrm{r}}^{m}\}_{m=1}^{\infty}\) is a Cauchy sequence in \(H^{2}\), provided \(\delta\) is sufficiently small. Consequently, we can use a compactness argument to obtain a limit function \(u^{\mathrm{r}}\), which solves (4.11) by (4.17). Moreover, \(u^{\mathrm{r}}\) satisfies (4.10) thanks to (4.18) and the strong convergence of \(\{u_{\mathrm{r}}^{m}\}_{m=1}^{\infty}\).
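Quantitatively, writing \(C\) for the implicit constant in the contraction estimate, iteration yields
\[\|u_{\mathrm{r}}^{m+1}-u_{\mathrm{r}}^{m}\|_{2}\leqslant(C\delta)^{m-1}\|u_{\mathrm{r}}^{2}-u_{\mathrm{r}}^{1}\|_{2},\]
so for \(C\delta<1\) the differences are summable (geometric series) and \(\{u_{\mathrm{r}}^{m}\}_{m=1}^{\infty}\) converges in \(H^{2}\).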
## 5 Error estimates
This section is devoted to the derivation of error estimates between the solutions of the (nonlinear) RT problem (1.31) and the solutions of the linearized RT problem (1.48). To start with, let us introduce the estimate:
\[|\phi|_{3}\leqslant|\phi|_{7/2}\leqslant\tilde{c}_{3}(\|\chi\|_{4})\|\chi\|_{ 4}\ \text{for any}\ \chi\in H^{4,1/2}_{0,*}\ \text{satisfying}\ \|\chi\|_{4}^{2}\leqslant 1, \tag{5.1}\]
where \(\phi(x_{\rm h}):=\tilde{\chi}_{3}((\tilde{\chi}_{\rm h})^{-1}(x_{\rm h}),0)\), \(\tilde{\chi}:=\chi+y\), and _the positive constant \(\tilde{c}_{3}(\|\chi\|_{4})\) is increasing with respect to \(\|\chi\|_{4}\)_. Please refer to (B.27) in Section B.2.3 for a derivation of (5.1).
Then we define
\[\tilde{c}_{4}:=c_{5}+c_{6}+\sqrt{\vartheta}\tilde{c}_{3}(c_{5}+c_{6})+1\ \text{and}\ c_{3}:=\min\{\delta_{1}/\tilde{c}_{4},\delta_{2}/(2\tilde{c}_{4}),\delta_{3}\}\leqslant 1. \tag{5.2}\]
From now on, we always assume \(\delta\) satisfies
\[\delta\in(0,c_{3}]. \tag{5.3}\]
Making use of (4.1), (4.8), (4.10), (5.1), Lemma A.10 and Minkowski's inequality in discrete form, we have
\[\sqrt{\|\eta_{0}^{\delta}\|_{3}^{2}+\|u_{0}^{\delta}\|_{2}^{2}+\vartheta|d_{0}^{\delta}|_{3}^{2}}\leqslant\tilde{c}_{4}\delta<\delta_{2}\in(0,\iota)\]
and
\[(\eta_{0}^{\delta},u_{0}^{\delta})\in H^{3,1/2}_{0,*}\times(H^{1}_{0}\cap H^{2 }),\]
where \((\eta_{0}^{\delta},u_{0}^{\delta})\) is constructed by Proposition 4.1 with the condition \(\delta\leqslant\delta_{3}\), and \(d_{0}^{\delta}(x_{\rm h}):={\bf e}^{3}\cdot\zeta_{0}^{\delta}((\zeta_{\rm h} ^{0})^{-1}(x_{\rm h}),0)\) with \(\zeta_{0}^{\delta}:=\eta_{0}^{\delta}+y\). Hence, in view of Proposition 3.1, there exists a unique local (nonlinear) solution \((\eta,u)\in C^{0}([0,T_{\rm loc}),H^{3,1/2}_{0,*}\times H^{2})\) to the RT problem emanating from the initial data \((\eta_{0}^{\delta},u_{0}^{\delta})\); moreover
\[\lim_{t\to T_{\rm loc}^{-}}(\|\eta(t)\|_{3}^{2}+\|u(t)\|_{2}^{2})>\delta_{2}^{2}. \tag{5.4}\]
Now we estimate the error between the solution \((\eta,u)\) and the solution \((\eta^{\rm a},u^{\rm a})\) provided by (4.4). To this purpose, we define an error function \((\eta^{\rm d},u^{\rm d}):=(\eta,u)-(\eta^{\rm a},u^{\rm a})\). Then, we can establish the following estimate of the error function.
**Proposition 5.1**.: _Let \(\delta\) and \(n\) satisfy (4.3) and (5.3). For any given positive constant \(\beta\), we assume that, for any \(t\in[0,T]\),_
\[\sqrt{\|\eta\|_{3}^{2}+\|u\|_{2}^{2}+\|u_{t}\|_{0}^{2}+\|u\|_{L^{2}((0,t),H^{3})}^{2}+\|u_{\tau}\|_{L^{2}((0,t),H^{1})}^{2}}\leqslant\beta\delta e^{c_{7}t}, \tag{5.5}\] \[\delta e^{c_{7}t}\leqslant 1\text{ and }\|\eta(t)\|_{3}\leqslant\delta_{1}, \tag{5.6}\]
_then there exists a constant \(c\) such that for any \(t\in[0,T]\),_
\[\|(\eta^{\mathrm{d}},u^{\mathrm{d}})\|_{1}+\|u_{t}^{\mathrm{d}}\|_{0}+|(\eta^{\mathrm{d}},u^{\mathrm{d}})|_{1/2}\leqslant c\sqrt{\delta^{3}e^{3c_{7}t}}. \tag{5.7}\]
**Remark 5.1**.: Here and in the proof of Proposition 5.1, the generic constants \(c\) not only may depend on the domain \(\Omega\), and other known physical parameters/functions in the RT problem (1.31), _but also increasingly depend on the given constant \(\beta\)._
Proof.: We will break up the proof into four steps.
(1) _First we assert that, for any \(w\in H^{1}_{0}\),_
\[g\llbracket\bar{\rho}\rrbracket|w|_{0}^{2}-\mathcal{I}(w)\leqslant\Lambda^{2}\|\sqrt{\bar{\rho}}w\|_{0}^{2}+\Lambda\mathcal{U}(w), \tag{5.8}\]
_where \(\Lambda\) is defined by (2.60). Next we verify the above assertion._
Let \(\hat{f}\) be the horizontal Fourier transform of \(f\); see (1.57) for the definition. By Fubini's and Parseval's theorems, we have that \(\hat{f}\in L^{2}\) and
\[\int|f(y)|^{2}\mathrm{d}y=\int_{\mathbb{R}^{2}}\int_{h_{-}}^{h_{+}}|\hat{f}(\xi,y_{3})|^{2}\mathrm{d}y_{3}\mathrm{d}\xi. \tag{5.9}\]
Applying the horizontal Fourier transform to \(w\), we have
\[\widehat{\mathrm{div}w}=\partial_{3}\hat{w}_{3}-\mathrm{i}\xi_{1}\hat{w}_{1}-\mathrm{i}\xi_{2}\hat{w}_{2} \tag{5.10}\]
and
\[\widehat{\mathbb{D}w}=\begin{pmatrix}-2\mathrm{i}\xi_{1}\hat{w}_{1}&-\mathrm{i}(\xi_{1}\hat{w}_{2}+\xi_{2}\hat{w}_{1})&\partial_{3}\hat{w}_{1}-\mathrm{i}\xi_{1}\hat{w}_{3}\\ -\mathrm{i}(\xi_{1}\hat{w}_{2}+\xi_{2}\hat{w}_{1})&-2\mathrm{i}\xi_{2}\hat{w}_{2}&\partial_{3}\hat{w}_{2}-\mathrm{i}\xi_{2}\hat{w}_{3}\\ \partial_{3}\hat{w}_{1}-\mathrm{i}\xi_{1}\hat{w}_{3}&\partial_{3}\hat{w}_{2}-\mathrm{i}\xi_{2}\hat{w}_{3}&2\partial_{3}\hat{w}_{3}\end{pmatrix}.\]
In particular, we can further compute that
\[|\widehat{\mathbb{D}w}|^{2}/2= 2(|\xi_{1}\hat{w}_{1}|^{2}+|\xi_{2}\hat{w}_{2}|^{2}+|\partial_{3}\hat{w}_{3}|^{2})+|\mathrm{i}(\xi_{1}\hat{w}_{2}+\xi_{2}\hat{w}_{1})|^{2}\] \[+|\partial_{3}\hat{w}_{1}-\mathrm{i}\xi_{1}\hat{w}_{3}|^{2}+|\partial_{3}\hat{w}_{2}-\mathrm{i}\xi_{2}\hat{w}_{3}|^{2}\] \[= |\mathrm{i}(\xi_{1}\hat{w}_{2}-\xi_{2}\hat{w}_{1})|^{2}+|\partial_{3}\hat{w}_{1}-\mathrm{i}\xi_{1}\hat{w}_{3}|^{2}+|\partial_{3}\hat{w}_{2}-\mathrm{i}\xi_{2}\hat{w}_{3}|^{2}\] \[+|\partial_{3}\hat{w}_{3}+\mathrm{i}\xi_{1}\hat{w}_{1}+\mathrm{i}\xi_{2}\hat{w}_{2}|^{2}+|\partial_{3}\hat{w}_{3}-\mathrm{i}\xi_{1}\hat{w}_{1}-\mathrm{i}\xi_{2}\hat{w}_{2}|^{2}. \tag{5.11}\]
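The second equality above can be verified with the parallelogram identity \(|x+y|^{2}+|x-y|^{2}=2(|x|^{2}+|y|^{2})\), valid for complex entries as well, applied with \(x=\partial_{3}\hat{w}_{3}\) and \(y=\mathrm{i}(\xi_{1}\hat{w}_{1}+\xi_{2}\hat{w}_{2})\):
\[|\partial_{3}\hat{w}_{3}+\mathrm{i}(\xi_{1}\hat{w}_{1}+\xi_{2}\hat{w}_{2})|^{2}+|\partial_{3}\hat{w}_{3}-\mathrm{i}(\xi_{1}\hat{w}_{1}+\xi_{2}\hat{w}_{2})|^{2}=2|\partial_{3}\hat{w}_{3}|^{2}+2|\xi_{1}\hat{w}_{1}+\xi_{2}\hat{w}_{2}|^{2};\]
the cross term \(4\xi_{1}\xi_{2}\Re(\hat{w}_{1}\overline{\hat{w}_{2}})\) generated on the right is exactly compensated by replacing \(|\xi_{1}\hat{w}_{2}+\xi_{2}\hat{w}_{1}|^{2}\) with \(|\xi_{1}\hat{w}_{2}-\xi_{2}\hat{w}_{1}|^{2}\).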
Exploiting (5.9) and (5.10), we have
\[\|\sqrt{\bar{\rho}}w\|_{0}^{2}= \int_{\mathbb{R}^{2}}\int_{h_{-}}^{h_{+}}\bar{\rho}(|\hat{w}_{1}|^{2}+|\hat{w}_{2}|^{2}+|\hat{w}_{3}|^{2})\mathrm{d}y_{3}\mathrm{d}\xi\]
and
\[g\llbracket\bar{\rho}\rrbracket|w|_{0}^{2}-\mathcal{I}(w)= (g\llbracket\bar{\rho}\rrbracket-\vartheta\left|\xi\right|^{2}) \left|\hat{w}_{3}\right|_{0}^{2}-\left\|\sqrt{P^{\prime}(\bar{\rho})\bar{\rho} }\left|\frac{g}{P^{\prime}(\bar{\rho})}\hat{w}_{3}+\mathrm{i}\xi_{1}\hat{w}_{1} +\mathrm{i}\xi_{2}\hat{w}_{2}-\partial_{3}\hat{w}_{3}\right|\right\|_{0}^{2}\]
\[E(\Re\varphi,\Re\theta,\Re\psi;\xi)\leqslant \lambda^{2}(\xi)\mathcal{J}(\Re\varphi,\Re\theta,\Re\psi;\xi)+ \lambda(\xi)D(\Re\varphi,\Re\theta,\Re\psi;\xi)\] \[\leqslant \Lambda^{2}\mathcal{J}(\Re\varphi,\Re\theta,\Re\psi;\xi)+\Lambda D (\Re\varphi,\Re\theta,\Re\psi;\xi). \tag{5.14}\]
If \(|\xi|\geqslant\xi_{\mathrm{c}}\) for \(\vartheta>0\), the expression for \(E\) is non-positive, so the above inequality holds automatically. In addition, for \(\xi=0\), we can compute that
\[E(\Re\varphi,\Re\theta,\Re\psi;\xi)=-\int_{h_{-}}^{h_{+}}P^{\prime}(\bar{\rho})\bar{\rho}|\Re\psi^{\prime}|^{2}\mathrm{d}y_{3}.\]
This means (5.14) also holds for \(\xi=0\). Therefore we get (5.14) for all \(\xi\in\mathbb{R}^{2}\).
Obviously we also have
\[E(\Im\varphi,\Im\theta,\Im\psi;\xi)\leqslant\Lambda^{2}\mathcal{J}(\Im\varphi,\Im\theta,\Im\psi;\xi)+\Lambda D(\Im\varphi,\Im\theta,\Im\psi;\xi)\text{ for any }\xi\in\mathbb{R}^{2}. \tag{5.15}\]
By (5.13)-(5.14) for any \(\xi\in\mathbb{R}^{2}\) and (5.15), we arrive at
\[E(\varphi,\theta,\psi;\xi)\leqslant\Lambda^{2}\mathcal{J}(\varphi,\theta, \psi;\xi)+\Lambda D(\varphi,\theta,\psi;\xi).\]
Integrating each side of the above inequality over all \(\xi\in\mathbb{R}^{2}\), and using (5.12) then proves (5.8).
(2) _Secondly we will derive the energy inequality satisfied by \((\eta^{\mathrm{d}},u^{\mathrm{d}})\)_.
It is easy to see from the RT problem and the linear problem (4.6) that \((\eta^{\rm d},u^{\rm d})\) satisfies the following error problem:
\[\begin{cases}\eta^{\mathrm{d}}_{t}=u^{\mathrm{d}}&\text{in }\Omega,\\ \bar{\rho}J^{-1}u^{\mathrm{d}}_{t}-\mathrm{div}_{\mathcal{A}}(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta^{\mathrm{d}}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\mathrm{d}}))=g\bar{\rho}(\mathrm{div}\eta^{\mathrm{d}}\mathbf{e}^{3}-\nabla\eta^{\mathrm{d}}_{3})+\mathbf{R}^{3}&\text{in }\Omega,\\ \llbracket u^{\mathrm{d}}\rrbracket=0,\ \llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta^{\mathrm{d}}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\mathrm{d}})\rrbracket J\mathcal{A}\mathbf{e}^{3}+\vartheta\Delta_{\mathrm{h}}\eta^{\mathrm{d}}_{3}\mathbf{e}^{3}=\mathbf{R}^{4}&\text{on }\Sigma,\\ (\eta^{\mathrm{d}},u^{\mathrm{d}})=0&\text{on }\partial\Omega,\\ (\eta^{\mathrm{d}},u^{\mathrm{d}})|_{t=0}=(0,\delta^{2}u_{\mathrm{r}})&\text{in }\Omega,\end{cases} \tag{5.16}\]
where we have defined that
\[\mathbf{R}^{3}:= \mathbf{N}^{1}-\bar{\rho}(J^{-1}-1)u^{\mathrm{a}}_{t}+\mathrm{div}_{\tilde{\mathcal{A}}}(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta^{\mathrm{a}}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\mathrm{a}}))\] \[+\mathrm{div}\mathbb{S}_{\tilde{\mathcal{A}}}(u^{\mathrm{a}})-\delta e^{c_{7}t}\mathbf{R}^{1}(\tilde{\eta}^{0},\tilde{u}^{0})/n \tag{5.17}\]
and
\[\mathbf{R}^{4}:= \mathbf{N}^{2}-\llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta^{\mathrm{a}}\mathbb{I}+\mathbb{S}(u^{\mathrm{a}})\rrbracket(J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3})-\llbracket\mathbb{S}_{\tilde{\mathcal{A}}}(u^{\mathrm{a}})\rrbracket J\mathcal{A}\mathbf{e}^{3}-\delta e^{c_{7}t}\mathbf{R}^{2}(\tilde{\eta}^{0},\tilde{u}^{0})/n. \tag{5.18}\]
Similarly to (3.40), we apply \(\partial_{t}\) to (5.16)\({}_{2}\)-(5.16)\({}_{4}\) to get
\[\begin{cases}\bar{\rho}J^{-1}u^{\mathrm{d}}_{tt}-\mathrm{div}_{\mathcal{A}}(P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u^{\mathrm{d}}\mathbb{I}+\partial_{t}\mathbb{S}_{\mathcal{A}}(u^{\mathrm{d}}))=g\bar{\rho}(\mathrm{div}u^{\mathrm{d}}\mathbf{e}^{3}-\nabla u^{\mathrm{d}}_{3})+\mathbf{R}^{5}&\text{in }\Omega,\\ \llbracket P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u^{\mathrm{d}}\mathbb{I}+\partial_{t}\mathbb{S}_{\mathcal{A}}(u^{\mathrm{d}})\rrbracket J\mathcal{A}\mathbf{e}^{3}+\vartheta\Delta_{\mathrm{h}}u^{\mathrm{d}}_{3}\mathbf{e}^{3}=\mathbf{R}^{6},\ \llbracket u^{\mathrm{d}}_{t}\rrbracket=0&\text{on }\Sigma,\\ u^{\mathrm{d}}_{t}=0&\text{on }\partial\Omega,\end{cases} \tag{5.19}\]
where we have defined that
\[{\bf R}^{5}:= {\bf R}^{3}_{t}+{\rm div}_{\mathcal{A}}(P^{\prime}(\bar{\rho}) \bar{\rho}{\rm div}u^{\rm d}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\rm d}))- \bar{\rho}J^{-1}_{t}u^{\rm d}_{t}, \tag{5.20}\] \[{\bf R}^{6}:= {\bf R}^{4}_{t}-\llbracket P^{\prime}(\bar{\rho})\bar{\rho}{\rm div }\eta^{\rm d}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\rm d})\rrbracket\partial_{ t}(J\mathcal{A}{\bf e}^{3}). \tag{5.21}\]
Similarly to (3.76), we can deduce from (5.19) that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\sqrt{\bar{\rho}}u^{\mathrm{d}}_{t}\|_{0}^{2}+\mathcal{I}(u^{\mathrm{d}})-g\llbracket\bar{\rho}\rrbracket|u^{\mathrm{d}}|_{0}^{2}\right)+\mathcal{U}_{\sqrt{J}\mathcal{A}}(u^{\mathrm{d}}_{t})=R_{1}, \tag{5.22}\]
where
\[R_{1}:= \int(J\mathbf{R}^{5}+g\bar{\rho}(J-1)(\mathrm{div}u^{\mathrm{d}}\mathbf{e}^{3}-\nabla u^{\mathrm{d}}_{3}))\cdot u^{\mathrm{d}}_{t}+((1-J)P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u^{\mathrm{d}}\mathbb{I}\] \[-J\mathbb{S}_{\mathcal{A}_{t}}(u^{\mathrm{d}})):\nabla_{\mathcal{A}}u^{\mathrm{d}}_{t}-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}u^{\mathrm{d}}\mathrm{div}_{\tilde{\mathcal{A}}}u^{\mathrm{d}}_{t})\mathrm{d}y-\int_{\Sigma}\mathbf{R}^{6}\cdot u^{\mathrm{d}}_{t}\mathrm{d}y_{\mathrm{h}}.\]
Recalling \(u^{\rm d}(0)=\delta^{2}u^{\rm r}\), we integrate (5.22) in time from \(0\) to \(t\) to get
\[\|\sqrt{\bar{\rho}}u^{\rm d}_{t}\|_{0}^{2}+2\int_{0}^{t}\mathcal{U}(u^{\rm d}_ {t}){\rm d}\tau=g\llbracket\bar{\rho}\rrbracket|u^{\rm d}|_{0}^{2}-\mathcal{I} (u^{\rm d}(t))+\sum_{i=1}^{3}\mathcal{R}_{i}, \tag{5.23}\]
where
\[\mathcal{R}_{1}:=2\int_{0}^{t}R_{1}(\tau){\rm d}\tau,\ \mathcal{R}_{2}:=\|\sqrt{\bar{\rho}}u^{\rm d}_{t}\|_{0}^{2} \bigg{|}_{t=0}+\mathcal{I}(\delta^{2}u^{\rm r})-g\llbracket\bar{\rho}\rrbracket |\delta^{2}u^{\rm r}|_{0}^{2},\]
\[\text{and }\mathcal{R}_{3}=-2\int_{0}^{t}\int(\mathbb{S}(u^{\rm d}):\nabla_{J \mathcal{A}-\mathbb{I}}u^{\rm d}+\mathbb{S}_{J\mathcal{A}-\mathbb{I}}(u^{\rm d}) :\nabla_{\mathcal{A}}u^{\rm d})\mathrm{d}y\mathrm{d}\tau.\]
(3) _Thirdly, we estimate the three remainder terms \(\mathcal{R}_{1}\)-\(\mathcal{R}_{3}\)._
Noting the assumption \(\|\eta\|_{3}\leqslant\delta_{1}\) in (5.6), which ensures that all estimates in Lemmas 3.1-3.6 hold for \((\eta,u)\) here, we have
\[1\lesssim\inf_{y\in\Omega}\{J,J^{-1}\}\ \text{(see (3.3))}, \tag{5.24}\] \[\|(J^{-1}-1,\tilde{\mathcal{A}})\|_{2}\lesssim\|\eta\|_{3}\ \text{(see (3.4) and (3.12))},\] (5.25) \[\|\mathbf{N}^{1}\|_{0}\lesssim\|\eta\|_{3}\|\eta\|_{2}\ \text{(referring to (3.28))},\] (5.26) \[\begin{cases}\|(J-1,J\mathcal{A}\mathbf{e}^{3}-\mathbf{e}^{3})\|_{2}\lesssim\|\eta\|_{3}\ \text{(see (3.4), (3.15) and (3.34))},\\ \|\partial_{t}(J^{-1},\mathcal{A},J\mathcal{A}\mathbf{e}^{3})\|_{1}\lesssim\|u\|_{2}\ \text{(see (3.6), (3.13) and (3.16))},\\ \|\mathbf{N}^{1}_{t}\|_{0}\lesssim\|\eta\|_{3}\|u\|_{2},\ |\mathbf{N}^{2}_{t}|_{1/2}\lesssim\|\eta\|_{3}\|u\|_{3}\ \text{(see (3.46) and (3.48))}.\end{cases} \tag{5.27}\]
Thanks to (4.5), (5.5) and (5.25), it is easy to see that
\[\mathcal{R}_{3}\lesssim\int_{0}^{t}\delta^{3}e^{3c_{7}\tau}\mathrm{d}\tau\lesssim\delta^{3}e^{3c_{7}t}. \tag{5.28}\]
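For instance, recalling that (4.5) and (5.5) yield \(\|\eta\|_{3},\|u^{\rm d}\|_{1}\lesssim\delta e^{c_{7}\tau}\), while (5.25) and (5.27) give \(\|J\mathcal{A}-\mathbb{I}\|_{2}\lesssim\|\eta\|_{3}\), the first term in \(\mathcal{R}_{3}\) can be bounded by

\[\left|\int\mathbb{S}(u^{\rm d}):\nabla_{J\mathcal{A}-\mathbb{I}}u^{\rm d}\,\mathrm{d}y\right|\lesssim\|J\mathcal{A}-\mathbb{I}\|_{2}\|u^{\rm d}\|_{1}^{2}\lesssim\|\eta\|_{3}\|u^{\rm d}\|_{1}^{2}\lesssim\delta^{3}e^{3c_{7}\tau},\]

and the second term is handled in the same fashion.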
Applying \(\|\cdot\|_{0}\) to (5.16)\({}_{2}\) yields
\[\|\bar{\rho}J^{-1}u^{\rm d}_{t}\|_{0}=\|\text{div}_{\mathcal{A}}(P^{\prime}(\bar{\rho})\bar{\rho}\text{div}\eta^{\rm d}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\rm d}))+g\bar{\rho}(\text{div}\eta^{\rm d}\mathbf{e}^{3}-\nabla\eta^{\rm d}_{3})+\mathbf{R}^{3}\|_{0}.\]
Thus, using (5.24) and (5.25), we obtain
\[\|u^{\rm d}_{t}\|_{0}^{2}\lesssim\|(\eta^{\rm d},u^{\rm d})\|_{2}^{2}+\| \mathbf{R}^{3}\|_{0}^{2}. \tag{5.29}\]
In addition, making use of (4.5), (4.7), (5.5), (5.25) and (5.26), we can estimate that
\[\|\mathbf{R}^{3}\|_{0}= \|\mathbf{N}^{1}-\bar{\rho}(J^{-1}-1)u^{\rm a}_{t}+\text{div}_{\tilde{\mathcal{A}}}(P^{\prime}(\bar{\rho})\bar{\rho}\text{div}\eta^{\rm a}\mathbb{I}+\mathbb{S}_{\mathcal{A}}(u^{\rm a}))\] \[+\text{div}\mathbb{S}_{\tilde{\mathcal{A}}}(u^{\rm a})-\delta e^{c_{7}t}\mathbf{R}^{1}(\tilde{\eta}^{0},\tilde{u}^{0})/n\|_{0}\lesssim\delta^{2}e^{2\Lambda t}+\delta n^{-1/2}e^{c_{7}t}.\]
Thus, putting the above estimate into (5.29) and then taking \(t=0\), we have
\[\|u^{\rm d}_{t}\|_{0}^{2}\bigg{|}_{t=0}\lesssim\delta^{4}+\delta^{2}n^{-1}+\| \delta^{2}u^{\rm r}\|_{2}^{2}\lesssim\delta^{4},\]
where we have used (4.3) and (4.10). Similarly,
\[\mathcal{I}(\delta^{2}u^{\rm r})-g\llbracket\bar{\rho}\rrbracket|\delta^{2}u^{ \rm r}|_{0}^{2}\lesssim\delta^{4}.\]
Putting the above two estimates together yields
\[\mathcal{R}_{2}\lesssim\delta^{4}\lesssim\delta^{3}e^{3c_{7}t}. \tag{5.30}\]
Now we turn to the estimate of \(\mathcal{R}_{1}\). Recalling the definitions of \(\mathbf{N}^{1}\), \(\mathbf{N}^{2}\) in (1.38), \(\mathbf{R}^{3}\) in (5.17), \(\mathbf{R}^{4}\) in (5.18), \(\mathbf{R}^{5}\) in (5.20) and \(\mathbf{R}^{6}\) in (5.21), it is easy to see that
Each term in \(R_{1}(t)\) can then be controlled by means of (4.5), (4.7), (5.5) and (5.24)-(5.27); in particular, \(R_{1}(t)\lesssim\delta^{3}e^{3c_{7}t}\), and therefore \(\mathcal{R}_{1}\lesssim\delta^{3}e^{3c_{7}t}\). Consequently, combining this with (5.28) and (5.30), inserting the resulting bounds into (5.23), and exploiting the variational characterization of \(\Lambda\), we arrive at

\[\|\sqrt{\bar{\rho}}u_{t}^{\rm d}\|_{0}^{2}+2\int_{0}^{t}\mathcal{U}_{\mathcal{A}}(u_{t}^{\rm d}){\rm d}\tau\leqslant\Lambda^{2}\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}+\Lambda\mathcal{U}(u^{\rm d})+c\delta^{3}e^{3c_{7}t}. \tag{5.33}\]
We apply the Newton-Leibniz formula, the Cauchy-Schwarz inequality and the fact \(u^{\rm d}(0)=\delta^{2}u^{\rm r}\) to find that
\[\Lambda\mathcal{U}(u^{\rm d})= 2\Lambda\int_{0}^{t}((\varsigma-2\mu/3){\rm div}u^{\rm d}{\rm div}u_{\tau}^{\rm d}+\mu\mathbb{D}u^{\rm d}:\partial_{\tau}\mathbb{D}u^{\rm d}){\rm d}\tau\] \[+\Lambda\mathcal{U}(\delta^{2}u^{\rm r})\leqslant\Lambda^{2}\int_{0}^{t}\mathcal{U}(u^{\rm d}){\rm d}\tau+\int_{0}^{t}\mathcal{U}(u_{\tau}^{\rm d}){\rm d}\tau+c\delta^{3}e^{3c_{7}t}.\]
Combining (5.33) with the above estimate, one gets
\[\frac{1}{\Lambda}\|\sqrt{\bar{\rho}}u_{t}^{\rm d}\|_{0}^{2}+\mathcal{U}(u^{\rm d})\leqslant\Lambda\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}+2\Lambda\int_{0}^{t}\mathcal{U}(u^{\rm d}){\rm d}\tau+c\delta^{3}e^{3c_{7}t}. \tag{5.34}\]
In addition,
\[\frac{{\rm d}}{{\rm d}t}\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}=2\int\bar{\rho}u^{\rm d}\cdot u_{t}^{\rm d}{\rm d}y\leqslant\frac{1}{\Lambda}\|\sqrt{\bar{\rho}}u_{t}^{\rm d}\|_{0}^{2}+\Lambda\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}.\]
If we put the previous two estimates together, we get the differential inequality
\[\frac{{\rm d}}{{\rm d}t}\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}+\mathcal{U}(u^{\rm d})\leqslant 2\Lambda\left(\|\sqrt{\bar{\rho}}u^{\rm d}(t)\|_{0}^{2}+\int_{0}^{t}\mathcal{U}(u^{\rm d}){\rm d}\tau\right)+c\delta^{3}e^{3c_{7}t}. \tag{5.35}\]
Recalling \(u^{\rm d}(0)=\delta^{2}u^{\rm r}\) and \(c_{7}=\lambda(\xi^{1})\in(2\Lambda/3,\Lambda)\) in Proposition 2.2, one can apply Gronwall's inequality to (5.35) to conclude that
\[\|\sqrt{\bar{\rho}}u^{\rm d}\|_{0}^{2}+\int_{0}^{t}\mathcal{U}(u^{\rm d}){\rm d}\tau\lesssim e^{2\Lambda t}\left(\int_{0}^{t}\delta^{3}e^{(3c_{7}-2\Lambda)\tau}{\rm d}\tau+\|\sqrt{\bar{\rho}}\delta^{2}u^{\rm r}\|_{0}^{2}\right)\lesssim\delta^{3}e^{3c_{7}t}. \tag{5.36}\]
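For the reader's convenience, the Gronwall step can be spelled out: setting \(y(t):=\|\sqrt{\bar{\rho}}u^{\rm d}(t)\|_{0}^{2}+\int_{0}^{t}\mathcal{U}(u^{\rm d}){\rm d}\tau\), the inequality (5.35) reads \(y^{\prime}(t)\leqslant 2\Lambda y(t)+c\delta^{3}e^{3c_{7}t}\), whence

\[\big(e^{-2\Lambda t}y(t)\big)^{\prime}\leqslant c\delta^{3}e^{(3c_{7}-2\Lambda)t}\quad\text{and thus}\quad y(t)\leqslant e^{2\Lambda t}\Big(\|\sqrt{\bar{\rho}}\delta^{2}u^{\rm r}\|_{0}^{2}+c\delta^{3}\int_{0}^{t}e^{(3c_{7}-2\Lambda)\tau}{\rm d}\tau\Big);\]

since \(3c_{7}-2\Lambda>0\) and \(\|\sqrt{\bar{\rho}}\delta^{2}u^{\rm r}\|_{0}^{2}\lesssim\delta^{4}\leqslant\delta^{3}\), the right-hand side is indeed bounded by \(c\delta^{3}e^{3c_{7}t}\), as claimed in (5.36).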
Moreover, we can further deduce from (5.33), (5.34), (5.36) and Korn's inequality that
\[\|u^{\rm d}\|_{1}^{2}+\|u_{t}^{\rm d}\|_{0}^{2}+\|u_{\tau}^{\rm d}\|_{L^{2}((0, t),H^{1})}^{2}\lesssim\delta^{3}e^{3c_{7}t}. \tag{5.37}\]
Finally it follows from (5.16)\({}_{1}\), (5.16)\({}_{5}\) and (5.37) that
\[\|\eta^{\rm d}\|_{1}\lesssim\int_{0}^{t}\|u^{\rm d}\|_{1}{\rm d}\tau\lesssim\sqrt{\delta^{3}e^{3c_{7}t}}. \tag{5.38}\]
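Here the final step uses the elementary computation

\[\int_{0}^{t}\sqrt{\delta^{3}e^{3c_{7}\tau}}\,{\rm d}\tau=\delta^{3/2}\int_{0}^{t}e^{3c_{7}\tau/2}\,{\rm d}\tau=\frac{2\delta^{3/2}}{3c_{7}}\big(e^{3c_{7}t/2}-1\big)\lesssim\sqrt{\delta^{3}e^{3c_{7}t}}.\]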
Summing up the two estimates (5.37) and (5.38), and then using the trace estimate, we obtain the desired estimate (5.7).
## 6 Existence of escape times
Now we are in a position to show Theorem 1.1. Let \(\delta\) satisfy (5.3), \((\eta,u)\) be the strong solution constructed in Section 5 with an existence time \([0,T_{\rm loc})\), and \((\eta^{\rm d},u^{\rm d})\) be defined in Section 5. Let \(\epsilon_{0}\in(0,1]\) be a constant, which will be defined in (6.7). We further define
\[T^{\delta}:=c_{7}^{-1}{\rm ln}(\epsilon_{0}/\delta)>0,\ {\rm i.e.},\ \delta e^{c_{7}T^{\delta}}=\epsilon_{0}, \tag{6.1}\] \[T^{*}:=\sup\left\{t\in(0,T_{\rm loc})\,\bigg{|}\ \sqrt{\|\eta(\tau)\|_{3}^{2}+\|u(\tau)\|_{2}^{2}+\vartheta|d(\tau)|_{3}^{2}}\leqslant 2\tilde{c}_{4}c_{3}\ {\rm for\ any}\ \tau\in[0,t)\right\},\] \[T^{**}:=\sup\left\{t\in(0,T_{\rm loc})\,\big{|}\ \|(\eta,u)(\tau)\|_{0}\leqslant 2\tilde{c}_{4}\delta e^{c_{7}\tau}\ {\rm for\ any}\ \tau\in[0,t)\right\}.\]
Noting that, by (5.2) and (5.3),
\[\sqrt{\|\eta(t)\|_{3}^{2}+\|u(t)\|_{2}^{2}+\vartheta|d(t)|_{3}^{2}}\bigg{|}_{t=0}= \sqrt{\|\eta_{0}^{\delta}\|_{3}^{2}+\|u_{0}^{\delta}\|_{2}^{2}+\vartheta|d_{0}^ {\delta}|_{3}^{2}}\leqslant\tilde{c}_{4}\delta<2\tilde{c}_{4}c_{3}\leqslant \delta_{2}, \tag{6.2}\]
thus \(T^{*}>0\) by (5.4). Similarly, we also have \(T^{**}>0\). Moreover, we can easily see that
\[\sqrt{\|\eta(T^{*})\|_{3}^{2}+\|u(T^{*})\|_{2}^{2}+\vartheta|d(T^{*})|_{3}^{2}}=2\tilde{c}_{4}c_{3}\leqslant\delta_{2}\text{ if }T^{*}<\infty, \tag{6.3}\] \[\|(\eta,u)(T^{**})\|_{0}=2\tilde{c}_{4}\delta e^{c_{7}T^{**}}\quad\text{ if }T^{**}<T_{\text{loc}}. \tag{6.4}\]
We denote \(T^{\text{min}}:=\min\{T^{\delta},T^{*},T^{**}\}\), thus \(T^{\text{min}}<T_{\text{loc}}\) by (6.3) and Proposition 3.1. Noting that, by (5.2),
\[\sup_{0\leqslant t\leqslant T^{\text{min}}}\sqrt{\|\eta(t)\|_{3}^{2}+\|u(t)\| _{2}^{2}+\vartheta|d(t)|_{3}^{2}}\leqslant\tilde{c}_{4}\delta\leqslant\tilde {c}_{4}c_{3}\leqslant\delta_{1},\]
thus, in view of both the second and third assertions in Proposition 3.1, we deduce from the estimate (3.100) satisfied by \((\eta,u,d,\vartheta\eta^{2})\) that for all \(t\in[0,T^{\text{min}}]\),
\[\tilde{\mathcal{E}}(t)+\vartheta(\|\eta^{2}(t)\|_{2,1}^{2}+c^{-1}|d(t)|_{3}^{2})+c^{-1}\int_{0}^{t}\mathcal{D}(\tau)\mathrm{d}\tau\] \[\leqslant c\delta^{2}e^{2c_{7}t}+c_{7}\int_{0}^{t}(\tilde{\mathcal{E}}(\tau)+\vartheta\|\eta^{2}(\tau)\|_{2,1}^{2})\mathrm{d}\tau. \tag{6.5}\]
Applying Gronwall's inequality to the above estimate, we deduce that
\[\tilde{\mathcal{E}}(t)+\vartheta(|d(t)|_{3}^{2}+\|\eta^{2}(t)\|_{2,1}^{2})\lesssim\delta^{2}\left(e^{2c_{7}t}+\int_{0}^{t}e^{c_{7}(t+\tau)}\mathrm{d}\tau\right)\lesssim\delta^{2}e^{2c_{7}t}.\]
Putting the above estimate into (6.5), we get
\[\tilde{\mathcal{E}}(t)+\vartheta(|d(t)|_{3}^{2}+\|\eta^{2}(t)\|_{2,1}^{2})+\int_{0}^{t}\mathcal{D}(\tau)\mathrm{d}\tau\lesssim\delta^{2}e^{2c_{7}t},\]
which, together with (3.101) satisfied by \((\eta,u)\), yields that
\[\sqrt{\|\eta\|_{3}^{2}+\|u\|_{2}^{2}+\vartheta|d|_{3}^{2}+\|u_{t}\|_{0}^{2}+\|u\|_{L^{2}((0,t),H^{3})}^{2}+\|u_{\tau}\|_{L^{2}((0,t),H^{1})}^{2}}\] \[\leqslant\tilde{c}_{5}\delta e^{c_{7}t}\leqslant\tilde{c}_{5}\epsilon_{0}\text{ on }[0,T^{\text{min}}]. \tag{6.6}\]
Let \(\beta=\tilde{c}_{5}\) in (5.5) and then we denote the constant \(c\) in (5.7) in Proposition 5.1 by \(\tilde{c}_{6}\). Now we define that
\[\epsilon_{0}:=\min\left\{\frac{c_{3}}{\tilde{c}_{5}},\frac{\tilde{c}_{4}^{2}}{4\tilde{c}_{6}^{2}},\frac{c_{8}^{2}}{\tilde{c}_{6}^{2}},1\right\}>0. \tag{6.7}\]
Noting that \((\eta,u)\) satisfies (6.6), where \(\delta e^{c_{7}t}\leqslant\epsilon_{0}\leqslant 1\) and \(\epsilon_{0}\leqslant c_{3}/\tilde{c}_{5}\leqslant\delta_{1}/\tilde{c}_{5}\) (i.e., \(\tilde{c}_{5}\epsilon_{0}\leqslant\delta_{1}\leqslant 1\)) by the definitions of \(c_{3}\) and \(\epsilon_{0}\), we immediately see from Proposition 5.1 with \(\beta=\tilde{c}_{5}\) and \(c=\tilde{c}_{6}\) that
\[\|(\eta^{\mathrm{d}},u^{\mathrm{d}})\|_{1}+|(\eta^{\mathrm{d}},u^{\mathrm{d}})|_{0}\leqslant\tilde{c}_{6}\sqrt{\delta^{3}e^{3c_{7}t}}\text{ on }[0,T^{\text{min}}], \tag{6.8}\]
where \((\eta^{\mathrm{d}},u^{\mathrm{d}})=(\eta,u)-(\eta^{\mathrm{a}},u^{\mathrm{a}})\). Consequently, we further have the relation
\[T^{\delta}=T^{\text{min}}, \tag{6.9}\]
which can be shown by contradiction as follows:
If \(T^{\min}=T^{*}\), then \(T^{*}<\infty\). Recalling \(\epsilon_{0}\leqslant c_{3}/\tilde{c}_{5}\) and \(\tilde{c}_{4}\geqslant 1\), we can deduce from (6.6) that
\[\sqrt{\|\eta(T^{*})\|_{3}^{2}+\|u(T^{*})\|_{2}^{2}+\vartheta|d(T^{*})|_{3}^{2}}\leqslant\tilde{c}_{5}\epsilon_{0}\leqslant\tilde{c}_{4}c_{3}<2\tilde{c}_{4}c_{3},\]
which contradicts (6.3). Hence, \(T^{\min}\neq T^{*}\).
If \(T^{\min}=T^{**}\), then \(T^{**}<T^{*}\leqslant T_{\rm loc}\). Noting that \(\sqrt{\epsilon_{0}}\leqslant\tilde{c}_{4}/2\tilde{c}_{6}\) and \(c_{5}\leqslant\tilde{c}_{4}\), we obtain from (4.5), (5.7), (6.1) and the fact \(\epsilon_{0}\leqslant\tilde{c}_{4}^{2}/4\tilde{c}_{6}^{2}\) that
\[\|(\eta,u)(T^{**})\|_{0} \leqslant\|(\eta^{\rm a},u^{\rm a})(T^{**})\|_{0}+\|(\eta^{\rm d},u^{\rm d})(T^{**})\|_{0}\leqslant\delta e^{c_{7}T^{**}}(\tilde{c}_{4}+\tilde{c}_{6}\sqrt{\delta e^{c_{7}T^{**}}})\] \[\leqslant\delta e^{c_{7}T^{**}}(\tilde{c}_{4}+\tilde{c}_{6}\sqrt{\epsilon_{0}})\leqslant 3\tilde{c}_{4}\delta e^{c_{7}T^{**}}/2<2\tilde{c}_{4}\delta e^{c_{7}T^{**}},\]
which contradicts (6.4). Hence, \(T^{\min}\neq T^{**}\). We immediately see that (6.9) holds.
Noting that \(\sqrt{\epsilon_{0}}\leqslant c_{8}/\tilde{c}_{6}\), making use of (4.2), (4.4), (6.1) and (6.8), we can deduce that
\[\|\omega_{3}(T^{\delta})\|_{0}\geqslant \|\omega_{3}^{\rm a}(T^{\delta})\|_{0}-\|\omega_{3}^{\rm d}(T^{\delta})\|_{0}\geqslant\delta e^{c_{7}T^{\delta}}(\|\chi_{n,n}\omega_{3}/n\|_{0}-\tilde{c}_{6}\sqrt{\delta e^{c_{7}T^{\delta}}})\] \[= (2c_{8}-\tilde{c}_{6}\sqrt{\epsilon_{0}})\epsilon_{0}\geqslant c_{8}\epsilon_{0},\]
where \(\omega=\tilde{\eta}^{0}\) or \(\tilde{u}^{0}\). Similarly, we also can verify that \(\|\omega_{\rm h}(T^{\delta})\|_{0}\), \(|\omega_{3}(T^{\delta})|_{0}\geqslant c_{8}\epsilon_{0}\). This completes the proof of Theorem 1.1 by taking \(\epsilon=c_{8}\epsilon_{0}\) and \(c_{k}=L_{k}\), where \(L_{k}\) is provided by (2.64) for \(k=1\) and \(2\).
## Appendix A Analysis tools
This Appendix is devoted to listing some mathematical analysis tools, which have been used in the previous sections. It should be remarked that in this appendix we still adopt the simplified mathematical notations in Section 1.3. For simplicity, we still use the notation \(a\lesssim b\) to mean that \(a\leqslant cb\) for some constant \(c>0\), where the positive constant \(c\) may depend on the domain and other given parameters in the lemmas below.
**Lemma A.1**.: _Embedding inequality (see [1, Theorem 7.58] and [2, Theorems 4.12]): Let \(D\subset\mathbb{R}^{3}\) be a domain satisfying the cone condition, then_
\[\|f\|_{L^{p}(D)}\lesssim\|f\|_{H^{1}(D)}\ \ \text{for}\ 2\leqslant p \leqslant 6,\] (A.1) \[\|f\|_{C^{0}(\overline{D})}=\|f\|_{L^{\infty}(D)}\lesssim\|f\|_{H^ {2}(D)},\] (A.2) \[\|\phi\|_{L^{4}(\mathbb{R}^{2})}\lesssim|\phi|_{1/2},\] (A.3) \[\|\phi\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim\|\phi\|_{W^{1,4}( \mathbb{R}^{2})}\lesssim|\phi|_{2},\] (A.4)
_where \(\overline{D}\) denotes the closure of \(D\)._
**Lemma A.2**.: _Interpolation inequality in \(H^{j}\) (see [2, 5.2 Theorem]): Let \(D\) be a domain in \(\mathbb{R}^{n}\) satisfying the cone condition, then for any \(0\leqslant j<i\), \(\varepsilon>0\),_
\[\|f\|_{H^{j}(D)}\lesssim\|f\|_{L^{2}(D)}^{1-\frac{j}{i}}\|f\|_{H^{i}(D)}^{\frac{j}{i}}\leqslant c(\varepsilon,j)\|f\|_{L^{2}(D)}+\varepsilon\|f\|_{H^{i}(D)},\] (A.5)
_where the constant \(c(\varepsilon,j)\) depends on the domain, \(j\) and \(\varepsilon\)._
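Indeed, the second inequality in (A.5) follows from the first via the weighted AM-GM inequality: for \(\theta:=j/i\in(0,1)\) and any \(\varepsilon>0\),

\[\|f\|_{L^{2}(D)}^{1-\theta}\|f\|_{H^{i}(D)}^{\theta}=\big(\varepsilon^{-\frac{\theta}{1-\theta}}\|f\|_{L^{2}(D)}\big)^{1-\theta}\big(\varepsilon\|f\|_{H^{i}(D)}\big)^{\theta}\leqslant(1-\theta)\varepsilon^{-\frac{\theta}{1-\theta}}\|f\|_{L^{2}(D)}+\theta\varepsilon\|f\|_{H^{i}(D)},\]

so that, noting \(\theta\varepsilon\leqslant\varepsilon\), one may take \(c(\varepsilon,j)=(1-\theta)\varepsilon^{-\theta/(1-\theta)}\).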
**Lemma A.3**.: _Product estimates: Let \(D\subset\mathbb{R}^{3}\) be a domain satisfying the cone condition. The functions \(f\), \(g\) are defined on \(D\), and \(\phi\), \(\varphi\) are defined on \(\mathbb{R}^{2}\)._
1. _Product estimates in_ \(H^{i}\)_:_ \[\|fg\|_{H^{i}(D)}\lesssim\begin{cases}\|f\|_{H^{1}(D)}\|g\|_{H^{1}(D)}&\text{for }i=0;\\ \|f\|_{H^{i}(D)}\|g\|_{H^{2}(D)}&\text{for }0\leqslant i\leqslant 2;\\ \|f\|_{H^{2}(D)}\|g\|_{H^{i}(D)}+\|f\|_{H^{i}(D)}\|g\|_{H^{2}(D)}&\text{for }i=3.\end{cases}\] (A.6)
2. _Product estimates in_ \(H^{s}(\mathbb{R}^{2})\)_:_ \[|\phi\varphi|_{1/2}\lesssim\|\phi\|_{W^{1,4}(\mathbb{R}^{2})}|\varphi|_{1/2} \lesssim|\phi|_{3/2}|\varphi|_{1/2}\] (A.7)
**Remark A.1**.: In particular, by (A.7), we further have
\[|\phi\varphi|_{j+1/2}\lesssim|\phi|_{j+1/2}|\varphi|_{3/2}+|\varphi|_{j+1/2}| \phi|_{3/2}\text{ for }j=1\text{ and }2.\] (A.8)
Proof.: The product estimate (A.6) can be shown by using Hölder's inequality and the embedding inequalities (A.1)-(A.2). The estimate (A.7) can be obtained by following the proof of [18, Lemma A.2] and using the estimates (A.3)-(A.4).
**Lemma A.4**.: _Poincare's inequality (see [18, Lemma A.10]): It holds that_
\[\|w\|_{0}\lesssim\|\partial_{3}w\|_{0}\text{ for any }w\in H^{1}_{0}.\] (A.9)
**Lemma A.5**.: _Korn's inequality:_
\[\|w\|_{1}\lesssim\|\mathbb{D}w-2\mathrm{div}w\mathbb{I}/3\|_{0}\text{ for any }w\in H^{1}_{0}.\] (A.10)
Proof.: It is well known that, see [9, Theorem 10.16],
\[\|\nabla w\|_{0}\lesssim\|\mathbb{D}w-2\mathrm{div}w\mathbb{I}/3\|_{0}\text{ for any }w\in W^{1,2}(\mathbb{R}^{3}).\] (A.11)
Now we define
\[\tilde{w}:=\begin{cases}0&\text{for }y\in\mathbb{R}^{3}\backslash\Omega;\\ w&\text{for }y\in\Omega.\end{cases}\]
By [2, Theorems 5.29], \(\tilde{w}\in W^{1,2}(\mathbb{R}^{3})\), and thus we can use (A.11) to derive that
\[\|\nabla w\|_{0}\lesssim\|\mathbb{D}w-2\mathrm{div}w\mathbb{I}/3\|_{0}.\]
Since \(w|_{\partial\Omega}=0\), we further derive (A.10) from Poincare's inequality (A.9) and the above estimate.
**Lemma A.6**.: _Trace estimate: It holds that, for given \(i\geqslant 0\),_
\[\|f|_{y_{3}=a}\|_{H^{i+1/2}(\mathbb{R}^{2})}\lesssim\|f\|_{\underline{1+i},0}^{1/2}\|f\|_{\underline{1+i},1}^{1/2}\lesssim\|f\|_{\underline{1+i},0}^{1/2}\|\partial_{3}f\|_{\underline{1+i},0}^{1/2}+\|f\|_{\underline{1+i},0}\] (A.12)
_for any \(f\in H^{1+i}\) and for any \(a\in(h_{-},h_{+})\)._
**Remark A.2**.: By the trace estimate, we further have
\[|f|_{i+1/2}\lesssim\|f\|_{i+1}\text{ for any }f\in H^{i+1}.\] (A.13)
Proof.: It suffices to consider the case \(i=0\). Let \(\mathbb{R}^{3}_{h_{-},+}=\mathbb{R}^{2}\times(h_{-},+\infty)\) and \(\mathbb{R}^{3}_{h_{+},-}=\mathbb{R}^{2}\times(-\infty,h_{+})\). It is well-known that, for any \(f\in H^{1}(\mathbb{R}^{3}_{h_{-},+})\), \(\varphi\in H^{1}(\mathbb{R}^{3}_{h_{+},-})\), \(a\in[h_{-},+\infty)\) and \(b\in(-\infty,h_{+}]\),
\[\|f|_{y_{3}=a}\|_{H^{1/2}(\mathbb{R}^{2})} \lesssim\sum_{|\alpha|\leqslant 1}\|\partial_{\mathbf{h}}^{\alpha}f\| _{L^{2}(\mathbb{R}^{3}_{h_{-},+})}^{1/2}\|f\|_{H^{1}(\mathbb{R}^{3}_{h_{-},+})}^ {1/2}\] (A.14)
and
\[\|\varphi|_{y_{3}=b}\|_{H^{1/2}(\mathbb{R}^{2})} \lesssim\sum_{|\alpha|\leqslant 1}\|\partial_{\mathbf{h}}^{\alpha} \varphi\|_{L^{2}(\mathbb{R}^{3}_{h_{+},-})}^{1/2}\|\varphi\|_{H^{1}(\mathbb{R }^{3}_{h_{+},-})}^{1/2},\] (A.15)
see Theorem 5.7 in [7, Section 1.5.3]. Thus using cut-off functions depending on \(y_{3}\) to cut off the extension function of \(f\), we easily obtain (A.12) with \(i=0\) from (A.14) and (A.15).
**Lemma A.7**.: _Estimates involving the fractional differential operators:_
\[\int_{\mathbb{R}^{2}}\|\mathfrak{D}^{3/2}_{\mathbf{h}}\varphi\|_{0}^{2}\mathrm{d}\mathbf{h}\lesssim\|\varphi\|_{1}^{2}\text{ for any }\varphi\in H^{1},\] (A.16) \[\int_{\mathbb{R}^{2}}|\mathfrak{D}^{s}_{\mathbf{h}}\phi|_{1/2}^{2}\mathrm{d}\mathbf{h}\lesssim|\phi|_{s+1/2}^{2}\text{ for any }\phi:=\phi(y_{1},y_{2},0)\in H^{s+1/2}(\mathbb{R}^{2}).\] (A.17)
Proof.: In view of Fubini's theorem and the trace estimate (A.12), we can obtain
\[\int_{\mathbb{R}^{2}}\|\mathfrak{D}^{3/2}_{\mathbf{h}}\varphi\|_ {0}^{2}\mathrm{d}\mathbf{h}=\int_{h_{-}}^{h_{+}}\int_{\mathbb{R}^{2}}\int_{ \mathbb{R}^{2}}|\mathfrak{D}^{3/2}_{\mathbf{h}}\varphi|_{0}^{2}\mathrm{d}y_{ \mathbf{h}}\mathrm{d}\mathbf{h}\mathrm{d}y_{3}\lesssim\int_{h_{-}}^{h_{+}}| \varphi|_{1/2}^{2}\mathrm{d}y_{3}\lesssim\|\varphi\|_{1}^{2},\]
which yields (A.16).
It is easy to see that
\[\int_{\mathbb{R}^{2}}\sin^{2}(\xi\cdot\mathbf{h}/2)|\mathbf{h}|^ {-3}\mathrm{d}\mathbf{h}= \int_{0}^{\infty}\int_{0}^{2\pi}\sin^{2}(|\xi|r\cos(\xi,\mathbf{ h})/2)|r|^{-2}\mathrm{d}\theta\mathrm{d}r\] \[\leqslant 2\pi\left(\int_{r<|\xi|^{-1}}r^{-2}\sin^{2}(|\xi|r/2)\mathrm{d}r +\int_{|\xi|^{-1}<r<1}r^{-2}\mathrm{d}r+\int_{1<r}r^{-2}\mathrm{d}r\right)\] \[\leqslant 2\pi|\xi|\left(1+\int_{0}^{1}r^{-2}\sin^{2}(r/2)\mathrm{d}r \right)\leqslant 4\pi|\xi|,\] (A.18)
where \(\cos(\xi,\mathbf{h})\) denotes the cosine of the angle between \(\xi\) and \(\mathbf{h}\). By the above estimate, Parseval's theorem for the Fourier transform on \(\mathbb{R}^{2}\) and the equivalent definition of the norm \(|\cdot|_{1/2}\) via the Fourier transform in [1, Theorem 7.63], we have
\[\int_{\mathbb{R}^{2}}|\mathfrak{D}^{s}_{\mathbf{h}}\phi|_{1/2}^{2}\mathrm{d}\mathbf{h}\lesssim \int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}(1+|\xi|^{2})^{s}|\widehat{\mathfrak{D}^{3/2}_{\mathbf{h}}\phi}|^{2}\mathrm{d}\xi\mathrm{d}\mathbf{h}\] \[\lesssim \int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}(1+|\xi|^{2})^{s}|e^{-\xi\cdot\mathbf{h}\ \mathrm{i}}-1|^{2}|\hat{\phi}|^{2}|\mathbf{h}|^{-3}\mathrm{d}\xi\mathrm{d}\mathbf{h}\] \[\lesssim \int_{\mathbb{R}^{2}}(1+|\xi|^{2})^{s}|\hat{\phi}|^{2}\int_{\mathbb{R}^{2}}\sin^{2}(\xi\cdot\mathbf{h}/2)|\mathbf{h}|^{-3}\mathrm{d}\mathbf{h}\mathrm{d}\xi\] \[\lesssim \int_{\mathbb{R}^{2}}(1+|\xi|^{2})^{s+1/2}|\hat{\phi}|^{2}\mathrm{d}\xi\lesssim|\phi|_{s+1/2}^{2},\] (A.19)
which yields (A.17).
**Lemma A.8**.: _Dual estimates: If \(\varphi\) and \(\psi\in H^{1/2}\), then_
\[\left|\int_{\mathbb{R}^{2}}\partial_{\mathrm{h}}\varphi\psi\mathrm{d}y_{ \mathrm{h}}\right|\lesssim|\varphi|_{1/2}|\psi|_{1/2}.\] (A.20)
_In particular,_
\[|\partial_{\mathrm{h}}\varphi|_{-1/2}\lesssim|\varphi|_{1/2}.\] (A.21)
Proof. Using Parseval's theorem for the Fourier transform, the equivalent definition of the norm \(|\cdot|_{1/2}\) via the Fourier transform in [1, Theorem 7.63] and Hölder's inequality, we immediately get (A.20), which in turn yields (A.21).
**Lemma A.9**.: _Let \(f\in H^{1}\), then there exists a function \(F\) such that_
\[F\in C^{0}(\mathbb{R}^{+},H^{1}(\mathbb{R}^{2}))\cap L^{2}(\mathbb{R}^{+},H^{ 3/2}),\ F_{t}\in L^{2}(\mathbb{R}^{+},H^{1/2})\ \text{and}\ F|_{t=0}=f.\]
_Moreover,_
\[\|F\|_{L^{\infty}(\mathbb{R}^{+},H^{1}(\mathbb{R}^{2}))}+\|F\|_{L^{2}( \mathbb{R}^{+},H^{3/2})}+\|F_{t}\|_{L^{2}(\mathbb{R}^{+},H^{1/2})}\lesssim|f |_{1}.\]
Proof. Let \(\varphi\in C^{\infty}_{0}(\mathbb{R})\) be such that \(\varphi(0)=1\). We then define \(\hat{F}(\xi,t)=\varphi(t\langle\xi\rangle)\hat{f}(\xi)\), where \(\hat{\cdot}\) denotes the Fourier transform defined on \(\mathbb{R}^{2}\) and \(\langle\xi\rangle:=\sqrt{1+|\xi|^{2}}\). By construction, \(F(\cdot,0)=f\) and \(\partial_{t}\hat{F}(\xi,t)=\varphi^{\prime}(t\langle\xi\rangle)\hat{f}(\xi) \langle\xi\rangle\).
We can estimate that
\[|F(\cdot,t)|_{1}^{2}=\int_{\mathbb{R}^{2}}\big{|}\varphi(t\langle\xi\rangle) \big{|}^{2}\big{|}\hat{f}(\xi)\big{|}^{2}\langle\xi\rangle^{2}\mathrm{d}\xi \leqslant\big{\|}\varphi\big{\|}_{L^{\infty}}^{2}\big{|}f\big{|}_{1}^{2}\] (A.22)
and
\[|\partial_{t}F(\cdot,t)|_{0}^{2}=\int_{\mathbb{R}^{2}}\big{|}\varphi^{\prime}( t\langle\xi\rangle)\big{|}^{2}\big{|}\hat{f}(\xi)\big{|}^{2}\langle\xi\rangle^{2} \mathrm{d}\xi\leqslant\big{\|}\varphi^{\prime}\big{\|}_{L^{\infty}}^{2}\big{|} f\big{|}_{1}^{2}.\]
We easily further observe from the above two estimates that
\[F\in C^{0}(\mathbb{R}^{+}_{0},H^{1})\ \text{and}\ F_{t}\in C^{0}(\mathbb{R}^{+}_{0},L ^{2})\] (A.23)
by using Lebesgue's dominated convergence theorem. Similarly, for \(i=0\) and \(1\),
\[\int_{0}^{\infty}|\partial_{t}^{i}F|_{3/2-i}^{2}\mathrm{d}t =\int_{0}^{\infty}\int_{\mathbb{R}^{2}}\langle\xi\rangle^{2i}\big{|}\varphi^{(i)}(t\langle\xi\rangle)\big{|}^{2}\big{|}\hat{f}(\xi)\big{|}^{2}\langle\xi\rangle^{3-2i}\mathrm{d}\xi\mathrm{d}t\] \[=\int_{0}^{\infty}\int_{\mathbb{R}^{2}}\big{|}\varphi^{(i)}(t\langle\xi\rangle)\big{|}^{2}\big{|}\hat{f}(\xi)\big{|}^{2}\langle\xi\rangle^{3}\mathrm{d}\xi\mathrm{d}t\] \[=\int_{\mathbb{R}^{2}}\big{|}\hat{f}(\xi)\big{|}^{2}\langle\xi\rangle^{3}\bigg{(}\frac{1}{\langle\xi\rangle}\int_{0}^{\infty}\big{|}\varphi^{(i)}(r)\big{|}^{2}\mathrm{d}r\bigg{)}\mathrm{d}\xi\] \[=\big{\|}\varphi^{(i)}\big{\|}_{L^{2}(\mathbb{R}^{+})}^{2}\int_{\mathbb{R}^{2}}\Big{|}\hat{f}(\xi)\Big{|}^{2}\langle\xi\rangle^{2}\mathrm{d}\xi=\big{\|}\varphi^{(i)}\big{\|}_{L^{2}(\mathbb{R}^{+})}^{2}\,|f|_{1}^{2}\,,\] (A.24)
where \(\varphi^{(0)}=\varphi\) and \(\varphi^{(1)}=\varphi^{\prime}\). Consequently, we immediately see from (A.22)-(A.24) that \(F\) satisfies the desired conclusion stated in Lemma A.9 (referring to [32, Lemma A.10]).
**Lemma A.10**.: _Homeomorphism theorem: Let \(k\geqslant 3\). There is a constant \(\iota\in(0,1]\) depending on \(\Omega\), such that for any \(\eta\in H^{1}_{0}\cap H^{k}\) satisfying \(\|\eta\|_{3}\leqslant\iota\), we have_
\[\det\nabla\zeta(y),\ \det\nabla_{\mathrm{h}}\zeta_{\mathrm{h}}(y_{ \mathrm{h}},0),\ H^{\mathrm{d}}\geqslant 1/2\text{ for any }y_{\mathrm{h}}\in\mathbb{R}^{2},\] (A.25) \[\zeta_{\mathrm{h}}(y_{\mathrm{h}},0):\mathbb{R}^{2}\to\mathbb{R}^ {2}\text{ is a }C^{k-2}\text{-diffeomorphic mapping},\] (A.26) \[\zeta:\overline{\Omega}\to\overline{\Omega}\text{ is a homeomorphism mapping},\] (A.27) \[\zeta_{\pm}:\Omega_{\pm}\to\zeta_{\pm}(\Omega_{\pm})\text{ are }C^{k-2}\text{-diffeomorphic mappings},\] (A.28)
_where \(\zeta:=\eta+y\), \(\zeta_{\mathrm{h}}\) represents the first two components of \(\zeta\), \((\zeta_{\mathrm{h}})^{-1}\) denotes the inverse mapping of \(\zeta_{\mathrm{h}}(y_{\mathrm{h}},0)\), and \(H^{\mathrm{d}}=|\partial_{1}\zeta|^{2}|\partial_{2}\zeta|^{2}-|\partial_{1}\zeta\cdot\partial_{2}\zeta|^{2}\)._
Proof.: Please refer to [31, Proposition 5.2].
**Lemma A.11**.: _Existence theory of the stratified Lame problem with Dirichlet boundary conditions: let \(i\geqslant 0\), \(\mathbf{F}^{1}\in H^{i}\) and \(\mathbf{F}^{2}\in H^{i+3/2}\), then there exists a unique solution \(u\in H^{i+2}\) of the following Lame problem:_
\[\begin{cases}\mu\Delta u+(\varsigma+\mu/3)\nabla\mathrm{div}u=\mathbf{F}^{1}& \text{in }\Omega,\\ u_{+}=u_{-}=\mathbf{F}^{2}&\text{on }\Sigma,\\ u=0&\text{on }\partial\Omega;\end{cases}\] (A.29)
_moreover,_
\[\|u\|_{i+2}\lesssim\|\mathbf{F}^{1}\|_{i}+|\mathbf{F}^{2}|_{i+3/2}.\] (A.30)
Proof.: Both the results of existence and regularity for unique solutions of the following horizontally periodic stratified Lame problem
\[\begin{cases}\mu\Delta u+(\varsigma+\mu/3)\nabla\mathrm{div}u=\mathbf{F}^{1}& \text{in }\Omega_{L_{1},L_{2}},\\ u_{+}=u_{-}=\mathbf{F}^{2}&\text{on }\Sigma_{L_{1},L_{2}},\\ u=0&\text{on }\partial\Omega\end{cases}\] (A.31)
had been proved by Jang-Tice-Wang in [20, Proposition E.4], see (1.51) for the definitions of \(\Omega_{L_{1},L_{2}}\) and \(\Sigma_{L_{1},L_{2}}\). Following Jang-Tice-Wang's argument, we easily extend the results of the periodic case in [20, Proposition E.4] to our non-periodic case stated in Lemma A.11.
**Lemma A.12**.: _Existence theory of the stratified Lame problem with jump conditions: let \(i\geqslant 0\), \(\mathbf{F}^{1}\in H^{i}\) and \(\mathbf{F}^{2}\in H^{i+1/2}\), then there exists a unique solution \(u\in H^{i+2}\) of the following stratified Lame problem:_
\[\begin{cases}\mu\Delta u+(\varsigma+\mu/3)\nabla\mathrm{div}u=\mathbf{F}^{1}& \text{in }\Omega,\\ \llbracket u\rrbracket=0,\ \llbracket\mathbb{S}(u)\mathbf{e}^{3}\rrbracket= \mathbf{F}^{2}&\text{on }\Sigma,\\ u=0&\text{on }\partial\Omega\;.\end{cases}\]
_Moreover,_
\[\|u\|_{i+2}\lesssim\|\mathbf{F}^{1}\|_{i}+|\mathbf{F}^{2}|_{i+1/2}.\] (A.32)
Proof. Both the results of existence and regularity for unique solutions of the following horizontally periodic stratified Lame problem
\[\begin{cases}\mu\Delta u+(\varsigma+\mu/3)\nabla\text{div}u=\mathbf{F}^{1}&\text{in }\Omega_{L_{1},L_{2}},\\ \llbracket u\rrbracket=0,\ \llbracket\mathbb{S}(u)\mathbf{e}^{3}\rrbracket=\mathbf{F}^{2}&\text{on }\Sigma_{L_{1},L_{2}},\\ \mathbb{S}(u)\mathbf{e}^{3}=\mathbf{F}^{3}&\text{on }\Sigma_{+}^{L_{1},L_{2}}:=2\pi L_{1}\mathbb{T}\times 2\pi L_{2}\mathbb{T}\times\{h_{+}\},\\ u=0&\text{on }\partial\Omega\end{cases}\] (A.33)
had been provided by Jang-Tice-Wang in [22, Lemma A.10]. Following Jang-Tice-Wang's argument, we easily extend the results of the periodic case in [22, Lemma A.10] to our non-periodic case stated in Lemma A.12.
## Appendix B Regularity for the solutions in Eulerian coordinates
This section is devoted to the derivation of regularity for solutions, which are obtained from Theorem 1.1 by the inverse transform of Lagrangian coordinates, in Eulerian coordinates. In particular, we provide the derivation of (3.125) and (3.126) for \(\vartheta>0\) in Section 3.5. In what follows, we always assume that the solution \((\eta,u)\in C^{0}([0,T],H^{3,1/2}_{0,*}\times H^{2})\) is provided by Theorem 1.1 for given \(\delta\). In addition, for the sake of simplicity, we define \(\Omega_{\pm}^{T}:=\Omega_{\pm}\times(0,T)\), \(\Omega^{T}:=\Omega\times(0,T)\) and \(\mathbb{R}_{T}^{2}:=\mathbb{R}^{2}\times(0,T)\), and denote the closure of a set \(S\) by \(\overline{S}\).
### Homeomorphism
Let \(\zeta=\eta+y\), \(\tilde{y}:=(y,t)\), \(\tilde{y}_{\text{h}}:=(y_{\text{h}},t)\), \(\tilde{x}=(x,t)\) and \(\tilde{x}_{\text{h}}=(x_{\text{h}},t)\). Since \(\eta(t)\in H^{3,1/2}_{0,*}\) and \(\eta\in C([0,T],H^{3})\), we have
\[\nabla_{x}\zeta^{-1}=(\nabla_{y}\zeta)^{-1}|_{y=\zeta^{-1}}=\mathcal{A}^{\text{T}}|_{y=\zeta^{-1}}\text{ in }\Omega,\] (B.1) \[\tilde{\zeta}:\overline{\Omega^{T}}\to\overline{\Omega^{T}}\text{ is a bijective mapping,}\] (B.2) \[\tilde{\zeta}_{\pm}:X\mapsto\tilde{\zeta}_{\pm}(X)\text{ are bijective mappings,}\] (B.3) \[\det\nabla_{\tilde{y}}\tilde{\zeta}(\tilde{y})\geqslant 1/2\text{ in }\overline{\Omega_{\pm}^{T}},\] (B.4) \[\bar{\zeta}:Y\to Y\text{ is a bijective mapping,}\] (B.5) \[\det\nabla_{\tilde{y}_{\text{h}}}\bar{\zeta}(\tilde{y}_{\text{h}})=\det\nabla_{y_{\text{h}}}\zeta_{\text{h}}(y_{\text{h}},0,t)\geqslant 1/2,\] (B.6)
where \(\tilde{\zeta}(\tilde{y}):=(\zeta(y,t),t)\), \(\bar{\zeta}(\tilde{y}_{\text{h}}):=(\zeta_{\text{h}}(y_{\text{h}},0,t),t)\), \(X=\overline{\Omega_{\pm}^{T}}\) or \(\Omega_{\pm}^{T}\), and \(Y=\overline{\mathbb{R}_{T}^{2}}\) or \(\mathbb{R}_{T}^{2}\).
We denote the inverse functions of \(\tilde{\zeta}_{\pm}(\tilde{y})\) resp. \(\bar{\zeta}(\tilde{y}_{\text{h}})\) by \(\tilde{\zeta}_{\pm}^{-1}(\tilde{x})\) resp. \(\bar{\zeta}^{-1}(\tilde{x}_{\text{h}})\). Recalling the regularity
\[\zeta-y\in C^{0}([0,T],H^{3})\text{ and }\zeta_{t}=u\in C^{0}([0,T],H^{2}),\] (B.7)
we use the embedding theorem \(H^{k+2}(\Omega_{\pm})\hookrightarrow C^{k}(\overline{\Omega}_{\pm})\) for \(k\geqslant 0\) to get
\[\tilde{\zeta}_{\pm}\in C^{1}(\overline{\Omega_{\pm}^{T}}),\] (B.8) \[\bar{\zeta}\in C^{1}(\overline{\mathbb{R}^{2}_{T}}).\] (B.9)
By (1.31)\({}_{4}\), we get
\[\tilde{\zeta},\ u,\nabla_{y_{\text{h}}}\zeta\in C^{0}(\overline{\Omega^{T}}).\] (B.10)
We further derive from (B.2) and the continuity of \(\tilde{\zeta}\) in (B.10) that (referring to (5.57) in [31])
\[\tilde{\zeta}(\tilde{y}):\overline{\Omega^{T}}\to\overline{\Omega^{T}}\text{ is a homeomorphism mapping.}\] (B.11)
Similarly to (A.28) and (B.11), we have by (B.3), (B.4) and (B.8) that
\[\tilde{\zeta}_{\pm}(\tilde{y}):\overline{\Omega_{\pm}^{T}}\to\tilde{\zeta}_{ \pm}(\overline{\Omega_{\pm}^{T}})\text{ are $C^{1}$-diffeomorphic mappings.}\] (B.12)
Moreover, \(\nabla_{\tilde{x}}\tilde{\zeta}_{\pm}^{-1}=(\nabla_{\tilde{y}}\tilde{\zeta}_{\pm})^{-1}|_{\tilde{y}=\tilde{\zeta}_{\pm}^{-1}}\). In particular,
\[\partial_{t}\zeta_{\pm}^{-1}=-((\nabla_{y}\zeta_{\pm})^{-1}u_{\pm})|_{y=\zeta_ {\pm}^{-1}}=-(\mathcal{A}_{\pm}^{\top}u_{\pm})|_{y=\zeta_{\pm}^{-1}}.\] (B.13)
Similarly to (B.1) and (B.12), we deduce from (B.5)-(B.6) and (B.9) that
\[\bar{\zeta}(y_{\text{h}},t):\overline{\mathbb{R}_{T}^{2}}\to\overline{\mathbb{R}_{T}^{2}}\text{ is a $C^{1}$-diffeomorphic mapping,}\] (B.14) \[\nabla_{x_{\text{h}}}(\zeta_{\text{h}})^{-1}(x_{\text{h}},t)=\left.\left(\frac{1}{\det\nabla_{y_{\text{h}}}\zeta_{\text{h}}(y_{\text{h}},0,t)}\left(\begin{array}{cc}\partial_{2}\zeta_{2}(y_{\text{h}},t)&-\partial_{2}\zeta_{1}(y_{\text{h}},t)\\ -\partial_{1}\zeta_{2}(y_{\text{h}},t)&\partial_{1}\zeta_{1}(y_{\text{h}},t)\end{array}\right)\right)\right|_{y_{\text{h}}=(\zeta_{\text{h}})^{-1}(x_{\text{h}},t)}.\] (B.15)
### Regularity of solutions in Eulerian coordinates
Let \(a:=\|\eta\|_{C^{0}([0,T],H^{3})}\), where \(a\in(0,\iota]\). In what follows, the notation
\[A\lesssim_{a}B\text{ means }A\leqslant c(a)B,\]
where \(c(a)\) denotes a generic positive constant, which may vary from line to line, depends on \(a\) and \(\Omega\), and increases with respect to \(a\).
Thanks to the homeomorphism properties of \(\zeta\) and \(\zeta_{\text{h}}\), the following definitions make sense.
\[\rho:=(\bar{\rho}J^{-1})|_{y=\zeta^{-1}(x,t)},\ v:=u(\zeta^{-1}(x, t),t),\ \tilde{\rho}:=\bar{\rho}|_{y_{3}=\zeta_{3}^{-1}(x,t))},\] \[d:=\zeta_{3}((\zeta_{\text{h}})^{-1}(x_{\text{h}},t),0,t)\in(h_{ -},h_{+}),\ \nu:=(-\partial_{x_{1}}d,-\partial_{x_{2}}d,1)^{\top}/\sqrt{1+|\nabla_{x_{ \text{h}}}d|^{2}},\] (B.16) \[\Sigma(t):=\{(x_{\text{h}},x_{3})\ |\ x_{\text{h}}\in\mathbb{R}^{2}, \ x_{3}:=d(x_{\text{h}},t)\},\] \[\Omega_{+}(t):=\{(x_{\text{h}},x_{3})\ |\ x_{\text{h}}\in\mathbb{R}^{2}, \ d(x_{\text{h}},t)<x_{3}<h_{+}\},\] \[\Omega_{-}(t):=\{(x_{\text{h}},x_{3})\ |\ x_{\text{h}}\in\mathbb{R}^{2}, \ h_{-}<x_{3}<d(x_{\text{h}},t)\}.\]
#### b.2.1 Motion domains
By (A.26) and the fact
\[\zeta_{3}((\zeta_{\text{h}})^{-1}(x_{\text{h}},t),0,t)=\eta_{3}((\zeta_{\text{ h}})^{-1}(x_{\text{h}},t),0,t),\]
for any given \(x_{\text{h}}\), there is \(y_{\text{h}}\), such that
\[x_{\text{h}}:=\zeta_{\text{h}}(y_{\text{h}},0,t)\text{ and }d(x_{\text{h}},t)=\zeta_{3}(y_{\text{h}},0,t)=\eta_{3}(y_{\text{h}},0,t).\] (B.17)
This means \(\Sigma(t)\subset\zeta(\Sigma,t)\). Similarly, we also have \(\zeta(\Sigma,t)\subset\Sigma(t)\). Consequently, we conclude that, for any given \(t\geqslant 0\),
\[\zeta(\cdot,t):\{y_{3}=0\}\to\{x_{3}=d(x_{\text{h}},t)\}\text{ is a bijective mapping.}\] (B.18)
Moreover, we further deduce that
\[\Omega_{\pm}(t)=\zeta_{\pm}(\Omega_{\pm},t),\ \Omega=\Omega(t)\cup\Sigma(t),\ \ \Omega_{+}(t)\cap\Omega_{-}(t)=\emptyset\text{ and }\Omega_{\pm}(t)\cap\Sigma(t)=\emptyset,\quad t\geqslant 0,\]
where \(\Omega(t):=\Omega_{+}(t)\cup\Omega_{-}(t)\). In what follows, if \(f\) is defined in \(\Omega(t)\), resp. \(\Omega\), then \(f_{\pm}:=f|_{\Omega_{\pm}(t)}\), resp. \(f|_{\Omega_{\pm}}\).
#### b.2.2 Regularity and motion equations of \((\rho,v)\)
Thanks to the continuity of \((\nabla_{y}\zeta_{\pm},u)\) in (B.8) and (B.10)-(B.12), we have \(v\in C^{0}(\overline{\Omega^{T}})\) and \((\rho_{\pm},P_{\pm}(\rho_{\pm}))\in(C^{0}(\overline{\Omega_{\pm}^{T}}))^{2}\). Obviously, \(v=0\) on \(\partial\Omega\) due to \(u=\eta=0\) on \(\partial\Omega\).
By transform of Lagrangian coordinates (i.e., \(x=\zeta(y)\)) and the regularity of \((\zeta,u)\), we can bound that for any given \(t\geqslant 0\),
\[\|v(t)\|_{L^{2}(\Omega(t))}^{2}=\int|u|^{2}J\mathrm{d}y\lesssim_{a}\|u(t)\|_{0 }^{2}.\] (B.19)
Noting that \(\partial_{x_{i}}v=(\mathcal{A}_{il}\partial_{y_{l}}u)|_{y=\zeta^{-1}(x)}\) for \(1\leqslant i\leqslant 3\), we have
\[\|\partial_{x_{i}}v(t)\|_{L^{2}(\Omega(t))}^{2}=\int|\mathcal{A}_{il}\partial _{y_{l}}u|^{2}J\mathrm{d}y\lesssim_{a}\|u(t)\|_{1}^{2}.\] (B.20)
Similarly, we can further derive that for any \(1\leqslant i,j,k\leqslant 3\),
\[\|\partial_{x_{j}}\partial_{x_{i}}v(t)\|_{L^{2}(\Omega(t))}\lesssim_{a}\|u(t )\|_{2},\quad\|\partial_{x_{k}}\partial_{x_{j}}\partial_{x_{i}}v(t)\|_{L^{2}( \Omega(t))}\lesssim_{a}\|u(t)\|_{3}\]
by virtue of the relations
\[\partial_{x_{j}}\partial_{x_{i}}v=(\mathcal{A}_{jm}\partial_{y_{m}}(\mathcal{ A}_{il}\partial_{y_{l}}u))|_{y=\zeta^{-1}}\text{ and }\partial_{x_{k}}\partial_{x_{j}}\partial_{x_{i}}v=(\mathcal{A}_{kn} \partial_{n}(\mathcal{A}_{jm}\partial_{y_{m}}(\mathcal{A}_{il}\partial_{y_{l }}u)))|_{y=\zeta^{-1}}.\]
Therefore,
\[\|v(t)\|_{H^{2}(\Omega(t))}\lesssim_{a}\|u\|_{2}\text{ for any }t \in[0,T],\] (B.21) \[\|v(t)\|_{H^{3}(\Omega(t))}\lesssim_{a}\|u\|_{3}\text{ for a.e. }t \in(0,T).\] (B.22)
Recalling (1.33), we have
\[P_{\pm}(\rho_{\pm}(t))-P_{\pm}(\tilde{\rho}_{\pm}(t))=(R_{P}-P^{\prime}(\bar{ \rho})\bar{\rho}\mathrm{div}\eta)_{\pm}|_{y=\zeta_{\pm}^{-1}(x,t)}.\] (B.23)
Thus, similarly to (3.34) and (B.21), we can also get
\[\|P_{+}(\rho_{+}(t))-P_{+}(\tilde{\rho}_{+}(t))\|_{H^{2}(\Omega_ {+}(t))}+\|P_{-}(\rho_{-}(t))-P_{-}(\tilde{\rho}_{-}(t))\|_{H^{2}(\Omega_{-}( t))}\] \[\lesssim_{a}\|R_{P}-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div} \eta\|_{2}\lesssim_{a}\|\eta(t)\|_{3}.\]
Similarly, we can also verify that
\[\|\rho(t)-\tilde{\rho}(t)\|_{H^{2}(\Omega_{\pm}(t))}\lesssim_{a}\|\eta(t)\|_ {3}\]
and \(\rho_{\pm}(t)\), \(P_{\pm}(\rho_{\pm}(t))\in H^{2}(\Omega_{\pm}(t)\cap B_{r})\) for any given \(r>0\), where \(B_{r}=\{x\in\mathbb{R}^{3}\ |\ |x|\leqslant r\}\).
By the definitions of \(\rho\) and \(v\), we can employ (B.1), (B.13), the relation \(J_{t}=J\mathrm{div}_{\mathcal{A}}u\) and the chain rule of differentiation to derive that
\[\rho_{t}+\mathrm{div}(\rho v)=((\bar{\rho}J^{-1})_{t}+\bar{\rho}J^{-1} \mathrm{div}_{\mathcal{A}}u)|_{y=\zeta^{-1}(x,t))}=0.\] (B.24)
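Here the key point is the elementary computation (using the cited relation \(J_{t}=J\mathrm{div}_{\mathcal{A}}u\))

\[\partial_{t}(\bar{\rho}J^{-1})=-\bar{\rho}J^{-2}J_{t}=-\bar{\rho}J^{-1}\mathrm{div}_{\mathcal{A}}u,\]

which exactly cancels the term \(\bar{\rho}J^{-1}\mathrm{div}_{\mathcal{A}}u\) produced by \(\mathrm{div}(\rho v)\) under the change of variables \(x=\zeta(y,t)\).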
Similarly, we can easily get from (1.31)\({}_{2}\) that \((\rho,v)\) satisfies
\[\rho v_{t}+\rho v\cdot\nabla v+\mathrm{div}\mathcal{S}=-\rho g\mathbf{e}^{3}.\]
#### b.2.3 Regularity and motion equations at interface
For any given \(t\geqslant 0\), \(\Xi(x,t):=((\zeta_{\rm h})^{-1}(x_{\rm h},t),x_{3}):\Omega_{+}\to\Omega_{+}\) is a bijective mapping. We denote the inverse function of \(\Xi(x,t)\) by \(\Xi^{-1}(y,t)\). Then \(\Xi\) and \(\Xi^{-1}\) enjoy the following properties:
\[\Xi^{-1}(y,t)=(\zeta_{\rm h}(y_{\rm h},0,t),y_{3}),\] \[\nabla_{x}\Xi(x,t)\] \[=\left(\frac{1}{\det\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h},0,t )}\left.\left(\begin{array}{ccc}\partial_{2}\zeta_{2}(y_{\rm h},t)&-\partial _{2}\zeta_{1}(y_{\rm h},t)&0\\ -\partial_{1}\zeta_{2}(y_{\rm h},t)&\partial_{1}\zeta_{1}(y_{\rm h},t)&0\\ 0&0&\Theta(y_{\rm h},t)\end{array}\right)\right)\right|_{y_{\rm h}=(\zeta_{ \rm h})^{-1}(x_{\rm h},t)},\] \[1/2\leqslant\det\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h},0,t)= \det\nabla_{y}\Xi^{-1}(y,t)\lesssim_{a}1,\] (B.25)
Similarly to (B.22), we easily get that
\[\|\eta_{3}^{+}(y,t)|_{y=\Xi(x,t)}\|_{H^{3}(\Omega_{+})}\lesssim_{a}\|\eta(t) \|_{3},\] (B.26)
where \(\eta_{3}^{+}\) denote the third component of \(\eta_{+}\). Exploiting the trace estimate, we can deduce from (B.26) that
\[|d(x_{\rm h},t)|_{5/2}=|(\eta_{3}^{+}(y,t)|_{y=\Xi(x,t)})|_{x_{3}=0}|_{5/2}\lesssim_{a}\|\eta(t)\|_{3}.\] (B.27)
Moreover, from (B.15) we can obtain
\[\nabla_{x_{\rm h}}d= (\nabla_{x_{\rm h}}(\zeta_{\rm h})^{-1}(x_{\rm h},t))^{\top}\nabla _{y_{\rm h}}\zeta_{3}(y_{\rm h},0,t)|_{y_{\rm h}=(\zeta_{\rm h})^{-1}(x_{\rm h },t)}\] \[= ((\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h},0,t))^{-\top}\nabla_{ y_{\rm h}}\zeta_{3}(y_{\rm h},0,t))|_{y_{\rm h}=(\zeta_{\rm h})^{-1}(x_{\rm h },t)}\] (B.28)
and
\[\nabla_{x_{\rm h}}\partial_{x_{i}}d=((\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h },0,t))^{-\top}\nabla_{y_{\rm h}}((\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h},0, t))^{-\top}\nabla_{y_{\rm h}}\zeta_{3}(y_{\rm h},0,t))_{i})|_{y_{\rm h}=(\zeta_{ \rm h})^{-1}(x_{\rm h},t)},\] (B.29)
where \((f)_{i}\) denotes the \(i\)-th component of \(f\) for \(i=1\) and \(2\). In addition, making use of the continuity of \((\zeta,\nabla_{y_{\rm h}}\zeta)\) in (B.14), (B.10) and (B.28), we see that \(\nabla_{\rm h}d\) is continuous on \(\overline{\mathbb{R}_{T}^{2}}\).
Analogously to (B.27), we obtain \(|1-(1+|\nabla_{x_{\rm h}}d(x_{\rm h},t)|^{2})^{-1/2}|_{3/2}\lesssim_{a}1\), which, together with (A.8) with \(j=1\) and (B.27), yields
\[\nu(x_{\rm h},t)-{\bf e}^{3}\in H^{3/2}\quad\text{for any $t\geqslant 0$}.\] (B.30)
Obviously, one has
\[u_{3}|_{y_{3}=0} =\partial_{t}\eta_{3}(y_{\rm h},0,t)=\partial_{t}d(\zeta_{\rm h}(y_{\rm h},0,t),t)\] \[=(u_{1}|_{y_{3}=0}\partial_{1}d+u_{2}|_{y_{3}=0}\partial_{2}d+d_{t})|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)},\] (B.31) \[\text{and }v(x_{\rm h},d(x_{\rm h},t),t)|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)}=v(\zeta_{\rm h}(y_{\rm h},0,t),\zeta_{3}(y_{\rm h},0,t),t)=u(y_{\rm h},0,t).\] (B.32)
Plugging \(y_{\rm h}=(\zeta_{\rm h})^{-1}(x_{\rm h},t)\) into (B.31) and then using (B.32), we get, for any given \(t\geqslant 0\),
\[d_{t}+v_{1}\partial_{1}d+v_{2}\partial_{2}d=v_{3}\text{ on $\Sigma(t)$}.\] (B.33)
In terms of (B.32) and the definition of \(\Xi(x,t)\),
\[v_{+}(x_{\rm h},d(x_{\rm h},t),t)=u_{+}((\zeta_{\rm h})^{-1}(x_{\rm h},t),0,t)= (u_{+}(y,t)|_{y=\Xi(x,t)})|_{x_{3}=0}.\]
Thus similarly to (B.27), one gets
\[|v_{+}(x_{\rm h},d(x_{\rm h},t),t)|_{3/2}=|(u_{+}(y,t)|_{y=\Xi(x,t)})|_{x_{3}=0}|_ {3/2}\lesssim\|u_{+}(y,t)|_{y=\Xi(x,t)}\|_{2}\lesssim_{a}\|u\|_{2}.\] (B.34)
Thus from (A.8) with \(j=1\), (B.27), (B.33) and (B.34) it follows that \(d_{t}\in H^{3/2}\).
Using relations (1.19) and (B.28), we obtain after a straightforward calculation that
\[J\mathcal{A}\mathrm{e}^{3}|_{y_{3}=0}= (-\partial_{1}\eta_{3}+\partial_{1}\eta_{2}\partial_{2}\eta_{3}- \partial_{1}\eta_{3}\partial_{2}\eta_{2},\] \[-\partial_{2}\eta_{3}+\partial_{1}\eta_{3}\partial_{2}\eta_{1}- \partial_{1}\eta_{1}\partial_{2}\eta_{3},\det\nabla_{y_{\rm h}}\zeta_{\rm h} )^{\rm T}|_{y_{3}=0}\] \[= \det\nabla_{y_{\rm h}}\zeta_{\rm h}(y_{\rm h},0,t)(-\partial_{x _{1}}d,-\partial_{x_{2}}d,1)^{\rm T}|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)},\]
which yields
\[\nu|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)}=\left.\frac{(- \partial_{x_{1}}d,-\partial_{x_{2}}d,1)^{\rm T}}{\sqrt{1+|\nabla_{x_{\rm h}}d |^{2}}}\right|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)}=\left.\frac{J\mathcal{ A}\mathrm{e}^{3}}{|J\mathcal{A}\mathrm{e}^{3}|}\right|_{y_{3}=0}.\] (B.35)
With the help of (B.28) and (B.29), we further obtain that
\[\mathcal{C}|_{x_{\rm h}=\zeta_{\rm h}(y_{\rm h},0,t)}=\mathcal{H}|_{y_{3}=0}.\] (B.36)
In addition, we have that for a.e. \(t\in(0,T)\),
\[\left(\mathcal{S}_{\pm}|_{\{x_{3}=d(x_{\rm h},t)\}}\nu\right)|_{x_{\rm h}= \zeta_{\rm h}(y_{\rm h},0,t)}=((P(\bar{\rho}J^{-1})\mathbb{I}-\mathbb{S}_{ \mathcal{A}}(u))|_{\Omega_{\pm}}J\mathcal{A}\mathrm{e}^{3}/|J\mathcal{A} \mathrm{e}^{3}|)|_{y_{3}=0}.\] (B.37)
Consequently, making use of (1.31)\({}_{3}\), (B.17), (B.18), (B.35)-(B.37) and the fact \(J\geqslant 1/2\), we get
\[(\mathcal{S}_{+}|_{\Sigma(t)}-\mathcal{S}_{-}|_{\Sigma(t)})\nu=\vartheta \mathcal{C}\nu\text{ on }\Sigma(t).\] (B.38)
### Higher regularity of \(d\) for \(\vartheta>0\)
Similarly to (B.32),
\[((P_{\pm}(\tilde{\rho}_{+}(t)))|_{\{x_{3}=d(x_{\rm h},t)\}})|_{x_{\rm h}= \zeta_{\rm h}(y_{\rm h},0,t)}=(P_{\pm}(\tilde{\rho}_{\pm}))|_{y_{3}=0},\] (B.39)
which, together with (1.9)\({}_{2}\), yields
\[P_{+}(\tilde{\rho}_{+})|_{\Sigma(t)}-P_{-}(\tilde{\rho}_{-})|_{\Sigma(t)}=0.\] (B.40)
Thanks to (B.38) and the above identity, we have
\[((\mathcal{S}_{+}-P_{+}(\tilde{\rho}_{+}))|_{\Sigma(t)}-(\mathcal{S}_{-}-P_{-} (\tilde{\rho}_{-})))|_{\Sigma(t)})\nu=\vartheta\mathcal{C}\nu\text{ on }\Sigma(t).\] (B.41)
Multiplying (B.41) by \(\nu\), we find that
\[((\mathcal{S}_{+}-P_{+}(\tilde{\rho}_{+}))|_{\Sigma(t)}-(\mathcal{S}_{-}-P_{-} (\tilde{\rho}_{-})))|_{\Sigma(t)})\nu\cdot\nu=\vartheta\mathcal{C}\text{ on }\mathbb{R}^{2},\] (B.42)
where \(\mathcal{C}\) can be rewritten as follows [22]
\[\mathcal{C}=\operatorname{div}_{\rm h}\left(\frac{\nabla_{\rm h}d}{\sqrt{1+| \nabla_{\rm h}d|^{2}}}\right).\]
Since \(|d|_{5/2}\lesssim\|\eta\|_{3}\), thanks to (B.23), (B.35), (B.37), (B.39) and the regularity theory of elliptic equations in [38, Lemma 3.2], we obtain from (B.42) that for sufficiently small \(\delta\),
\[|d|_{7/2}\lesssim |((\mathcal{S}_{+}-P_{+}(\tilde{\rho}_{+}))|_{\Sigma(t)}-(\mathcal{S}_{-}-P_{-}(\tilde{\rho}_{-}))|_{\Sigma(t)})\nu\cdot\nu|_{3/2}\] \[= \big{|}(\llbracket(R_{P}-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{div}\eta)\mathbb{I}-\mathbb{S}_{\mathcal{A}}(u)\rrbracket J^{2}\mathcal{A}\mathrm{e}^{3}\cdot\mathcal{A}\mathrm{e}^{3}/|J\mathcal{A}\mathrm{e}^{3}|^{2})|_{y=((\zeta_{\rm h})^{-1}(x_{\rm h},t),0)}\big{|}_{3/2}\,.\] (B.43)
Employing the same arguments as for (B.34), we obtain
\[\big{|}\llbracket\mathbb{S}_{\mathcal{A}}(u)J^{2}\mathcal{A}\mathrm{e}^{3}\cdot\mathcal{A}\mathrm{e}^{3}/|J\mathcal{A}\mathrm{e}^{3}|^{2}\rrbracket\big{|}_{y_{\rm h}=(\zeta_{\rm h})^{-1}(x_{\rm h},t)}\big{|}_{3/2}\] \[\lesssim_{a}\|\mathbb{S}_{\mathcal{A}}(u)J^{2}\mathcal{A}\mathrm{e}^{3}\cdot\mathcal{A}\mathrm{e}^{3}/|J\mathcal{A}\mathrm{e}^{3}|^{2}\|_{2}\lesssim_{a}\|\nabla u\|_{2}.\]
Similarly,
\[\big{|}\llbracket R_{P}-P^{\prime}(\bar{\rho})\bar{\rho}\mathrm{ div}\eta\rrbracket\big{|}_{y_{\rm h}=(\zeta_{\rm h})^{-1}(x_{\rm h},t)}\big{|}_{3/2} \lesssim_{a}\|\eta\|_{3}.\]
Thus, inserting the above two estimates into (B.43), one gets
\[|d|_{7/2}\lesssim_{a}\|(\eta,u)\|_{3}.\] (B.44)
Analogously to (B.34), we can obtain
\[|v_{\pm}(x_{\rm h},d(x_{\rm h},t))|_{5/2}\lesssim_{a}\|u\|_{3}\quad\text{for a.e. }t>0.\]
Utilizing (A.8) with \(j=2\), (B.34) and the above inequality, one can derive from (B.33) that
\[|d_{t}|_{5/2} \lesssim_{a}|v_{+}^{3}|_{5/2}+|\partial_{x_{1}}dv_{+}^{1}+ \partial_{x_{2}}dv_{+}^{2}|_{5/2}\] \[\lesssim_{a}|v_{+}^{3}|_{5/2}+|d|_{7/2}|v_{+}|_{3/2}+|d|_{5/2}|v_ {+}|_{5/2}\] \[\lesssim_{a}\|u\|_{3}+|d|_{7/2}\|u\|_{2}.\] (B.45)
Moreover, from (B.44) and (B.45) one gets
\[|d|_{3}^{2}\lesssim_{a} |d^{0}|_{3}^{2}+\int_{0}^{t}|d|_{7/2}|d_{\tau}|_{5/2}\mathrm{d}\tau\] \[\lesssim_{a} |d^{0}|_{3}^{2}+\int_{0}^{t}\left(1+\|u\|_{2}\right)\|(\eta,u) \|_{3}^{2}\mathrm{d}\tau.\] (B.46)
By the regularity of \((\eta,u)\) and (B.44)-(B.46), we have \(d\in C([0,T],H^{3}(\mathbb{R}^{2}))\cap L^{2}((0,T),H^{7/2})\) and \(d_{t}\in L^{2}((0,T),H^{5/2})\) for \(\vartheta>0\).
Finally,
\[|\mathcal{H}|_{1}\lesssim_{a}|\mathcal{C}|_{1}\lesssim_{a}|d|_{3},\] (B.47)
where we have used (A.2), (A.26), (B.15), (B.25) and (B.36) to derive the first inequality by following the argument of (B.19) and (B.20), and we employ (A.3) and (A.4) to infer the second inequality.
**Acknowledgements.** The research of Fei Jiang was supported by NSFC (Grant Nos. 12022102 and 12231016) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2020J02013 and 2022J01105), and the research of Song Jiang by the National Key R&D Program (2020YFA0712200), the National Key Project (GJXM92579), the Sino-German Science Center (Grant No. GZ 1465) and the ISF-NSFC joint research program (Grant No. 11761141008). | In 2011, Guo and Tice formally established that the Rayleigh--Taylor instability necessarily occurs in stratified compressible viscous fluids in the slab domain $\mathbb{R}^2\times (h_-,h_+)$. Even in the absence of interfacial surface tension, the corresponding instability solutions are non-periodic in the horizontal variables $x_1$ and $x_2$; they were obtained by applying so-called normal modes and a modified variational method to the linearized equations of motion. Whether Guo and Tice's conclusion can be rigorously verified at the level of the nonlinear equations has, however, remained a challenging problem, since the construction of non-periodic growing-mode solutions of the linearized equations in the slab domain does not carry over directly.
2309.10679 | Oracle Complexity Reduction for Model-free LQR: A Stochastic
Variance-Reduced Policy Gradient Approach | We investigate the problem of learning an $\epsilon$-approximate solution for
the discrete-time Linear Quadratic Regulator (LQR) problem via a Stochastic
Variance-Reduced Policy Gradient (SVRPG) approach. Whilst policy gradient
methods have proven to converge linearly to the optimal solution of the
model-free LQR problem, the substantial requirement for two-point cost queries
in gradient estimations may be intractable, particularly in applications where
obtaining cost function evaluations at two distinct control input
configurations is exceptionally costly. To this end, we propose an
oracle-efficient approach. Our method combines both one-point and two-point
estimations in a dual-loop variance-reduced algorithm. It achieves an
approximate optimal solution with only
$O\left(\log\left(1/\epsilon\right)^{\beta}\right)$ two-point cost information
for $\beta \in (0,1)$. | Leonardo F. Toso, Han Wang, James Anderson | 2023-09-19T15:03:18 | http://arxiv.org/abs/2309.10679v1 | # Oracle Complexity Reduction for Model-free LQR: A Stochastic Variance-Reduced Policy Gradient Approach
###### Abstract
We investigate the problem of learning an \(\epsilon\)-approximate solution for the discrete-time Linear Quadratic Regulator (LQR) problem via a Stochastic Variance-Reduced Policy Gradient (SVRPG) approach. Whilst policy gradient methods have proven to converge linearly to the optimal solution of the model-free LQR problem, the substantial requirement for two-point cost queries in gradient estimations may be intractable, particularly in applications where obtaining cost function evaluations at two distinct control input configurations is exceptionally costly. To this end, we propose an oracle-efficient approach. Our method combines both one-point and two-point estimations in a dual-loop variance-reduced algorithm. It achieves an approximate optimal solution with only \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{\beta}\right)\) two-point cost information for \(\beta\in(0,1)\).
## I Introduction
Policy gradient (PG) methods have attracted significant attention in model-free reinforcement learning (RL), in large part due to their simplicity of implementation. Within the context of control, and the LQR problem specifically (where analytic solutions are known), a lot of recent work has focused on connecting system theoretic properties such as controllability, with learning theoretic measures such as sample complexity [1]. As first shown in [2] and further analyzed in [3, 4, 5], PG methods converge to the global optimal solutions despite the lack of convexity in the LQR problem. This significant result, combined with the adaptability of PG in the model-free setting, has opened up a line of research that addresses classical control problems using PG-based approaches [6, 7].
In the model-free LQR setting, policy gradient descent relies on a finite-sample estimate of the true gradient, often acquired through derivative-free (otherwise known as zeroth-order) methods. We refer the reader to [4] for specific application of zeroth-order methods to LQR control and [8] for general background. Zeroth-order gradient estimation approaches are particularly valuable for applications where the computational resources needed for exact gradient evaluations may be impractical, or when cost-query information is _only_ accessible through a black-box procedure.
Despite providing flexibility by avoiding the explicit computation of gradients, zeroth-order gradient estimations with one-point (ZO1P) or two-point (ZO2P) queries frequently produce biased estimations accompanied by large variances [4]. In order to counteract this, large sample sizes are required to accurately estimate the gradients.
Whilst ZO2P provides a reduced variance relative to ZO1P, it necessitates querying the cost function at two distinct control input configurations, which can be prohibitively impractical for certain applications (e.g., robot path planning [9]). Addressing this limitation is crucial for developing efficient approaches applicable to real-world scenarios.
Motivated by these challenges, one line of work focuses on leveraging data from multiple similar systems to mitigate variance and thereby reduce the sample complexity of policy gradient methods [10, 11]. However, for the single-agent setting it remains unclear how we can devise a more computationally efficient approach without resorting to second-order techniques.
On the other hand, in supervised learning and RL, SVRPG approaches have demonstrated their effectiveness in significantly reducing variance and enhancing sample efficiency for PG methods [12, 13]. Such methods leverage the well-known control variate analysis, which incorporates both current and past gradient information to form a descent direction that reduces the estimation's variance. This concept motivates the following question addressed in this work:
_Can we design an oracle-efficient solution for addressing the model-free LQR problem by building upon the success of stochastic variance-reduced approaches?_
**Our Contributions**: Toward this end, our main contributions are summarized as follows:
* This is the first work to propose a stochastic variance-reduced policy gradient algorithm featuring a mixed zeroth-order gradient estimation scheme for tackling the model-free and discrete-time LQR problem.
* Theoretical guarantees demonstrate the convergence (Theorem 2) of our approach, while ensuring stability of the system under the iterated policy (Theorem 1).
* We establish conditions on the problem parameters under which our approach achieves an \(\epsilon\)-approximate solution with \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{3-2\beta}\right)\) queries, while utilizing only \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{\beta}\right)\) two-point query information for \(\beta\in(0,1)\). This oracle complexity improves upon the best known result \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)\) by a factor of \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{1-\beta}\right)\) (Corollary 2).
**Main result overview:** The SVRPG method we propose requires a slightly larger total number of queries, namely \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{3-2\beta}\right)\) (this includes one _and_ two-point queries), in comparison to the \(\mathcal{O}\left(\log(1/\epsilon)\right)\) required by the standard ZO2P approach to achieve an \(\epsilon\)-approximate solution; for _large_ \(\beta\) the difference is only a logarithmic factor. However, our approach requires considerably fewer two-point queries, specifically a factor of \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{1-\beta}\right)\) fewer for _small_ \(\beta\). This underscores the benefit of our technique, particularly in applications where conducting two-point function evaluations is prohibitively costly.
### _Related Work_
**Model-free LQR via Policy Gradient Methods:** PG methods have been extensively explored as a means of solving the model-free LQR problem in both discrete [1, 2, 3, 4, 5, 6] and continuous-time settings [14, 15, 16]. Despite the non-convexity of the LQR landscape under the policy search, Fazel et al. [2] proved theoretical guarantees for the global convergence of PG methods for both model-based and model-free settings. Table I summarizes the sample complexity of the aforementioned work.
Although there has been an evident sample complexity reduction from \(\mathcal{O}(\frac{1}{\epsilon}\log\left(1/\epsilon\right))\)[4] to \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)\)[5], this is primarily a result of a more refined analysis rather than algorithmic development.1 In this work, we propose a SVRPG algorithm to reduce the number of two-point queries required to obtain an \(\epsilon\)-approximate solution for the LQR problem.
Footnote 1: We use big-O notation \(\mathcal{O}(\cdot)\) to omit constant factors in the argument.
**Stochastic Variance-Reduced Policy Gradient:** Stochastic variance-reduced gradient descent (SVRG) has emerged as a sample-efficient solution technique for non-convex finite-sum optimization problems. Whilst SVRG methods have long been established for non-convex optimization problems (e.g., SVRG [12], SAG [17], and SAGA [18]), their extension to online RL settings is a relatively recent development (e.g., SVRPG [19, 20, 13]). This extension has presented unique challenges, primarily stemming from policy non-stationarity and approximations in the computation of the gradient. Furthermore, SVRPG approaches generally rely on the assumption of unbiased gradient estimation, a condition that rarely holds for derivative-free techniques. This has been addressed in [21, 22] for finite-sum, non-convex problems.
We emphasize that our work does not revolve around a simple extension of the results in [19, 20, 13] (online RL setting) or [21, 22] (non-convex finite-sum problem). In contrast to the latter, our LQR setting encompasses an online optimization problem with a single cost function. As a result, the sampling variance reduction benefit of using zeroth-order variance-reduced methods cannot be simply extended to our setting. On the other hand, in our setting we have the stabilizing policy requirement, which is commonly taken for granted in the Markov Decision Process (MDP) case [19, 20, 13] with irreducibility and aperiodicity assumptions on the policy. Moreover, the zeroth-order gradient estimation produces biased estimations. This necessitates further derivations to control this bias as we will discuss later.
## II Preliminaries
We summarize key policy gradient results for the LQR problem as well as derivative-free optimization techniques.
### _Discrete-time Linear Quadratic Regulator_
Consider the discrete-time LTI system
\[x_{\tau+1}=Ax_{\tau}+Bu_{\tau},\ \ x_{0}\stackrel{{\text{i.i.d.}}}{{ \sim}}\mathcal{X}_{0}, \tag{1}\]
where \(x_{\tau}\in\mathbb{R}^{n_{x}}\), \(u_{\tau}\in\mathbb{R}^{n_{u}}\), and \(x_{0}\) denote the state and input at time \(\tau\), and the initial condition. The optimal LQR policy associated with (1) is \(u_{\tau}=-K^{*}x_{\tau}\) where \(K^{*}\) solves
\[\operatorname*{argmin}_{K\in\mathcal{K}} \left\{C(K):=\mathbb{E}_{x_{0}\sim\mathcal{X}_{0}}\left[\sum_{ \tau=0}^{\infty}x_{\tau}^{\top}Qx_{\tau}+u_{\tau}^{\top}Ru_{\tau}\right] \right\},\] (2) subject to (1)
where \(Q\in\mathbb{S}_{>0}^{n_{x}}\), \(R\in\mathbb{S}_{>0}^{n_{u}}\), and \(\mathcal{K}:=\{K|\rho(A-BK)<1\}\) denotes all stabilizing controllers \(K\in\mathbb{R}^{n_{u}\times n_{x}}\). The optimal cost is assumed to be finite. This is satisfied when \((A,B)\) is controllable.
In the model-based setting the optimal controller is given by \(K^{*}:=\left(R+B^{\top}PB\right)^{-1}B^{\top}PA\), where \(P\in\mathbb{S}_{>0}^{n_{x}}\) is the solution of the Algebraic Riccati Equation (ARE) [23]. In the absence of the system model \((A,B)\), there is no way to implement an ARE-derived controller. Notably, motivated by the fact that traditional RL techniques aim to find optimal policies for unknown MDPs through direct exploration of the policy space, the line of work led by Fazel et al. [2] and followed by [3, 4, 5, 14, 16, 24] has proved guarantees for the global convergence of PG methods for both model-based and model-free LQR. This is achieved by leveraging fundamental properties of the LQR cost function. Next, we revisit the updating rule of the model-free LQR problem through policy gradient, as well as its important properties.
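For concreteness, both objects are easy to compute numerically when a model is at hand. The following minimal Python sketch (the helper names `lqr_cost` and `lqr_opt_gain` are our own illustrative choices, and \((A,B,Q,R)\) together with \(\Sigma_{0}:=\mathbb{E}[x_{0}x_{0}^{\top}]\) are assumed known here purely for validation; the model-free methods below only ever query \(C(\cdot)\)) evaluates \(C(K)=\operatorname{tr}(P_{K}\Sigma_{0})\) via a discrete Lyapunov equation and \(K^{*}\) via the ARE:

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def lqr_cost(K, A, B, Q, R, Sigma0):
    """C(K) = E[x0^T P_K x0] = tr(P_K Sigma0), where P_K solves the
    Lyapunov equation P_K = Q + K^T R K + (A-BK)^T P_K (A-BK).
    Returns +inf when K is not stabilizing, so C acts as a barrier on K."""
    A_cl = A - B @ K
    if np.max(np.abs(np.linalg.eigvals(A_cl))) >= 1.0:
        return np.inf
    P_K = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
    return float(np.trace(P_K @ Sigma0))

def lqr_opt_gain(A, B, Q, R):
    """K* = (R + B^T P B)^{-1} B^T P A, with P the stabilizing ARE solution."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```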
Suppose that instead of having the true gradient \(\nabla C(K_{l})\) at the \(l\)-th iteration, we possess a finite-sample estimate \(\widehat{\nabla}C(K_{l})\). The policy gradient method's update rule for the LQR problem can be expressed as follows:
\[K_{l+1}=K_{l}-\eta\widehat{\nabla}C(K_{l}),\quad l=0,1,\ldots,L-1 \tag{3}\]
where \(\eta\) represents a positive scalar step-size. We require the following standard assumption [2, 3, 4, 5].
**Assumption 1**: _We have access to an initial stabilizing controller \(K_{0}\) such that \(\rho(A-BK_{0})<1\)._
**Remark 1**: _Note that if the initial controller \(K_{0}\) fails to stabilize system (1), the PG in (3) cannot iteratively converge to a stabilizing policy since \(\widehat{\nabla}C(K_{0})\) becomes undefined._
**Definition 1**: _The sublevel set of stabilizing feedback controllers \(\mathcal{G}\subseteq\mathcal{K}\) is defined as follows_
\[\mathcal{G}:=\{K\ \mid\ C(K)-C(K^{*})\leq\xi\Delta_{0}\},\]
_where \(\Delta_{0}=C(K_{0})-C(K^{*})\) and \(\xi\) is any positive constant._
**Lemma 1**: _Given two stabilizing policies \(K^{\prime}\), \(K\in\mathcal{G}\) such that \(\|K^{\prime}-K\|_{F}\leq h_{\Delta}(K)\ <\infty\), it holds that_
\[|C\left(K^{\prime}\right)-C(K)|\leq h_{\text{cost}}(K)C(K)\|K^{ \prime}-K\|_{F},\] \[\|\nabla C\left(K^{\prime}\right)-\nabla C(K)\|_{F}\leq h_{\text{ grad}}(K)\|K^{\prime}-K\|_{F}.\]
**Lemma 2**: _Let \(K^{*}\in\mathcal{G}\) be the optimal policy that solves (2). Thus, it holds that_
\[C(K)-C\left(K^{*}\right)\leq\frac{1}{\lambda}\|\nabla C(K)\|_{F}^{2},\]
_for any stabilizing controller \(K\in\mathcal{G}\)._
A detailed proof of the above lemmas, along with the explicit expressions for \(h_{\Delta}(K)\), \(h_{\text{cost}}(K)\), \(h_{\text{grad}}(K)\), and \(\lambda\), can be found in [3]. We direct the reader to Appendix A for the definition of \(\bar{h}_{\text{grad}}\), \(\bar{h}_{\text{cost}}\), and \(\underline{h}_{\Delta}\) that are positive coefficients we use further in our derivations.
### _Zeroth-Order Gradient Estimation_
Given a positive scalar smoothing radius \(r\), matrices \(U_{1},\ldots,U_{m}\) drawn i.i.d. from the uniform distribution \(\mathcal{S}_{r}\) over matrices with \(\|U\|_{F}=r\), and a stabilizing policy \(K\in\mathcal{G}\), we define the one-point and two-point zeroth-order estimates of the true gradient \(\nabla C(K)\) as follows:
\[\textbf{ZO1P}:\overline{\nabla}C(K):=\sum_{i=1}^{m}\frac{dC(K+U_{i})U_{i}}{mr ^{2}},\]
\[\textbf{ZO2P}:\widetilde{\nabla}C(K):=\sum_{i=1}^{m}\frac{d\left(C(K+U_{i})- C(K-U_{i})\right)U_{i}}{2mr^{2}},\]
where \(d=n_{x}n_{u}\) and \(C(\cdot)\) denotes the true cost value provided by an oracle.
We emphasize that, in practice, we have a finite number of samples denoted by \(m\) to compute ZO1P and ZO2P. Consequently, both ZO1P and ZO2P gradient estimation schemes exhibit an inherent bias. In addition, for simplicity we assume access to the true cost, as provided by an oracle [4]. In reality, practical limitations prevent us from simulating our system over an infinite horizon. However, as in [3, Appendix B] the finite horizon approximation for the cost is upper-bounded by the true cost, with the approximation error controllable by the horizon length. Our work can thus be readily extended to this finite-horizon approximated cost setting.
Moreover, the expressions for ZO1P and ZO2P shed light on the fact that whilst ZO2P requires more computational resources, needing two cost queries for each sampled matrix \(U\overset{\text{i.i.d.}}{\sim}\mathcal{S}_{r}\), it offers a lower-variance estimate, which results in a more efficient sample complexity compared to ZO1P [4]. This makes ZO2P a more favorable choice over ZO1P gradient estimation. Next, we present the PG algorithm with ZO2P gradient estimation for solving the model-free LQR.
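As an illustration, both estimators take only a few lines of Python; a matrix uniform on the sphere \(\|U\|_{F}=r\) can be drawn by normalizing a Gaussian sample. This is a sketch under the oracle-cost assumption above, with function names of our own choosing:

```python
import numpy as np

def _sphere_sample(shape, r, rng):
    """Draw U uniformly from the sphere of Frobenius radius r."""
    U = rng.standard_normal(shape)
    return U * (r / np.linalg.norm(U))

def zo1p_gradient(cost, K, m, r, rng):
    """ZO1P: one cost query per sampled direction."""
    d = K.size  # d = n_x * n_u
    g = np.zeros_like(K)
    for _ in range(m):
        U = _sphere_sample(K.shape, r, rng)
        g += cost(K + U) * U
    return d * g / (m * r**2)

def zo2p_gradient(cost, K, m, r, rng):
    """ZO2P: two cost queries per direction, but lower variance."""
    d = K.size
    g = np.zeros_like(K)
    for _ in range(m):
        U = _sphere_sample(K.shape, r, rng)
        g += (cost(K + U) - cost(K - U)) * U
    return d * g / (2 * m * r**2)
```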
```
1:Input:\(L\), \(\eta\), \(n_{1}\), \(r\), \(K_{0}\)
2:for\(l=0,\ldots,L-1\)do
3: Compute \(\widetilde{\nabla}C(K_{l})\) with \(n_{1}\) samples and radius \(r\) via ZO2P
4:\(K_{l+1}=K_{l}-\eta\widetilde{\nabla}C(K_{l})\)
5:endfor
6:Output \(K_{\text{out}}:=K_{L}\)
```
**Algorithm 1** PG with ZO2P Gradient Estimation.
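With the ZO2P estimator sketched above, Algorithm 1 reduces to a plain descent loop. A hedged sketch (in practice one should also check that each iterate remains stabilizing):

```python
def pg_zo2p(cost, K0, L, eta, n1, r, rng):
    """Sketch of Algorithm 1: PG with ZO2P gradient estimates."""
    K = K0.copy()
    for _ in range(L):
        K = K - eta * zo2p_gradient(cost, K, n1, r, rng)  # lines 3-4
    return K
```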
It is well-established [5] that under certain conditions on the quality of the estimated gradient, i.e., with \(n_{1}\) large and \(r\) small, Algorithm 1 converges linearly to the optimal solution of (2) while ensuring \(K_{l}\in\mathcal{G}\) at each iteration. However, due to the still high variance of the gradient estimation step, the required number of two-point queries to achieve an \(\epsilon\)-approximate solution may become prohibitively large.
## III An SVRPG Algorithm for model-free LQR
With the purpose of reducing the number of two-point cost queries needed to achieve an \(\epsilon\)-approximate solution, we propose a SVRPG approach featuring a mixed gradient estimation scheme. The idea is to use a ZO2P gradient estimate in the outer-loop and a ZO1P estimate in the inner-loop so as to lower the computational complexity associated with two-point cost queries compared to Algorithm 1. The need for two-point cost queries then arises only periodically instead of at each iteration.
```
1:Input:\(N\), \(T\), \(\eta\), \(n_{1}\), \(n_{2}\), \(r_{\text{out}}\), \(r_{\text{in}}\), \(K_{T}^{0}:=\widetilde{K}^{0}:=K_{0}\).
2:for\(n=0,\ldots,N-1\)do
3:\(K_{0}^{n+1}:=\widetilde{K}^{n}:=K_{T}^{n}\)
4: Compute \(\tilde{\mu}=\widetilde{\nabla}C(\widetilde{K}^{n})\) with \(n_{1}\) samples and \(r_{\text{out}}\)\(\triangleright\) ZO2P
5:for\(t=0,\ldots,T-1\)do
6: Compute \(\overline{\nabla}C(K_{t}^{n+1})\), \(\overline{\nabla}C(\tilde{K}^{n})\) with \(n_{2}\) samples and \(r_{\text{in}}\)\(\triangleright\) ZO1P
7:\(v_{t}^{n+1}=\tilde{\mu}+\overline{\nabla}C(K_{t}^{n+1})-\overline{\nabla}C( \tilde{K}^{n})\)
8:\(K_{t+1}^{n+1}=K_{t}^{n+1}-\eta v_{t}^{n+1}\)
9:endfor
10:endfor
11:Output \(K_{\text{out}}:=K_{T}^{N}\).
```
**Algorithm 2** LQR via SVRPG
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Methods** & **Sample Complexity (\(\mathbb{S}_{c}\))** & **Two-point Oracle Complexity (\(\mathcal{N}_{\text{ZO2P}}\))** \\ \hline PG - ZO1P (Fazel et al. (2018), [2]) & \(\mathcal{O}(1/\epsilon^{4}\cdot\log\left(1/\epsilon\right))\) & - \\ \hline PG - ZO1P (Gravell et al. (2019), [3]) & \(\mathcal{O}(1/\epsilon^{4}\cdot\log\left(1/\epsilon\right))\) & - \\ \hline PG - ZO1P (Malik et al. (2019), [4]) & \(\mathcal{O}(1/\epsilon^{2}\cdot\log\left(1/\epsilon\right))\) & - \\ \hline PG - ZO2P (Malik et al. (2019), [4]) & \(\mathcal{O}(1/\epsilon\cdot\log\left(1/\epsilon\right))\) & \(\mathcal{O}(1/\epsilon\cdot\log\left(1/\epsilon\right))\) \\ \hline PG - ZO2P (Mohammadi et al. (2020), [5]) & \(\mathcal{O}(\log\left(1/\epsilon\right))\) & \(\mathcal{O}(\log\left(1/\epsilon\right))\) \\ \hline SVRPG - Algorithm 2 (This paper) & \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{3-2\beta}\right)\) & \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{\beta}\right)\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison on the sample complexity (\(\mathbb{S}_{c}\)), and two-point oracle complexity (\(\mathcal{N}_{\text{ZO2P}}\)) required to achieve \(\mathbb{E}\left(C(K_{\text{out}})-C(K^{*})\right)\leq\epsilon\). Here \(\beta\in(0,1)\).
In contrast to Algorithm 1, our SVRPG algorithm divides the total number of iterations into \(N\) epochs, each of length \(T\). For each epoch (outer-loop), we estimate gradients using \(n_{1}\) samples with smoothing radius \(r_{\text{out}}\), whereas inside each epoch (inner-loop) we use \(n_{2}\) samples with smoothing radius \(r_{\text{in}}\). In lines 3-4, we fix the current policy \(\tilde{K}^{n}\) and compute \(\widetilde{\nabla}C(\tilde{K}^{n})\) via ZO2P. Throughout the inner-loop iterations, we estimate \(\overline{\nabla}C(K_{t}^{n+1})\) and \(\overline{\nabla}C(\tilde{K}^{n})\) with the same set of samples via ZO1P. Finally, in line 8 we perform a gradient descent step, using the stochastic variance-reduced gradient computed in line 7.
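Putting the pieces together, a sketch of Algorithm 2 in Python reads as follows; the crucial detail is that the two inner-loop ZO1P estimates share the same perturbations (the helper functions are those sketched in Section II):

```python
def svrpg_lqr(cost, K0, N, T, eta, n1, n2, r_out, r_in, rng):
    """Sketch of Algorithm 2: ZO2P anchor in the outer loop, ZO1P inside."""
    K = K0.copy()
    d = K.size
    for _ in range(N):                                    # epochs (outer loop)
        Ktilde = K.copy()                                 # line 3
        mu = zo2p_gradient(cost, Ktilde, n1, r_out, rng)  # line 4 (ZO2P)
        for _ in range(T):                                # inner loop
            # one common set of perturbations for both ZO1P estimates (line 6)
            Us = [_sphere_sample(K.shape, r_in, rng) for _ in range(n2)]
            g_K = sum(cost(K + U) * U for U in Us) * d / (n2 * r_in**2)
            g_anchor = sum(cost(Ktilde + U) * U for U in Us) * d / (n2 * r_in**2)
            v = mu + g_K - g_anchor                       # line 7
            K = K - eta * v                               # line 8
    return K
```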
To close this section, we briefly discuss the idea behind SVRG-based methods. Consider a fixed stabilizing policy \(\tilde{K}\in\mathcal{G}\) and estimate \(\widetilde{\nabla}C(\tilde{K})\) using \(n_{1}\) samples. Then perform \(K\gets K-\eta v\), with

\[v=\widetilde{\nabla}C(\tilde{K})+\overline{\nabla}C(K)-\overline{\nabla}C(\tilde{K}),\]

where \(\overline{\nabla}C(K)\) and \(\overline{\nabla}C(\tilde{K})\) are estimated by using the _same_ set of \(n_{2}\) samples. Note that \(\mathbb{E}\widetilde{\nabla}C(\tilde{K})=\mathbb{E}\overline{\nabla}C(\tilde{K})\) (see Appendix D). Therefore, since \(\overline{\nabla}C(K)\) and \(\overline{\nabla}C(\tilde{K})\) are correlated through their samples, the variance of the stochastic gradient \(v\) may be reduced by controlling the covariance across the gradient estimates. That is, \(\mathbf{var}(v)=\mathbf{var}(X-Y)=\mathbf{var}(X)+\mathbf{var}(Y)-2\,\mathbf{cov}(X,Y)\), with \(X=\overline{\nabla}C(K)\), \(Y=\overline{\nabla}C(\tilde{K})-\widetilde{\nabla}C(\tilde{K})\), where \(\mathbf{cov}(\cdot,\cdot)\) denotes the covariance operator.
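A toy numerical illustration of this mechanism, with synthetic scalars standing in for the gradient estimates (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.standard_normal(n)                          # shared randomness (common samples)
X = 3.0 + Z + 0.1 * rng.standard_normal(n)          # "gradient estimate at K"
Y_anchor = 2.0 + Z + 0.1 * rng.standard_normal(n)   # "ZO1P estimate at Ktilde"
mu = 2.0                                            # accurate outer-loop estimate of E[Y_anchor]
v = mu + X - Y_anchor                               # SVRG-style estimator: E[v] = E[X]
print(np.var(X), np.var(v))                         # var(v) << var(X) due to the large covariance
```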
## IV Theoretical Guarantees
Without loss of generality and for the purpose of the theoretical analysis only, set \(r_{\text{out}}=r_{\text{in}}=r\) in Algorithm 2. In Proposition 1 we first establish the convergence rate of Algorithm 1. This allows for a fair comparison on the sample and oracle complexities of Algorithm 2, detailed in Corollaries 1 and 2. Moreover, we outline the conditions under which Algorithm 2 converge to the optimal solution (Theorem 2), all while staying within the stabilizing sub-level set (Theorem 1) throughout the algorithm's iterations.
**Proposition 1**: _(Convergence of Algorithm 1) Suppose the smoothing radius, number of samples, and number of iterations are of the order of \(n_{1}=\mathcal{O}(1)\), \(r=\mathcal{O}(\sqrt{\epsilon})\) and \(L=\mathcal{O}\left(\log(1/\epsilon)\right)\), respectively. Then, Algorithm 1 achieves an \(\epsilon\)-approximate solution with \(\mathcal{O}\left(\log(1/\epsilon)\right)\) samples._
**Remark 2**: _We stress that linear convergence with ZO2P was first established in [5] for this problem and extended to continuous-time in [24, 16]. However, in Appendix B we present an alternative and straightforward proof, one that relies simply on the upper bound of the expectation2 of the estimated gradient, i.e., \(\mathbb{E}\|\widetilde{\nabla}C(K)\|_{F}^{2}\) (Lemma 4), and does not involve proving that \(\langle\widehat{\nabla}C(K),\nabla C(K)\rangle\geq\mu_{1}\|\nabla C(K)\|_{F}^{2}\) and \(\|\widehat{\nabla}C(K)\|_{F}^{2}\leq\mu_{2}\|\nabla C(K)\|_{F}^{2}\) are satisfied with high probability, for \(\mu_{1},\mu_{2}\in\mathbb{R}_{+}\)[5, Section V]._
Footnote 2: Expectation is taken with respect to \(U\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{S}_{r}\) and \(x_{0}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{X}_{0}\).
**Assumption 2**: _Let \(\overline{g}(K)=\frac{d}{r^{2}}C(K+U)U\) be a single sample ZO1P gradient estimation with \(U\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{S}_{r}\). Then, for any two stabilizing policies \(K\), \(K^{\prime}\in\mathcal{G}\), we assume that_
\[\mathbb{E}\|\overline{g}(K)-\overline{g}(K^{\prime})\|_{F}\leq C_{g}\mathbb{E} \|K-K^{\prime}\|_{F}.\]
_for some positive constant \(C_{g}\)._
**Remark 3**: _Note that this assumption on the local smoothness of the estimated gradient is a standard requirement for variance-reduced algorithms, as established in [25, 26]. In the context of the LQR problem, this assumption has the same flavor as the local Lipschitz condition on the empirical cost function in [4, Section 2]._
Next, we present two auxiliary results that are instrumental in proving our main results. First, we control the bias in the zeroth-order gradient estimation (Lemma 3) and establish a uniform bound for the ZO2P estimated gradient (Lemma 4).
**Lemma 3**: _(Controlling the bias) Let \(\widehat{\nabla}C(K)\) be the ZO1P or ZO2P gradient estimations evaluated at the stabilizing policy \(K\in\mathcal{G}\). Then,_
\[\mathbb{E}\|\nabla C(K)-\mathbb{E}\widehat{\nabla}C(K)\|_{F}^{2}\leq\mathcal{ B}(r):=\left(\bar{h}_{\text{grad}}r\right)^{2}.\]
See Appendix D.
**Lemma 4**: _Let \(\widetilde{\nabla}C(K)\) be the ZO2P gradient estimate. For any stabilizing policy \(K\in\mathcal{G}\), it holds that_

\[\mathbb{E}\|\widetilde{\nabla}C(K)\|_{F}^{2}\leq 8d^{2}\mathcal{B}(r)+2d^{2}\mathbb{E}\|\nabla C(K)\|_{F}^{2}.\]
See Appendix C.
### _Stability Analysis_
We now introduce the conditions on the number of samples \(\{n_{1},n_{2}\}\), step-size \(\eta\) and smoothing radius \(r\) to ensure that Algorithm 2 produces a stabilizing policy \(K_{t+1}^{n+1}\) at each epoch \(n\in\{0,\ldots,N-1\}\) and each \(t\in\{0,\ldots,T-1\}\).
**Theorem 1**: _(Per-iteration Stability) Given \(K_{0}\in\mathcal{G}\), suppose we set the number of outer and inner-loop samples such that \(\{n_{1},n_{2}\}\gtrsim\bar{h}_{s}\left(\frac{\psi}{6},\delta\right)\) is satisfied, the step-size \(\eta\lesssim\frac{r^{2}\Delta_{0}}{h_{\text{grad}}d^{2}}\), and the smoothing radius_
\[r\leq\underline{h}_{r}\left(\frac{\psi}{6}\right):=\min\left\{\underline{h}_{ \Delta},\frac{1}{\bar{h}_{\text{cost}}},\frac{\psi}{6\bar{h}_{\text{grad}}} \right\},\]
_with \(\delta\in(0,1)\), \(\psi:=\sqrt{\frac{\lambda\Delta_{0}}{4}}\). Then, with probability \(1-\delta\), it holds that \(K_{t+1}^{n+1}\in\mathcal{G}\), for all \(n\) and \(t\)._
A detailed proof with the explicit expression of \(\bar{h}_{s}\left(\frac{\psi}{6},\delta\right)\) is provided in Appendix E.
**Discussion:** We emphasize that, unlike the RL setting in [13, 19], in the LQR optimal control problem, it is imperative to ensure the closed-loop stability of (1) under \(K_{t+1}^{n+1}\) for all \(n\in\{0,\ldots,N-1\}\) and \(t\in\{0,\ldots,T-1\}\). However, despite its dual-loop structure, demonstrating that \(K_{t+1}^{n+1}\in\mathcal{G}\) throughout the iterations of Algorithm 2 can be achieved by following a similar approach as outlined in previous works without variance reduction [2, 3, 4, 5].
To this end, we first set the first iteration as the base case and demonstrate that as long as \(K_{0}\in\mathcal{G}\) (Assumption 1), then \(C(K_{1}^{1})-C(K^{*})\leq C(K_{0})-C(K^{*})\) holds true, indicating that \(K_{1}^{1}\in\mathcal{G}\). To establish this, we use the Lipschitz property of the cost function (Lemma 1), along with the gradient domination condition (Lemma 2), and the matrix Bernstein inequality [27, Section 6]. The latter provides the necessary conditions on \(n_{1},n_{2}\) and \(r\) to upper
bound \(\|\nabla C(K)-v_{0}^{1}\|_{F}\leq\psi\). The stability analysis is then completed by applying an induction step to this base case.
### _Convergence Analysis_
We now proceed with our analysis to provide the necessary conditions on the number of samples \(\{n_{1},n_{2}\}\), smoothing radius \(r\), step-size \(\eta\), and total number of iterations \(NT\) to ensure the global convergence of Algorithm 2.
**Theorem 2**: _(Convergence Analysis) Suppose we select \(n_{2}\geq\max\left\{96d^{2},\frac{(3C_{g}^{2}+12\tilde{h}_{\text{grad}}^{2}d^{ 2})T^{2}}{h_{\text{grad}}^{2}}\right\}\), and \(\eta\leq\frac{1}{4h_{\text{grad}}}\). Then, the policy \(K_{out}\) returned by Algorithm 2 after \(NT\) iterations enjoys the following property:_
\[\mathbb{E}\left(C(K_{\text{out}})-C(K^{*})\right)\leq\Delta_{0}\times\left(1- \frac{\eta\lambda}{16}\right)^{NT}+\frac{\mathcal{B}(r)\phi}{\lambda n_{2}}.\]
_with \(\phi=120+192d^{2}\)._
Below we provide the proof strategy for this theorem. A detailed proof is presented in Appendix F.
**Proof Sketch:** Theorem 2 is proved as follows:
1) With the fact that \(K_{t+1}^{n+1}\in\mathcal{G}\) for all \(n\in\{0,\ldots,N-1\}\) and \(t\in\{0,\ldots,T-1\}\) (Theorem 1), along with Lemma 1 and Young's inequality we can write
\[\mathbb{E}\left(C(K_{t+1}^{n+1})-C(K_{t}^{n+1})\right)\leq\frac{ 3\eta}{4}\mathbb{E}\|\nabla C(K_{t}^{n+1})-v_{t}^{n+1}\|_{F}^{2}\] \[-\frac{\eta}{8}\mathbb{E}\|\nabla C(K_{t}^{n+1})\|_{F}^{2}-\frac{ \tilde{h}_{\text{grad}}}{2}\mathbb{E}\|K_{t+1}^{n+1}-K_{t}^{n+1}\|_{F}^{2}, \tag{4}\]
2) We control \(\mathbb{E}\|\nabla C(K_{t}^{n+1})-v_{t}^{n+1}\|_{F}^{2}\) in the above expression by decomposing it into bias and variance terms. In particular, we have: biases from the inner and outer-loop estimations + variance of the ZO2P outer-loop estimation + ZO1P gradient estimation difference at \(K_{t}^{n+1}\) and \(\tilde{K}^{n}\). Both ZO1P and ZO2P biases are controlled in Lemma 3. For the variance of the ZO2P gradient estimation we use Lemma 4 and for the ZO1P gradient difference term we assume local smoothness (Assumption 2). Thus, with \(n_{2}\geq 96d^{2}\), we have
\[\mathbb{E}\|\nabla C(K_{t}^{n+1})-v_{t}^{n+1}\|_{F}^{2}\leq\frac {\phi\eta\mathcal{B}(r)}{16n_{2}}+\tilde{\phi}\mathbb{E}\|K_{t}^{n+1}-\tilde{ K}^{n}\|_{F}^{2}\] \[+\frac{1}{16}\mathbb{E}\|\nabla C(K_{t}^{n+1})\|_{F}^{2},\text{ with }\tilde{\phi}=\frac{4}{3n_{2}}\left(\frac{3C_{g}^{2}}{2}+6\tilde{h}_{\text{grad}}^{2 }d^{2}\right).\]
3) The proof is completed by using the PL condition (Lemma 2) and telescoping (4) over outer and inner-loop iterations, with \(n_{2}\geq\frac{\left(3C_{g}^{2}+12\tilde{h}_{\text{grad}}^{2}d^{2}\right)T^{2} }{h_{\text{grad}}^{2}}\), and \(\eta\leq\frac{1}{4\tilde{h}_{\text{grad}}}\).
**Corollary 1**: _(Sample Complexity) Under the conditions of Theorem 2, and suppose we select the total number of iterations and smoothing radius according to_
\[NT\geq\frac{16\log\left(2\Delta_{0}/\epsilon\right)}{\eta\lambda},\text{ }r\leq\sqrt{\frac{n_{2}\lambda\epsilon}{2\phi\tilde{h}_{\text{grad}}^{2}}},\]
_then Algorithm 2 achieves \(\mathbb{E}\left(C(K_{\text{out}})-C(K^{*})\right)\leq\epsilon\) with \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{3-2\beta}\right)\) cost queries._
The total number of cost queries required in Algorithm 2 is given by \(\mathbb{S}_{c}:=NTn_{2}+Nn_{1}\). Therefore, since \(n_{1}=\mathcal{O}(1)\), the sample complexity of Algorithm 2 is dominated by the order of \(NTn_{2}\). As a result, by setting \(N=\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{\beta}\) and \(T=\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{1-\beta}\), Algorithm 2 returns an \(\epsilon\)-approximate solution with \(\mathcal{O}\left(\log\left(1/\epsilon\right)^{3-2\beta}\right)\) total number of cost queries.
**Corollary 2**: _(Oracle Complexity Reduction) Under the conditions of Theorem 2 and Corollary 1, it holds that Algorithm 2 achieves an \(\epsilon\)-approximate solution with a reduction of \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{1-\beta}\) in the two-point cost queries when compared to Algorithm 1, where \(\beta\in(0,1)\)._
**Discussion:** Similar to Corollary 1, we set \(N=\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{\beta}\) and \(T=\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{1-\beta}\). Then, we observe that Algorithm 2, with number of outer-loop samples \(n_{1}=\mathcal{O}(1)\), demands only \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{\beta}\) two-point queries (i.e., the more resource-intensive cost queries to obtain) to achieve an \(\epsilon\)-approximate solution. This improves upon the two-point oracle complexity of Algorithm 1 by a factor of \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{1-\beta}\). To verify this we simply note that our algorithm necessitates \(\mathcal{N}_{\text{ZO2P}}=Nn_{1}=\mathcal{O}\left(\log\left(1/\epsilon \right)\right)^{\beta}\), whereas Algorithm 1 requires \(\mathcal{N}_{\text{ZO2P}}=\mathcal{O}\left(\log\left(1/\epsilon\right)\right)\) two-point cost queries to attain \(\mathbb{E}\left(C(K_{\text{out}})-C(K^{*})\right)\leq\epsilon\).
## V Numerical Experiments
Numerical experiments 3 are now conducted to illustrate and evaluate the effectiveness of Algorithm 2. To ensure a fair comparison of the performance of the algorithms, we set \(x_{0}^{\top}=[1,1,1]\) for computing the normalized cost gap between the current and optimal cost, namely \(\frac{C(K_{t})-C(K^{*})}{C(K_{0})-C(K^{*})}\), and \(\mathcal{X}_{0}\overset{d}{=}\mathcal{N}(0,I_{n_{x}})\) for the cost oracle generation.
Footnote 3: Code for exact reproduction of the proposed experiments can be downloaded from [https://github.com/jd-anderson/LQR_SVRPG](https://github.com/jd-anderson/LQR_SVRPG)
Consider an unstable system with \(n_{x}=3\) states and \(n_{u}=1\) input, where the system and cost matrices are detailed in Appendix G. We set the initialization parameters of Algorithms 1 and 2 as follows: 1) \(r=1\times 10^{-4}\), \(n_{1}=50\), \(\eta=1\times 10^{-4}\). 2) \(r_{\text{in}}=5\times 10^{-2}\), \(r_{\text{out}}=1\times 10^{-4}\), \(n_{1}=50\), \(n_{2}=25\), \(N=125\), \(T=4\), \(\eta=1\times 10^{-4}\).
Figure 1 demonstrates the convergence of Algorithms 1 and 2. It also includes the result for policy gradient descent in the model-based setting. The latter highlights the limit of how well the PG algorithms discussed in this work can do without knowing the system model.
The figure shows that both Algorithms 1 and 2 achieve an equivalent convergence performance for the specified parameters. We emphasize that Algorithms 1 and 2 use \(\mathbb{S}_{c}=50000\) and \(\mathbb{S}_{c}=37500\) cost queries, respectively, to attain \(\epsilon=3\times 10^{-2}\). Moreover, in terms of two-point queries, Algorithm 2 necessitates only \(\mathcal{N}_{\text{ZO2P}}=Nn_{1}=6250\), whereas Algorithm 1 is entirely reliant on two-point queries, requiring \(25000\) to achieve the same accuracy as shown in the figure. The figure also shows that the performance of Algorithm 1 degrades when the number of two-point queries decreases to \(6500\). This demonstrates that with our SVRPG approach we are able to effectively reduce the two-point oracle complexity for solving the model-free LQR problem.
## VI Conclusions and Future Work
We proposed an oracle-efficient algorithm to solve the model-free LQR problem. Our approach combines an SVRPG-based method with a mixed zeroth-order gradient estimation scheme. This mixed gradient estimation yields a reduction in the number of two-point cost queries required to achieve an \(\epsilon\)-approximate solution, since the more resource-expensive queries are now required less frequently. We proved that our approach improves upon the standard ZO2P gradient estimation method by a factor of \(\mathcal{O}\left(\log\left(1/\epsilon\right)\right)^{1-\beta}\) in two-point query information. Future work will involve exploring loopless variants and recursive momentum-based approaches to further reduce the two-point oracle complexity required to solve the model-free LQR problem.
# Minimal Left-Right Symmetric Model with \(A_{4}\) modular symmetry

Ankita Kakoti, Bichitra Bijay Boruah, Mrinal Kumar Das (arXiv:2301.13552)
###### Abstract
In this paper, we have realized the left-right symmetric model with modular symmetry. We have used \(\Gamma(3)\) modular group which is isomorphic to non-abelian discrete symmetry group \(A_{4}\). The advantage of using modular symmetry is the non-requirement for the use of extra particles called 'flavons'. In this model, the Yukawa couplings are expressed in terms of modular forms \((Y_{1},Y_{2},Y_{3})\). In this work, we have studied minimal Left-Right Symmetric Model for both type-I and type-II dominances. Here, we have calculated the values for the Yukawa couplings and then plotted it against the sum of the neutrino masses. The results obtained are well within the experimental limits for the desired values of sum of neutrino masses. We have also briefly analyzed the effects of the implications of modular symmetry on neutrinoless double beta decay with the new physics contributions within Left-Right Symmetric Model.
## I Introduction
Despite the huge and continued success of the Standard Model (SM) of particle physics, it leaves some puzzles unanswered, such as the existence of neutrino masses, the baryon asymmetry of the universe, and the existence of dark matter. The discovery of neutrino oscillation by the Sudbury Neutrino Observatory and Super-Kamiokande experiments was a milestone in the area of neutrino physics. Experiments like MINOS [1], T2K [2], Daya-Bay [3], Double-Chooz [4], RENO [5] etc. provided evidence that neutrinos are massive, which is one of the most compelling indications that we need to go beyond the Standard Model. However, in spite of the huge achievements in determining the neutrino oscillation parameters in solar, atmospheric, reactor and accelerator neutrino experiments, many questions related to neutrinos still remain unsolved. Among these lie the questions regarding the absolute mass scale of neutrinos, the exact nature of the particle (Dirac or Majorana), the hierarchical pattern of the mass spectrum (Normal or Inverted) and leptonic CP violation. The absolute mass scale of the neutrinos is not yet known. However, the Planck experiment has given an upper bound on the sum of the light neutrino masses of \(\Sigma|m_{\nu_{i}}|<0.23\,eV\) in 2012 [6], and recently the bound has been tightened to \(\Sigma|m_{\nu_{i}}|<0.11\,eV\)[7]. The most reliable data pertaining to the neutrino oscillation parameters are found in the \(3\sigma\) global fit [8], as shown in table (1).
\begin{table}
\begin{tabular}{|l|c|l|} \hline Parameters & Normal & Inverted \\ & Ordering & Ordering \\ \hline \(\Delta m_{21}^{2}\) & 6.82 \(\rightarrow\) 8.04 & 6.82 \(\rightarrow\) 8.04 \\ (\(10^{-5}eV^{2}\)) & & \\ \hline \(\Delta m_{3l}^{2}\) & 2.435 \(\rightarrow\) 2.598 & \(-\)2.581 \(\rightarrow\) \\ (\(10^{-3}eV^{2}\)) & & \(-\)2.414 \\ \hline \(sin^{2}\,\theta_{12}\) & 0.264 \(\rightarrow\) 0.343 & 0.269 \(\rightarrow\) 0.343 \\ \hline \(sin^{2}\,\theta_{23}\) & 0.415 \(\rightarrow\) 0.616 & 0.419 \(\rightarrow\) 0.617 \\ \hline \(sin^{2}\,\theta_{13}\) & 0.02032 \(\rightarrow\) & 0.02052 \(\rightarrow\) \\ & 0.02410 & 0.02428 \\ \hline \end{tabular}
\end{table}
Table 1: Global fit \(3\sigma\) values for neutrino oscillation parameters.
We have used the definition,
\[\Delta m^{2}_{3l}=\Delta m^{2}_{31};\Delta m^{2}_{31}>0;NO \tag{1.1}\]
\[\Delta m^{2}_{3l}=\Delta m^{2}_{32};\Delta m^{2}_{32}<0;IO \tag{1.2}\]
The simplest way to account for neutrino masses is the seesaw mechanism, which may be of type I [9], [10], type II [11], [12], type III [13] or inverse seesaw [14] form. These are extensions of the SM which incorporate extra particles such as right-handed fermions, scalar fermion triplets, gauge singlet neutral fermions etc. BSM physics also sheds light upon phenomena like the baryon asymmetry of the universe (BAU) [15], Lepton Number Violation (LNV) [16], Lepton Flavor Violation (LFV) [17], the existence of dark matter [18], [19] etc. A BSM framework which has been successful in explaining the first three of these phenomenologies is the Left-Right Symmetric Model (LRSM) [20; 21; 22; 23; 24], an extension of the SM corresponding to the addition of the \(SU(2)_{R}\) group to the theory. The gauge group of LRSM is \(SU(3)_{C}\otimes SU(2)_{R}\otimes SU(2)_{L}\otimes U(1)_{B-L}\). The type I and type II seesaw masses appear naturally in the model. The right-handed neutrinos are an essential part of the model, and they acquire a Majorana mass when the \(SU(2)_{R}\) symmetry is broken. LRSM provides a natural framework to understand the spontaneous breaking of parity and the origin of small neutrino masses by the seesaw mechanism [25].
Another concerning aspect is the ambiguity regarding the nature of neutrinos, which has not yet been settled by the SM of particle physics: whether neutrinos are Dirac or Majorana fermions. This problem is directly connected to the issue of lepton number conservation. One process of fundamental importance, which arises in almost any extension of the SM, is Neutrinoless Double Beta Decay (NDBD) [26], [27], which, if verified, can assure that neutrinos are Majorana fermions. NDBD is a slow radioactive process that transforms a nuclide of atomic number Z into its isobar with atomic number Z+2 [28],
\[N(A,Z)\to N(A,Z+2)+e^{-}+e^{-} \tag{1.3}\]
The main aim in the search for NDBD (\(0\nu\beta\beta\)) is the measurement of the effective Majorana neutrino mass, which is a combination of the neutrino mass eigenstates and neutrino mixing matrix terms [28]. However, no experimental evidence for the decay has been found to date. In addition to the determination of the effective mass, the half-life of the decay [29], combined with sufficient knowledge of the nuclear matrix elements (NME), allows us to set a constraint on the neutrino masses. Experiments like KamLAND-Zen [30] and GERDA [31], which use Xenon-136 and Germanium-76 respectively, have improved the lower bound on the half-life of the decay process. KamLAND-Zen imposes the best lower limit on the half-life, \(T_{1/2}^{0\nu}>1.07\times 10^{26}\) yr at 90% CL, with the corresponding upper limit on the effective Majorana mass in the range (0.061-0.165) eV. There are several contributions in LRSM that appear due to additional RH current interactions, giving rise to sizeable LFV rates for TeV-scale RH neutrinos that occur at rates accessible in current experiments. It has been found that the most significant constraints are provided by the decays \(\mu\to 3e\) and \(\mu\rightarrow\gamma e\). In the Standard Model, these LFV decays are suppressed by the tiny neutrino masses. No experiment has so far observed any flavor violating process involving charged leptons. However, many experiments are currently going on to set strong limits on the most relevant LFV observables, which will constrain the parameter space of many new models. The best bound on the branching ratio for LFV decays of the form \(\mu\rightarrow\gamma e\) comes from the MEG experiment and is set at \(BR(\mu\rightarrow\gamma e)<4.2\times 10^{-13}\). In case of the decay \(\mu\to 3e\), the bound is set by the SINDRUM experiment at \(BR(\mu\to 3e)<1.0\times 10^{-12}\).
As mentioned, LRSM is an important theory that incorporates the above mentioned neutrino phenomenologies. There are many works where the authors make use of discrete symmetry groups like \(A_{4}\)[32], \(S_{4}\)[33], \(Z_{2}\) etc. [34] to analyze the problem of the flavor structure of fermions and to study various related phenomenologies. In our work, we have used \(A_{4}\) modular symmetry to study neutrino masses and mixings and hence study Neutrinoless Double Beta Decay within the model. The advantage of using modular symmetry over discrete flavor symmetries is that the study of the model can be done without the introduction of extra particles called 'flavons'. Hence the model is minimal.
However, in this work we have not done a very detailed analysis of the above mentioned phenomenologies, but only realized the left-right symmetric model with the help of \(A_{4}\) modular symmetry and studied the variations of new physics contributions of neutrinoless double beta decay within LRSM with the range of values for Yukawa couplings, which in our model is expressed as modular forms. In section (II), we have given a detailed explanation of the left-right symmetric model, the associated Lagrangian and the mass terms. We begin section (III) by introducing
modular symmetry and then in section (IV), we incorporate modular symmetry into LRSM and determine the associated mass matrices. In section (V), we present a very brief discussion of neutrinoless double beta decay and its associated contributions and their relations with the modular forms. In section (VI), the numerical analysis and results of this work has been discussed and the last section reads the conclusion for the present work.
## II Minimal Left-Right Symmetric Model
The Left-Right Symmetric Model (LRSM) was first introduced around 1974 by Pati and Salam; Rabindra N. Mohapatra and Goran Senjanovic were also pioneers of this very elegant theory. LRSM is an extension of the Standard Model of particle physics with gauge group \(SU(3)_{C}\otimes SU(2)_{R}\otimes SU(2)_{L}\otimes U(1)_{B-L}\), which has been studied by several groups since the 1970s [25], [21; 22; 23; 24]. The usual type-I and type-II seesaw neutrino masses arise naturally in the model, and the seesaw scale is identified with the breaking of \(SU(2)_{R}\). Some other problems are also addressed in LRSM, like parity, CP violation in weak interactions, massive neutrinos, the hierarchy problem, etc. LRSM removes the disparity between the left- and right-handed fields by placing the RH fields in doublets under the additional \(SU(2)_{R}\), with the right-sector couplings kept the same as those of the left sector by left-right symmetry. In this model, the electric charge is given by \(Q=T_{3L}+T_{3R}+\frac{B-L}{2}\), where \(T_{3L}\) and \(T_{3R}\) are the generators of \(SU(2)_{L}\) and \(SU(2)_{R}\) respectively, and \(B-L\) refers to baryon number minus lepton number. The particle content of the model, along with the respective charge assignments, is given in Table 3. The matrix representation for the scalar sector is given by,
\[\phi=\begin{pmatrix}\phi_{1}^{0}&\phi_{1}^{+}\\ \phi_{2}^{-}&\phi_{2}^{0}\end{pmatrix} \tag{2.1}\]
\[\Delta_{L,R}=\begin{pmatrix}\frac{\delta_{L,R}^{+}}{\sqrt{2}}&\delta_{L,R}^{++}\\ \delta_{L,R}^{0}&-\frac{\delta_{L,R}^{+}}{\sqrt{2}}\end{pmatrix} \tag{2.2}\]
In order for the fermions to attain mass, a Yukawa Lagrangian is necessary which couples to the bidoublet \(\phi\). The Yukawa Lagrangian incorporating the bidoublet is given by,
\[\mathcal{L}_{\mathcal{D}}=\overline{l_{iL}}(Y_{ij}^{l}\phi+\widetilde{Y_{ij}^{l}}\widetilde{\phi})l_{jR}+\overline{Q_{iL}}(Y_{ij}^{q}\phi+\widetilde{Y_{ij}^{q}}\widetilde{\phi})Q_{jR}+h.c \tag{2.3}\]
where, \(l_{L}\) and \(l_{R}\) are the left-handed and right-handed lepton fields, \(Q_{L}\) and \(Q_{R}\) are the left-handed and right-handed quark fields. \(Y^{l}\) being the Yukawa coupling corresponding to leptons and \(Y^{q}\) being the Yukawa coupling for the quarks. The Yukawa Lagrangian incorporating the scalar triplets which play a role in providing Majorana mass to the neutrinos is given by,
\[{\cal L_{M}}=f_{L,ij}{\Psi_{L,i}}^{T}Ci\sigma_{2}\Delta_{L}\Psi_{L,j}+f_{R,ij}{ \Psi_{R,i}}^{T}Ci\sigma_{2}\Delta_{R}\Psi_{R,j}+h.c \tag{2.4}\]
\(f_{L}\) and \(f_{R}\) are the Majorana Yukawa couplings and are equal subject to discrete left-right symmetry. The scalar potential of the LRSM consists of the interaction terms among the scalar fields, and after spontaneous symmetry breaking the scalars acquire VEVs given by,
\[<\Delta_{L,R}>=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0\\ v_{L,R}&0\end{pmatrix} \tag{2.5}\]
\[<\phi>=\begin{pmatrix}k&0\\ 0&e^{i\theta}k^{\prime}\end{pmatrix} \tag{2.6}\]
The magnitudes of the VEVs follow the relation \(|v_{L}|^{2}<|k^{2}+{k^{\prime}}^{2}|<|v_{R}|^{2}\). The breaking of the LRSM gauge group takes place in two steps: the LRSM gauge group is first broken down to the Standard Model gauge group by the VEV of the scalar triplet \(\Delta_{R}\), and then the Standard Model gauge group is broken down to the electromagnetic gauge group \(U(1)_{em}\) by the VEV of the bidoublet and the tiny VEV of the scalar triplet \(\Delta_{L}\).
The Dirac mass terms for the leptons come from the Yukawa Lagrangian, which for the charged leptons and the neutrinos are given by,
\[M_{l}=\frac{1}{\sqrt{2}}(k^{\prime}Y_{l}+k\tilde{Y_{l}}) \tag{2.7}\]
\[M_{D}=\frac{1}{\sqrt{2}}(kY_{l}+k^{\prime}\tilde{Y_{l}}) \tag{2.8}\]
The light neutrino mass after spontaneous symmetry breaking (SSB), generated within a type (I+II) seesaw can be written as,
\[M_{\nu}=M_{\nu}{}^{I}+M_{\nu}{}^{II}, \tag{2.9}\]
\[M_{\nu}=M_{D}M_{RR}{}^{-1}M_{D}{}^{T}+M_{LL} \tag{2.10}\]
where,
\[M_{LL}=\sqrt{2}v_{L}f_{L} \tag{2.11}\]
and,
\[M_{RR}=\sqrt{2}v_{R}f_{R} \tag{2.12}\]
The first and second terms in equation (2.10) correspond to the type-I and type-II seesaw masses, respectively. Interestingly, in the context of LRSM both the type-I and type-II terms can be equally dominant, or either of the two terms can be dominant, under certain conditions [35; 36]; this is demonstrated in Appendix A. In the context of LRSM, both the type-I and type-II mass terms can be expressed in terms of the heavy right-handed Majorana mass matrix, so equation (2.10) becomes,
\[M_{\nu}=M_{D}M_{RR}^{-1}M_{D}^{T}+\gamma\Bigg{(}\frac{M_{W}}{v_{R}}\Bigg{)}^{2}M_{RR} \tag{2.13}\]
where, \(\gamma\) is a dimensionless parameter which is a function of various couplings, appearing in the VEV of the triplet Higgs \(\Delta_{L}\), i.e., \(v_{L}=\gamma(\frac{v^{2}}{v_{R}})\) and here, \(v=\sqrt{k^{2}+k^{\prime 2}}\), and
\[\gamma=\frac{\beta_{1}kk^{\prime}+\beta_{2}k^{2}+\beta_{3}k^{\prime 2}}{(2\rho_{1}-\rho_{3})(k^{2}+k^{\prime 2})} \tag{2.14}\]
In our model, the dimensionless parameter \(\gamma\) has been fine tuned to \(\gamma\approx 10^{-6}\) and \(v_{R}\) is of the order of \(TeV\).
## III Modular symmetry
Modular symmetry has gained much importance in model building [37], [38], because it can minimize the need for the extra particles called 'flavons'. An element \(\gamma\) of the modular group acts on a complex variable \(\tau\), which belongs to the upper half of the complex plane, as [38][39]
\[\gamma\tau=\frac{a\tau+b}{c\tau+d} \tag{3.1}\]
where \(a,b,c,d\) are integers satisfying \(ad-bc=1\), and Im\(\tau>0\).
The modular group is isomorphic to the projective special linear group PSL(2,Z) = SL(2,Z)/\(Z_{2}\) where, SL(2,Z) is the special linear group of integer \(2\times 2\) matrices having determinant unity and \(Z_{2}=(I,-I)\) is the centre, \(I\) being the identity element. The modular group can be represented in terms of two generators \(S\) and \(T\) which satisfies \(S^{2}=(ST)^{3}=I\). \(S\) and \(T\) satisfies the following matrix representations:
\[S=\begin{pmatrix}0&1\\ -1&0\end{pmatrix} \tag{3.2}\]
\[T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix} \tag{3.3}\]
corresponding to the transformations,
\[S:\tau\rightarrow-\frac{1}{\tau};T:\tau\rightarrow\tau+1 \tag{3.4}\]
Finite modular groups (\(N\leq 5\)) are isomorphic to non-abelian discrete groups; for example, \(\Gamma(2)\approx S_{3}\), \(\Gamma(3)\approx A_{4}\), \(\Gamma(4)\approx S_{4}\). When using modular symmetry, the Yukawa couplings can be expressed in terms of modular forms, and the number of independent modular forms depends upon the level and the weight. For a modular form of level N and weight 2k, the table below shows the number of modular forms and the non-abelian discrete symmetry group to which \(\Gamma(N)\) is isomorphic [39].
\begin{table}
\begin{tabular}{|c|c|c|} \hline N & No. of modular forms & \(\Gamma(N)\) \\ \hline
2 & k + 1 & \(S_{3}\) \\ \hline
3 & 2k + 1 & \(A_{4}\) \\ \hline
4 & 4k + 1 & \(S_{4}\) \\ \hline
5 & 10k + 1 & \(A_{5}\) \\ \hline
6 & 12k & \\ \hline
7 & 28k - 2 & \\ \hline \end{tabular}
\end{table}
Table II: No. of modular forms corresponding to modular weight 2k.
In our work, we will be using modular form of level 3, that is, \(\Gamma(3)\) which is isomorphic to \(A_{4}\) discrete symmetry group. The weight of the modular form is taken to be 2, and hence it will have three modular forms \((Y_{1},Y_{2},Y_{3})\) which can be expressed as expansions of q given by,
\[Y_{1}=1+12q+36q^{2}+12q^{3}+84q^{4}+72q^{5}+36q^{6}+96q^{7}+180q^{8}+12q^{9}+216 q^{10} \tag{3.5}\]
\[Y_{2}=-6q^{1/3}(1+7q+8q^{2}+18q^{3}+14q^{4}+31q^{5}+20q^{6}+36q^{7}+31q^{8}+56q ^{9}) \tag{3.6}\]
\[Y_{3}=-18q^{2/3}(1+2q+5q^{2}+4q^{3}+8q^{4}+6q^{5}+14q^{6}+8q^{7}+14q^{8}+10q^{9}) \tag{3.7}\]
where, \(q=\exp(2\pi i\tau)\).
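For numerical scans, these truncated \(q\)-expansions can be evaluated directly. A short Python sketch (function name ours; the principal branch is used for the fractional powers of \(q\)):

```python
import numpy as np

def yukawa_couplings(tau):
    """(Y1, Y2, Y3) from the truncated expansions (3.5)-(3.7); requires Im(tau) > 0."""
    q = np.exp(2j * np.pi * tau)  # the nome, |q| < 1 in the upper half plane
    c1 = [1, 12, 36, 12, 84, 72, 36, 96, 180, 12, 216]
    c2 = [1, 7, 8, 18, 14, 31, 20, 36, 31, 56]
    c3 = [1, 2, 5, 4, 8, 6, 14, 8, 14, 10]
    Y1 = sum(c * q**n for n, c in enumerate(c1))
    Y2 = -6 * q**(1 / 3) * sum(c * q**n for n, c in enumerate(c2))
    Y3 = -18 * q**(2 / 3) * sum(c * q**n for n, c in enumerate(c3))
    return Y1, Y2, Y3

# e.g. tau = 0.25 + 1.0j gives |q| ~ 2e-3, so the truncation error is negligible.
```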
## IV Minimal LRSM with \(A_{4}\) modular symmetry
In particle physics, symmetries have always played a very crucial role. The realization of LRSM with the help of discrete flavor symmetries has been carried out in earlier works [40], [41]. In our work we have incorporated \(A_{4}\) modular symmetry into LRSM; the advantage over a conventional flavor symmetry is the minimal use of extra particles (flavons), which keeps the model minimal. The model contains the usual particle content of LRSM [42]. The lepton doublets transform as triplets under \(A_{4}\), while the bidoublet and scalar triplets transform as singlets under \(A_{4}\)[43]. As we are using modular symmetry, we assign modular weights to the particles, keeping in mind that the matter multiplets of the model can have negative modular weights, but the modular forms cannot be assigned negative weights. The assignment of these weights is done in such a way that in each term of the Lagrangian the modular weights sum to zero. The modular weight corresponding to each particle is shown in Table 3. The Yukawa Lagrangian for the leptonic and quark sectors in LRSM is given by equations (2.3) and (2.4), and with reference to that we can write the Yukawa Lagrangian of our \(A_{4}\) modular symmetric LRSM for the fermionic sector, with the Yukawa couplings in the form of modular forms \(Y\), as
\[\mathcal{L_{Y}}=\overline{l_{L}}\phi l_{R}Y+\overline{l_{L}}\tilde{\phi}l_{R}Y +\overline{Q_{L}}\phi Q_{R}Y+\overline{Q_{L}}\tilde{\phi}Q_{R}Y+{l_{R}}^{T}Ci \tau_{2}\Delta_{R}l_{R}Y+{l_{L}}^{T}Ci\tau_{2}\Delta_{L}l_{L}Y \tag{4.1}\]
The Yukawa couplings \(Y=(Y_{1},Y_{2},Y_{3})\) are expressed as modular forms of level 3.
In our work, we are concerned with the masses of the neutrinos, and as such, using \(A_{4}\) modular symmetry and the multiplication rules of the \(A_{4}\) group, we construct the Dirac and Majorana mass matrices as given below. The Dirac mass matrix is given by,
\[M_{D}=v\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{2}&-Y_{1}&2Y_{3}\\ -Y_{3}&2Y_{2}&-Y_{1}\end{pmatrix} \tag{4.2}\]
where, \(v\) is considered to be the VEV for the Higgs bidoublet.
The right-handed Majorana mass matrix is given by,
\[M_{R}=v_{R}\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{3}&2Y_{2}&-Y_{1}\\ -Y_{2}&-Y_{1}&2Y_{3}\end{pmatrix} \tag{4.3}\]
\begin{table}
\begin{tabular}{|c|c|} \hline & Y (modular forms) \\ \hline \(A_{4}\) & 3 \\ \hline \(k_{I}\) & 2 \\ \hline \end{tabular}
\end{table}
Table 4: Charge assignment and modular weight for the corresponding modular Yukawa form for the model.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Gauge group & \(Q_{L}\) & \(Q_{R}\) & \(l_{L}\) & \(l_{R}\) & \(\phi\) & \(\Delta_{L}\) & \(\Delta_{R}\) \\ \hline \(SU(3)_{C}\) & 3 & 3 & 1 & 1 & 1 & 1 & 1 \\ \hline \(SU(2)_{L}\) & 2 & 1 & 2 & 1 & 2 & 3 & 1 \\ \hline \(SU(2)_{R}\) & 1 & 2 & 1 & 2 & 2 & 1 & 3 \\ \hline \(U(1)_{B-L}\) & 1/3 & 1/3 & -1 & -1 & 0 & 2 & 2 \\ \hline \(A_{4}\) & 3 & 3 & 3 & 3 & 1 & 1 & 1 \\ \hline \(k_{I}\) & 0 & -2 & 0 & -2 & 0 & -2 & 2 \\ \hline \end{tabular}
\end{table}
Table 3: Charge assignments for the particle content of the model.
where, \(v_{R}\) is the VEV of the scalar triplet \(\Delta_{R}\). As expected, the Majorana mass matrix of our model is symmetric. Under these assumptions for modular symmetric LRSM, and in the basis that we have considered, the charged lepton mass matrix is also found to be diagonal.
The type-I seesaw mass is then given by,
\[M_{\nu_{I}}=M_{D}.{M_{R}}^{-1}.{M_{D}}^{T} \tag{4.4}\]
and, the type-II seesaw mass is given by,
\[M_{\nu_{II}}=M_{LL} \tag{4.5}\]
As mentioned above, in LRSM type-II seesaw mass can also be expressed in terms of the right-handed mass \(M_{R}\) as,
\[M_{\nu_{II}}=\gamma{\left(\frac{M_{W}}{v_{R}}\right)}^{2}M_{R} \tag{4.6}\]
### Type-I dominance
In LRSM, the type-I seesaw mass dominates when the vev of the left-handed triplet is taken to be negligibly small and hence the type-II term is absent. In such a case the lightest neutrino mass can be given in terms of the type-I seesaw mass term given by,
\[M_{\nu}=M_{D}{M_{R}}^{-1}{M_{D}}^{T} \tag{4.7}\]
and the heavy right-handed Majorana mass term can be given as,
\[M_{R}=f_{R}v_{R} \tag{4.8}\]
where, \(f_{R}\) is the right-handed Majorana Yukawa coupling.
In the approximation \(k^{\prime}\ll k\), and denoting by \(y_{D}\) the Yukawa coupling \(Y^{l}\) corresponding to the neutrino masses and by \(y_{L}\) the coupling \(\widetilde{Y^{l}}\) for the charged fermion masses, we have \(y_{D}k\gg y_{L}k^{\prime}\), so we can write the type-I mass term as [44],
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f_{R}^{-1}y_{D}^{T} \tag{4.9}\]
If \(U_{R}\) is a unitary matrix that diagonalizes \(M_{R}\), then, since the VEV \(v_{R}\) is a constant, the same matrix also diagonalizes the coupling matrix \(f_{R}\). Taking \(f_{R}=f_{L}=f\), we have
\[f=U_{R}f^{dia}U_{R}^{T} \tag{4.10}\]
Taking the inverse on both sides and using the property of a unitary matrix (\(U_{R}^{-1}=U_{R}^{T}\)), we get,
\[f^{-1}=U_{R}^{T}(f^{dia})^{-1}U_{R} \tag{4.11}\]
Therefore, we get
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}U_{R}^{T}(f^{dia})^{-1}U_{R}y_{D}^{T} \tag{4.12}\]
Multiplying both sides of the equation with \(U_{R}^{T}\) from the right and with \(U_{R}\) from left, we finally arrive at the following equation,
\[U_{R}M_{\nu}U_{R}^{T}=(M_{\nu})^{dia} \tag{4.13}\]
where we have used \(U_{R}y_{D}U_{R}^{T}=y_{D}\). Thus, the unitary matrix diagonalizing \(M_{R}\) also diagonalizes the light neutrino mass matrix. In this case it follows that, if \(m_{i}\) denotes a light neutrino mass and \(M_{i}\) the corresponding heavy neutrino mass, they are related as
\[m_{i}\propto\frac{1}{M_{i}} \tag{4.14}\]
For our model, the Yukawa couplings are modular forms expressed as expansions in \(q\), and the mass matrices are expressed in terms of the modular forms \((Y_{1},Y_{2},Y_{3})\). The light neutrino mass matrix \(M_{\nu}\) for type-I dominance is given by equation (4.7). As already stated in equations (4.2) and (4.3), the Dirac and Majorana mass matrices are determined by the application of the multiplication rules of the \(A_{4}\) group. So, for type-I dominance, the light neutrino mass matrix is given by,
\[M_{\nu}=\frac{v^{2}}{v_{R}}\begin{pmatrix}2Y_{1}&-Y_{2}&-Y_{3}\\ -Y_{2}&2Y_{3}&-Y_{1}\\ -Y_{3}&-Y_{1}&2Y_{2}\end{pmatrix} \tag{4.15}\]
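As an illustration, since \(M_{\nu}\) is complex symmetric (Majorana), the physical masses are its singular values; a minimal sketch, assuming the couplings have already been evaluated at some \(\tau\) and that \(v\) and \(v_{R}\) are supplied in consistent units:

```python
import numpy as np

def typeI_masses(Y1, Y2, Y3, v, vR):
    """Light neutrino mass matrix of eq. (4.15) and its mass spectrum."""
    M = (v**2 / vR) * np.array([[2 * Y1, -Y2, -Y3],
                                [-Y2, 2 * Y3, -Y1],
                                [-Y3, -Y1, 2 * Y2]], dtype=complex)
    # masses of a Majorana (complex symmetric) matrix = its singular values
    masses = np.linalg.svd(M, compute_uv=False)
    return M, masses  # sum of light masses: masses.sum()
```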
As mentioned previously, the value of \(v_{R}\) is of the order of \(TeV\) and that of \(v\) is in \(GeV\). We have computed the sum of the neutrino masses for type-I dominance and checked the consistency of our model by plotting it against the Yukawa couplings; the results were found to match the experimental bounds.
Figure 1: Variation of \(|Y_{1}|\) with sum of neutrino masses.

Figure 2: Variation of \(|Y_{2}|\) with sum of neutrino masses.

Figure 3: Variation of \(|Y_{3}|\) with sum of neutrino masses.
### Type-II dominance
Type-II seesaw mass in LRSM dominates when the Dirac term connecting the right-handed and left-handed sectors is negligible compared to the type-II term [44]. In that case, the light neutrino mass \(m_{\nu}\) will be given by the type-II seesaw mass term, i.e.,
\[M_{\nu_{L}}=f_{L}v_{L} \tag{4.16}\]
And the heavy mass matrix is given by,
\[M_{R}=f_{R}v_{R} \tag{4.17}\]
Again, if \(U_{L}\) and \(U_{R}\) diagonalize \(M_{\nu_{L}}\) and \(M_{R}\) respectively, then, for the reason mentioned above, the same matrices also diagonalize \(f_{L}\) and \(f_{R}\) respectively; and since in our model \(f_{L}=f_{R}\), we can take \(U_{L}=U_{R}\). In such a case, we arrive at the important result that
\[m_{i}\propto M_{i} \tag{4.18}\]
Now using modular symmetry the light neutrino mass matrix for type-II dominance in our model is given by,
\[m_{\nu}=v_{L}\begin{pmatrix}2Y_{1}&-Y_{3}&-Y_{2}\\ -Y_{3}&2Y_{2}&-Y_{1}\\ -Y_{2}&-Y_{1}&2Y_{3}\end{pmatrix} \tag{4.19}\]
where, \(v_{L}\) is the VEV of the left-handed scalar triplet, taken to be of the order of \(eV\). The sum of the neutrino masses is computed for type-II dominance and plotted against the Yukawa couplings, as shown below.
Figure 4: Variation of \(|Y_{1}|\) with sum of neutrino masses.

Figure 5: Variation of \(|Y_{2}|\) with sum of neutrino masses.

Figure 6: Variation of \(|Y_{3}|\) with sum of neutrino masses.

## V Neutrinoless double beta decay (\(0\nu\beta\beta\)) in minimal LRSM

Neutrinoless double beta decay is a lepton number violating process which, if proven to exist, will directly imply the Majorana nature of neutrinos:

\[N(A,Z)\to N(A,Z+2)+e^{-}+e^{-} \tag{5.1}\]

Many groups have already done a lot of work on NDBD in the model [21], [28; 45; 46; 47; 48; 49; 50]. In LRSM [51], there are several contributions to NDBD in addition to the standard contribution via light Majorana neutrino exchange, owing to the presence of several additional heavy scalar, vector and fermionic fields [52, 53, 54, 55]. The various contributions to the NDBD transition rate in LRSM are discussed as follows:
* Standard Model contribution to NDBD, in which the intermediate particles are the \(W_{L}\) bosons and light neutrinos; the amplitude depends upon the leptonic mixing matrix elements and the light neutrino masses.
* Heavy right-handed neutrino contribution in which the mediator particles are the \(W_{L}\) bosons and the amplitude depends upon the mixing between light and heavy neutrinos as well as the mass of the heavy neutrino.
* Light neutrino contribution to NDBD where the intermediate particles are \(W_{R}\) bosons and the amplitude depends upon the mixing between light and heavy neutrinos as well as mass of the right-handed gauge boson \(W_{R}\).
* Heavy right-handed neutrino contribution where the mediator particles are the \(W_{R}\) bosons. The amplitude of this process is dependent on the elements of the right handed leptonic mixing matrix and mass of the right-handed gauge boson, \(W_{R}\) as well as the mass of the heavy right handed Majorana neutrino.
* Light neutrino contribution from the Feynman diagram mediated by both \(W_{L}\) and \(W_{R}\), and the amplitude of the process depends upon the mixing between light and heavy neutrinos, leptonic mixing matrix elements, light neutrino masses and the mass of the gauge bosons, \(W_{L}\) and \(W_{R}\).
* Heavy neutrino contribution from the Feynman diagram mediated by both \(W_{L}\) and \(W_{R}\), and the amplitude of the process depends upon the right handed leptonic mixing matrix elements, mixing between the light and heavy neutrinos, also the mass of the gauge bosons, \(W_{L}\) and \(W_{R}\) and the mass of the heavy right handed neutrino.
* Scalar triplet contribution (\(\Delta_{L}\)) in which the mediator particles are \(W_{L}\) bosons, and the amplitude for the process depends upon the masses of the \(W_{L}\) bosons, left-handed triplet Higgs, as well as their coupling to leptons.
* Right-handed scalar triplet contribution (\(\Delta_{R}\)) contribution to NDBD in which the mediator particles are \(W_{R}\) bosons, and the amplitude for the process depends upon the masses of the \(W_{R}\) bosons, right-handed triplet Higgs, \(\Delta_{R}\) as well as their coupling to leptons.
In our work, where we have incorporated \(A_{4}\) modular symmetry into LRSM, we have considered three of the above mentioned contributions: the standard light neutrino contribution and the two new physics contributions mediated by \(W_{R}\) and \(\Delta_{R}\), respectively. For simplicity, an assumption has been made on the mass scales of the heavy particles,

\[M_{R}\approx M_{W_{R}}\approx M_{\Delta_{L}^{++}}\approx M_{\Delta_{R}^{++}}\approx\mathcal{O}(TeV).\]

Under these assumptions, the amplitude of the light-heavy mixing contribution, which is proportional to \(\frac{m_{D}^{2}}{M_{R}}\), remains very small: since \(m_{\nu}\approx\frac{m_{D}^{2}}{M_{R}}\approx(0.01-0.1)\,eV\) and \(m_{D}\approx(10^{5}-10^{6})\,eV\), it follows that \(\frac{m_{D}}{M_{R}}\approx(10^{-7}-10^{-6})\). Thus, in our model we ignore the contributions involving light-heavy neutrino mixing.
When NDBD is studied in the framework of LRSM, the standard light neutrino contribution is given by,
\[m_{\nu}^{eff}=\sum_{i}U_{Li}^{2}m_{i} \tag{5.2}\]
where, \(U_{Li}\) are the elements of the first row of the neutrino mixing matrix \(U_{PMNS}\), in which the elements are dependent on known mixing angles \(\theta_{13}\), \(\theta_{12}\) and the Majorana phases \(\kappa\) and \(\eta\). The \(U_{PMNS}\) matrix is given by,
\[U_{PMNS}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta}&c_{23}c_{12}-s_{23}s_{12}s_{13}e^{i\delta}&s_{23}c_{13}\\ s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}-c_{23}s_{13}s_{12}e^{i\delta}&c_{23}c_{13}\end{pmatrix}P \tag{5.3}\]
where, \(P=diag(1,e^{i\kappa},e^{i\eta})\). So the effective mass can be parametrized in terms of the elements of the diagonalizing matrix and the eigenvalues as,
\[m_{\nu}^{eff}=m_{1}c_{12}^{2}c_{13}^{2}+m_{2}s_{12}^{2}c_{13}^{2}e^{2i\kappa}+m_{3}s_{13}^{2}e^{2i\eta}. \tag{5.4}\]
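Eq. (5.4) translates directly into code; a small sketch (names ours) returning \(|m_{\nu}^{eff}|\) from the mass eigenvalues, the mixing angles \(\theta_{12}\), \(\theta_{13}\) and the Majorana phases:

```python
import numpy as np

def effective_majorana_mass(m1, m2, m3, th12, th13, kappa, eta):
    """|m_eff| of eq. (5.4); angles and phases in radians, masses in eV."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    meff = (m1 * c12**2 * c13**2
            + m2 * s12**2 * c13**2 * np.exp(2j * kappa)
            + m3 * s13**2 * np.exp(2j * eta))
    return abs(meff)
```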
## VI Numerical analysis and results
In our present work, we have modified the left-right symmetric model by incorporating \(A_{4}\) modular symmetry, for both type-I and type-II dominances. As we are using modular symmetry, the Yukawa couplings are expressed as expansions in \(q\), as shown in equations (3.5), (3.6) and (3.7). In our model, the value of \(q\) is found to be of the order of \(10^{-1}\). The absolute value of the modulus should, however, be greater than 1, where
\[\tau=Re(\tau)+i\,Im(\tau) \tag{6.1}\]
We have plotted the Yukawa couplings against the sum of the neutrino masses. The ranges of values obtained for the sum of the neutrino masses in both cases are given in Table 7.
### Standard Light Neutrino Contribution to \(0\nu\beta\beta\)
As mentioned above, in the standard light neutrino contribution to \(0\nu\beta\beta\), the intermediate particles are the \(W_{L}\) bosons and the light neutrinos. The effective mass for this contribution is given by equation (5.2). Solving for the respective elements \(U_{Li}\) and \(m_{i}\), the effective mass is obtained in terms of the modular forms \((Y_{1},Y_{2},Y_{3})\) as,
\[m_{\nu}^{eff}=m_{1}^{eff}+m_{2}^{eff}+m_{3}^{eff} \tag{6.2}\]
where,
\[m_{1}^{eff}=\frac{\nu^{2}(Y_{2}-Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3})}{\nu_{R}(Y_{1}-Y_{3})^{2}}\]
\[m_{2}^{eff}=\frac{\nu^{2}(Y_{1}-Y_{2})^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{ 1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2\nu_{R}(Y _{1}-Y_{3})^{2}}\]
\[m_{3}^{eff}=\frac{\nu^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_{1}Y_{ 2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2\nu_{R}}\]
for type-I dominance, and the plots are shown as,
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\sum m_{\nu}\) & Normal Hierarchy & Inverted hierarchy \\ \hline \(Type-I(min)\) & 0.000980556 & 0.000437758 \\ \(Type-I(max)\) & 0.177296 & 0.186377 \\ \hline \(Type-II(min)\) & 0.000219304 & 0.000035 \\ \(Type-II(max)\) & 0.0200981 & 0.0203081 \\ \hline \end{tabular}
\end{table}
Table 7: Range of values for sum of neutrino masses for type-I and type-II dominances for both normal and inverted hierarchy.
Figure 7: Variation of \(|Y_{1}|\) with effective neutrino mass for standard light neutrino contribution.

Figure 8: Variation of \(|Y_{2}|\) with effective neutrino mass for standard light neutrino contribution.

Figure 9: Variation of \(|Y_{3}|\) with effective neutrino mass for standard light neutrino contribution.
For type-II dominance, we have
\[m_{1}^{eff}=-\frac{\nu_{L}(-Y_{2}+Y_{3})(Y_{1}+Y_{2}+Y_{3})}{Y_{1}-Y_{2}}\]
\[m_{2}^{eff}=-\frac{\nu_{L}(Y_{1}-Y_{3})(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{1}^{ 2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2(Y_{1}-Y_{2})}\]
\[m_{3}^{eff}=\frac{\nu_{L}(Y_{1}-Y_{3})(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2}\]

Figure 12: Variation of \(|Y_{3}|\) with effective neutrino mass for standard light neutrino contribution.
### Heavy Right-Handed Neutrino contribution to \(0\nu\beta\beta\)
In our work, we have considered the contributions of the heavy right-handed neutrino and the scalar Higgs triplet to neutrinoless double beta decay (NDBD). The effective mass for the heavy right-handed neutrino contribution is given by,
\[m_{R}^{eff}=p^{2}\Bigg{(}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\Bigg{)}\sum_{i}\Bigg{(}\frac{U_{Rei}^{*2}}{M_{i}}\Bigg{)} \tag{6.3}\]
where \(p^{2}\) is the typical momentum exchange of the process. As it is known that TeV-scale LRSM plays a very important role in the process of neutrinoless double beta decay (\(0\nu\beta\beta\)), we have considered the values \(M_{W_{R}}=10\,TeV\), \(M_{W_{L}}=80\,GeV\), \(M_{\Delta_{R}}\approx 3\,TeV\); the heavy right-handed neutrino masses are then found to be at the \(TeV\) scale. The allowed value of \(p\) is in the range \((100-200)\,MeV\), and so we take \(p\approx 180\,MeV\). Thus, we get,
\[p^{2}\Bigg{(}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\Bigg{)}=10^{10}eV^{2} \tag{6.4}\]
where, \(U_{Rei}\) refers to the first row elements of the diagonalizing matrix of the heavy Majorana mass matrix and \(M_{i}\) are its eigenvalues. The effective mass corresponding to the heavy right-handed neutrino can be expressed in terms of the modular forms as,
\[m_{eff}^{R}=10^{10}(m_{eff}^{R_{1}}+m_{eff}^{R_{2}}+m_{eff}^{R_{3}}) \tag{6.5}\]
where,
\[m_{eff}^{R_{1}}=\frac{2}{\nu_{R}(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_ {1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}\]
Figure 12: Variation of \(|Y_{3}|\) with effective neutrino mass for heavy right-handed neutrino contribution.
\[m_{eff}^{R_{2}}=\frac{2(Y_{1}^{*}-Y_{3}^{*})^{2}}{\nu_{R}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3}\sqrt{3Y_{1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})(Y_{1}^{*}-Y_{2}^{*})^{2}}\]
\[m_{eff}^{R_{3}}=\frac{(-Y_{2}^{*}+Y_{3}^{*})^{2}}{\nu_{R}(Y_{1}+Y_{2}+Y_{3})(Y _{1}^{*}-Y_{2}^{*})^{2}}\]
The total effective mass is also calculated for the standard light and heavy right-handed neutrino contributions, given by,
\[|m_{\nu}^{eff^{total}}|=|m_{\nu}^{eff}+m_{eff}^{R}| \tag{6.6}\]
which can be obtained in terms of the modular forms as a summation of the above mentioned terms.
Figure 13: Variation of \(|Y_{1}|\) with total effective neutrino mass.
Figure 14: Variation of \(|Y_{2}|\) with total effective neutrino mass.
The plots above are for type-I dominance.
### Scalar Triplet contribution to \(0\nu\beta\beta\)
The magnitude of the \(\Delta_{R}\) contribution is controlled by the factor \(\frac{M_{i}}{M_{\Delta_{R}}}\)[44]. The scalar triplet contribution is usually not included in the total contribution under the assumption \(\frac{M_{i}}{M_{\Delta_{R}}}<0.1\). However, in a large part of the parameter space some of the mixing parameters may result in a higher \(\frac{M_{i}}{M_{\Delta_{R}}}\) ratio, in which case it must be included in the total contribution. The impact of this contribution is studied here in the limit \(M_{\Delta_{R}}\approx M_{heaviest}\).
The effective mass for scalar triplet contribution is given as,
\[|m_{\Delta}^{eff}|=|p^{2}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\frac{U_{Rei}^{2}M _{i}}{M_{\Delta_{R}}^{2}}| \tag{6.7}\]
The mass of the right-handed scalar triplet is taken as \(M_{\Delta_{R}}=3\,TeV\), so that the coefficient evaluates to
\[p^{2}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\frac{1}{M_{\Delta_{R}}^{2}}=\frac{10 ^{10}}{9\times 10^{24}} \tag{6.8}\]
In terms of modular forms, the effective scalar mass can be expressed as,
\[m_{eff}^{\Delta_{R}}=m_{eff_{1}}^{\Delta_{R}}+m_{eff_{2}}^{\Delta_{R}}+m_{eff_ {3}}^{\Delta_{R}} \tag{6.9}\]
where,
\[m_{eff_{1}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{2}+Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3})}{(Y_{1}-Y_{2})^{2}}\]
Figure 18: Variation of \(|Y_{3}|\) with total effective neutrino mass.
\[m_{eff_{2}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{1}-Y_{3})^{2}(Y_{1}+Y_{2}+Y_{3}-\sqrt{3} \sqrt{3Y_{1}^{2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2 (Y_{1}-Y_{2})^{2}}\]
\[m_{eff_{3}}^{\Delta_{R}}=\frac{\nu_{R}(Y_{1}+Y_{2}+Y_{3}+\sqrt{3}\sqrt{3Y_{1}^{ 2}-2Y_{1}Y_{2}+3Y_{2}^{2}-2Y_{1}Y_{3}-2Y_{2}Y_{3}+3Y_{3}^{2}})}{2}\]
The corresponding plots are shown below.
## VII Conclusion
The discovery of neutrino oscillations paved the way for physics beyond the Standard Model. In this paper, we have realized the LRSM with the help of modular \(A_{4}\) symmetry for both type-I and type-II dominance. Using modular symmetry provides the advantage of requiring no extra particles called 'flavons'. The Yukawa couplings are represented as modular forms expressed as expansions in \(q\). The values of the Yukawa couplings \((Y_{1},Y_{2},Y_{3})\) are calculated using 'Mathematica'. The mass matrices are then determined using the multiplication rules for the \(A_{4}\) group stated in the Appendix. The Majorana mass matrix is found to be symmetric and, in the considered basis, the charged lepton mass matrix is diagonal. We have expressed the light neutrino and heavy right-handed neutrino mass matrices in terms of the modular forms. We have also briefly studied the contributions to \(0\nu\beta\beta\) in the LRSM. The effective masses corresponding to the standard light neutrino, right-handed neutrino, and scalar triplet contributions are determined in terms of \((Y_{1},Y_{2},Y_{3})\), and we have plotted the effective mass of each contribution against the Yukawa couplings. To summarize our work, the main results are as follows.
* The absolute value of the modulus was found to lie within the range 1.073 to 1.197, i.e. greater than unity, which is the desired result.
* The Yukawa couplings, expressed in terms of modular forms, range from \(10^{-9}\) to \(10^{-6}\).
* The sum of the neutrino masses for type-I dominance ranges from the order of \(10^{-4}\) to \(10^{-1}\) for both normal and inverted hierarchy.
Figure 21: Variation of \(|Y_{3}|\) with effective neutrino mass for scalar triplet contribution.
* The sum of the neutrino masses for type-II dominance ranges from the order of \(10^{-4}\) to \(10^{-2}\) for both normal and inverted hierarchy.
The effective masses for the \(0\nu\beta\beta\) contributions are calculated and, by determining their relations with the modular forms, we have plotted the effective masses against the three Yukawa couplings. The values of the effective mass corresponding to each contribution are found to be well within the experimental bounds, which in fact allows us to state that building the model with modular symmetry is advantageous compared to using flavor symmetries. In this model we have not used any extra particles, the analysis has been carried out using the calculated and computed values of the model parameters, and the results are found to be satisfactory. It can therefore be stated that the Left-Right Symmetric Model can be constructed with modular symmetry while satisfying the experimental bounds on the desired parameters.
## VIII Appendix A
Let us consider the Higgs potential of our model that has quadratic and quartic coupling terms given by [36],
\[V_{\phi,\Delta_{L},\Delta_{R}}=-\mu_{ij}^{2}Tr[\phi_{i}^{\dagger} \phi_{j}]+\lambda_{ijkl}Tr[\phi_{i}^{\dagger}\phi_{j}]Tr[\phi_{k}^{\dagger} \phi_{l}]+\lambda_{ijkl}^{{}^{\prime}}Tr[\phi_{i}^{\dagger}\phi_{j}\phi_{k}^{ \dagger}\phi_{l}]-\mu_{ij}^{2}Tr[\Delta_{L}^{\dagger}\Delta_{L}+\Delta_{R}^{ \dagger}\Delta_{R}]+\] \[\rho_{1}[(Tr[\Delta_{L}^{\dagger}\Delta_{L}])^{2}+(Tr[\Delta_{R}^ {\dagger}\Delta_{R}])^{2}]+\rho_{2}(Tr[\Delta_{L}^{\dagger}\Delta_{L}\Delta_{ L}^{\dagger}\Delta_{L}]+Tr[\Delta_{R}^{\dagger}\Delta_{R}\Delta_{R}^{\dagger} \Delta_{R}])+\rho_{3}Tr[\Delta_{L}^{\dagger}\Delta_{L}\Delta_{R}^{\dagger} \Delta_{R}]+\] \[\alpha_{ij}Tr[\phi_{i}^{\dagger}\phi_{j}](Tr[\Delta_{L}^{\dagger }\Delta_{L}]+Tr[\Delta_{R}^{\dagger}\Delta_{R}])+\beta_{ij}(Tr[\Delta_{L}^{ \dagger}\Delta_{L}\phi_{i}\phi_{j}^{\dagger}]+Tr[\Delta_{R}^{\dagger}\Delta_{ R}\phi_{i}\phi_{j}^{\dagger}])+\gamma_{ij}(Tr[\Delta_{L}^{\dagger}\phi_{i}\Delta_{R} \phi_{j}^{\dagger}]+h.c.) \tag{8.1}\]
where \(i,j,k,l\) run from 1 to 2, with \(\phi_{1}=\phi\) and \(\phi_{2}=\tilde{\phi}\). As mentioned above, after SSB the scalar sector acquires VEVs. After substituting the respective VEVs, evaluating the traces, and simplifying, the potential can be written as,
\[V=-\mu^{2}(v_{L}^{2}+v_{R}^{2})+\frac{\rho}{4}(v_{L}^{4}+v_{R}^{4})+\frac{\rho^{\prime}}{2}(v_{L}^{2}+v_{R}^{2})k^{2}+\frac{\alpha}{2}(v_{L}^{2}+v_{R}^{2})k_{1}^{2}+\gamma v_{L}v_{R}k^{2} \tag{8.2}\]
where we have used the approximation \(k^{\prime}<<k\) and \(\rho^{\prime}=2\rho_{3}\). Our minimization conditions are \(\frac{\delta V}{\delta v_{L}}=\frac{\delta V}{\delta v_{R}}=\frac{\delta V}{\delta k}=\frac{\delta V}{\delta k^{\prime}}=0\).
Therefore, we get,
\[\frac{\delta V}{\delta v_{L}}=-2\mu^{2}v_{L}+\rho v_{L}^{3}+\rho^{\prime}v_{L}k^{2 }+\gamma v_{R}k^{2} \tag{8.3}\]
Here, it is evident that the Majorana mass of the left-handed neutrino \(M_{LL}\) is dependent on the vev \(v_{L}\) as already defined above. Again, we have
\[\frac{\delta V}{\delta v_{R}}=-2\mu^{2}v_{R}+\rho v_{R}^{3}+\rho^{\prime}v_{R}k ^{2}+\gamma v_{L}k^{2} \tag{8.4}\]
So, the right-handed Majorana mass \(M_{RR}\) depends on the vev \(v_{R}\). Similarly, carrying out the analogous calculation, it can be found that the Dirac mass term \(M_{D}\) is expressed in terms of the vev of the Higgs bidoublet, as also defined previously.
Now, to determine a relation between the VEVs of the scalars, we use the minimization conditions; after simplifying the equations, we arrive at the relation
\[v_{L}v_{R}=\frac{\gamma}{\xi}k^{2} \tag{8.5}\]
where, \(\xi=\rho-\rho^{\prime}\).
The neutrino mass in the LRSM is given as a sum of the type-I and type-II terms, as already mentioned above. In the approximation \(k^{\prime}<<k\), denoting the Yukawa coupling \(Y^{l}\) corresponding to the neutrino masses by \(y_{D}\) and the coupling \(\widetilde{Y^{l}}\) for the charged fermion masses by \(y_{l}\), and considering \(y_{D}k>>y_{l}k^{\prime}\), we can write,
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f_{R}^{-1}y_{D}^{T}+f_{L}v_{L} \tag{8.6}\]
Due to left-right symmetry we can consider \(f_{L}=f_{R}=f\), so the above equation can be written as,
\[M_{\nu}=\frac{k^{2}}{v_{R}}y_{D}f^{-1}y_{D}^{T}+fv_{L} \tag{8.7}\]
So, substituting \(v_{L}\) from equation (8.5) into this equation, we arrive at
\[M_{\nu}=(f\frac{\gamma}{\xi}+y_{D}f^{-1}y_{D}^{T})\frac{k^{2}}{v_{R}} \tag{8.8}\]
Here, we can consider two situations, namely
* If \(f(\frac{\gamma}{\xi})<<y_{D}f^{-1}y_{D}^{T}\), the light neutrino mass is given by the type-I term \(M_{D}M_{RR}^{-1}M_{D}^{T}\) (see the check below). That is, type-I is dominant and the smallness of the light neutrino mass comes from the suppression by the heavy \(\nu_{R}\) scale.
* If \(f(\frac{\gamma}{\xi})>>y_{D}f^{-1}y_{D}^{T}\), the light neutrino mass is given by the type-II term \(fv_{L}\). That is, in this case the type-II mass term is dominant and the light neutrino mass is due to the tiny value of \(v_{L}\).
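As a one-line check of the first case, take \(M_{D}\approx y_{D}k\) (valid for \(k^{\prime}<<k\)) and \(M_{RR}\approx fv_{R}\), as indicated above; then

\[\frac{k^{2}}{v_{R}}y_{D}f^{-1}y_{D}^{T}=(y_{D}k)(fv_{R})^{-1}(y_{D}k)^{T}=M_{D}M_{RR}^{-1}M_{D}^{T},\]

which is precisely the type-I seesaw term.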
## IX Appendix B
### Properties of \(A_{4}\) group
\(A_{4}\) is a non-abelian discrete symmetry group which represents even permutations of four objects. It has four irreducible representations, three of which are singlets \((1,1^{\prime},1^{\prime\prime})\) and one of which is a triplet 3 (\(3_{A}\) represents the anti-symmetric part and \(3_{S}\) the symmetric part of the product of two triplets). Products of the singlets and triplets are given by,
\[1\otimes 1=1\]
\[1^{\prime}\otimes 1^{\prime}=1^{\prime\prime}\]
\[1^{\prime}\otimes 1^{\prime\prime}=1\]
\[1^{\prime\prime}\otimes 1^{\prime\prime}=1^{\prime}\]
\[3\otimes 3=1\oplus 1^{\prime}\oplus 1^{\prime\prime}\oplus 3_{A}\oplus 3_{S}\]
If we have two triplets under \(A_{4}\) say, \((a_{1},a_{2},a_{3})\) and \((b_{1},b_{2},b_{3})\), then their multiplication rules are given by,
\[1\approx a_{1}b_{1}+a_{2}b_{3}+a_{3}b_{2}\]
\[1^{\prime}\approx a_{3}b_{3}+a_{1}b_{2}+a_{2}b_{1}\]
\[1^{\prime\prime}\approx a_{2}b_{2}+a_{3}b_{1}+a_{1}b_{3}\] \[3_{S}\approx\begin{pmatrix}2a_{1}b_{1}-a_{2}b_{3}-a_{3}b_{2}\\ 2a_{3}b_{3}-a_{1}b_{2}-a_{2}b_{1}\\ 2a_{2}b_{2}-a_{1}b_{3}-a_{3}b_{1}\end{pmatrix}\] \[3_{A}\approx\begin{pmatrix}a_{2}b_{3}-a_{3}b_{2}\\ a_{1}b_{2}-a_{2}b_{1}\\ a_{3}b_{1}-a_{1}b_{3}\end{pmatrix}\]
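The rules above translate directly into a short script; the following sketch simply evaluates the stated decomposition of the product of two triplets for numerical inputs.

```
import numpy as np

# Decompose the A4 product of two triplets a and b into the
# 1, 1', 1'', 3_S and 3_A components stated above.

def a4_triplet_product(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    s   = a1*b1 + a2*b3 + a3*b2                # 1
    sp  = a3*b3 + a1*b2 + a2*b1                # 1'
    spp = a2*b2 + a1*b3 + a3*b1                # 1''
    t_s = np.array([2*a1*b1 - a2*b3 - a3*b2,
                    2*a3*b3 - a1*b2 - a2*b1,
                    2*a2*b2 - a1*b3 - a3*b1])  # 3_S
    t_a = np.array([a2*b3 - a3*b2,
                    a1*b2 - a2*b1,
                    a3*b1 - a1*b3])            # 3_A
    return s, sp, spp, t_s, t_a

print(a4_triplet_product((1.0, 2.0, 3.0), (0.5, -1.0, 2.0)))
```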
| In this paper, we have realized the left-right symmetric model with modular symmetry. |
2309.07305 | SHIELD: Secure Haplotype Imputation Employing Local Differential Privacy | We introduce Secure Haplotype Imputation Employing Local Differential privacy
(SHIELD), a program for accurately estimating the genotype of target samples at
markers that are not directly assayed by array-based genotyping platforms while
preserving the privacy of donors to public reference panels. At the core of
SHIELD is the Li-Stephens model of genetic recombination, according to which
genomic information is comprised of mosaics of ancestral haplotype fragments
that coalesce via a Markov random field. We use the standard forward-backward
algorithm for inferring the ancestral haplotypes of target genomes, and hence
the most likely genotype at unobserved sites, using a reference panel of
template haplotypes whose privacy is guaranteed by the randomized response
technique from differential privacy. | Marc Harary | 2023-09-13T20:51:11 | http://arxiv.org/abs/2309.07305v1 | # SHIELD: Secure Haplotype Imputation Employing Local Differential Privacy
###### Abstract
We introduce Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program for accurately estimating the genotype of target samples at markers that are not directly assayed by array-based genotyping platforms while preserving the privacy of donors to public reference panels. At the core of SHIELD is the Li-Stephens model of genetic recombination, according to which genomic information is comprised of mosaics of ancestral haplotype fragments that coalesce via a Markov random field. We use the standard forward-backward algorithm for inferring the ancestral haplotypes of target genomes--and hence the most likely genotype at unobserved sites--using a reference panel of template haplotypes whose privacy is guaranteed by the randomized response technique from differential privacy.
## 1 Introduction
In the context of biomedical analyses of large patient cohorts, whole-genome sequencing still remains prohibitively expensive for existing high-throughput technology. On the other hand, array-based genotyping platforms provide a more efficient method of collecting data for large-scale studies of human disease, albeit at the expense of the statistical power of genome-wide association (GWA) studies that intend to fine-map causal variants or facilitate meta-analyses [1, 2, 3, 4, 5].
One solution is genotype imputation, a preliminary stage in many GWA studies that consists of inferring the genotype for a given target genome at loci that have not been directly assayed, essentially expanding the dimensionality of the original dataset [2, 5, 6, 7, 8, 9, 10]. Employing a reference panel of donated haplotypes sequenced via higher-quality technology and at a far denser set of variants, imputation algorithms like MaCH [7], Minimac [8], BEAGLE [9], PLINK [10], fastPHASE [11], and IMPUTE [2] have been demonstrated to reliably augment both the coverage and statistical power of GWA analyses and hence become an essential component of many clinical studies [6].
Further to this end, public databases like the UK biobank (UKB) [12], All of Us research program [13], Haplotype Reference Consortium [14], and 1,000 Genomes Project (1KG) [15] have been made available to facilitate genomic research in part by offering standardized and readily accessible reference panels [16]. In cases where running imputation algorithms using large reference panels is impractical on local hardware or the direct access to the biobank data is prohibited, public web services like the Michigan Impute Server [17] are often established to answer queries to clients submitting target haplotypes for imputation.
Unfortunately, as part of a growing literature on privacy concerns in genomic research, it has also been documented that coordinated attacks on the part of cryptographic adversaries are capable of compromising the privacy of research subjects that donate to public reference panels [18, 19, 20, 21]. For example, attackers have been able to exploit ancestral data [22] or other personally identifying information [23] to reconstruct reference genomes. An urgent challenge is therefore to develop a suite of imputation algorithms that can simultaneously facilitate high-utility, statistically reliable GWA studies while protecting the privacy of contributors to reference haplotype panels [18, 24, 25].
One solution is the technique of differential privacy, which has rapidly become the "gold-standard" for statistical queries by being able to provide both robust privacy guarantees for participants in studies and meaningful results for researchers in commercial and scientific settings [26, 27, 28]. At the crux of the technique is a rigorous mathematical formalization of privacy that quantifies the extent to which adding pseudorandom noise to the results of computations can protect the anonymity of members of a database [29].
The following work introduces Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program that employs the Li-Stephens model of genetic recombination [5, 30] to impute missing haplotype variants in target genomes while incorporating differential privacy techniques to protect reference panel donors. Specifically, SHIELD proceeds in two stages: (i) initial input perturbation to guarantee local differential privacy [31] via randomized response [32, 33] and (ii) fitting a hidden Markov model [34] to each subsequent client query via the forward-backward algorithm [35]. In an experiment that closely simulates a real-world use case for haplotype imputation, we show that SHIELD is able to obtain state-of-the-art imputation accuracy while providing mathematically formalized privacy guarantees.
## 2 Results
### Overview
The setting for which SHIELD is intended consists of a client user uploading target genomes to a public imputation server [18]. In the standard imputation workflow, contributors to a biobank upload their sequenced genomic data to a central, publicly available server, where the data are then collated to create a haplotype reference panel to pass as an argument to an imputation algorithm [12, 14, 15]. Subsequently, client researchers may then upload target genomes as part of a clinical study to the server, where the targets are imputed using the private haplotype reference panel and, most often, an algorithm based on hidden Markov models [2, 7, 8, 9, 10, 34] and the forward-backward algorithm [35]. At no point in the workflow is the haplotype reference panel directly visible to client researchers submitting jobs to the server. However, while the privacy of the contributors to the reference panel may appear guaranteed, it has been demonstrated that adversarial attacks employing carefully coordinated queries to the server can divulge the sequences of reference haplotypes [18].
To this end, SHIELD modifies the imputation workflow by leveraging local [31] differential privacy [26, 27, 28, 32, 36]. Haplotype data can be represented as a bitstring in which a 1 at the \(i\)th position in the sequence indicates that the haplotype possesses the minor allele at the \(i\)th site and a 0 the major allele [8]. Prior to submission to the central imputation server, pseudorandom noise is added to the two bitstrings denoting each individual's pair of haplotypes via randomized response, a technique from differential privacy that simply consists of flipping a random subset of the bits from 0 to 1 and vice versa [32, 33]. The likelihood that a given bit in the haplotype bitstring is flipped varies as a function of a parameter \(\varepsilon\)--called the privacy budget [29]--such that lower values of \(\varepsilon\) entail a higher probability that any bit is flipped and therefore a higher degree of privacy. The tradeoff, however, is that lower privacy budgets incur a greater expense to imputation accuracy, rendering it a hyperparameter that the database curator must carefully adjust to strike an acceptable balance between donor privacy and client utility. Once all perturbed haplotypes are collected at the central server, imputation is subsequently performed using the modified haplotypes as a reference panel.
Privacy is guaranteed by the fact that no contributor's data will, on average, be unmodified when input to the imputation algorithm invoked by client researchers. In this way, no adversary could be certain that the results that they obtain from an attack accurately reflect the true reference panel. These privacy guarantees are also local; even if an adversary were to access the reference panel directly rather than through coordinated queries, the data obtained would again not perfectly reflect any individual's true genome [36].
### State-of-the-art imputation accuracy
To evaluate SHIELD's performance on a realistic simulation of an imputation query, we performed an ablation study on the 1KG Phase 3 [15] dataset. We withheld 100 genomes (equivalent to 200 haplotypes) from the reference panel to impute via the remaining 2,404 samples. The first 10,000 single-nucleotide polymorphisms (SNPs) were extracted from 1KG; the remaining were discarded to render run times more tractable. To
simulate an array-based assay of the 200 target haplotypes, we ablated all sites except those included in the Illumina Human1M-Duo v3.0 DNA Analysis BeadChip manifest, the intersection of which with the first 10,000 sites in the 1KG data consisted of a total of 253 sites for an _a priori_ coverage of 2.53%.
To quantify accuracy, we summed the imputed dosages for each pair of haplotypes to compute a final genotype dosage for each sample, then computed the coefficient of determination (\(R^{2}\)) between the genotype dosages and the ground-truth exome data. Because sites vary massively by minor allele frequency (MAF), the loci were divided into three bins corresponding to MAFs of \((0\%,0.5\%)\), \([0.5\%,5\%)\), and \([5\%,50\%]\). Respectively, these bins contained 5,943, 2,157, and 1,900 variants in the reference set. Accuracy was assessed, by bin, both to compare the performance of SHIELD to that of Minimac3 [8] and to characterize the effect of the privacy budget on our method's accuracy.
Our analyses show nearly identical performance between SHIELD and Minimac3 when no input perturbation is applied, with the former obtaining scores of 0.571, 0.784, and 0.902, respectively, on the three bins enumerated above and the latter scores of 0.584, 0.787, and 0.901 (Figure 3A). SHIELD's accuracy was reevaluated at various values of our privacy budget along the interval \([0.01,10]\), reflecting the typical range of values that \(\varepsilon\) is assigned in many differentially private algorithms [26]. As expected, accuracy exhibits a positive association with \(\varepsilon\). At an upper bound of \(\varepsilon=10\), SHIELD performs nearly identically to Minimac3
Figure 1: Overview of the SHIELD pipeline, with the key algorithms in orange. Noise is added once to the reference data (purple) via Perturb, then collated and stored on the server to guarantee local DP (modified bits in bold). The client (green) then calls Impute on the server with the target haplotype (missing sites denoted \(\varnothing\)) and the reference panel as arguments.
Figure 2: A. The reference haplotype matrix corresponding to the first 128 SNPs on chromosome 20 and 200 haplotypes in 1KG. Empty squares represent the presence of the major allele, yellow of the minor. B. The same haplotype matrix perturbed by SHIELD.
(0.564, 0.784, 0.901; Figure 3), while performance degrades significantly at \(\varepsilon=0.01\) (0.014, 0.038, 0.218; Figure 3).
### Impact on Markov parameters
As noted above, the parameters for the Markov random field [34] modeling genomic recombination [30], namely the mutation and recombination rates, were computed on the unperturbed data by Minimac3 [8]. The rationale was that the noise added to the reference panel mimicked the behavior of extremely rapid genomic recombination, causing Minimac3's expectation-maximization procedure to dramatically overestimate the recombination rates (\(5.93\times 10^{-3}\) vs. \(4.84\times 10^{-4}\)) and, conversely, to underestimate the mutation rates (Figure 4B). These atypical rates exerted a decidedly negative impact on imputation accuracy, with performance decreasing by 35.5%, 16.1%, and 5.46% for each of the three bins, respectively, when the rates were computed on the reference panel perturbed at \(\varepsilon=5.0\). In sum, it is clearly superior to estimate population parameters _a priori_, although, notably, doing so on the reference panel itself is not differentially private and may leak information.
### Impact on compression rates
An additional feature of haplotype imputation introduced by Minimac3 was the M3VCF format for genomic data, which both substantially decreases total file size over the traditional VCF format and enables the state-space reduction technique that further improves imputation runtime [8]. The key insight enabling the format is the observation that, due to identity-by-descent [5], most haplotypes share identical \(k\)-mers of genomic material at intervals of contiguous loci despite being unique overall. In other words, given an arbitrary interval along the genome, the number of unique \(k\)-mers collectively exhibited by the reference panel is almost always smaller than the total number of reference haplotypes _per se_. Therefore, it is possible to implement a compression scheme in which the genome is partitioned into intervals and only the unique \(k\)-mer strings are retained, substantially compressing the original reference panel [8].
An unfortunate consequence of local differential privacy via randomized response is that, on average, random noise will destroy the exact equality between haplotype substrings. From the perspective of a compression algorithm attempting to identify the set of unique \(k\)-mers along a given interval, an apparently larger number of unique fragments will exist, rendering M3VCF-style compression less efficient. As an illustration, we partitioned the genomic data into mutually exclusive, exhaustive blocks of uniform size ranging from 2 to 500. We then computed the data compression ratio when M3VCF-style state-space reduction was applied at each block size by dividing the total \(5.008\times 10^{8}\) bits in the uncompressed panel by the number of bits following compression and plotted the ratio against block size (Figure 4C). Input perturbation resulted in compression rates up to an order of magnitude smaller.
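The following sketch illustrates the block-wise counting of unique haplotype fragments behind this compression ratio. The panel here is synthetic (200 haplotypes copied from 20 founders, mimicking identity-by-descent), and the per-sample fragment indices are ignored in the bit accounting for simplicity; it is an illustration, not the M3VCF implementation.

```
import numpy as np

# Block-wise unique-k-mer count: split the (sites x haplotypes) panel into
# blocks of `block` consecutive sites and store only the unique fragments.

def compression_ratio(X, block):
    stored = 0
    for s in range(0, X.shape[0], block):
        frag = X[s:s + block]                     # sites x haplotypes
        stored += np.unique(frag.T, axis=0).size  # bits of unique fragments
    return X.size / stored

rng = np.random.default_rng(0)
founders = rng.integers(0, 2, size=(10000, 20))
X = founders[:, rng.integers(0, 20, size=200)]    # 10,000 sites x 200 haplotypes
print(compression_ratio(X, block=50))             # ~10 for this synthetic panel
```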
Figure 3: A. Comparison between the accuracy by MAF of imputed dosages for targets withheld from 1KG for both SHIELD (non-differentially private) and Minimac3. B. SHIELD’s accuracy by MAF versus privacy budget.
## 3 Discussion
In this work, we develop Secure Haplotype Imputation Employing Local Differential privacy (SHIELD), a program for performing genomic imputation with strong privacy guarantees for reference haplotypes via the randomized response technique [33]. Analysis shows that SHIELD is able to obtain state-of-the-art accuracy in realistic experimental settings at typical privacy budgets.
We note that the strong performance of SHIELD parallels the effectiveness of RAPPOR [37], a differentially private algorithm for mining strings in commercial contexts that is also based on randomized response. Unlike SHIELD, however, RAPPOR is not intended for data that is inherently binary; rather, arbitrary alphanumeric strings are hashed onto Bloom filters [38] that are subsequently perturbed. The fact that haplotype data intrinsically consist of bitstrings makes randomized response particularly convenient in a genomic context.
But despite the strong performance exhibited in the experiments above, it should be acknowledged that the privacy guarantees made by our program are limited to individual variants. In other words, for a given privacy budget \(\varepsilon\) [26, 27, 28], SHIELD can provably ensure protection for each sample's genotype at any one site, but not across the entire genome _per se_. Certain adversarial attacks are therefore still feasible with SHIELD even though accurate reconstruction of reference haplotypes is not [19, 20, 21, 22, 23]. Whole-genome privacy would instead require the division of \(\varepsilon\) across each site (see [27] for a discussion on composition in differential privacy), which is prohibitively difficult for datasets containing tens of thousands of variants. On the other hand, such divisions may be possible if a fairly limited segment of the genome is to be imputed. Future research into genomic privacy may investigate these scenarios or alternative differentially private mechanisms.
Figure 4: A. SHIELD's imputed dosage accuracy \(\left(R^{2}\right)\) by MAF using parameters derived from both the original and perturbed reference panels. B. The mean recombination and error rates using the original and perturbed parameters. C. M3VCF-like compression ratio versus haplotype block size on the original and perturbed parameters.
A second limitation of our program is its dependence on accurate _a priori_ estimates of population parameters [5, 8, 30], which are non-trivial to compute while still enforcing local differential privacy. Subsequent work may inquire into the feasibility of computing population parameters _a posteriori_ by performing some manner of statistical correction.
Nevertheless, the capacity for basic differentially private mechanisms to easily provide meaningful results is highly promising for the prospect of privacy in practical genomic research.
## 4 Methods
The SHIELD algorithm consists of two subroutines, Perturb and Impute, that are described below. The former is called once on a reference panel \(\mathbf{X}\) to produce a locally [31] differentially private [26, 27, 28, 29] reference panel \(\mathbf{\tilde{X}}\) that is stored on the imputation server, whereas the latter is then called by the client for each subsequent query haplotype \(\mathbf{z}\) using \(\mathbf{\tilde{X}}\) as the reference panel.
### Differential Privacy and Randomized Response
We derive the privacy guarantees of SHIELD from the notion of differential privacy [26, 27, 28, 29]. Preliminarily, we develop the notion of _neighboring datasets_. Given a universe of datasets \(\mathcal{X}\), we say that two datasets \(x,y\in\mathcal{X}\) are neighbors if and only if they differ by at most one individual sample. We will also call a randomized algorithm \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\), where \(\mathcal{F}\) is an arbitrary probability space, a _mechanism_. We then say that a mechanism \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\) satisfies \(\left(\varepsilon,\delta\right)\)-differential privacy if and only if for all \(\mathcal{S}\subseteq\mathcal{F}\) and for all \(x,y\in\mathcal{X}\) such that \(x\) and \(y\) are neighboring, we have
\[P\left(\mathcal{M}\left(x\right)\in\mathcal{S}\right)\leq\exp\left(\varepsilon \right)P\left(\mathcal{M}\left(y\right)\in\mathcal{S}\right)+\delta. \tag{1}\]
Among the most common techniques in differential privacy, randomized response [32, 33] satisfies \(\varepsilon\)-differential privacy for binary attributes. The randomized response scheme on a binary attribute \(X\) is a mechanism \(\mathcal{M}_{rr}:\left\{0,1\right\}\rightarrow\left\{0,1\right\}\) characterized by a \(2\times 2\) distortion matrix
\[\mathbf{P}=\begin{pmatrix}p_{00}&p_{01}\\ p_{10}&p_{11}\end{pmatrix}, \tag{2}\]
where \(p_{uv}=P\left(\mathcal{M}_{rr}(x_{i})=u\,|\,x_{i}=v\right)\) for \(u,v\in\left\{0,1\right\}\). It can be shown [32] that the highest-utility value for \(\mathbf{P}\) is
\[\mathbf{P}=\begin{pmatrix}\frac{e^{\varepsilon}}{1+e^{\varepsilon}}&\frac{1}{ 1+e^{\varepsilon}}\\ \frac{1}{1+e^{\varepsilon}}&\frac{e^{\varepsilon}}{1+e^{\varepsilon}}\end{pmatrix}. \tag{3}\]
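As a quick check that this choice satisfies the definition above with \(\delta=0\): for any output \(u\in\{0,1\}\) and any two inputs \(v\neq v^{\prime}\),

\[\frac{P\left(\mathcal{M}_{rr}(x)=u\,|\,x=v\right)}{P\left(\mathcal{M}_{rr}(x)=u\,|\,x=v^{\prime}\right)}\leq\frac{e^{\varepsilon}/(1+e^{\varepsilon})}{1/(1+e^{\varepsilon})}=e^{\varepsilon},\]

so condition (1) holds with \(\delta=0\) and is saturated in the worst case, i.e. the privacy budget is fully used.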
Fixing the number of samples in our reference panel \(n\) and the number of sites \(m\), we denote the universe of possible reference panels \(\mathcal{X}=\left\{0,1\right\}^{m\times n}\). Because haplotypes are vector-valued, applying the notion of neighboring datasets is non-trivial. For our purposes, we will say that two reference panels \(\mathbf{X},\mathbf{X}^{\prime}\in\mathcal{X}\) are neighboring if and only if their Hamming distance is less than or equal to \(1\). In other words, we consider \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) neighbors if and only if \(X_{i,j}\neq X_{i,j}^{\prime}\) for a single marker \(i\) and a single individual \(j\) as opposed to a whole-genome interpretation of neighboring datasets in which \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) may differ by an entire row.
It then follows that by applying the randomized response mechanism \(\mathcal{M}_{rr}\) to each entry in a reference panel matrix \(\mathbf{X}\), we may store a perturbed copy \(\mathbf{\tilde{X}}\) of the original reference panel that satisfies entry-wise \(\varepsilon\)-differential privacy. The perturbation step of SHIELD then consists of the procedure Perturb. We note that we use the symbol \(\stackrel{{\$}}{{\leftarrow}}\) to denote a pseudorandom sample and \(\text{Bern}\left(\vartheta\right)\) to denote a Bernoulli distribution with parameter \(\vartheta\).
A convenient property of differential privacy is _post-processing_[26]. If \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}\) is an \(\left(\varepsilon,\delta\right)\)-differentially private randomized algorithm and \(f:\mathcal{F}\rightarrow\mathcal{F}^{\prime}\) is an arbitrary mapping, then \(f\circ\mathcal{M}:\mathcal{X}\rightarrow\mathcal{F}^{\prime}\) is \(\left(\varepsilon,\delta\right)\)-differentially private. We set \(\mathcal{M}=\mathcal{M}_{rr}\) and define \(f\) such that \(f\left(\mathbf{\tilde{X}}\right)=\textsc{Impute}\left(\mathbf{z},\mathbf{ \tilde{X}},\boldsymbol{\mu},\boldsymbol{\rho}\right)\) for some fixed values \(\mathbf{z}\), \(\boldsymbol{\mu}\), and \(\boldsymbol{\rho}\) (see below on the meaning of these parameters). Then by post-processing, it follows that each call to Impute on the perturbed reference panel \(\mathbf{\tilde{X}}\) will satisfy \(\varepsilon\)-differential privacy. In other words, once \(\mathbf{\tilde{X}}\) has been collected on the imputation server and perturbed so as to satisfy local differential privacy, an unlimited number of queries are able to be made by an algorithmic adversary without divulging any one haplotype's value at any one site with a high degree of certainty.
```
1:procedurePerturb(\(\mathbf{X},\varepsilon\))
2:\(\mathbf{\tilde{X}}\leftarrow\) empty matrix
3:for\(i=1,2,\ldots,m\)do
4:for\(j=1,2,\ldots,n\)do
5:\(c\stackrel{{\$}}{{\leftarrow}}\mathrm{Bern}\left(\frac{e^{\varepsilon}}{1+e^{\varepsilon}}\right)\)
6:if\(c=1\)then
7:\(\tilde{X}_{i,j}\gets X_{i,j}\)
8:else
9:\(\tilde{X}_{i,j}\leftarrow\neg X_{i,j}\)
10:endif
11:endfor
12:endfor
13:return\(\mathbf{\tilde{X}}\)
14:endprocedure
```
**Algorithm 1** Applies randomized response mechanism to reference panel.
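For concreteness, a vectorized sketch of Perturb is given below, assuming the reference panel is held as a binary NumPy array; this is an illustration rather than the SHIELD implementation itself.

```
import numpy as np

# Randomized response on a binary panel X (sites x haplotypes): each bit is
# kept with probability e^eps / (1 + e^eps) and flipped otherwise.

def perturb(X, eps, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(X.shape) < p_keep
    return np.where(keep, X, 1 - X)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10000, 200))
X_tilde = perturb(X, eps=5.0, rng=rng)
print((X_tilde != X).mean())   # empirical flip rate, about 1/(1+e^5), i.e. 0.0067
```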
### HMM-based genotype imputation
We will also use the following notation:
* \(0\), \(1\), and \(\varnothing\): the major allele, minor allele, and constant denoting an unobserved site to be imputed;
* \(n\) and \(m\): the number of reference samples and reference markers;
* \([n]\): the set of reference haplotypes, represented as the index set;
* \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m}]^{\intercal}\in\left\{0,1\right\}^{m\times n}\): the reference panel haplotype sequences, equivalent to a real (and, more specifically, binary) matrix;
* \((z_{k})_{k=1}^{m}\in\left\{0,1,\varnothing\right\}^{m}\): the sequence corresponding to the observed target haplotype that, because it may include the missing site letter, is _not_, strictly speaking, a real vector;
* \(\left(\hat{z}_{k}\right)_{k=1}^{m}\equiv\mathbf{\hat{z}}\in\left[0,1\right]^{m}\): the sequence of imputed haplotype dosages, equivalent to a real vector;
* \(\mathbf{y}\in[n]^{m}\): the site-wise identities of the reference haplotypes from which \(\mathbf{z}\) is descended;
* \(\boldsymbol{\rho}\in\left[0,1\right]^{m}\): the recombination rates [5, 30] such that \(\rho_{i}=P(y_{i+1}=j_{2}|y_{i}=j_{1})\quad j_{2}\neq j_{1}\), meaning that \(\boldsymbol{\rho}\) is equivalent to a real vector (we simply let \(\rho_{m}=0\) as a dummy value);
* \(\boldsymbol{\mu}\in\left[0,1\right]^{m}\): the mutation rates [5, 30] such that \(\mu_{i}=P(z_{i}\neq X_{i,j}|y_{i}=j)\), meaning that \(\boldsymbol{\mu}\) is equivalent to a real vector;
* \(\mathbf{M}=[\mathbf{m}_{1},\mathbf{m}_{2},\ldots,\mathbf{m}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the emission probabilities in matrix form, such that \[M_{i,j}=\begin{cases}1-\mu_{i}&\text{if}\quad z_{i}=X_{i,j}\\ \mu_{i}&\text{if}\quad z_{i}=\varnothing\quad\text{or}\quad z_{i}\neq X_{i,j}\end{cases};\] (4)
* \(\boldsymbol{\Gamma}=[\boldsymbol{\gamma}_{1},\boldsymbol{\gamma}_{2},\ldots,\boldsymbol{\gamma}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the posterior probabilities of the haplotype identities for all sites in matrix form, \(\Gamma_{i,j}=P\left(y_{i}=j\,|\,(z_{k})_{k=1}^{m}\right)\);
* \(\mathbf{A}=[\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},\ldots,\boldsymbol{\alpha}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the forward probabilities [35] for all sites in matrix form such that \(A_{i,j}=P(y_{i}=j|\left(z_{k}\right)_{k=1}^{i})\);
* \(\mathbf{B}=[\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2},\ldots,\boldsymbol{\beta}_{m}]^{\intercal}\in\left[0,1\right]^{m\times n}\): the backward probabilities [35] for all sites in matrix form such that \(B_{i,j}=P\left(y_{i}=j,(z_{k})_{k=i+1}^{m}\right)\); |
SHIELDは、アレイベースのgenotypingプラットフォームが直接測定していないマーカにおけるターゲットサンプルのgenotypeを正確に推定するためのプログラムです。このプログラムは、プライバシーを保護しながら、寄与者のプライバシーを公的参照パネルに保ちます。SHIELDの核となるのは、ライ・Stephensモデルであり、遺伝的再構成が、祖先ハプロタイプ断片のMosaikと、Markovランダムフィールドで合流することによって構成される。ターゲットゲノムの祖先ハプロタイプを推定するために、標準的な前向後向アルゴリズムを使用し、そして、プライバシーが保証された、参照パネルのテンプレートハプロタイプを用いて、見逃されたサイトにおける最も可能性の高いgenotypeを推定します。この方法は、差分プライバシーからのランダム化された応答技術によって保護された、プライバシーが保証された参照 |
2306.17793 | Screw and Lie Group Theory in Multibody Dynamics -- Recursive Algorithms
and Equations of Motion of Tree-Topology Systems | Screw and Lie group theory allows for user-friendly modeling of multibody
systems (MBS) while at the same they give rise to computationally efficient
recursive algorithms. The inherent frame invariance of such formulations allows
for use of arbitrary reference frames within the kinematics modeling (rather
than obeying modeling conventions such as the Denavit-Hartenberg convention)
and to avoid introduction of joint frames. The computational efficiency is owed
to a representation of twists, accelerations, and wrenches that minimizes the
computational effort. This can be directly carried over to dynamics
formulations. In this paper recursive $O\left( n\right) $ Newton-Euler
algorithms are derived for the four most frequently used representations of
twists, and their specific features are discussed. These formulations are
related to the corresponding algorithms that were presented in the literature.
The MBS motion equations are derived in closed form using the Lie group
formulation. One are the so-called 'Euler-Jourdain' or 'projection' equations,
of which Kane's equations are a special case, and the other are the Lagrange
equations. The recursive kinematics formulations are readily extended to higher
orders in order to compute derivatives of the motions equations. To this end,
recursive formulations for the acceleration and jerk are derived. It is briefly
discussed how this can be employed for derivation of the linearized motion
equations and their time derivatives. The geometric modeling allows for direct
application of Lie group integration methods, which is briefly discussed. | Andreas Mueller | 2023-06-30T16:48:25 | http://arxiv.org/abs/2306.17793v1 | Screw and Lie Group Theory in Multibody Dynamics Recursive Algorithms and Equations of Motion of Tree-Topology Systems
###### Abstract
Screw and Lie group theory allows for user-friendly modeling of multibody systems (MBS) while at the same they give rise to computationally efficient recursive algorithms. The inherent frame invariance of such formulations allows for use of arbitrary reference frames within the kinematics modeling (rather than obeying modeling conventions such as the Denavit-Hartenberg convention) and to avoid introduction of joint frames. The computational efficiency is owed to a representation of twists, accelerations, and wrenches that minimizes the computational effort. This can be directly carried over to dynamics formulations. In this paper recursive \(O\left(n\right)\) Newton-Euler algorithms are derived for the four most frequently used representations of twists, and their specific features are discussed. These formulations are related to the corresponding algorithms that were presented in the literature. The MBS motion equations are derived in closed form using the Lie group formulation. One are the so-called 'Euler-Jourdain' or 'projection' equations, of which Kane's equations are a special case, and the other are the Lagrange equations. The recursive kinematics formulations are readily extended to higher orders in order to compute derivatives of the motions equations. To this end, recursive formulations for the acceleration and jerk are derived. It is briefly discussed how this can be employed for derivation of the linearized motion equations and their time derivatives. The geometric modeling allows for direct application of Lie group integration methods, which is briefly discussed.
Keywords: Multibody system dynamics, relative coordinates, recursive algorithms, O(n), screws, Lie groups, Newton-Euler equations, Lagrange equations, Kane's equations, Euler-Jourdain equations, projection equations, Lie group integration, linearization
## 1 Introduction
The core task in computational multibody system (MBS) dynamics is to either construct the equations of motion (EOM) explicitly, that can be written for an unconstrained tree-topology MBS in the form
\[\mathbf{M}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{C}\left(\dot{\mathbf{ q}},\mathbf{q}\right)\dot{\mathbf{q}}=\mathbf{Q}\left(\dot{\mathbf{q}},\mathbf{q},t \right), \tag{1}\]
in a way that is easy to pursue, or to evaluate them for given \(\left(\ddot{\mathbf{q}},\dot{\mathbf{q}},\mathbf{q}\right)\) and \(t\), respectively to solve them, in a computationally efficient way for \(\mathbf{q}\left(t\right)\). In continuation of [62] the aim of this paper is to present established \(O\left(n\right)\) formulations in a common geometric setting and to show that this setting allows for a flexible and user-friendly MBS modeling.
Screw and Lie group theory provides a geometric framework that allows for achieving optimal computational performance and at the same time allows for an intuitive and flexible modeling. In particular, it gives rise to a formulation of the MBS kinematics that does not involve body-fixed joint frames. The kinematics modeling is indeed reflected in the formulation used to evaluate the EOM. A central concept is the representation of velocities (twists) as screws. Four different variants were recalled in [62]. In this paper their application to dynamics modeling is reviewed. A well-known approach, which exploits the fact that rigid body twists are screws, is the so-called 'spatial vector' formulation introduced in [27; 30], respectively the so-called 'spatial operator algebra' that was formalized in [75]. The latter is the basis for the \(O\left(n\right)\) forward dynamics algorithms introduced in [31; 38; 39; 45; 74; 76]. The fundamental operation underlying these formulations is the frame transformation of screws, i.e. twists and wrenches. The fact that the latter can be expressed in terms of compact matrix operations gave rise to a matrix formulation for the MBS kinematics and dynamics [5; 43; 44; 85] using screw algebra. While these formulations merely make use of the algebraic properties of screws (e.g. velocities, accelerations, wrenches), several algorithms for generating the EOM of MBS with tree topology were reported that also exploit the fact that finite rigid body motions constitute the Lie group \(SE\left(3\right)\), whose Lie algebra \(se\left(3\right)\) is isomorphic to the algebra of screws [16; 33; 34; 24; 25]. The central relation is the _product of exponentials_ (POE) introduced in [16]. The important feature of such a geometric Lie group formulation is the frame invariance, which makes it independent of any modeling convention like Denavit-Hartenberg. This allows for direct processing of CAD data, and gives further rise to numerically advantageous Lie group time integration methods. Yet there is no established Lie group algorithm for the generation respectively evaluation of the EOM that takes full advantage of the freedom to choose different motion representations enabled by the frame invariance.
This paper is organized as follows. Recursive relations for the acceleration and jerk, and thus for the time derivatives of the Jacobians, are first derived in section 2. The Newton-Euler equations for the four different representations of twists introduced in [62] are then recalled in section 3. The corresponding recursive \(O\left(n\right)\) inverse dynamics algorithms for evaluating the EOM are presented in section 4. The body-fixed algorithm is similar to that in [2; 7; 31; 35; 36; 45; 46; 69; 70; 73; 72; 78], the hybrid formulation to that in [1; 6; 38; 39; 75; 76], and the spatial formulation to that in [30]. Two versions of the EOM in closed form are presented in section 5. In section 5.1 the 'Euler-Jourdain' respectively 'projection' equations [15; 86] are presented that, together with the screw formulation of MBS kinematics, allow for an efficient MBS modeling in terms of readily available geometric data. In section 5.2 a closed form of the Lagrangian EOM is presented using the Lie group approach. It should be noticed that the presented formulations allow for modeling MBS without introduction of joint frames, while applying the recursive kinematics and dynamics algorithm that is deemed best suited. The significance of the Lie group formulation for the linearization of the EOM as well as for the determination of derivatives of the EOM w.r.t. geometric design parameters and time derivatives is discussed in section 6. Finally in section 7 the application of Lie group integration methods is briefly discussed. The kinematic relations that were presented in [62] are summarized in appendix A. The basic Lie group background can be found in [48; 77; 65].
## 2 Acceleration, Jerk, and Partial Derivatives of Jacobian
Besides the compact description of finite and instantaneous motions of a system of articulated bodies, a prominent feature of the screw theoretical approach is that it allows for expressing the partial derivatives explicitly in terms geometric objects. Moreover, the analytic formulation of the kinematics using the POE gives rise to compact expressions for higher derivatives of the instantaneous joint screws, i.e. of the Jacobian, which may be relevant for sensitivity analysis and linearization of motion equations. In this section results for the acceleration and jerk of a kinematic chain are presented for the body-fixed, spatial, and hybrid representation. The corresponding relations for the mixed representation are readily found from either one of these using the relations in table 3 of [62].
### Body-Fixed Representation
Starting from (101) the body-fixed acceleration is \(\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\mathbf{J}_{i}^{\mathrm{b}}\ddot{\mathbf{q}}+\dot{\mathbf{J}}_{i}^{\mathrm{b}}\dot{\mathbf{q}}\), and explicitly in terms of the body-fixed instantaneous screw coordinates
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}} \ddot{q}_{j}+\sum_{j\leq i}\sum_{k\leq i}\frac{\partial}{\partial q_{k}}\mathbf{ J}_{i,j}^{\mathrm{b}}\dot{q}_{j}\dot{q}_{k}. \tag{2}\]
Using the matrix form of (103) the partial derivatives of the instantaneous screw coordinates are
\[\frac{\partial}{\partial q_{k}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}=\frac{\partial}{\partial q_{k}}(\mathbf{C}_{i}^{-1}\mathbf{C}_{j})\mathbf{A}_{j}^{-1}\widehat{\mathbf{Y}}_{j}\mathbf{A}_{j}\mathbf{C}_{j}^{-1}\mathbf{C}_{i}+\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\mathbf{A}_{j}^{-1}\widehat{\mathbf{Y}}_{j}\mathbf{A}_{j}\frac{\partial}{\partial q_{k}}(\mathbf{C}_{j}^{-1}\mathbf{C}_{i}). \tag{2}\]
This can be evaluated with help of the POE formula (93) as
\[\frac{\partial}{\partial q_{k}}(\mathbf{C}_{i}^{-1}\mathbf{C}_{j}) =\frac{\partial}{\partial q_{k}}(\mathbf{A}_{i}^{-1}\exp(- \widehat{\mathbf{Y}}_{i}q_{i})\cdots\exp(-\widehat{\mathbf{Y}}_{j+1}q_{j+1}) \mathbf{A}_{j}))\] \[=-\mathbf{A}_{i}^{-1}\exp(-\widehat{\mathbf{Y}}_{i}q_{i})\cdots \exp(-\widehat{\mathbf{Y}}_{k+1}q_{k+1})\widehat{\mathbf{Y}}_{k}\exp(- \widehat{\mathbf{Y}}_{k}q_{k})\cdots\exp(-\widehat{\mathbf{Y}}_{j+1}q_{j+1}) \mathbf{A}_{j}\] \[=-\mathbf{C}_{i}^{-1}\mathbf{C}_{k}\mathbf{A}_{k}^{-1}\widehat{ \mathbf{Y}}_{k}\mathbf{A}_{k}\mathbf{C}_{k}^{-1}\mathbf{C}_{j}=-\mathbf{C}_{i }^{-1}\mathbf{C}_{k}\mathbf{A}_{k}^{-1}\widehat{\mathbf{Y}}_{k}\mathbf{A}_{k} \mathbf{C}_{k}^{-1}\mathbf{C}_{i}\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\] \[=-\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}}\mathbf{C}_{i}^{-1} \mathbf{C}_{j},\ j\leq k\leq i, \tag{3}\]
and in the same way follows that
\[\frac{\partial}{\partial q_{k}}(\mathbf{C}_{j}^{-1}\mathbf{C}_{i})=\mathbf{C}_{j}^{-1}\mathbf{C}_{i}\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}},\ j\leq k\leq i. \tag{4}\]
Inserted into (2) yields \(\frac{\partial}{\partial q_{k}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}= \widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}} -\widehat{\mathbf{J}}_{i,k}^{\mathrm{b}}\widehat{\mathbf{J}}_{i,j}^{\mathrm{b}}\), and noting (113), the final expression is
\[\frac{\partial\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}}=[\mathbf{J}_{i,j }^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}],\ j<k\leq i. \tag{5}\]
Hence the partial derivative of the instantaneous joint screw \(\mathbf{J}_{i,j}^{\mathrm{b}}\) w.r.t. to \(q_{k}\) is simply the screw product (114) of \(\mathbf{J}_{i,j}^{\mathrm{b}}\) and \(\mathbf{J}_{i,k}^{\mathrm{b}}\). The final expression for the acceleration attains a very compact form
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}} \ddot{q}_{j}+\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^ {\mathrm{b}}]\dot{q}_{j}\dot{q}_{k}. \tag{6}\]
Indeed the same result would be obtained using (103) in terms of \(\mathbf{Y}_{i}\). This expression has been derived, using different notations, for instance in [16; 65; 69; 51].
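The screw product entering (5) and (6) is easily coded; the following sketch assumes twists stored as 6-vectors with the angular part first, which is a convention of this illustration rather than one fixed by the paper.

```
import numpy as np

# Screw product (Lie bracket on se(3)) as used in Eqs. (5)-(6).

def hat(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def ad(xi):
    """6x6 matrix of ad_xi, so that [xi, eta] = ad(xi) @ eta."""
    w, v = xi[:3], xi[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = hat(w)
    A[3:, :3] = hat(v)
    A[3:, 3:] = hat(w)
    return A

# Acceleration terms of Eq. (6): J_ij*qdd_j plus brackets [J_ij, J_ik]*qd_j*qd_k.
J1 = np.array([0.0, 0, 1, 0, 0, 0])   # e.g. a revolute joint screw about z
J2 = np.array([1.0, 0, 0, 0, 0.5, 0])
print(ad(J1) @ J2)                    # the screw product [J1, J2]
```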
The equations (6) can be summarized for all bodies \(i=1,\ldots,n\) using the system twist (111) and system Jacobian (112). To this end, the derivative (5) is rewritten as
\[\frac{\partial\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}}=[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]=\mathbf{Ad}_{\mathbf{C}_{i,k}}[\mathbf{J}_{k,j}^{\mathrm{b}},{}^{k}\mathbf{X}_{k}]=-\mathbf{Ad}_{\mathbf{C}_{i,k}}\mathbf{ad}_{{}^{k}\mathbf{X}_{k}}\mathbf{J}_{k,j}^{\mathrm{b}},\ j<k\leq i \tag{7}\]
so that
\[\dot{\mathbf{J}}_{i,j}^{\mathrm{b}}=\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\dot{q}_{k}=-\sum_{j<k\leq i}\mathbf{Ad}_{\mathbf{C}_{i,k}}\mathbf{ad}_{{}^{k}\mathbf{X}_{k}}\mathbf{J}_{k,j}^{\mathrm{b}}\dot{q}_{k}.\]
Noticing that \(\mathbf{ad}_{{}^{k}\mathbf{X}_{k}}\mathbf{J}_{k,k}^{\mathrm{b}}=\mathbf{0}\), the time derivative of the body-fixed system Jacobian factors as
\[\dot{\mathbf{J}}^{\mathrm{b}}\left(\mathbf{q},\dot{\mathbf{q}}\right)=-\mathbf{ A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{a}^{\mathrm{b}}\left(\dot{\mathbf{q}} \right)\mathbf{A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{X}^{\mathrm{b}}=- \mathbf{A}^{\mathrm{b}}\left(\mathbf{q}\right)\mathbf{a}^{\mathrm{b}}\left(\dot{ \mathbf{q}}\right)\mathbf{J}^{\mathrm{b}}\left(\mathbf{q}\right) \tag{8}\]
with \(\mathbf{A}^{\mathrm{b}}\) defined in (24) of [62] and with
\[\mathbf{a}^{\mathrm{b}}\left(\dot{\mathbf{q}}\right):=\mathrm{diag}\ (\dot{q}_{1}\mathbf{ad}_{{}^{1}\mathbf{X}_{1}},\ldots,\dot{q}_{n}\mathbf{ad}_{{}^{n}\mathbf{X}_{n}}). \tag{9}\]
Hence the system acceleration is given in compact matrix form as
\[\dot{\mathbf{V}}^{\mathrm{b}}=\mathbf{J}^{\mathrm{b}}\ddot{\mathbf{q}}-\mathbf{A} ^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}\dot{\mathbf{q}}= \mathbf{J}^{\mathrm{b}}\ddot{\mathbf{q}}-\mathbf{A}^{\mathrm{b}}\mathbf{a}^{ \mathrm{b}}\mathbf{V}^{\mathrm{b}}. \tag{10}\]
Remark 1 (Overall inverse kinematics solution): The relation (10) gives rise to a solution of the inverse kinematics problem on acceleration level, i.e. the generalized accelerations for given configurations, twists, and accelerations of the bodies. The unique solution is
\[\ddot{\mathbf{q}}=((\mathbf{X}^{\mathrm{b}})^{T}\mathbf{X}^{\mathrm{b}})^{-1}( \mathbf{X}^{\mathrm{b}})^{T}((\mathbf{I}-\mathsf{D}^{\mathrm{b}})\dot{\mathbf{ V}}^{\mathrm{b}}+\mathsf{a}^{\mathrm{b}}\mathbf{V}^{\mathrm{b}}) \tag{11}\]
which is indeed the time derivative of (26) in [62]. In components this gives the acceleration of the individual joints as \(\ddot{q}_{i}={}^{i}\mathbf{X}_{i}^{T}(\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+\dot{q}_{i}[^{i}\mathbf{X}_{i},\mathbf{V}_{i}^{\mathrm{b}}])/\left\|{}^{i}\mathbf{X}_{i}\right\|^{2}\).
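For illustration, the block-diagonal operator (9) can be assembled directly; the sketch below uses the same assumed 6-vector screw convention (angular part first) and is not tied to any particular implementation.

```
import numpy as np

# Assemble a^b(qd) of Eq. (9) as a 6n x 6n block-diagonal matrix from the
# joint screws X_i (6-vectors, angular part first; an assumed convention).

def hat(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def ad(xi):
    w, v = xi[:3], xi[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = hat(w); A[3:, :3] = hat(v); A[3:, 3:] = hat(w)
    return A

def a_b(X, qd):
    n = len(X)
    M = np.zeros((6 * n, 6 * n))
    for i, (qd_i, X_i) in enumerate(zip(qd, X)):
        M[6*i:6*i+6, 6*i:6*i+6] = qd_i * ad(X_i)
    return M

X = [np.array([0.0, 0, 1, 0, 0, 0]), np.array([0.0, 1, 0, 0, 0, 0.2])]
print(a_b(X, qd=[0.5, -1.0]).shape)   # (12, 12)
```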
A further time derivative of the twist yields the jerk of a body, which requires a further partial derivative of the Jacobian. Starting from (5), and using the Jacobi identity (116) and the bilinearity \(\frac{\partial}{\partial q_{r}}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]=[\frac{\partial}{\partial q_{r}}\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]+[\mathbf{J}_{i,j}^{\mathrm{b}},\frac{\partial}{\partial q_{r}}\mathbf{J}_{i,k}^{\mathrm{b}}]\), the non-zero second partial derivative is found as
\[\frac{\partial^{2}\mathbf{J}_{i,j}^{\mathrm{b}}}{\partial q_{k}\partial q_{r }}=\left\{\begin{array}{l}[[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{ \mathrm{b}}],\mathbf{J}_{i,r}^{\mathrm{b}}],j<k\leq r\leq i\\ [[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,r}^{\mathrm{b}}],\mathbf{J}_{i, k}^{\mathrm{b}}],j<r<k\leq i\end{array}\right.. \tag{12}\]
This gives rise to an explicit form for the body-fixed jerk
\[\ddot{\mathbf{V}}_{i}^{\mathrm{b}}=\sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{b}}\dddot{q}_{j}+2\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\ddot{q}_{j}\dot{q}_{k}+\sum_{j<k\leq i}[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]\dot{q}_{j}\ddot{q}_{k}+\sum_{j<k\leq i}\sum_{r\leq i}\frac{\partial[\mathbf{J}_{i,j}^{\mathrm{b}},\mathbf{J}_{i,k}^{\mathrm{b}}]}{\partial q_{r}}\dot{q}_{j}\dot{q}_{k}\dot{q}_{r} \tag{13}\]
with the second partial derivatives given by (12).
### Spatial Representation
In matrix form the overall spatial acceleration can be summarized as
\[\dot{\mathbf{V}}^{\mathrm{s}}=\mathbf{J}^{\mathrm{s}}\ddot{\mathbf{q}}+\mathsf{L}\,\mathsf{b}^{\mathrm{s}}\mathrm{diag}\;(\mathbf{J}^{\mathrm{s}}_{1},\ldots,\mathbf{J}^{\mathrm{s}}_{n})\dot{\mathbf{q}} \tag{18}\]
with
\[\mathsf{b}^{\mathrm{s}}\left(\mathbf{V}^{\mathrm{s}}\right):=\mathrm{diag}\;( \mathbf{ad}_{\mathbf{V}^{\mathrm{s}}_{1}},\ldots,\mathbf{ad}_{\mathbf{V}^{ \mathrm{s}}_{n}}) \tag{19}\]
and \(\mathsf{L}\) being the lower triangular block identity matrix. A solution for \(\ddot{\mathbf{q}}\) similar to (11) exists.
The second partial derivative of the spatial Jacobian is
\[\frac{\partial^{2}\mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{k}\partial q_{j}}=\left\{\begin{array}{ll}[\mathbf{J}^{\mathrm{s}}_{k},[\mathbf{J}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{i}]],&k<j<i\\ [\mathbf{J}^{\mathrm{s}}_{j},[\mathbf{J}^{\mathrm{s}}_{k},\mathbf{J}^{\mathrm{s}}_{i}]],&j\leq k<i\end{array}\right.. \tag{20}\]
Therewith the spatial representation of the jerk of body \(i\) is found as
\[\ddot{\mathbf{V}}^{\mathrm{s}}_{i} =\sum_{j\leq i}\left(\mathbf{J}^{\mathrm{s}}_{j}\dddot{q}_{j}+2[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\ddot{q}_{j}+\sum_{k\leq j}[\mathbf{J}^{\mathrm{s}}_{k}\ddot{q}_{k},\mathbf{J}^{\mathrm{s}}_{j}]\dot{q}_{j}+[\mathbf{V}^{\mathrm{s}}_{j-1}+\mathbf{V}^{\mathrm{s}}_{j}-\mathbf{V}^{\mathrm{s}}_{i},[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]]\dot{q}_{j}\right) \tag{21}\] \[=\sum_{j\leq i}\left(\mathbf{J}^{\mathrm{s}}_{j}\dddot{q}_{j}+[[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}],\mathbf{V}^{\mathrm{s}}_{i}-2\mathbf{V}^{\mathrm{s}}_{j}]\dot{q}_{j}+[\sum_{k\leq j}\mathbf{J}^{\mathrm{s}}_{k}\ddot{q}_{k}+[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\dot{q}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\dot{q}_{j}+2[\mathbf{V}^{\mathrm{s}}_{j},\mathbf{J}^{\mathrm{s}}_{j}]\ddot{q}_{j}\right) \tag{22}\]
The instantaneous joint screws (104), and thus their derivatives (15) and (20), are independent of a particular body. The closed form of the \(\nu\)-th order partial derivative has been reported in [55]
\[\frac{\partial^{\nu}\mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{\alpha _{1}}\partial q_{\alpha_{2}}\ldots\partial q_{\alpha_{\nu}}} =[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}},[\mathbf{J}^{\mathrm{s}}_ {\beta_{\nu-1}},[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu-2}},\ldots[\mathbf{J}^{ \mathrm{s}}_{\beta_{1}},\mathbf{J}^{\mathrm{s}}_{i}]\ldots]]],\;\beta_{\nu} \leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}<i \tag{23}\] \[=\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}}}\mathbf{ad}_{ \mathbf{J}^{\mathrm{s}}_{\beta_{\nu-1}}}\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_ {\beta_{\nu-2}}}\cdots\mathbf{ad}_{\mathbf{J}^{\mathrm{s}}_{\beta_{1}}}\mathbf {J}^{\mathrm{s}}_{i},\;\beta_{\nu}\leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}<i\] \[=[\mathbf{J}^{\mathrm{s}}_{\beta_{\nu}},\frac{\partial^{\nu-1} \mathbf{J}^{\mathrm{s}}_{i}}{\partial q_{\beta_{1}}\partial q_{\beta_{2}} \cdots\partial q_{\beta_{\nu-1}}}],\;\beta_{\nu}\leq\beta_{\nu-1}<i\]
where again \(\beta_{\nu}\leq\beta_{\nu-1}\leq\cdots\leq\beta_{1}\) is the ordered sequence of the indices \(\alpha_{1},\ldots,\alpha_{\nu}\). The last form in (23) allows for a recursive determination. Moreover, a recursive formulation for the time derivative of spatial twists has been reported in [58]. Together with the very concise form (16) this makes the spatial representation computationally very attractive.
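The recursive structure of (23) translates into a few lines of code. A sketch in NumPy, assuming the instantaneous spatial Jacobian columns are given as 6-vectors (the twist ordering and helpers below are assumptions of the sketch, not the paper's notation):

```python
import numpy as np

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = tilde(X[:3]); A[3:, :3] = tilde(X[3:]); A[3:, 3:] = tilde(X[:3])
    return A

def partial_J_spatial(Js, i, alphas):
    # nu-th order partial derivative of J_i^s w.r.t. q_alpha_1,...,q_alpha_nu via (23)
    # Js: list of spatial Jacobian columns J_j^s (6-vectors), indexed by joint number
    if max(alphas) >= i:
        return np.zeros(6)                   # J_i^s depends only on q_1,...,q_{i-1}
    res = Js[i]
    for b in sorted(alphas, reverse=True):   # ordered sequence beta_1 >= ... >= beta_nu
        res = ad(Js[b]) @ res                # innermost bracket applied first
    return res
```

Because the same columns \(\mathbf{J}_{j}^{\mathrm{s}}\) serve all bodies, these partial derivatives can be cached and reused, which reflects the computational attractiveness noted above.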
### Hybrid Form
The results in section 2.1 can be carried over to the hybrid twist making use of the relation (106). As in (118), denote with \(\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}\) and \(\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k}\) the screw coordinate vectors comprising respectively the linear and angular part of the column of the hybrid Jacobian so that \(\mathbf{J}^{\mathrm{h}}_{i,k}=\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k} +\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}\). Then
\[\frac{\partial\mathbf{J}^{\mathrm{h}}_{i,j}}{\partial q_{k}} =\frac{\partial\mathbf{Ad}_{\mathbf{R}_{i}}}{\partial q_{k}}\mathbf{J}^{\mathrm{b}}_{i,j}+\mathbf{Ad}_{\mathbf{R}_{i}}\frac{\partial\mathbf{J}^{\mathrm{b}}_{i,j}}{\partial q_{k}}=\mathbf{ad}_{\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k}}\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{J}^{\mathrm{b}}_{i,j}+\mathbf{Ad}_{\mathbf{R}_{i}}[\mathbf{J}^{\mathrm{b}}_{i,j},\mathbf{J}^{\mathrm{b}}_{i,k}]\]
\[=[\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k},\mathbf{J}^{\mathrm{h}}_{i,j}]+[\mathbf{J}^{\mathrm{h}}_{i,j},\mathbf{J}^{\mathrm{h}}_{i,k}]=[\mathbf{J}^{\mathrm{h}}_{i,j},\mathbf{J}^{\mathrm{h}}_{i,k}-\overset{\omega}{\mathbf{J}}^{\mathrm{h}}_{i,k}] \tag{24}\]
and thus
\[\frac{\partial\mathbf{J}^{\mathrm{h}}_{i,j}}{\partial q_{k}} {=}[\mathbf{J}^{\mathrm{h}}_{i,j},\overset{v}{\mathbf{J}}^{ \mathrm{h}}_{i,k}]=-\mathbf{ad}_{\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}} \mathbf{J}^{\mathrm{h}}_{ij},\;\;j\leq k\leq i. \tag{25}\]
The similarity to (5) is apparent. The difference is that the convective term due to the angular motion is missing, which is why only \(\overset{v}{\mathbf{J}}\) appears. The time derivative of the hybrid Jacobian can thus be expressed as
\[\dot{\mathbf{J}}^{\mathrm{h}}_{i,j}=\sum_{j\leq k\leq i}[\mathbf{J}^{\mathrm{h}}_{i,j},\overset{v}{\mathbf{J}}^{\mathrm{h}}_{i,k}]\dot{q}_{k}=[\mathbf{J}^{\mathrm{h}}_{i,j},\Delta\overset{v}{\mathbf{V}}^{\mathrm{h}}_{j-1,i}] \tag{26}\]
where \(\Delta\mathbf{V}_{j-1,i}^{\mathrm{h}}:=\mathbf{V}_{i}^{\mathrm{h}}-\mathbf{Ad}_{ \mathbf{r}_{i,j-1}}\mathbf{V}_{j-1}^{\mathrm{h}}\) is the relative hybrid twist of body \(j-1\) and \(i\) as observed in the BFR on body \(i\). A simpler relation is obtained by directly differentiating (105)
\[\dot{\mathbf{J}}_{i,j}^{\mathrm{h}} =(\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}})^{0}\mathbf{X}_{j}^{j} \tag{27}\] \[=\mathbf{Ad}_{\mathbf{r}_{i,j}}(\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}})^{0}\mathbf{X}_{j}^{j}=\mathbf{Ad}_{\mathbf{r}_{i,j}}(\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{h}}}-\mathbf{ad}_{\dot{\mathbf{r}}_{i}})^{0}\mathbf{X}_{j}^{j}.\]
This yields the following explicit expressions for the hybrid acceleration
\[\dot{\mathbf{V}}_{i}^{\mathrm{h}}= \sum_{j\leq i}\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+\sum_{j\leq k\leq i}\big{[}\mathbf{J}_{i,j}^{\mathrm{h}},\overset{v}{\mathbf{J}}_{i,k}^{\mathrm{h}}\big{]}\dot{q}_{j}\dot{q}_{k}=\sum_{j\leq i}(\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+[\mathbf{J}_{i,j}^{\mathrm{h}},\Delta\overset{v}{\mathbf{V}}_{j-1,i}^{\mathrm{h}}]\dot{q}_{j}) \tag{28}\] \[= \sum_{j\leq i}(\mathbf{J}_{i,j}^{\mathrm{h}}\ddot{q}_{j}+(\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}})^{0}\mathbf{X}_{j}^{j}\dot{q}_{j}). \tag{29}\]
For the second derivative it is simplest to start from (27), and a straightforward calculation yields
\[\ddot{\mathbf{J}}_{i,j}^{\mathrm{h}}=\left(\mathbf{ad}_{\ddot{\mathbf{r}}_{i,j}}+2\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}(\mathbf{ad}_{\dot{\boldsymbol{\omega}}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}})\right)^{0}\mathbf{X}_{j}^{j}. \tag{30}\]
The jerk in hybrid representation can thus be written as
\[\ddot{\mathbf{V}}_{i}^{\mathrm{h}}= \sum_{j\leq i}\Big{(}\mathbf{J}_{i,j}^{\mathrm{h}}\dddot{q}_{j}+\big{(}2\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\ddot{q}_{j}+\left(\mathbf{ad}_{\ddot{\mathbf{r}}_{i,j}}+2\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}}\right)\dot{q}_{j} \tag{31}\] \[\qquad+\mathbf{Ad}_{\mathbf{r}_{i,j}}\big{(}2\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}}\ddot{q}_{j}+(\mathbf{ad}_{\dot{\boldsymbol{\omega}}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}}\mathbf{ad}_{\boldsymbol{\omega}_{j}^{\mathrm{s}}})\dot{q}_{j}\big{)}\big{)}^{0}\mathbf{X}_{j}^{j}\Big{)}. \tag{32}\]
These are the core relations of the so-called 'spatial vector' formulation (i.e. using the hybrid representation of twists) [38; 39; 31; 45; 74; 76]. In this context the Lie bracket, respectively the screw product, (114) has been termed the 'spatial cross product' [28; 30].
### Mixed Representation
With (100), the results for the hybrid representation carry over directly to the mixed representation:
\[\dot{\mathbf{J}}_{ij}^{\mathrm{m}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T} &\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\dot{\mathbf{J}}_{ij}^{\mathrm{h}},\ \ \dot{\mathbf{V}}_{i}^{\mathrm{m}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T}& \mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\dot{\mathbf{V}}_{i}^{\mathrm{h}},\ \ \ddot{\mathbf{V}}_{i}^{\mathrm{m}}=\left( \begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\ddot{\mathbf{V}}_{i}^{\mathrm{h}}. \tag{33}\]
## 3 Newton-Euler Equations in Various Representations
### Spatial Representation
Consider a rigid body with body-fixed BFR \(\mathcal{F}_{\mathrm{b}}=\{\Omega;\vec{e}_{\mathrm{b},1},\vec{e}_{\mathrm{b},2},\vec{e}_{\mathrm{b},3}\}\) located at an arbitrary point \(\Omega\). Denote the inertia matrix w.r.t. this BFR with \(\mathbf{M}^{\mathrm{b}}\), ref. (40). The configuration of the BFR \(\mathcal{F}_{\mathrm{b}}\) is described by \(\mathbf{C}=(\mathbf{R},\mathbf{r})\). The spatial inertia matrix expressed in the IFR is then
\[\mathbf{M}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{M}^{\mathrm{b}} \mathbf{Ad}_{\mathbf{C}}^{-1}. \tag{34}\]
The spatial canonical momentum co-screw \(\mathbf{\Pi}^{\mathrm{s}}=\left(\mathbf{L}^{\mathrm{s}},\mathbf{P}^{\mathrm{s}} \right)^{T}\in se^{\ast}\left(3\right)\), conjugate to the spatial twist, is thus
\[\mathbf{\Pi}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\mathbf{V}^{\mathrm{s}}= \mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{M}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{C}}^{ -1}\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}^{-T}\mathbf{\Pi}^{ \mathrm{b}}. \tag{35}\]
The momentum balance yields the Newton-Euler (NE) equations in spatial representation, which attains the simple form
\[\dot{\mathbf{\Pi}}^{\mathrm{s}}=\mathbf{W}^{\mathrm{s}} \tag{36}\]
where \(\mathbf{W}^{\mathrm{s}}=\left(\mathbf{t}^{\mathrm{s}},\mathbf{f}^{\mathrm{s}} \right)^{T}\) is the applied wrench, with spatial torque \(\mathbf{t}^{\mathrm{s}}\equiv{}^{0}\mathbf{t}^{0}\) and force \(\mathbf{f}^{\mathrm{s}}\equiv{}^{0}\mathbf{f}\), both measured and resolved in the IFR. The momentum balance equation (36) is the simplest form
possible, which is achieved by using the spatial representation of twist, wrench, and momentum. Firstly, it does not involve any vectorial operation, e.g. cross products. Secondly, it is also numerically advantageous: any numerical discretization of the ODE (36) easily preserves the spatial momentum in the absence of external wrenches. This has been discussed already by Borri in [13]. In this context the spatial formulation is called the fixed pole equation. In a recent paper [32] the advantages of this form are exploited for geometrically exact modeling of beams.
The explicit and compact form in terms of the spatial twist is found, introducing (35) and using
\[\dot{\mathbf{M}}^{\mathrm{s}}=-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{T}\mathbf{M}^{\mathrm{s}}-\mathbf{M}^{\mathrm{s}}\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}} \tag{37}\]
along with \(\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\mathbf{V}^{\mathrm{s}}=\mathbf{0}\), as
\[\boxed{\mathbf{W}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\dot{\mathbf{V}}^{ \mathrm{s}}-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{T}\mathbf{M}^{\mathrm{s}} \mathbf{V}^{\mathrm{s}}.} \tag{38}\]
Remark 2: Writing (38) as \(\mathbf{W}^{\mathrm{s}}=\mathbf{M}^{\mathrm{s}}\dot{\mathbf{V}}^{\mathrm{s}}+\mathbf{C}^{\mathrm{s}}\mathbf{V}^{\mathrm{s}}\) (with \(\mathbf{C}^{\mathrm{s}}:=-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{T}\mathbf{M}^{\mathrm{s}}\)) shows that \(\dot{\mathbf{M}}^{\mathrm{s}}-2\mathbf{C}^{\mathrm{s}}=\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{T}\mathbf{M}^{\mathrm{s}}-\mathbf{M}^{\mathrm{s}}\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\) is skew symmetric. This property is called the skew symmetry of the motion equations [65].
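Identities like (37) are easily checked numerically. The sketch below compares a finite-difference derivative of \(\mathbf{M}^{\mathrm{s}}(t)\), for a body moving with constant spatial twist, against the closed form; the setup, helpers, and conventions are illustrative assumptions of this sketch:

```python
import numpy as np
from scipy.linalg import expm

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = tilde(X[:3]); A[3:, :3] = tilde(X[3:]); A[3:, 3:] = tilde(X[:3])
    return A

def Ad(R, r):
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, :3] = tilde(r) @ R; A[3:, 3:] = R
    return A

rng = np.random.default_rng(0)
Mb = rng.normal(size=(6, 6)); Mb = Mb @ Mb.T + 6.0*np.eye(6)  # generic s.p.d. body inertia
Vs = rng.normal(size=6)                                       # constant spatial twist

def Ms(t):
    # C(t) = exp(t hat(Vs)): the spatial twist of this motion is Vs for all t
    H = np.zeros((4, 4)); H[:3, :3] = tilde(Vs[:3]); H[:3, 3] = Vs[3:]
    G = expm(t*H)
    Ai = np.linalg.inv(Ad(G[:3, :3], G[:3, 3]))
    return Ai.T @ Mb @ Ai                                     # eq. (34)

h = 1e-6
Md_num = (Ms(h) - Ms(-h)) / (2.0*h)
Md_ana = -ad(Vs).T @ Ms(0.0) - Ms(0.0) @ ad(Vs)               # eq. (37)
print(np.max(np.abs(Md_num - Md_ana)))                        # ~0 up to finite-difference error
```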
### Body-fixed Representation
Let \(\mathcal{F}_{\mathrm{c}}=\{C;\vec{e}_{\mathrm{c},1},\vec{e}_{\mathrm{c},2},\vec{e}_{\mathrm{c},3}\}\) be a body-fixed frame located at the COM. Its configuration is described by \(\mathbf{C}_{\mathrm{c}}=\left(\mathbf{R}_{\mathrm{c}},\mathbf{r}_{\mathrm{c}}\right)\). The body-fixed twist of the COM frame is \(\mathbf{V}_{\mathrm{c}}^{\mathrm{b}}=(\boldsymbol{\omega}_{\mathrm{c}}^{\mathrm{b}},\mathbf{v}_{\mathrm{c}}^{\mathrm{b}})^{T}\), with \(\widetilde{\boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{R}_{\mathrm{c}}^{T}\dot{\mathbf{R}}_{\mathrm{c}}\) and \(\mathbf{v}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{R}_{\mathrm{c}}^{T}\dot{\mathbf{r}}_{\mathrm{c}}\). The inertia matrix w.r.t. this COM frame is denoted
\[\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}=\left(\begin{array}{cc}\mathbf{\Theta} _{\mathrm{c}}&\mathbf{0}\\ \mathbf{0}&m\mathbf{I}\end{array}\right) \tag{39}\]
with the body mass \(m\) and the inertia tensor \(\mathbf{\Theta}_{\mathrm{c}}\) expressed in the body-fixed COM frame \(\mathcal{F}_{\mathrm{c}}\). Let \(\mathbf{S}_{\mathrm{bc}}=\left(\mathbf{R}_{\mathrm{bc}},{}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\right)\in SE\left(3\right)\) be the transformation from the COM frame \(\mathcal{F}_{\mathrm{c}}\) to the BFR \(\mathcal{F}_{\mathrm{b}}\). Here \({}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\) is the position vector from the BFR to the COM resolved in the BFR. Then the configuration of \(\mathcal{F}_{\mathrm{c}}\) is given in terms of that of the BFR by \(\mathbf{C}_{\mathrm{c}}=\mathbf{C}\mathbf{S}_{\mathrm{bc}}\). The inertia matrix w.r.t. the general BFR \(\mathcal{F}_{\mathrm{b}}\) is
\[\mathbf{M}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-T}\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-1}=\left(\begin{array}{cc}\mathbf{\Theta}_{\mathrm{b}}&m\,{}^{\mathrm{b}}\widetilde{\mathbf{d}}_{\mathrm{bc}}\\ -m\,{}^{\mathrm{b}}\widetilde{\mathbf{d}}_{\mathrm{bc}}&m\mathbf{I}\end{array}\right) \tag{40}\]
with \(\mathbf{\Theta}_{\mathrm{b}}=\mathbf{R}_{\mathrm{bc}}\mathbf{\Theta}_{\mathrm{c}}\mathbf{R}_{\mathrm{bc}}^{T}-m\,{}^{\mathrm{b}}\widetilde{\mathbf{d}}_{\mathrm{bc}}^{2}\) (which is the parallel axis theorem).
The momentum co-screw represented in the body-fixed RFR \(\mathcal{F}_{\mathrm{b}}\) is \(\mathbf{\Pi}^{\mathrm{b}}=\mathbf{M}^{\mathrm{b}}\mathbf{V}^{\mathrm{b}}\). The frame transformation of (38) to the BFR \(\mathcal{F}_{\mathrm{b}}\) yields the body-fixed momentum balance represented in \(\mathcal{F}_{\mathrm{b}}\) in the concise form
\[\boxed{\mathbf{W}^{\mathrm{b}}=\dot{\mathbf{\Pi}}^{\mathrm{b}}-\mathbf{ad}_{ \mathbf{V}^{\mathrm{b}}}^{T}\mathbf{\Pi}^{\mathrm{b}}} \tag{41}\]
with the applied wrench \(\mathbf{W}^{\mathrm{b}}=\left(\mathbf{t}^{\mathrm{b}},\mathbf{f}^{\mathrm{b}} \right)^{T}\) in body-fixed representation. The equations (41) are formally identical to the spatial equations (38). Written separately, this yields the NE equations expressed in an arbitrary body-fixed BFR
\[\mathbf{\Theta}_{\mathrm{b}}\dot{\boldsymbol{\omega}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{\Theta}_{\mathrm{b}}\boldsymbol{\omega}^{\mathrm{b}}+m\,{}^{\mathrm{b}}\widetilde{\mathbf{d}}_{\mathrm{bc}}\left(\dot{\mathbf{v}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}}\right) =\mathbf{t}^{\mathrm{b}} \tag{42}\] \[m\big{(}\dot{\mathbf{v}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}}+(\dot{\widetilde{\boldsymbol{\omega}}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\widetilde{\boldsymbol{\omega}}^{\mathrm{b}})\,{}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}\big{)} =\mathbf{f}^{\mathrm{b}}. \tag{43}\]
When using the COM frame as special case, the momentum represented in the body-fixed COM frame is \(\mathbf{\Pi}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{b}} \mathbf{V}_{\mathrm{c}}^{\mathrm{b}}\), and the momentum balance yields
\[\mathbf{W}_{\mathrm{c}}^{\mathrm{b}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\dot{\mathbf{V}}_{\mathrm{c}}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{\mathrm{c}}^{\mathrm{b}}}^{T}\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\mathbf{V}_{\mathrm{c}}^{\mathrm{b}}. \tag{44}\]
Written in components, this yields the NE equations represented in the COM frame
\[\mathbf{\Theta}_{\mathrm{c}}\dot{\boldsymbol{\omega}}_{\mathrm{ c}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}} \mathbf{\Theta}_{\mathrm{c}}\mathbf{\omega}_{\mathrm{c}}^{\mathrm{b}} =\mathbf{t}_{\mathrm{c}}^{\mathrm{b}} \tag{45}\] \[m\big{(}\dot{\mathbf{v}}_{\mathrm{c}}^{\mathrm{b}}+\widetilde{ \boldsymbol{\omega}}_{\mathrm{c}}^{\mathrm{b}}\mathbf{v}_{\mathrm{c}}^{ \mathrm{b}}\big{)} =\mathbf{f}_{\mathrm{c}}^{\mathrm{b}}. \tag{46}\]
Notably, the angular and translational momentum equations are coupled even though the COM is used as reference point. This is due to using body-fixed twists.
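The equivalence of the matrix form (41) and the component equations (42)-(43) can be verified numerically; a short sketch with randomly generated data (illustrative, using the same assumed conventions as before):

```python
import numpy as np

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = tilde(X[:3]); A[3:, :3] = tilde(X[3:]); A[3:, 3:] = tilde(X[:3])
    return A

rng = np.random.default_rng(1)
m = 2.0
d = rng.normal(size=3)                       # ^b d_bc: BFR -> COM, resolved in BFR
Tc = rng.normal(size=(3, 3)); Theta_c = Tc @ Tc.T + 3.0*np.eye(3)
Theta_b = Theta_c - m * tilde(d) @ tilde(d)  # parallel axis theorem (R_bc = I here)

Mb = np.block([[Theta_b,      m*tilde(d)],
               [-m*tilde(d),  m*np.eye(3)]])  # inertia matrix (40)

V, Vd = rng.normal(size=6), rng.normal(size=6)
w, v, wd, vd = V[:3], V[3:], Vd[:3], Vd[3:]

W = Mb @ Vd - ad(V).T @ Mb @ V               # momentum balance (41)

t_b = Theta_b @ wd + tilde(w) @ Theta_b @ w + m*tilde(d) @ (vd + tilde(w) @ v)  # (42)
f_b = m*(vd + tilde(w) @ v + (tilde(wd) + tilde(w) @ tilde(w)) @ d)             # (43)

print(np.max(np.abs(W - np.concatenate([t_b, f_b]))))  # ~0 (machine precision)
```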
### Hybrid Form
The hybrid twist \(\mathbf{V}_{\mathrm{c}}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s}},\dot{\mathbf{r}}_{\mathrm{c}}\right)^{T}\) of the COM frame is related to the body-fixed twist by \(\mathbf{V}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{R}_{\mathrm{c}}}\mathbf{V}_{\mathrm{c}}^{\mathrm{b}}\), see (98), where \(\mathbf{R}_{\mathrm{c}}\) is the absolute rotation matrix of \(\mathcal{F}_{\mathrm{c}}\) in \(\mathbf{C}_{\mathrm{c}}\). The hybrid momentum screw is thus \(\mathbf{\Pi}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}\mathbf{V}_{\mathrm{c}}^{\mathrm{h}}\), where the hybrid representation of the inertia matrix is
\[\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{R}_{ \mathrm{c}}}^{-T}\mathbf{M}_{\mathrm{c}}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{R}_{ \mathrm{c}}}^{-1}=\left(\begin{array}{cc}\mathbf{\Theta}_{\mathrm{c}}^{ \mathrm{h}}&\mathbf{0}\\ \mathbf{0}&m\mathbf{I}\end{array}\right),\ \ \mathbf{\Theta}_{\mathrm{c}}^{ \mathrm{h}}=\mathbf{R}_{\mathrm{c}}\mathbf{\Theta}_{\mathrm{c}}\mathbf{R}_{ \mathrm{c}}^{T}. \tag{47}\]
The hybrid momentum balance w.r.t. the COM follows from \(\dot{\mathbf{\Pi}}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}\). Using \(\dot{\mathbf{M}}_{\mathrm{c}}^{\mathrm{h}}=-\mathbf{ad}_{\boldsymbol{\omega}^{ \mathrm{s}}}^{T}\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}-\mathbf{M}_{\mathrm{c}}^{ \mathrm{h}}\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\) yields
\[\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}=\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}\dot{\mathbf{V}}_{\mathrm{c}}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\mathbf{M}_{\mathrm{c}}^{\mathrm{h}}\overset{\omega}{\mathbf{V}}_{\mathrm{c}}^{\mathrm{h}} \tag{48}\]
with \(\overset{\omega}{\mathbf{V}}_{\mathrm{c}}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s}},\mathbf{0}\right)^{T}\) denoting the angular part of the twist (notice \(-\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}^{T}=\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\)). Writing (48) separately for the angular and linear momentum balance
\[\mathbf{\Theta}_{\mathrm{c}}^{\mathrm{h}}\dot{\boldsymbol{\omega}}^{\mathrm{s}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\mathbf{\Theta}_{\mathrm{c}}^{\mathrm{h}}\boldsymbol{\omega}^{\mathrm{s}} =\mathbf{t}_{\mathrm{c}}^{\mathrm{h}} \tag{49}\] \[m\ddot{\mathbf{r}}_{\mathrm{c}} =\mathbf{f}_{\mathrm{c}}^{\mathrm{h}} \tag{50}\]
shows that the hybrid NE equations w.r.t. the COM are indeed decoupled. Here \(\mathbf{W}_{\mathrm{c}}^{\mathrm{h}}=(\mathbf{t}_{\mathrm{c}}^{\mathrm{h}}, \mathbf{f}_{\mathrm{c}}^{\mathrm{h}})^{T}\) denotes the hybrid wrench measured in the COM frame and resolved in the IFR.
Now consider the arbitrary body-fixed BFR \(\mathcal{F}_{\mathrm{b}}\) with configuration \(C=(\mathbf{R},\mathbf{r})\). The hybrid twist \(\mathbf{V}^{\mathrm{h}}=\left(\boldsymbol{\omega}^{\mathrm{s}},\dot{\mathbf{r }}\right)^{T}\) measured at this RFR is \(\mathbf{V}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}\mathbf{V}_{ \mathrm{c}}^{\mathrm{h}}\), with the displacement vector \(\mathbf{d}_{\mathrm{bc}}\) from BFR to COM resolved in the IFR. The hybrid mass matrix w.r.t. to the BFR \(\mathcal{F}_{\mathrm{b}}\) is found as
\[\mathbf{M}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-T}\mathbf{M}_{ \mathrm{c}}^{\mathrm{h}}\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}=\left( \begin{array}{cc}\mathbf{\Theta}^{\mathrm{h}}&m\widetilde{\mathbf{d}}_{ \mathrm{bc}}\\ -m\widetilde{\mathbf{d}}_{\mathrm{bc}}&m\mathbf{I}\end{array}\right),\ \ \mathbf{\Theta}^{\mathrm{h}}=\mathbf{\Theta}_{\mathrm{c}}^{\mathrm{h}}-m \widetilde{\mathbf{d}}_{\mathrm{bc}}^{2}. \tag{51}\]
The momentum balance in hybrid representation w.r.t. an arbitrary BFR
\[\dot{\mathbf{\Pi}}^{\mathrm{h}}=\mathbf{W}^{\mathrm{h}} \tag{52}\]
is found, using \(\dot{\mathbf{Ad}}_{\mathbf{d}_{\mathrm{bc}}}^{-1}=-\mathbf{ad}_{\dot{\mathbf{d}}_{\mathrm{bc}}}=\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}-\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\mathbf{Ad}_{\mathbf{d}_{\mathrm{bc}}}^{-1}\) to evaluate (52), as
\[\boxed{\mathbf{W}^{\mathrm{h}}=\mathbf{M}^{\mathrm{h}}\dot{\mathbf{V}}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}^{\mathrm{s}}}\mathbf{M}^{\mathrm{h}}\overset{\omega}{\mathbf{V}}^{\mathrm{h}}.} \tag{53}\]
Separating the angular and translational part results in
\[\mathbf{\Theta}^{\mathrm{h}}\dot{\boldsymbol{\omega}}^{\mathrm{s}}+ \widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\mathbf{\Theta}^{\mathrm{h}} \boldsymbol{\omega}^{\mathrm{s}}+m\widetilde{\mathbf{d}}_{\mathrm{bc}}\ddot{ \mathbf{r}} =\mathbf{t}^{\mathrm{h}} \tag{54}\] \[m(\ddot{\mathbf{r}}+(\dot{\widetilde{\boldsymbol{\omega}}}^{\mathrm{ s}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{s}}\widetilde{\boldsymbol{\omega}}^{ \mathrm{s}})\mathbf{d}_{\mathrm{bc}}) =\mathbf{f}^{\mathrm{h}}. \tag{55}\]
These are simpler than the body-fixed equations (42) and (43). Finally notice that \(\mathbf{f}^{\mathrm{h}}=\mathbf{f}^{\mathrm{s}}\).
### Mixed Form
The mixed twist \(\mathbf{V}^{\mathrm{m}}=\big{(}\boldsymbol{\omega}^{\mathrm{b}},\dot{\mathbf{r}}\big{)}^{T}\) consists of the body-fixed angular velocity \(\boldsymbol{\omega}^{\mathrm{b}}\), i.e. measured and resolved in the BFR \(\mathcal{F}_{\mathrm{b}}\), and the translational velocity \(\dot{\mathbf{r}}\) measured at the BFR \(\mathcal{F}_{\mathrm{b}}\) and resolved in the IFR. The NE equations for the mixed representation w.r.t. a general BFR are directly found by combining (42) and (55), with \(\ddot{\mathbf{r}}=\mathbf{R}(\dot{\mathbf{v}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\mathbf{v}^{\mathrm{b}})\),
\[\boldsymbol{\Theta}_{\mathrm{b}}\dot{\boldsymbol{\omega}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\boldsymbol{\Theta}_{\mathrm{b}}\boldsymbol{\omega}^{\mathrm{b}}+m\,{}^{\mathrm{b}}\widetilde{\mathbf{d}}_{\mathrm{bc}}\mathbf{R}^{T}\ddot{\mathbf{r}} = \mathbf{t}^{\mathrm{b}} \tag{56}\] \[m(\ddot{\mathbf{r}}+\mathbf{R}(\dot{\widetilde{\boldsymbol{\omega}}}^{\mathrm{b}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}\widetilde{\boldsymbol{\omega}}^{\mathrm{b}})\,{}^{\mathrm{b}}\mathbf{d}_{\mathrm{bc}}) = \mathbf{f}^{\mathrm{h}}. \tag{57}\]
If a COM frame is used, combining (45) and (50) yields
\[\boldsymbol{\Theta}_{c}\dot{\boldsymbol{\omega}}^{\mathrm{b}}_{ \mathrm{c}}+\widetilde{\boldsymbol{\omega}}^{\mathrm{b}}_{\mathrm{c}} \boldsymbol{\Theta}_{\mathrm{c}}\boldsymbol{\omega}^{\mathrm{b}}_{\mathrm{c}} = \mathbf{t}^{\mathrm{b}}_{\mathrm{c}} \tag{58}\] \[m\ddot{\mathbf{r}}_{\mathrm{c}} = \mathbf{f}^{\mathrm{h}}_{\mathrm{c}}.\]
### Arbitrary Representation
The NE equations of body \(i\) represented in an arbitrary frame \(\mathcal{F}_{j}\) are obtained by a frame transformation of the spatial momentum balance (36) as
\[\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\dot{\mathbf{\Pi}}^{\mathrm{s}}_{i}=\mathbf{ Ad}^{T}_{\mathbf{C}_{j}}\mathbf{W}^{\mathrm{s}}_{i}. \tag{59}\]
The spatial twist in terms of the twist of body \(i\) represented in \(\mathcal{F}_{j}\) is \(\mathbf{V}^{\mathrm{s}}_{i}=\mathbf{Ad}_{\mathbf{C}_{j}}{}^{j}\mathbf{V}_{i}\). Using \(\dot{\mathbf{V}}^{\mathrm{s}}_{i}=\mathbf{Ad}_{\mathbf{C}_{j}}\big{(}{}^{j}\dot{\mathbf{V}}_{i}+\mathbf{ad}_{{}^{j}\mathbf{V}_{j}}{}^{j}\mathbf{V}_{i}\big{)}\), (38) yields
\[{}^{j}\mathbf{M}_{i}\big{(}{}^{j}\dot{\mathbf{V}}_{i}+\mathbf{ad}_{{}^{j}\mathbf{V}_{j}}{}^{j}\mathbf{V}_{i}\big{)}-\mathbf{ad}^{T}_{{}^{j}\mathbf{V}_{i}}{}^{j}\mathbf{M}_{i}{}^{j}\mathbf{V}_{i}={}^{j}\mathbf{W}_{i} \tag{60}\]
with the inertia matrix of body \(i\) represented in frame \(j\)
\[{}^{j}\mathbf{M}_{i}:=\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\mathbf{M}^{\mathrm{s} }_{i}\mathbf{Ad}_{\mathbf{C}_{j}}. \tag{61}\]
The spatial and body-fixed representations are special cases with \(i=j\).
Even more generally, the NE equations can be resolved in yet another frame \(\mathcal{F}_{k}\). This is achieved by transforming the momentum balance (36) as
\[\mathbf{Ad}^{T}_{\mathbf{R}_{j,k}}\mathbf{Ad}^{T}_{\mathbf{C}_{j}}\dot{\mathbf{ \Pi}}^{\mathrm{s}}_{i}=\mathbf{Ad}^{T}_{\mathbf{R}_{j,k}}\mathbf{Ad}^{T}_{ \mathbf{C}_{j}}\mathbf{W}^{\mathrm{s}}_{i} \tag{62}\]
where \(\mathbf{R}_{k,j}\) is the rotation matrix from \(\mathcal{F}_{j}\) to \(\mathcal{F}_{k}\). The final equations follow from (60) and the relation \({}^{j}\dot{\mathbf{V}}^{j}_{i}=\mathbf{Ad}_{\mathbf{R}_{j,k}}{}^{k}\dot{\mathbf{V}}^{j}_{i}+\mathbf{ad}_{{}^{j}\widetilde{\mathbf{V}}^{j}_{k}}\mathbf{Ad}_{\mathbf{R}_{j,k}}{}^{k}\mathbf{V}^{j}_{i}\) as
\[{}^{k}\mathbf{M}^{j}_{i}\big{(}{}^{k}\dot{\mathbf{V}}^{j}_{i}+\big{(}\mathbf{ad}_{{}^{k}\mathbf{V}^{j}_{j}}+\mathbf{ad}_{{}^{k}\widetilde{\mathbf{V}}^{j}_{k}}\big{)}{}^{k}\mathbf{V}^{j}_{i}\big{)}-\mathbf{ad}^{T}_{{}^{k}\mathbf{V}^{j}_{i}}{}^{k}\mathbf{M}^{j}_{i}{}^{k}\mathbf{V}^{j}_{i}={}^{k}\mathbf{W}^{j}_{i} \tag{63}\]
with the mass matrix of body \(i\) measured at frame \(\mathcal{F}_{j}\) and resolved in frame \(\mathcal{F}_{k}\)
\[{}^{k}\mathbf{M}^{j}_{i}:=\mathbf{Ad}^{T}_{\mathbf{R}_{k,j}}{}^{j}\mathbf{M}_{ i}\mathbf{Ad}_{\mathbf{R}_{k,j}}=\mathbf{Ad}^{T}_{\mathbf{R}_{k,j}}\mathbf{Ad}^{T}_{ \mathbf{C}_{j}}\mathbf{M}^{\mathrm{s}}_{i}\mathbf{Ad}_{\mathbf{C}_{j}}\mathbf{ Ad}_{\mathbf{R}_{k,j}}. \tag{64}\]
The spatial and body-fixed representations are special cases with \(i=j=k\), and the hybrid representation with \(i=j\) and \(k=0\). An alternative form of the NE equations in arbitrary reference frames was presented in [9].
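The transformations (61) and (64) are plain congruence transformations of the spatial inertia and can be coded generically; a brief sketch (conventions and names are assumptions of the sketch):

```python
import numpy as np

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def Ad(R, r):
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, :3] = tilde(r) @ R; A[3:, 3:] = R
    return A

def inertia_in_frame(Ms_i, C_j, R_kj):
    # (61): ^jM_i = Ad_{C_j}^T Ms_i Ad_{C_j}
    # (64): ^kM_i^j = Ad_{R_kj}^T ^jM_i Ad_{R_kj} (pure rotation: resolving frame only)
    R, r = C_j
    AdC = Ad(R, r)
    AdR = Ad(R_kj, np.zeros(3))
    Mj = AdC.T @ Ms_i @ AdC
    return AdR.T @ Mj @ AdR
```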
## 4 Recursive Evaluation of the Motion Equations for a Kinematic Chain
The model-based control of complex MBS as well as computational MBS dynamics rely on efficient recursive inverse and forward dynamics algorithms. A recursive Newton-Euler method for tree-topology MBS was presented in an abstract, i.e. coordinate-free, approach in [47]. The recursive formulations in different twist representations give rise to algorithmically equivalent methods, but with different computational costs. In the following, the various inverse dynamics algorithms are presented and their computational effort is estimated. A detailed analysis as well as the forward dynamics algorithms are beyond the scope of this paper. The presented discussion is nevertheless indicative also for the corresponding forward dynamics algorithms. Some results on the forward kinematics complexity can be found in [67; 81; 87]. This depends on the actual implementation, however. A comparative study is still due, and shall be part of further research. The inverse dynamics consists in evaluating the motion equations for given joint coordinates \(\mathbf{q}\), joint rates \(\dot{\mathbf{q}}\), accelerations \(\ddot{\mathbf{q}}\), and applied wrenches \(\mathbf{W}_{i}^{\mathrm{app}}\), in order to determine the joint forces \(\mathbf{Q}=(Q_{1},\ldots,Q_{n})\). The starting point of recursive algorithms for rigid body MBS are the NE equations of the individual bodies. The MBS dynamics is indeed governed by the Lagrange equations. Consequently, summarizing the recursive steps yields the Lagrangian motion equations in closed form. This will be shown in the following.
It is assumed for simplicity that the inertia properties, i.e. the mass matrices \(\mathbf{M}_{i}^{\mathrm{b}}\), are expressed in the body-fixed BFR of body \(i\) determining its configuration, rather than introducing a second frame.
### Body-fixed Representation
Forward Kinematics Recursion. Given the joint variables \(\mathbf{q}\), the configurations of the \(n\) bodies are determined recursively by (92) or (93), and the twists by (108). Then also the accelerations are found recursively. The expression \(\mathbf{C}_{i-1,i}\left(q_{i}\right)=\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})\) for the relative configuration yields \(\dot{\mathbf{Ad}}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}=[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]\), and hence
\[\dot{\mathbf{V}}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i} \tag{65a}\] \[=\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}},\mathbf{V}_{i}^{\mathrm{b}}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i} \tag{65b}\] \[=\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{V}_{i}^{\mathrm{b}},{}^{i}\mathbf{X}_{i}\dot{q}_{i}]+{}^{i}\mathbf{X}_{i}\ddot{q}_{i}. \tag{65c}\]
where (65b) and (65c) follow by replacing either argument in the Lie bracket using (108).
Remark 3: Notice that solving (108) for \(\dot{q}_{i}\) leads to the result in remark 9 of [62]. Solving (65c) for \(\ddot{q}_{i}\) yields (11). Using (65b) the latter can be expressed as \(\ddot{q}_{i}={}^{i}\mathbf{X}_{i}^{T}\left(\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}+[\mathbf{V}_{i}^{\mathrm{b}},\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}]\right)/\left\|{}^{i}\mathbf{X}_{i}\right\|^{2}\).
Recursive Newton-Euler Algorithm. Once the configurations, twists, and accelerations of the bodies are computed with the forward kinematics recursion, the Newton-Euler equations (41) for each individual body can be evaluated by an inverse dynamics backward recursion. The momentum balance of body \(i\) then yields the resulting body-fixed wrench \(\mathbf{W}_{i}^{\mathrm{b}}\) acting on the body due to generalized joint forces and constraint reaction forces. Projecting the resultant wrench onto the screw axis \({}^{i}\mathbf{X}_{i}\) of joint \(i\) yields the generalized force \(Q_{i}\). Summarizing the forward and backward recursions yields the following recursive algorithm:
Forward Kinematics
* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})=\exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (66a) \[\mathbf{V}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{b}}+{}^{i}\mathbf{X}_{i}\dot{q}_{i}\] (66b) \[\dot{\mathbf{V}}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{b}}-\dot{q}_{i}\mathbf{ad}_{{}^{i}\mathbf{X}_{i}}\mathbf{V}_{i}^{\mathrm{b}}+{}^{i}\mathbf{X}_{i}\ddot{q}_{i}\] (66c)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{b}},\dot{\mathbf{V}}_{i}^{\mathrm{b}}\)
Inverse Dynamics
* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{b}},\dot{\mathbf{V}}_{i}^{\mathrm{b}}, \mathbf{W}_{i}^{\mathrm{b,app}}\)
* For \(i=n,\ldots,1\), with \(\mathbf{W}_{n+1}^{\mathrm{b}}:=\mathbf{0}\), \[\mathbf{W}_{i}^{\mathrm{b}} =\mathbf{Ad}_{\mathbf{C}_{i+1,i}}^{T}\mathbf{W}_{i+1}^{\mathrm{b}}+\mathbf{M}_{i}^{\mathrm{b}}\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{b}}}^{T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{V}_{i}^{\mathrm{b}}+\mathbf{W}_{i}^{\mathrm{b,app}}\] (67a) \[Q_{i} ={}^{i}\mathbf{X}_{i}^{T}\mathbf{W}_{i}^{\mathrm{b}}\] (67b)
* Output: \(\mathbf{Q}\)
The joint reaction wrench is omitted in (67a) since this is reciprocal to the joint screw, and does not contribute to (67b). Notice that, with (94), the body-fixed \({}^{i}\mathbf{X}_{i}\) as well as the spatial representation \(\mathbf{Y}_{i}\) of joint screw coordinates can be used. This form of the recursive body-fixed NE equations, using Lie group notation, has been reported in several publications [69; 73; 72; 51].
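In code, the two recursions fit in a few dozen lines. The following NumPy sketch implements (66)-(67) for a serial chain of 1-DOF joints; the joint data layout (offsets \(\mathbf{B}_{i}\) as \((\mathbf{R},\mathbf{r})\) pairs, unit joint screws \({}^{i}\mathbf{X}_{i}\)) and all names are assumptions of the sketch, not the paper's interface. Gravity can be emulated in the usual way by biasing the base acceleration \(\dot{\mathbf{V}}_{0}^{\mathrm{b}}\).

```python
import numpy as np

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = tilde(X[:3]); A[3:, :3] = tilde(X[3:]); A[3:, 3:] = tilde(X[:3])
    return A

def Ad(R, r):
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, :3] = tilde(r) @ R; A[3:, 3:] = R
    return A

def exp_se3(X, q):
    # SE(3) exponential of a unit screw X = (omega, v) scaled by q
    w, v = X[:3], X[3:]
    wt = tilde(w)
    if np.linalg.norm(w) < 1e-12:                 # prismatic joint
        return np.eye(3), q*v
    R = np.eye(3) + np.sin(q)*wt + (1.0 - np.cos(q))*wt @ wt
    G = q*np.eye(3) + (1.0 - np.cos(q))*wt + (q - np.sin(q))*wt @ wt
    return R, G @ v

def rne_body_fixed(q, qd, qdd, B, X, M, W_app, Vd0):
    # Recursive Newton-Euler inverse dynamics in body-fixed representation, (66)-(67)
    n = len(q)
    AdC, V, Vd = [], [], []
    V_prev, Vd_prev = np.zeros(6), Vd0
    for i in range(n):
        Rj, rj = exp_se3(X[i], q[i])
        Rb, rb = B[i]
        A_rel = np.linalg.inv(Ad(Rb @ Rj, Rb @ rj + rb))             # Ad_{C_{i,i-1}}
        Vi = A_rel @ V_prev + X[i]*qd[i]                             # (66b)
        Vdi = A_rel @ Vd_prev - qd[i]*(ad(X[i]) @ Vi) + X[i]*qdd[i]  # (66c)
        AdC.append(A_rel); V.append(Vi); Vd.append(Vdi)
        V_prev, Vd_prev = Vi, Vdi
    Q, W_next = np.zeros(n), np.zeros(6)
    for i in reversed(range(n)):                                     # (67a), (67b)
        Wi = M[i] @ Vd[i] - ad(V[i]).T @ M[i] @ V[i] + W_app[i]
        if i + 1 < n:
            Wi = Wi + AdC[i+1].T @ W_next
        Q[i] = float(X[i] @ Wi)
        W_next = Wi
    return Q
```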
Computational Effort. For the kinematic chain comprising \(n\) bodies connected by \(n\) 1-DOF joints, the twist recursion (66b) and the acceleration recursion (66c) each require \(n-1\) frame transformations. The acceleration recursion (66c) further requires \(n-1\) Lie brackets. The second argument of the Lie bracket can be reused from (66b). Hence the twist and acceleration recursions need \(2\left(n-1\right)\) frame transformations and \(n-1\) Lie brackets. The backward recursion (67a) needs \(n-1\) frame transformations and \(n\) Lie brackets. In total, the NE algorithm needs \(3\left(n-1\right)\) frame transformations and \(2n-1\) Lie brackets. The evaluation of the Lie bracket in (66c) can be simplified using (65b), since the screw vector \({}^{i}\mathbf{X}_{i}\) expressed in the RFR is sparse and often contains only one non-zero entry.
Remark on Forward Dynamics. Using the body-fixed representation, a recursive forward dynamics algorithm, making explicit use of Lie group concepts, was presented in [69; 70; 73; 72; 78]. The kinematic forward recursion together with the factorization in section 3.1.3 of [62] was used to derive \(O\left(n\right)\) forward dynamics algorithms in [31; 45], where the Lie group concept is regarded as spatial operator algebra. Other \(O\left(n\right)\) forward dynamics algorithms were presented in [2; 7; 35; 36]. The inverse dynamics formulation was also presented in [30; 46] in the context of screw theory.
### Spatial Representation
Forward Kinematics. Expressing the spatial twist recursively in terms of the spatial Jacobian, the expressions (104) lead immediately to
\[\mathbf{V}_{i}^{\mathrm{s}}=\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{J}_{i}^{ \mathrm{s}}\dot{q}_{i}. \tag{68}\]
The recursive determination of spatial accelerations thus only requires the time derivative (16) of the spatial Jacobian, so that
\[\dot{\mathbf{V}}_{i}^{\mathrm{s}} =\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{ad}_{\mathbf{V}_{i}^ {\mathrm{s}}}\mathbf{J}_{i}^{\mathrm{s}}\dot{q}_{i}+\mathbf{J}_{i}^{\mathrm{s} }\ddot{q}_{i} \tag{69}\] \[=\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{ad}_{\mathbf{V}_{i-1}^ {\mathrm{s}}}\mathbf{V}_{i}^{\mathrm{s}}+\mathbf{J}_{i}^{\mathrm{s}}\ddot{q}_ {i}.\]
The second form in (69) follows by inserting (68). This is the generalization of Euler's theorem, for the derivative of vectors resolved in moving frames, to screw coordinate vectors. Therefore the \(\mathbf{ad}\) operator is occasionally called the 'spatial cross product'.
Recursive Newton-Euler Algorithm. The momentum balance expressed with the spatial NE equations (38) together with (68) leads to the following algorithm:
Forward Kinematics

* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})=\exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (70a) \[\mathbf{J}_{i}^{\mathrm{s}} =\mathbf{Ad}_{\mathbf{C}_{i}}{}^{i}\mathbf{X}_{i}=\mathbf{Ad}_{\mathbf{C}_{i}\mathbf{A}_{i}^{-1}}\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{C}_{i}\mathbf{S}_{i,i}}{}^{i-1}\mathbf{Z}_{i}\] (70b) \[\mathbf{V}_{i}^{\mathrm{s}} =\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{J}_{i}^{\mathrm{s}}\dot{q}_{i}\] (70c) \[\dot{\mathbf{V}}_{i}^{\mathrm{s}} =\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{J}_{i}^{\mathrm{s}}\ddot{q}_{i}+\mathbf{ad}_{\mathbf{V}_{i-1}^{\mathrm{s}}}\mathbf{V}_{i}^{\mathrm{s}}\] (70d)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{s}},\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{J}_{i}^{\mathrm{s}}\)

Inverse Dynamics

* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\mathrm{s}},\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{J}_{i}^{\mathrm{s}},\mathbf{W}_{i}^{\mathrm{s,app}}\)
* For \(i=n,\ldots,1\), with \(\mathbf{W}_{n+1}^{\mathrm{s}}:=\mathbf{0}\), \[\mathbf{M}_{i}^{\mathrm{s}} =\mathbf{Ad}_{\mathbf{C}_{i}}^{-T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{C}_{i}}^{-1}\] (71a) \[\mathbf{W}_{i}^{\mathrm{s}} =\mathbf{W}_{i+1}^{\mathrm{s}}+\mathbf{M}_{i}^{\mathrm{s}}\dot{\mathbf{V}}_{i}^{\mathrm{s}}-\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{T}\mathbf{M}_{i}^{\mathrm{s}}\mathbf{V}_{i}^{\mathrm{s}}+\mathbf{W}_{i}^{\mathrm{s,app}}\] (71b) \[Q_{i} =(\mathbf{J}_{i}^{\mathrm{s}})^{T}\mathbf{W}_{i}^{\mathrm{s}}\] (71c)
* Output: \(\mathbf{Q}\)

Computational Effort. In contrast to (66b), once the instantaneous joint screws (70b) and the spatial mass matrices (34) are computed, the recursions (70c), (70d), and (71b) do not require frame transformations of twists. Instead the spatial mass matrix is transformed according to (71a), which is the frame transformation of a second-order tensor. Overall the spatial algorithm needs \(n\) frame transformations of screw coordinates, \(n\) frame transformations of a second-order tensor, and \(2n-1\) Lie brackets. Comparing the body-fixed and spatial formulations, it must be noticed that the frame transformation of the second-order inertia tensor has the same complexity as two screw coordinate transformations (if implemented directly in the form (34)), and hence the computational complexity of both is equivalent. This is to be expected since body-fixed and spatial representations are related by frame transformations. Nevertheless the spatial version has some interesting features that shall be emphasized:
1. The NE equations (38) form a non-linear first-order ODE system on \(SE\left(3\right)\times se\left(3\right)\). Since a spatial reference is used, the momentum conservation of a rigid body can simply be written as \(\dot{\mathbf{\Pi}}_{i}^{\mathrm{s}}=\mathbf{0}\), where \(\mathbf{\Pi}_{i}^{\mathrm{s}}\in se^{*}\left(3\right)\) is the momentum co-screw. Using the spatial momentum balance (36) has potentially two advantages. Firstly, (36) is a linear ODE in \(\mathbf{\Pi}\) on the phase space \(SE\left(3\right)\times se^{*}\left(3\right)\). This implies that a numerical integration scheme can easily preserve the momentum, as pointed out in [13]. Secondly, \(O\left(n\right)\) formulations using canonical momenta have been shown to be computationally advantageous. An \(O\left(n\right)\) forward dynamics algorithm based on the canonical Hamilton equations was presented in [66] that uses the hybrid form. It was shown to require fewer numerical operations than \(O\left(n\right)\) algorithms based on the NE equations. It is also known that \(O\left(n\right)\) algorithms based on the spatial representation can be computationally more efficient than those based on body-fixed or hybrid representations [30]. A further reduction of computational costs can be expected from an algorithm using spatial momenta.
2. It is interesting to notice that the hybrid as well as the spatial twists appear in the recursive \(O\left(n\right)\) forward dynamics formulation in [6], where the former is called 'Cartesian velocity' and the latter 'velocity state'. In this formulation the spatial twist plays a central role, and it was already remarked that the recursive relation for spatial twists, ref. (70c), is simpler than that for hybrid twists (75c) below.
3. If a _purely kinematic analysis_ is envisaged the forward recursion (70b)-(70d) is more efficient than the body-fixed and the hybrid version (see next section) [67] (disregarding possibly necessary transformations of the results to local reference frames). As pointed out in section 2.2 this advantage is retained for the higher-order kinematics (jerk, jounce, etc.) [55].
Remark on Forward Dynamics. The spatial formulation is rarely used for dynamics. Featherstone [27; 30] derived a forward dynamics \(O\left(n\right)\) algorithm. It was concluded that this requires the lowest computational effort compared to other methods. But this does not take into account the necessary transformations of twists and wrenches to local reference frames. Moreover, it was shown in [81] that the \(O\left(n\right)\) forward dynamics algorithm in body-fixed representation, using the body-fixed joint screw coordinates \({}^{i}\mathbf{X}_{i}\) and a RFR at the joint axis, can be implemented in such a way that it requires less computational effort than the spatial version. The key is that when the BFR \(\mathcal{F}_{i}\) is located at and aligned with the axis of joint \(i\), then \({}^{i}\mathbf{X}_{i}\) becomes sparse. From a user's perspective this is a restrictive assumption, however.
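The distinctive feature of the spatial algorithm, namely that no twist transformations occur in the recursions, is visible in the following sketch of (70c)-(71c). The configurations \(\mathbf{C}_{i}\) and spatial Jacobian columns \(\mathbf{J}_{i}^{\mathrm{s}}\) are assumed to be available from the forward kinematics; names and conventions are assumptions of the sketch:

```python
import numpy as np

def tilde(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def ad(X):
    A = np.zeros((6, 6))
    A[:3, :3] = tilde(X[:3]); A[3:, :3] = tilde(X[3:]); A[3:, 3:] = tilde(X[:3])
    return A

def Ad(R, r):
    A = np.zeros((6, 6))
    A[:3, :3] = R; A[3:, :3] = tilde(r) @ R; A[3:, 3:] = R
    return A

def rne_spatial(qd, qdd, C, Js, Mb, W_app):
    # C[i] = (R_i, r_i): absolute configurations; Js[i] = J_i^s; Mb[i] = M_i^b
    n = len(qd)
    V, Vd = [], []
    V_prev, Vd_prev = np.zeros(6), np.zeros(6)
    for i in range(n):
        Vi = V_prev + Js[i]*qd[i]                          # (70c)
        Vdi = Vd_prev + Js[i]*qdd[i] + ad(V_prev) @ Vi     # (70d)
        V.append(Vi); Vd.append(Vdi)
        V_prev, Vd_prev = Vi, Vdi
    Q, W_next = np.zeros(n), np.zeros(6)
    for i in reversed(range(n)):
        Ai = np.linalg.inv(Ad(*C[i]))
        Ms = Ai.T @ Mb[i] @ Ai                             # (71a), i.e. (34)
        Wi = W_next + Ms @ Vd[i] - ad(V[i]).T @ Ms @ V[i] + W_app[i]  # (71b)
        Q[i] = float(Js[i] @ Wi)                           # (71c)
        W_next = Wi
    return Q
```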
### Hybrid Form
Forward Kinematics Recursion. The hybrid twist is determined recursively by (110) with \({}^{0}\mathbf{X}_{i}^{i}=\mathbf{Ad}_{\mathbf{R}_{i}}{}^{i}\mathbf{X}_{i}\). For the acceleration recursion note that \(\dot{\mathbf{Ad}}_{\mathbf{r}_{i,i-1}}=\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}=\mathbf{ad}_{\dot{\mathbf{r}}_{i-1}}-\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\) since \(\dot{\mathbf{r}}_{i,i-1}=\dot{\mathbf{r}}_{i-1}-\dot{\mathbf{r}}_{i}\). This yields
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad_{r}}_{i,i-1}\dot{\mathbf{V}}_{i-1}^ {\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad_{\dot{r}_{i,i-1}}} \mathbf{V}_{i-1}^{\text{h}}+\mathbf{ad_{\omega_{i}}}{}^{0}\mathbf{X}_{i}^{i} \dot{q}_{i}. \tag{72}\]
Taking into account that \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\left(\mathbf{V}_{i}^{\mathrm{h}}-{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}\right)=\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\mathbf{V}_{i-1}^{\mathrm{h}}\) (because there is no angular part in \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}\)), and \(\mathbf{ad}_{\dot{\mathbf{r}}_{i}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\mathrm{s}}}=\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{h}}}\), this can be transformed to
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad_{r}}_{i,i-1}\dot{\mathbf{V}}_{i-1} ^{\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad_{\dot{r}_{i-1}}} \mathbf{V}_{i-1}^{\text{h}}-\mathbf{ad_{\dot{r}_{i}}}\mathbf{V}_{i}^{\text{h} }+\mathbf{ad_{\mathbf{V}_{i}^{\text{h}}}}{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}. \tag{73}\]
Another form follows by solving (110) for \(\mathbf{V}_{i-1}^{\text{h}}\) and inserting this into (72), while noting that \(\mathbf{ad_{\dot{r}_{i,i-1}}}\mathbf{Ad_{r}}_{i,i-1}^{-1}=\mathbf{ad_{\dot{r}_{ i,i-1}}}\), as
\[\dot{\mathbf{V}}_{i}^{\text{h}}=\mathbf{Ad_{r}}_{i,i-1}\dot{\mathbf{V}}_{i-1}^ {\text{h}}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}+\mathbf{ad_{\dot{r}_{i,i-1}}} \mathbf{V}_{i}^{\text{h}}+(\mathbf{ad_{\dot{V}_{i}^{\text{h}}}}-\mathbf{ad_{ \dot{r}_{i-1}}}){}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}. \tag{74}\]
Comparing these three recursive relations (72), (73), and (74) for the hybrid acceleration from a computational perspective, (72) is the most efficient.
Recursive Newton-Euler Algorithm. With the hybrid Newton-Euler equations (53) the recursive NE algorithm is as follows:
Forward Kinematics

* Input: \(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\)
* For \(i=1,\ldots,n\) \[\mathbf{C}_{i} =\mathbf{C}_{i-1}\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i})=\exp(\mathbf{Y}_{1}q_{1})\cdot\ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i}\] (75a) \[{}^{0}\mathbf{X}_{i}^{i} =\mathbf{Ad}_{\mathbf{R}_{i}}{}^{i}\mathbf{X}_{i}=\mathbf{Ad}_{-\mathbf{r}_{i}}\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{Ad}_{\mathbf{S}_{i,i}}{}^{i-1}\mathbf{Z}_{i}\] (75b) \[\mathbf{V}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{h}}+{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}\] (75c) \[\dot{\mathbf{V}}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\dot{\mathbf{V}}_{i-1}^{\mathrm{h}}+\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\mathrm{s}}}{}^{0}\mathbf{X}_{i}^{i}\dot{q}_{i}+{}^{0}\mathbf{X}_{i}^{i}\ddot{q}_{i}\] (75d)
* Output: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{h}},\dot{\mathbf{V}}_{i}^{\text{h}},{}^{0} \mathbf{X}_{i}^{i}\)
Inverse Dynamics
* Input: \(\mathbf{C}_{i},\mathbf{V}_{i}^{\text{h}},\dot{\mathbf{V}}_{i}^{\text{h}},{}^{0} \mathbf{X}_{i}^{i},\mathbf{W}_{i}^{\text{h,app}}\)
* For \(i=n,\ldots,1\), with \(\mathbf{W}_{n+1}^{\mathrm{h}}:=\mathbf{0}\), \[\mathbf{M}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{R}_{i}}^{-T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{R}_{i}}^{-1}\] (76a) \[\mathbf{W}_{i}^{\mathrm{h}} =\mathbf{Ad}_{\mathbf{d}_{i+1,i}}^{T}\mathbf{W}_{i+1}^{\mathrm{h}}+\mathbf{M}_{i}^{\mathrm{h}}\dot{\mathbf{V}}_{i}^{\mathrm{h}}+\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\mathrm{s}}}\mathbf{M}_{i}^{\mathrm{h}}\overset{\omega}{\mathbf{V}}_{i}^{\mathrm{h}}+\mathbf{W}_{i}^{\mathrm{h,app}}\] (76b) \[Q_{i} =({}^{0}\mathbf{X}_{i}^{i})^{T}\mathbf{W}_{i}^{\mathrm{h}}\] (76c)
* Output: \(\mathbf{Q}\)
Computational Effort. The hybrid representation is a compromise between using twists and wrenches measured in body-fixed frames (as for the body-fixed representation, where twists and wrenches are measured at the RFR origin) and those resolved in the IFR (as for the spatial representation, where twists and wrenches are measured at the IFR origin). It has therefore been used extensively for \(O\left(n\right)\) inverse and forward dynamics algorithms. The essential difference between the forward recursions for kinematic evaluation in body-fixed and hybrid formulation is that the body-fixed recursion (66a)-(66c) requires frame transformations of screws involving rotations and translations, whereas the hybrid recursion (75a)-(75d) only requires the change of reference point using position vectors resolved in the IFR. The attitude transformation only appears in (75b) and in the computation of the hybrid inertia matrix (76a). In total the forward kinematics needs \(n\) rotational transformations and \(2n-2\) translational transformations. Further, (75d) needs \(n-1\) cross products of the form \(\mathbf{ad}_{\dot{\mathbf{r}}_{i,i-1}}\mathbf{V}_{i-1}^{\mathrm{h}}=\left(\mathbf{0},(\dot{\mathbf{r}}_{i-1}-\dot{\mathbf{r}}_{i})\times\boldsymbol{\omega}_{i-1}^{\mathrm{s}}\right)^{T}\) and \(n\) Lie brackets \(\mathbf{ad}_{\boldsymbol{\omega}_{i}^{\mathrm{s}}}{}^{0}\mathbf{X}_{i}^{i}\). The inverse dynamics needs the \(n\) rotational transformations (76a) of the second-order inertia tensor, \(n-1\) translational transformations of wrenches, and \(n\) Lie brackets with \(\boldsymbol{\omega}_{i}^{\mathrm{s}}\) in (76b). In total the hybrid NE algorithm needs \(3n-3\) translational and \(n\) rotational transformations of screw coordinates, \(n\) rotational transformations of the inertia tensor, and \(3n-1\) Lie brackets. Although the number of operations is equivalent to the body-fixed version, the particular form of the transformations is computationally very simple, motivating its extensive use in \(O\left(n\right)\) forward dynamics algorithms. Moreover, the hybrid NE equations are commonly expressed in a body-fixed BFR at the COM, so that they simplify to (48) and (49), (50), respectively.
Instead of transforming the joint screws \({}^{i}\mathbf{X}_{i}\) or \(\mathbf{Y}_{i}\) in the reference configuration, the instantaneous hybrid joint screws can be determined using the defining expression (36) in [62] with the current \(\mathbf{b}_{j,i}\) and \(\mathbf{e}_{j}\).
Remark on Forward Dynamics. The above inverse dynamics formulation was presented in [38; 39; 75; 76] together with \(O\left(n\right)\) forward dynamics algorithms. An \(O\left(n\right)\) forward dynamics method was presented in [1; 6]. These algorithms are deemed efficient taking into account that the computation results do not have to be transformed to the body-fixed reference points of interest, as in the case of the spatial version. An \(O\left(n\right)\) forward dynamics algorithm was developed in [66] using the canonical Hamilton equations in hybrid representation, i.e. the momentum balance (52) in terms of the conjugate momenta \(\mathbf{\Pi}_{i}^{\mathrm{h}}\), rather than the NE equations. It was concluded that its performance is comparable to that of Featherstone's method [30] in terms of spatial twists.
### Choice of Body-Fixed Reference Frames
The Lie group formulation involves geometric and inertia properties that are readily available, e.g. from CAD data.
In [62] and in the preceding sections, two approaches to the description of the MBS geometry (with and without body-fixed joint frames) and three versions for representing velocities and accelerations (body-fixed, spatial, hybrid) were presented, each of which has its merits. The description of the geometry is independent of the representation of twists. For instance, the geometry could be described in terms of joint screws expressed in the IFR while the kinematics and dynamics are modeled using body-fixed twists. This makes it possible to take advantage of the low-complexity hybrid or spatial recursive NE equations while still having the freedom to use or avoid body-fixed joint frames.
The standard approach to model an MBS is to introduce 1.) an IFR, 2.) body-fixed BFRs, and 3.) body-fixed JFRs. The latter is avoided using spatial joint screws \(\mathbf{Y}_{i}\), as already presented. It still remains to introduce body-fixed BFRs kinematically representing the bodies. However, even the _explicit_ definition of RFRs can be avoided by properly placing them. Their location is usually dictated by the definition of the inertia tensors, and it is customary to relate the inertia data to the COM. If instead the body-fixed BFRs are assigned such that they coincide in the reference configuration (\(\mathbf{q}=\mathbf{0}\)) with the IFR, then no reference configurations of bodies need to be determined (\(\mathbf{A}_{i}=\mathbf{I}\)). This normally means that the RFR is outside the physical extension of a body. That is, the inertia properties of all bodies are determined in the assembly reference configuration w.r.t. the global IFR. In other words, they are deduced from the design drawing (corresponding to \(\mathbf{q}=\mathbf{0}\)) relative to a single construction frame. This can be exploited when using CAD systems. The required kinematic data then reduces to the direction and position vectors, \(\mathbf{e}_{i}\) and \(\mathbf{y}_{i}\), needed to compute \(\mathbf{Y}_{i}\) in (94). As a side effect, \(\mathbf{Y}_{i}={}^{i}\mathbf{X}_{i}\). This is an important result, and it applies to any of the discussed twist representations, since the representation of twists has nothing to do with the geometry description. Moreover, the POE (93) and the Jacobians (103), (104), and thus (106), in terms of spatial screw coordinates then simplify. The only computational drawback is that the hybrid Newton and Euler equations are not decoupled, since the spatial IFR, to which the inertia data is related, is unlikely to coincide with the COM of the bodies in the reference configuration. Details can be found in [54].
## 5 Motion Equations in Closed Form
### Euler-Jourdain Equations
The body-fixed NE equations for the individual bodies within the MBS are
\[\mathbf{M}_{i}^{\mathrm{b}}\dot{\mathbf{V}}_{i}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{b}}}^{T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{V}_{i}^{\mathrm{b}}-\mathbf{W}_{i}^{\mathrm{b,app}}-\mathbf{W}_{i}^{\mathrm{b,c}}=\mathbf{0} \tag{77}\]
where \(\mathbf{W}_{i}^{\mathrm{b,c}}\) is the constraint reaction wrench of joint \(i\), and \(\mathbf{W}_{i}^{\mathrm{b,app}}\) represents the total wrench applied to body \(i\) including the applied wrench in joint \(i\). Jourdain's principle of virtual power, using the admissible variation \(\delta\mathbf{V}^{\mathrm{b}}=\mathrm{J}^{\mathrm{b}}\delta\dot{\mathbf{q}}\) of the system twist, and noting that \(\delta\mathbf{V}^{\mathrm{b}}\) are reciprocal to the constraint wrenches (see appendix B.2), yields the system of \(n\) motion equations
\[\left(\mathsf{J}^{\mathrm{b}}\right)^{T}\left(\begin{array}{c}\mathbf{M}_{1}^{\mathrm{b}}\dot{\mathbf{V}}_{1}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{1}^{\mathrm{b}}}^{T}\mathbf{M}_{1}^{\mathrm{b}}\mathbf{V}_{1}^{\mathrm{b}}-\mathbf{W}_{1}^{\mathrm{b,app}}\\ \vdots\\ \mathbf{M}_{n}^{\mathrm{b}}\dot{\mathbf{V}}_{n}^{\mathrm{b}}-\mathbf{ad}_{\mathbf{V}_{n}^{\mathrm{b}}}^{T}\mathbf{M}_{n}^{\mathrm{b}}\mathbf{V}_{n}^{\mathrm{b}}-\mathbf{W}_{n}^{\mathrm{b,app}}\end{array}\right)=\mathbf{0}. \tag{78}\]
This form allows for a concise and computationally efficient construction of the motion equations. The point of departure is the set of NE equations of the individual bodies. The body-fixed system Jacobian (112) is determined by the (constant) joint screw coordinates in \(\mathsf{X}^{\mathrm{b}}\) and the screw transformations encoded in \(\mathsf{A}^{\mathrm{b}}\). The same applies to the other representations. The accelerations are determined by (6), respectively (10). Explicit evaluation of (78) leads to the recursive algorithm in section 4.1. Inserting the twists and accelerations in (78) yields the equations (1) that determine the MBS dynamics on the tangent bundle \(T\mathbb{V}^{n}\) with state vector \(\left(\mathbf{q},\dot{\mathbf{q}}\right)\in T\mathbb{V}^{n}\). Alternatively, combining (78) with (111) yields a system of \(n+6n\) ODEs in the state variables \(\left(\mathbf{q},\mathbf{V}^{\mathrm{b}}\right)\in\mathbb{V}^{n}\times se\left(3\right)^{n}\) that govern the dynamics on the state space \(\mathbb{V}^{n}\times se\left(3\right)^{n}\). The advantage of this formulation is that it is a first-order ODE system and that the system has block triangular structure. Yet another interesting formulation follows with the NE equations (36) in terms of the
conjugate momenta in spatial representation
\[\left(\mathsf{J}^{\mathrm{s}}\right)^{T}\left(\begin{array}{c}\dot{\mathbf{\Pi}}_{1}^{\mathrm{s}}-\mathbf{W}_{1}^{\mathrm{s,app}}\\ \vdots\\ \dot{\mathbf{\Pi}}_{n}^{\mathrm{s}}-\mathbf{W}_{n}^{\mathrm{s,app}}\end{array}\right)=\mathbf{0} \tag{79}\]
This is a system of \(n+6n\) first-order ODEs in the phase space \(\left(\mathbf{q},\mathbf{\Pi}^{\mathrm{s}}\right)\in\mathbb{V}^{n}\times se^{*}\left(3\right)^{n}\). The system (79) can be solved for \(\dot{\mathbf{\Pi}}_{i}^{\mathrm{s}}\) and \(\dot{q}_{i}\) noting the block triangular structure of \(\mathsf{J}^{\mathrm{s}}\) [62]. From a numerical point of view the momentum formulation in phase space should allow for momentum-preserving integration schemes.
Various versions of (78) have been published. Using the hybrid representation of twists, basically the same equations were reported in [4]. There the system Jacobian is called the 'natural orthogonal complement', motivated by the fact that the columns of \(\mathsf{J}^{\mathrm{b}}\) are orthogonal to the vectorial representations of constraint wrenches (although the former are screws while the latter are co-screws). In classical vector notation they were reported in [40; 49] and [15]. In [40] the equations (78) are called Euler-Jourdain equations. In [49], emphasizing the recursive evaluation of the body Jacobian, the instantaneous body-fixed joint screws \(\mathbf{J}_{i}^{\mathrm{b}}\) are called 'kinematic basic functions' as they are the intrinsic objects in MBS kinematics. In [15] the equations (78) are called 'projection equations' since the NE equations of the individual bodies are restricted to the feasible motion (although \(\mathsf{J}^{\mathrm{b}}\) is not a projector). The equations (78) in body-fixed representation are equivalent to Kane's equations, where the \(\mathbf{J}_{i}^{\mathrm{b}}\) are called 'partial velocities' [41]. The instantaneous joint screw coordinates, i.e. the columns \(\mathbf{J}_{i}^{\mathrm{b}}\) of the geometric Jacobian, were also called 'kinematic influence coefficients' and their partial derivatives (5) the 'second-order kinematic influence coefficients' [8; 84].
It should finally be remarked that, due to the block triangular form of \(\mathsf{J}^{\mathrm{b}}\), solving (78) using the inversion of \(\mathsf{A}^{\mathrm{b}}\) (ref. (25) in [62]) leads immediately to an \(O\left(n\right)\) forward dynamics algorithm. This is the common starting point for deriving forward dynamics algorithms, and it applies to any twist representation.
### Lagrange Equations
The MBS motion equations can be derived as the Lagrange equations in terms of generalized coordinates. For simplicity, potential forces are omitted so that the Lagrangian is simply the kinetic energy. Then the equations attain the form
\[\frac{d}{dt}\left(\frac{\partial T}{\partial\dot{\mathbf{q}}}\right)^{T}-\left(\frac{\partial T}{\partial\mathbf{q}}\right)^{T}=\mathbf{M}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{C}\left(\dot{\mathbf{q}},\mathbf{q}\right)\dot{\mathbf{q}}=\mathbf{Q}\left(\dot{\mathbf{q}},\mathbf{q},t\right) \tag{80}\]
with generalized mass matrix \(\mathbf{M}\), and \(\mathbf{C}\left(\dot{\mathbf{q}},\mathbf{q}\right)\dot{\mathbf{q}}\) representing Coriolis and centrifugal forces. The vector \(\mathbf{Q}\) stands for all other generalized forces, including potential, dissipative, and applied forces. Using body-fixed twists, the kinetic energy of body \(i\) is \(T_{i}=\frac{1}{2}(\mathbf{V}_{i}^{\mathrm{b}})^{T}\mathbf{M}_{i}^{\mathrm{b}}\mathbf{V}_{i}^{\mathrm{b}}\). The kinetic energy of the MBS is \(T\left(\dot{\mathbf{q}},\mathbf{q}\right)=\sum_{i}T_{i}=\frac{1}{2}(\mathsf{V}^{\mathrm{b}})^{T}\mathsf{M}^{\mathrm{b}}\mathsf{V}^{\mathrm{b}}=\frac{1}{2}\dot{\mathbf{q}}^{T}\mathbf{M}\dot{\mathbf{q}}\) with the generalized mass matrix
\[\boxed{\;\mathbf{M}\left(\mathbf{q}\right)=(\mathsf{J}^{\mathrm{b}})^{T}\mathsf{M}^{\mathrm{b}}\mathsf{J}^{\mathrm{b}}\;} \tag{81}\]
and \(\mathsf{M}^{\mathrm{b}}:=\mathrm{diag}\left(\mathbf{M}_{1}^{\mathrm{b}},\dots,\mathbf{M}_{n}^{\mathrm{b}}\right)\). The conjugate momentum vector is thus \(\left(\frac{\partial T}{\partial\dot{\mathbf{q}}}\right)^{T}=(\mathsf{J}^{\mathrm{b}})^{T}\mathsf{M}^{\mathrm{b}}\mathsf{V}^{\mathrm{b}}\). With (10), its time derivative is \(\frac{d}{dt}\left(\frac{\partial T}{\partial\dot{\mathbf{q}}}\right)^{T}=\mathbf{M}\left(\mathbf{q}\right)\ddot{\mathbf{q}}-(\mathsf{J}^{\mathrm{b}})^{T}\left((\mathsf{M}^{\mathrm{b}}\mathsf{A}^{\mathrm{b}}\mathsf{a}^{\mathrm{b}})^{T}+\mathsf{M}^{\mathrm{b}}\mathsf{A}^{\mathrm{b}}\mathsf{a}^{\mathrm{b}}\right)\mathsf{J}^{\mathrm{b}}\dot{\mathbf{q}}\), with \(\mathsf{a}^{\mathrm{b}}\) defined in (9). From (7) follows \(\left(\frac{\partial T}{\partial\mathbf{q}}\right)^{T}=(\mathsf{M}^{\mathrm{b}}\mathsf{A}^{\mathrm{b}}\mathsf{b}^{\mathrm{b}}\mathsf{X}^{\mathrm{b}})^{T}\mathsf{J}^{\mathrm{b}}\dot{\mathbf{q}}\), with
\[\mathsf{b}^{\mathrm{b}}\left(\mathsf{V}^{\mathrm{b}}\right):=\mathrm{diag}\left(\mathbf{ad}_{\mathbf{V}_{1}^{\mathrm{b}}},\dots,\mathbf{ad}_{\mathbf{V}_{n}^{\mathrm{b}}}\right). \tag{82}\]
This allows us to identify the generalized mass matrix (81) and the matrix
\[\mathbf{C}\left(\mathbf{q},\dot{\mathbf{q}}\right) =-(\mathbf{J}^{\mathrm{b}})^{T}\left((\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}})^{T}+\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\right)\mathbf{J}^{\mathrm{b}}-(\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{b}^{\mathrm{b}}\mathbf{X}^{\mathrm{b}})^{T}\mathbf{J}^{\mathrm{b}}\] \[=-(\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}+\mathbf{b}^{\mathrm{b}}\mathbf{X}^{\mathrm{b}})^{T}(\mathbf{A}^{\mathrm{b}})^{T}\mathbf{M}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}-(\mathbf{J}^{\mathrm{b}})^{T}\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}\mathbf{J}^{\mathrm{b}}. \tag{83}\]
The first term on the right hand side in (83) can be simplified so that
\[\boxed{\;\mathbf{C}\left(\mathbf{q},\dot{\mathbf{q}}\right)=-(\mathbf{J}^{\mathrm{b}})^{T}\left(\mathbf{M}^{\mathrm{b}}\mathbf{A}^{\mathrm{b}}\mathbf{a}^{\mathrm{b}}+(\mathbf{b}^{\mathrm{b}})^{T}\mathbf{M}^{\mathrm{b}}\right)\mathbf{J}^{\mathrm{b}}\;} \tag{84}\]
The concise expressions (81) and (84) allow for construction of the Lagrange equations in closed form. Similar expressions can be derived using the spatial and hybrid representation of twists.
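To make (81) concrete, the following minimal numerical sketch evaluates the generalized mass matrix of a planar 2R chain with point masses at the link tips (all dimensions, masses, and names are illustrative assumptions, not data from the paper):

```python
import numpy as np

def mass_matrix(q, l1=1.0, l2=1.0, m1=1.0, m2=1.0):
    """Generalized mass matrix M(q) = sum_i m_i J_i(q)^T J_i(q), the
    point-mass analogue of (81) for a planar 2R chain."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    # translational Jacobians of the two point masses
    J1 = np.array([[-l1 * s1, 0.0],
                   [ l1 * c1, 0.0]])
    J2 = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                   [ l1 * c1 + l2 * c12,  l2 * c12]])
    return m1 * J1.T @ J1 + m2 * J2.T @ J2

M = mass_matrix(np.array([0.3, -0.7]))
print(np.allclose(M, M.T), np.all(np.linalg.eigvalsh(M) > 0))  # symmetric, positive definite
```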
For analytic investigations of the MBS dynamics it may be useful to write the Lagrange equations in components as
\[\sum_{j=1}^{n}M_{ij}\left(\mathbf{q}\right)\ddot{q}_{j}+\sum_{j,k=1}^{n}\Gamma _{ijk}\left(\mathbf{q}\right)\dot{q}_{j}\dot{q}_{k}=Q_{i}\left(\mathbf{q}, \dot{\mathbf{q}},t\right) \tag{85}\]
where the Christoffel symbols of the first kind are defined as \(\Gamma_{ijk}=\frac{1}{2}\left(\frac{\partial M_{ij}}{\partial q_{k}}+\frac{\partial M_{ik}}{\partial q_{j}}-\frac{\partial M_{jk}}{\partial q_{i}}\right)=\Gamma_{ikj}\). The recursive relations (5) give rise to the closed form expressions
\[\Gamma_{ijk} =\frac{1}{2}\sum_{l=k}^{n}\left((\mathbf{J}_{l,k}^{\mathrm{b}})^{T}\mathbf{M}_{l}\mathbf{ad}_{\mathbf{J}_{l,i}^{\mathrm{b}}}\mathbf{J}_{l,j}^{\mathrm{b}}+(\mathbf{J}_{l,j}^{\mathrm{b}})^{T}\mathbf{M}_{l}\mathbf{ad}_{\mathbf{J}_{l,i}^{\mathrm{b}}}\mathbf{J}_{l,k}^{\mathrm{b}}+(\mathbf{J}_{l,i}^{\mathrm{b}})^{T}\mathbf{M}_{l}\mathbf{ad}_{\mathbf{J}_{l,s}^{\mathrm{b}}}\mathbf{J}_{l,r}^{\mathrm{b}}\right) \tag{86}\] \[\text{with }i<j\leq k\text{ or }j\leq i<k,\ r=\max\left(i,j\right),s=\min\left(i,j\right).\]
This expression for the Christoffel symbols in Lie group notation was reported in [17; 51], and already in [49] in tensor notation. This expression simplifies when Binet's inertia tensor \(\boldsymbol{\vartheta}_{i}=\frac{1}{2}\mathrm{tr}\left(\boldsymbol{\Theta}_{i}\right)\mathbf{I}-\boldsymbol{\Theta}_{i}\) is used in the mass matrix \(\mathbf{M}_{i}^{\mathrm{b}}\). Then (39) is replaced by \(\mathbf{\tilde{M}}_{ic}^{\mathrm{b}}=\mathrm{diag}\left(\boldsymbol{\vartheta}_{i},m_{i}\mathbf{I}\right)\), and (40) by \(\mathbf{\tilde{M}}_{i}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-T}\mathbf{\tilde{M}}_{ic}^{\mathrm{b}}\mathbf{Ad}_{\mathbf{S}_{\mathrm{bc}}}^{-1}\). This leads to
\[\Gamma_{ijk} =\frac{1}{2}\sum_{l=k}^{n}(\mathbf{J}_{l,j}^{\mathrm{b}})^{T}\mathbf{\tilde{M}}_{l}\mathbf{ad}_{\mathbf{J}_{l,k}^{\mathrm{b}}}\mathbf{J}_{l,i}^{\mathrm{b}} \tag{87}\] \[\text{with }i<j\leq k\text{ or }j\leq i<k.\]
The equations (86) were presented in [51; 70; 72; 69; 73], and (87) in [51]. Prior to these publications, the equations (86) and (87) had been reported in [49; 50] using tensor notation rather than Lie group notation. Another publication that should be mentioned is [20], where the Lagrange equations were derived using similar algebraic operations.
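The defining formula and the symmetry \(\Gamma_{ijk}=\Gamma_{ikj}\) can also be verified numerically. The following sketch (reusing the illustrative `mass_matrix` from the previous block) obtains the Christoffel symbols of the first kind by central differences:

```python
import numpy as np
# requires mass_matrix() from the previous sketch

def christoffel(q, h=1e-6):
    """Gamma[i, j, k] = (dM_ij/dq_k + dM_ik/dq_j - dM_jk/dq_i) / 2,
    computed by central differences of the mass matrix."""
    n = len(q)
    D = np.zeros((n, n, n))  # D[i, j, k] = dM_ij / dq_k
    for k in range(n):
        e = np.zeros(n); e[k] = h
        D[:, :, k] = (mass_matrix(q + e) - mass_matrix(q - e)) / (2.0 * h)
    return 0.5 * (D + D.transpose(0, 2, 1) - D.transpose(1, 2, 0))

G = christoffel(np.array([0.3, -0.7]))
print(np.allclose(G, G.transpose(0, 2, 1)))  # Gamma_ijk = Gamma_ikj
```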
The above closed forms of EOM are derived using body-fixed twists. The potential benefit of using spatial or hybrid twists remains to be explored.
## 6 Derivatives of Motion Equations
In various contexts the information about the sensitivity of the MBS kinematics and dynamics is required, either w.r.t. joint angles, geometric parameters, or dynamic parameters. Whereas it is known that the EOM of a rigid body MBS attain a form that is linear in the dynamic parameters, they depend non-linearly on the generalized coordinates and the geometry. The POE formulation provides a means to determine the sensitivity w.r.t. kinematic parameters.
### Sensitivity of Motion Equations
Gradients w.r.t. generalized coordinates are required for the linearization of the EOM (as a basis for stability analysis and controller design) as well as for optimal control of MBS. Since the second-order and higher derivatives (12) of the body-fixed Jacobian, (20) of the spatial Jacobian, and (25) of the hybrid Jacobian are given as algebraic closed-form expressions in terms of screw products, the linearized EOM can be evaluated recursively as well as expressed in closed form. Using the Lie group notation this was reported in [78].
The same results were already presented in [50] using tensor notation. Comparing the two formulations reveals once more that the matrix Lie group formulation provides a level of abstraction leading to compact expressions. A closed form for the partial derivatives of the inverse mass matrix has been reported in [52], which is required for investigating the controllability of MBS. Using the body-fixed representation of twists, recursive \(O\left(n\right)\) algorithms were reported in [36; 3].
### Geometric Sensitivity
Optimizing the design of an MBS requires information about the sensitivity w.r.t. geometric parameters. A recursive algorithm was reported in [36] and its parallel implementation in [3], where the partial derivatives are computed on a case-by-case basis. The Lie group formulation gives rise to a general closed-form expression. To this end, the POE formula (92) is extended as follows.
The geometry of the two bodies \(i\) and \(i-1\) connected by joint \(i\) is encoded in the constant part \(\mathbf{S}_{i,i}\) and \(\mathbf{S}_{i-1,i}\) in (91), respectively in \(\mathbf{B}_{i}\) in the formulation in (92). These are frame transformations, and can hence be parameterized in terms of screw coordinates. If \(\mathbf{B}_{i}\) depends on \(\lambda\leq 6\) geometric parameters, it is expressed as \(\mathbf{B}_{i}\left(\pi_{i}\right)=\mathbf{B}_{i0}\exp(\mathbf{U}_{i1}\pi_{i1})\cdot\ldots\cdot\exp(\mathbf{U}_{i\lambda}\pi_{i\lambda})\). The screw coordinates \(\mathbf{U}_{i1},\ldots,\mathbf{U}_{i\lambda}\) and corresponding parameters \(\pi_{i1},\ldots,\pi_{i\lambda}\) account for the considered variations from the nominal geometry, represented by \(\mathbf{B}_{i0}\in SE\left(3\right)\). The relative configuration due to joint \(i\) and the geometric variations is thus \(\mathbf{C}_{i-1,i}\left(\mathbf{q},\pi_{i}\right)=\mathbf{B}_{i}\left(\pi_{i}\right)\exp({}^{i}\mathbf{X}_{i}q_{i})\). The key observation is that partial derivatives of \(\mathbf{B}_{i}\left(\pi_{i}\right)\) are available in closed form, as for the joint screw coordinates. Hence also the sensitivity w.r.t. the MBS geometry can be expressed in closed form [53]. This fact has been applied to robot calibration [22; 23] where the POE accounts for geometric imperfections to be identified.
### Time Derivatives of the EOM
The design of feedback-linearizing flatness-based controllers for robotic manipulators that are modeled as rigid body MBS actuated by elastic actuators (so-called series elastic actuators) requires the time derivatives of the inverse dynamics solution \(\mathbf{Q}\left(t\right)\) [26; 68]. That is, the first and second time derivatives of the EOM are necessary. Extensions of the classical recursive Newton-Euler inverse dynamics algorithms in body-fixed representations were presented in [19]. As can be expected, the relations are very complicated. Using the presented Lie group formulation of the inverse dynamics algorithms gives rise to rather compact and thus fail-safe algorithms. This was presented in [63] for the body-fixed and hybrid versions.
## 7 Geometric Integration
This paper focussed on the MBS modeling in terms of relative (joint) coordinates. Alternatively, the MBS kinematics can be described in terms of absolute coordinates.
One of the issues that is being addressed when modeling MBS in terms of absolute coordinates is the _kinematic reconstruction_, i.e. the determination of the motion of a rigid body, represented by \(\mathbf{C}\left(t\right)\), from its velocity field \(\mathbf{V}\left(t\right)\). This amounts to solving one of the equations (see appendix A2 in [62])
\[\mathbf{\widehat{V}}^{\mathrm{b}}=\mathbf{C}^{-1}\dot{\mathbf{C}},\qquad \mathbf{\widehat{V}}^{\mathrm{s}}=\dot{\mathbf{C}}\mathbf{C}^{-1} \tag{88}\]
together with the NE (41) or (38), respectively. Classically, the orientation is parameterized with three parameters. The problem encountered is that there is no singularity-free global parameterization of rotations with three parameters. Instead of local parameters (position and rotation angles) the absolute configurations of the rigid bodies within the MBS can be represented by \(\mathbf{C}\left(t\right)\). Then a numerical integration step from time \(t_{k-1}\) to \(t_{k}=t_{k-1}+h\) shall determine the incremental configuration update \(\Delta\mathbf{C}_{k}=\mathbf{C}_{k-1}^{-1}\mathbf{C}_{k}\) with \(\mathbf{C}_{k}=\mathbf{C}\left(t_{k}\right)\) and \(\mathbf{C}_{k-1}=\mathbf{C}\left(t_{k-1}\right)\). The equations (88) are ODEs on the Lie group \(SE\left(3\right)\). These can be replaced by ODEs on the Lie algebra \(se\left(3\right)\). The motion increment from \(t_{k-1}\) to \(t_{k}\) is parameterized as \(\Delta\mathbf{C}\left(t\right)=\exp\mathbf{X}\left(t\right)\) with an algorithmic instantaneous screw coordinate vector \(\mathbf{X}\). Then (88) are equivalent to the ODEs on the Lie algebra
\[\mathbf{V}^{\mathrm{s}}=\mathbf{dexp}_{\mathbf{X}}\dot{\mathbf{X}},\ \ \ \ \mathbf{V}^{\mathrm{b}}=\mathbf{dexp}_{-\mathbf{X}}\dot{\mathbf{X}} \tag{89}\]
where \(\mathbf{dexp}_{\mathbf{X}}:se\left(3\right)\to se\left(3\right)\) is the right-trivialized differential of the \(\exp\) mapping on \(SE\left(3\right)\)[13; 60; 61; 71]. This is the basic idea of the class of Munthe-Kaas integration schemes [21; 37; 64]. This scheme has been adapted to MBS in absolute coordinates [82]. The advantage of these integration methods is that no global parameterization is necessary since the numerical integration is pursued in terms of the incremental parameters \(\mathbf{X}\). The ODEs (89) can be solved with any vector space integration scheme (originally the Munthe-Kaas scheme uses a Runge-Kutta method) with initial value \(\mathbf{X}\left(t_{k-1}\right)=\mathbf{0}\).
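A minimal sketch of such a Lie group integration step, assuming a constant body-fixed twist so that the first equation in (88) is integrated exactly by an exponential update (numpy and scipy are assumed to be available; all names are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def hat(V):
    """4x4 se(3) matrix of a twist V = (omega, v)."""
    wx, wy, wz = V[:3]
    X = np.zeros((4, 4))
    X[:3, :3] = [[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]]
    X[:3, 3] = V[3:]
    return X

def lie_group_step(C, V_body, h):
    """One step of C' = C * hat(V^b): the update C @ expm(h * hat(V^b))
    stays on SE(3) by construction, so no reprojection is needed."""
    return C @ expm(h * hat(V_body))

C = np.eye(4)
V = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])  # screw motion about the z-axis
for _ in range(100):
    C = lie_group_step(C, V, 0.01)
R = C[:3, :3]
print(np.allclose(R.T @ R, np.eye(3)))  # rotation block remains orthogonal
```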
Recently the geometric integration concepts were incorporated in the generalized-\(\alpha\) method [42; 18] for MBS described in absolute coordinates. In this case the representation of proper rigid body motions is crucial, as discussed in [60; 59]; it is frequently incorrectly represented by \(SO\left(3\right)\times\mathbb{R}^{3}\). Also momentum preserving schemes were proposed [83]. It should be mentioned that the concept of geometric integration schemes on \(SE\left(3\right)\) can be transferred to the kinematics of flexible bodies undergoing large deformations described as Cosserat continua. In this context the spatial description (referred to as the fixed pole formulation) has proven to be beneficial [32]. Recent results on Lie group modeling of beams can be found in [79; 80].
## 8 Conclusions and Outlook
The computational effort of recursive \(O\left(n\right)\) algorithms, but also of the formalisms for evaluating the EOM in closed form, depends on the representation of rigid body motions and of the motions of technical joints. Since the geometry of finite rigid body and relative motions is described by the Lie group \(SE\left(3\right)\), and that of instantaneous motions by the screw algebra \(se\left(3\right)\), Lie group theory provides the geometric framework. As already shown in [62], Lie group formulations for the MBS kinematics give rise to compact recursive formulations in terms of relative coordinates. In this paper the corresponding recursive NE algorithms were presented and related to the various \(O\left(n\right)\) algorithms scattered in the literature. This allows for a comparative investigation of their efficiency in conjunction with the modeling procedure. For instance, whereas most \(O\left(n\right)\) algorithms use the hybrid representation, the spatial representation, as used by Featherstone [30] and Bottasso [13] (where it is called the fixed point formulation), is receiving increased attention since it easily gives rise to structure preserving integration schemes [11; 12; 13; 32]. A conclusive investigation will be the subject of future research. Future research will also focus on combining the \(O\left(n\right)\) forward dynamics algorithm by Featherstone [30], based on NE equations in spatial representation, with Naudet's algorithm [66] based on Hamilton's canonical equations in hybrid representation. The use of the spatial momentum balance shall allow for momentum preserving integration of the EOM and at the same time reduce the number of frame transformations. A further important research topic is the derivation of structure preserving Lie group integration schemes, for which the spatial formulation of the EOM will be the formulation of choice.
## Appendix A Summary of Basic Kinematic Relations
As a prerequisite, the kinematic relations derived in [62] are summarized. Denote with
\[\mathbf{C}_{i}=\left(\begin{array}{cc}\mathbf{R}_{i}&\mathbf{r}_{i}\\ \mathbf{0}&1\end{array}\right)\in SE\left(3\right) \tag{90}\]
the _absolute configuration_ of body \(i\) w.r.t. the inertial frame (IFR) \(\mathcal{F}_{0}\). This is alternatively denoted with \(C_{i}=(\mathbf{R}_{i},\mathbf{r}_{i})\). The _relative configuration_ of body \(i\) relative to body \(i-1\) is given as
\[\mathbf{C}_{i-1,i}\left(q_{i}\right)=\mathbf{S}_{i-1,i}\exp({}^{i-1}\mathbf{Z} _{i}q_{i})\mathbf{S}_{i,i}^{-1}=\mathbf{B}_{i}\exp({}^{i}\mathbf{X}_{i}q_{i}) \tag{91}\]
where \(\mathbf{B}_{i}:=\mathbf{S}_{i-1,i}\mathbf{S}_{i,i}^{-1}=\mathbf{C}_{i-1,i} \left(0\right)\) is the reference configuration of body \(i\) w.r.t. body \(i-1\), i.e. for \(q_{i}=0\), and \({}^{i-1}\mathbf{Z}_{i}\in\mathbb{R}^{6}\) is the screw coordinate vector of joint \(i\) represented in the joint frame (JFR) \(\mathcal{J}_{i-1,i}\) on body \(i-1\). Successive relative configurations can be combined to
\[\mathbf{C}_{i}\left(\mathbf{q}\right) = \mathbf{B}_{1}\exp({}^{1}\mathbf{X}_{1}q_{1})\cdot\mathbf{B}_{2} \exp({}^{2}\mathbf{X}_{2}q_{2})\cdot\ldots\cdot\mathbf{B}_{i}\exp({}^{i} \mathbf{X}_{i}q_{i}) \tag{92}\] \[= \exp(\mathbf{Y}_{1}q_{1})\cdot\exp(\mathbf{Y}_{2}q_{2})\cdot \ldots\cdot\exp(\mathbf{Y}_{i}q_{i})\mathbf{A}_{i} \tag{93}\]
where \({}^{i}\mathbf{X}_{i}\in\mathbb{R}^{6}\) is the screw coordinate vector of joint \(i\) represented in the joint frame fixed at body \(i\), \(\mathbf{Y}_{i}\in\mathbb{R}^{6}\) is the joint screw coordinate vector in spatial representation (measured and resolved in IFR) for the reference configuration \(\mathbf{q}=\mathbf{0}\), and \(\mathbf{A}_{i}=\mathbf{C}_{i}\left(\mathbf{0}\right)\) is the reference configuration of body \(i\). The two representations of joint screw coordinates are related by
\[\mathbf{Y}_{i}=\mathbf{Ad}_{\mathbf{A}_{i}}{}^{i}\mathbf{X}_{i},\ \ ^{i}\mathbf{X}_{i}=\mathbf{Ad}_{\mathbf{S}_{i,i}}{}^{i-1}\mathbf{Z}_{i} \tag{94}\]
where, in the vector representation of screws, the adjoint transformation \(\mathbf{Ad}\) corresponding to \(\mathbf{C}\in SE\left(3\right)\) is given by the matrix
\[\mathbf{Ad}_{\mathbf{C}}=\left(\begin{array}{cc}\mathbf{R}&\mathbf{0}\\ \widetilde{\mathbf{r}}\mathbf{R}&\mathbf{R}\end{array}\right). \tag{95}\]
For the sake of simplicity, the following notations are used
\[\mathbf{Ad}_{\mathbf{R}}=\left(\begin{array}{cc}\mathbf{R}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}\end{array}\right),\ \text{for}\ \ C=\left(\mathbf{R}, \mathbf{0}\right)\ \ \ \ \ \ \ \ \mathbf{Ad}_{\mathbf{r}}=\left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \widetilde{\mathbf{r}}&\mathbf{I}\end{array}\right),\ \text{for}\ \ C=\left(\mathbf{I}, \mathbf{r}\right) \tag{96}\]
so that \(\mathbf{Ad}_{\mathbf{C}}=\mathbf{Ad}_{\mathbf{r}}\mathbf{Ad}_{\mathbf{R}}\).
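For reference, a small numerical sketch of the screw transformation (95) and the factorization (96) (a minimal implementation, with numpy assumed):

```python
import numpy as np

def skew(x):
    """Skew-symmetric matrix of x, so that skew(x) @ y = np.cross(x, y)."""
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0]])

def Ad(R, r):
    """6x6 screw transformation (95) of C = (R, r)."""
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, :3] = skew(r) @ R
    A[3:, 3:] = R
    return A

R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
r = np.array([1., 2., 3.])
# factorization Ad_C = Ad_r Ad_R of (96):
print(np.allclose(Ad(R, r), Ad(np.eye(3), r) @ Ad(R, np.zeros(3))))
```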
The twist of body \(i\) in _body-fixed_ representation \(\mathbf{V}_{i}^{\text{b}}=\left(\boldsymbol{\omega}_{i}^{\text{b}},\mathbf{v}_{ i}^{\text{b}}\right)^{T}\) and in _spatial_ representation \(\mathbf{V}_{i}^{\text{s}}=\left(\boldsymbol{\omega}_{i}^{\text{s}},\mathbf{v}_{ i}^{\text{s}}\right)^{T}\) is defined by
\[\mathbf{\widehat{V}}_{i}^{\text{b}}=\left(\begin{array}{cc}\widetilde{ \boldsymbol{\omega}}_{i}^{\text{b}}&\mathbf{v}_{i}^{\text{b}}\\ \mathbf{0}&0\end{array}\right)=\mathbf{C}_{i}^{-1}\dot{\mathbf{C}}_{i},\ \mathbf{\widehat{V}}_{i}^{\text{s}}=\left(\begin{array}{cc} \widetilde{\boldsymbol{\omega}}_{i}^{\text{s}}&\mathbf{v}_{i}^{\text{s}}\\ \mathbf{0}&0\end{array}\right)=\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}. \tag{97}\]
Here \(\mathbf{v}_{i}^{\text{b}}=\mathbf{R}_{i}^{T}\dot{\mathbf{r}}_{i}\) is the body-fixed translational velocity, i.e. the velocity of the origin of the body-fixed reference frame (RFR) \(\mathcal{F}_{i}\) of body \(i\) measured in the IFR \(\mathcal{F}_{0}\) and resolved in \(\mathcal{F}_{i}\), whereas \(\mathbf{v}_{i}^{\text{s}}=\dot{\mathbf{r}}_{i}+\mathbf{r}_{i}\times\boldsymbol{\omega}_{i}^{\text{s}}\) is the spatial translational velocity, i.e. the velocity of the point of the body that is momentarily passing through the origin of the IFR \(\mathcal{F}_{0}\), resolved in the IFR. The body-fixed and spatial angular velocities, \(\boldsymbol{\omega}_{i}^{\text{b}}\) and \(\boldsymbol{\omega}_{i}^{\text{s}}\), are defined by \(\widetilde{\boldsymbol{\omega}}_{i}^{\text{b}}=\mathbf{R}_{i}^{T}\dot{\mathbf{R}}_{i}\) and \(\widetilde{\boldsymbol{\omega}}_{i}^{\text{s}}=\dot{\mathbf{R}}_{i}\mathbf{R}_{i}^{T}\), respectively. The _hybrid twist_ is defined as \(\mathbf{V}_{i}^{\text{h}}=\left(\boldsymbol{\omega}_{i}^{\text{s}},\dot{\mathbf{r}}_{i}\right)^{T}\), and finally the _mixed twist_ as \(\mathbf{V}_{i}^{\text{m}}=\left(\boldsymbol{\omega}_{i}^{\text{b}},\dot{\mathbf{r}}_{i}\right)^{T}\). The four representations are related as follows
\[\mathbf{V}_{i}^{\text{h}} = \left(\begin{array}{cc}\mathbf{R}_{i}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}_{i}\end{array}\right)\mathbf{V}_{i}^{\text{b}}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{V}_{i}^{\text{b}} \tag{98}\] \[\mathbf{V}_{i}^{\text{s}} = \mathbf{Ad}_{\mathbf{C}_{i}}\mathbf{V}_{i}^{\text{b}}=\mathbf{Ad}_{\mathbf{C}_{i}}\mathbf{Ad}_{\mathbf{R}_{i}}^{-1}\mathbf{V}_{i}^{\text{h}}=\mathbf{Ad}_{\mathbf{r}_{i}}\mathbf{V}_{i}^{\text{h}}\] (99) \[\mathbf{V}_{i}^{\text{m}} = \left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}_{i}\end{array}\right)\mathbf{V}_{i}^{\text{b}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\mathbf{V}_{i}^{\text{h}}=\left(\begin{array}{cc}\mathbf{R}_{i}^{T}&\mathbf{0}\\ -\widetilde{\mathbf{r}}_{i}&\mathbf{I}\end{array}\right)\mathbf{V}_{i}^{\text{s}}. \tag{100}\]
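The conversions (98)-(100) translate directly into code; a short sketch reusing `Ad` and `skew` from the previous block:

```python
import numpy as np
# requires Ad() and skew() from the previous sketch

def body_to_hybrid(V_b, R):
    """(98): V^h = diag(R, R) V^b."""
    return np.concatenate([R @ V_b[:3], R @ V_b[3:]])

def hybrid_to_spatial(V_h, r):
    """(99): V^s = Ad_r V^h."""
    return Ad(np.eye(3), r) @ V_h

def spatial_to_mixed(V_s, R, r):
    """(100): V^m = [[R^T, 0], [-skew(r), I]] V^s."""
    return np.concatenate([R.T @ V_s[:3], V_s[3:] - skew(r) @ V_s[:3]])
```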
The twist of body \(i\) within a kinematic chain is determined in terms of the generalized velocities \(\dot{\bf q}\) as
\[{\bf V}_{i}^{\rm b} =\sum_{j\leq i}{\bf J}_{i,j}^{\rm b}\dot{q}_{j}={\rm J}_{i}^{\rm b}\dot{\bf q}, {\bf V}_{i}^{\rm s}=\sum_{j\leq i}{\bf J}_{j}^{\rm s}\dot{q}_{j}={\rm J}_{i}^{\rm s}\dot{\bf q} \tag{101}\] \[{\bf V}_{i}^{\rm h} =\sum_{j\leq i}{\bf J}_{i,j}^{\rm h}\dot{q}_{j}={\rm J}_{i}^{\rm h}\dot{\bf q}, {\bf V}_{i}^{\rm m}=\sum_{j\leq i}{\bf J}_{i,j}^{\rm m}\dot{q}_{j}={\rm J}_{i}^{\rm m}\dot{\bf q} \tag{102}\]
with the Jacobian \({\rm J}_{i}^{\rm b}\) in body-fixed, \({\rm J}_{i}^{\rm s}\) in spatial, \({\rm J}_{i}^{\rm h}\) in hybrid, and \({\rm J}_{i}^{\rm m}\) in mixed representation. The \(i\)th column of the Jacobian is respectively given by
\[{\bf J}_{i,j}^{\rm b} ={\bf Ad}_{{\bf C}_{i,j}}{}^{j}{\bf X}_{j}={\bf Ad}_{{\bf C}_{i, j}{\bf A}_{j}^{-1}}{\bf Y}_{j} \tag{103}\] \[{\bf J}_{j}^{\rm s} ={\bf Ad}_{{\bf C}_{j}}{}^{j}{\bf X}_{j}={\bf Ad}_{{\bf C}_{j}{ \bf A}_{j}^{-1}}{\bf Y}_{j}\] (104) \[{\bf J}_{i,j}^{\rm h} ={\bf Ad}_{{\bf r}_{i,j}}{}^{0}{\bf X}_{j}^{j}, {\rm for}\ j\leq i. \tag{105}\]
These are the instantaneous joint screw coordinates in body-fixed, spatial, and hybrid representation. The Jacobians are related as
\[{\bf J}_{j}^{\rm s}={\bf Ad}_{{\bf r}_{i}}{\bf J}_{i,j}^{\rm h}={\bf Ad}_{{\bf C}_{i}}{\bf J}_{i,j}^{\rm b},\ \ {\rm J}_{i}^{\rm h}={\bf Ad}_{{\bf R}_{i}}{\rm J}_{i}^{\rm b}. \tag{106}\]
The representations of joint screw coordinates are related by (94) and by
\[{\bf Y}_{j}={\bf Ad}_{{\bf r}_{i}}{}^{0}{\bf X}_{j}^{j},\ \ \ {}^{0}{\bf X}_{j}^{j}={\bf Ad}_{{\bf R}_{j}}{}^{j}{\bf X}_{j} \tag{107}\]
where \({\bf r}_{i}\) is the current position of body \(i\) in \({\bf C}_{i}\). The twists admit the recursive expressions
\[{\bf V}_{i}^{\rm b} ={\bf Ad}_{{\bf C}_{i,i-1}}{\bf V}_{i-1}^{\rm b}+{}^{i}{\bf X}_{i}\dot{q}_{i} \tag{108}\] \[{\bf V}_{i}^{\rm s} ={\bf V}_{i-1}^{\rm s}+{\bf J}_{i}^{\rm s}\dot{q}_{i}\] (109) \[{\bf V}_{i}^{\rm h} ={\bf Ad}_{{\bf r}_{i,i-1}}{\bf V}_{i-1}^{\rm h}+{}^{0}{\bf X}_{i}^{i}\dot{q}_{i}. \tag{110}\]
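The recursion (108) can be evaluated in a single forward sweep; a sketch (reusing `Ad` from above, with all inputs illustrative):

```python
import numpy as np
# requires Ad() from the sketch above

def body_fixed_twists(rel_configs, joint_screws, qdot):
    """Forward recursion (108): V_i^b = Ad_{C_{i,i-1}} V_{i-1}^b + ^iX_i * qd_i.
    rel_configs[i] = (R, r) of C_{i,i-1}; joint_screws[i] = body-fixed ^iX_i."""
    V = np.zeros(6)
    twists = []
    for (R, r), X, qd in zip(rel_configs, joint_screws, qdot):
        V = Ad(R, r) @ V + X * qd
        twists.append(V.copy())
    return twists
```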
Summarizing the twists of all bodies in \({\rm V}^{\rm b},{\sf V}^{\rm s},{\sf V}^{\rm h}\in\mathbb{R}^{6n}\), respectively, admits the expressions
\[{\sf V}^{\rm b}={\rm J}^{\rm b}\dot{\bf q},\ \ {\sf V}^{\rm s}={\rm J}^{\rm s} \dot{\bf q},\ \ {\sf V}^{\rm h}={\rm J}^{\rm h}\dot{\bf q} \tag{111}\]
in terms of the system Jacobians that admit the factorizations [62]
\[{\rm J}^{\rm b}={\sf A}^{\rm b}{\sf X}^{\rm b},\ \ {\sf J}^{\rm s}={\sf A}^{\rm s }{\sf Y}^{\rm s}={\sf A}^{\rm sb}{\sf X}^{\rm b},\ \ {\sf J}^{\rm h}={\sf A}^{\rm h}{\sf X}^{\rm h}. \tag{112}\]
This provides a compact description of the overall MBS kinematics. The explicit relations for the inverses of the matrices \(\mathsf{A}\) are the starting point for deriving recursive forward dynamics \(O\left(n\right)\) algorithms.
## Appendix B Rigid Body Motions and the Lie Group \(\boldsymbol{SE\left(3\right)}\)
For an introduction to screws and to the motion Lie group \(SE\left(3\right)\) the reader is referred to the text books [5; 48; 65; 77].
### Derivatives of Screws
Let \(\mathbf{C}_{i}\) be time dependent. According to (95), the corresponding frame transformation of screw coordinates from \(\mathcal{F}_{i}\) to \(\mathcal{F}_{0}\) is \(\mathbf{X}\equiv{}^{0}\mathbf{X}=\mathbf{Ad}_{\mathbf{C}_{i}}{}^{i}\mathbf{X}\). Assume that the screw coordinates expressed in the body-fixed frame are constant. The rate of change of the screw coordinates expressed in the IFR is \(\frac{d}{dt}\widehat{\mathbf{X}}=\frac{d}{dt}\left(\mathbf{C}_{i}{}^{i}\widehat{\mathbf{X}}\mathbf{C}_{i}^{-1}\right)=\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}\mathbf{C}_{i}{}^{i}\widehat{\mathbf{X}}\mathbf{C}_{i}^{-1}-\mathbf{C}_{i}{}^{i}\widehat{\mathbf{X}}\mathbf{C}_{i}^{-1}\dot{\mathbf{C}}_{i}\mathbf{C}_{i}^{-1}=\widehat{\mathbf{V}}_{i}^{\mathrm{s}}\widehat{\mathbf{X}}-\widehat{\mathbf{X}}\widehat{\mathbf{V}}_{i}^{\mathrm{s}}=[\widehat{\mathbf{V}}_{i}^{\mathrm{s}},\widehat{\mathbf{X}}]\). Therein
\[[\widehat{\mathbf{X}}_{1},\widehat{\mathbf{X}}_{2}]=\widehat{\mathbf{X}}_{1} \widehat{\mathbf{X}}_{2}-\widehat{\mathbf{X}}_{2}\widehat{\mathbf{X}}_{1}= \mathrm{ad}_{\mathbf{X}_{1}}(\mathbf{X}_{2}) \tag{113}\]
is the Lie bracket on \(se\left(3\right)\), also called the _adjoint_ mapping. In vector notation of screws, denoting a general screw vector with \(\mathbf{X}=\left(\boldsymbol{\xi},\boldsymbol{\eta}\right)^{T}\), this is
\[[\mathbf{X}_{1},\mathbf{X}_{2}]=\left(\boldsymbol{\xi}_{1}\times\boldsymbol{ \xi}_{2},\boldsymbol{\eta}_{1}\times\boldsymbol{\xi}_{2}+\boldsymbol{\xi}_{1} \times\boldsymbol{\eta}_{2}\right)^{T}=\mathbf{ad}_{\mathbf{X}_{1}}\mathbf{X} _{2} \tag{114}\]
with
\[\mathbf{ad}_{\mathbf{X}}=\left(\begin{matrix}\widetilde{\boldsymbol{\xi}}& \mathbf{0}\\ \widetilde{\boldsymbol{\eta}}&\widetilde{\boldsymbol{\xi}}\end{matrix}\right). \tag{115}\]
The form (114) is known as the screw product [14; 77]. The matrix (115) has appeared under different names, such as 'spatial cross product' in [29; 30; 39], or the 'north-east cross product' [13]. The Lie bracket obeys the Jacobi identity
\[[\mathbf{X}_{1},[\mathbf{X}_{2},\mathbf{X}_{3}]]+[\mathbf{X}_{2},[\mathbf{X}_{3},\mathbf{X}_{1}]]+[\mathbf{X}_{3},[\mathbf{X}_{1},\mathbf{X}_{2}]]=\mathbf{0}. \tag{116}\]
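The screw product matrix (115) and the Jacobi identity (116) can be checked numerically; a minimal sketch (reusing `skew` from the block above):

```python
import numpy as np
# requires skew() from the sketch above

def ad(X):
    """Screw product matrix (115) of X = (xi, eta)."""
    A = np.zeros((6, 6))
    A[:3, :3] = skew(X[:3])
    A[3:, :3] = skew(X[3:])
    A[3:, 3:] = skew(X[:3])
    return A

def bracket(X, Y):
    """Lie bracket (114): [X, Y] = ad_X Y."""
    return ad(X) @ Y

rng = np.random.default_rng(0)
X1, X2, X3 = rng.standard_normal((3, 6))
J = bracket(X1, bracket(X2, X3)) + bracket(X2, bracket(X3, X1)) + bracket(X3, bracket(X1, X2))
print(np.allclose(J, 0))  # Jacobi identity (116)
```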
Allowing for time dependent body-fixed screw coordinates \({}^{i}\mathbf{X}\), the above relation gives rise to an expression for the time derivative of screw coordinates in moving frames
\[\mathbf{Ad}_{\mathbf{C}_{i}}^{-1}\dot{\mathbf{X}}={}^{i}\dot{\mathbf{X}}+[ \mathbf{V}_{i}^{\mathrm{b}},{}^{i}\mathbf{X}]. \tag{117}\]
This is the spatial extension of Euler's formula for the derivative of a vector resolved in a moving frame.
For the sake of simplicity, the following notations are used throughout the paper for the rotational and translational parts of a twist

\[\mathbf{V}_{\boldsymbol{\omega}}=\left(\begin{matrix}\boldsymbol{\omega}\\ \mathbf{0}\end{matrix}\right),\quad\mathbf{V}_{\mathbf{v}}=\left(\begin{matrix}\mathbf{0}\\ \mathbf{v}\end{matrix}\right). \tag{118}\]
Then the matrices
\[\mathbf{ad}_{\boldsymbol{\omega}}=\left(\begin{matrix}\widetilde{\boldsymbol{ \omega}}&\mathbf{0}\\ \mathbf{0}&\widetilde{\boldsymbol{\omega}}\end{matrix}\right),\quad\mathbf{ad}_ {\mathbf{v}}=\left(\begin{matrix}\mathbf{0}&\mathbf{0}\\ \widetilde{\mathbf{v}}&\mathbf{0}\end{matrix}\right) \tag{119}\]
are used to denote the matrix (115) for the twists \(\mathbf{V}_{\boldsymbol{\omega}}\) and \(\mathbf{V}_{\mathbf{v}}\), respectively; they are the infinitesimal versions of (96).
### Wrenches as Co-Screws - \(se^{\ast}\left(3\right)\)
Screws are the geometric objects embodying twists, wrenches, and momenta of rigid bodies. These different physical meanings imply different mathematical interpretations of the geometric object. A wrench, defined by a force and moment, is denoted with \(\mathbf{W}=\left(\mathbf{t},\mathbf{f}\right)^{T}\). The force applied at a point with position vector \(\mathbf{p}\) generates the moment \(\mathbf{t}=\mathbf{p}\times\mathbf{f}\). The dual to Chasles theorem is the Poisson theorem stating that every system of forces can be reduced to a force together with a couple with moment parallel to the force.
Geometrically a screw is determined by the Plücker coordinates of the line along the screw axis and the pitch. If \(\mathbf{e}\) is the unit vector along the screw axis, and \(\mathbf{p}\) is a position vector of a point on that axis, the screw coordinate vector of a twist is \(\mathbf{V}=\left(\boldsymbol{\omega},\mathbf{v}\right)^{T}=\omega\left(\mathbf{e},\mathbf{p}\times\mathbf{e}+h\mathbf{e}\right)^{T}\), where \(\omega=\|\boldsymbol{\omega}\|\) is its magnitude, and \(h=\mathbf{v}^{T}\boldsymbol{\omega}/\omega^{2}\) is its pitch. The screw coordinate vector of a wrench, i.e. the force \(\mathbf{f}\) producing a torque \(\mathbf{t}\) about the axis \(\mathbf{e}\) when the point of application is displaced according to \(\mathbf{p}\) from the axis, is \(\mathbf{W}=\left(\mathbf{t},\mathbf{f}\right)^{T}=f\left(\mathbf{p}\times\mathbf{e}+h\mathbf{e},\mathbf{e}\right)^{T}\), with pitch \(h=\mathbf{t}^{T}\mathbf{f}/\left\|\mathbf{f}\right\|^{2}\). Apparently the linear and angular components of the screw coordinates are interchanged for twists and wrenches. This different definition of screw coordinate vectors makes it possible to describe the action of a wrench on a twist as a scalar product: \(\mathbf{W}^{T}\mathbf{V}\) is the power performed by the wrench acting on the twist \(\mathbf{V}\).
A twist \({}^{2}\mathbf{V}\) represented in frame \(\mathcal{F}_{2}\) transforms to its representation in frame \(\mathcal{F}_{1}\) according to \({}^{1}\mathbf{V}=\mathbf{A}\mathbf{d}_{\mathbf{S}_{1,2}}{}^{2}\mathbf{V}\). The power conservation yields that a wrench represented in \(\mathcal{F}_{1}\) transforms to its representation in \(\mathcal{F}_{2}\) according to
\[{}^{2}\mathbf{W}=\mathbf{A}\mathbf{d}_{\mathbf{S}_{1,2}}^{T}{}^{1}\mathbf{W}. \tag{120}\]
While this notation is useful for kinetostatic formulations, it is inconsistent in the sense that it treats screw coordinates differently for twists and wrenches. In screw theory, aiming at a consistent treatment of screw entities, a screw is represented by its coordinates as defined by (67) in [62], and the so-called _reciprocal product_ of two screws is used [5; 14; 77]. The latter is defined for \(\mathbf{X}_{1}=(\boldsymbol{\xi}_{1},\boldsymbol{\eta}_{1})^{T}\) and \(\mathbf{X}_{2}=(\boldsymbol{\xi}_{2},\boldsymbol{\eta}_{2})^{T}\) as \(\mathbf{X}_{1}\odot\mathbf{X}_{2}=\boldsymbol{\xi}_{1}^{T}\boldsymbol{\eta}_{2}+\boldsymbol{\eta}_{1}^{T}\boldsymbol{\xi}_{2}\). Two screws are said to be _reciprocal_ if \(\mathbf{X}_{1}\odot\mathbf{X}_{2}=0\). Obviously, if twists and wrenches are represented consistently with the same definition of screw coordinates, reciprocity of a twist screw and a wrench screw means that they perform no work. Geometrically, for zero-pitch screws, this means that the screw axes intersect.
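A short sketch of the reciprocal product, illustrating the geometric statement above that two zero-pitch screws with intersecting axes are reciprocal:

```python
import numpy as np

def reciprocal_product(X1, X2):
    """X1 (.) X2 = xi1 . eta2 + eta1 . xi2 for screw coordinates X = (xi, eta)."""
    return X1[:3] @ X2[3:] + X1[3:] @ X2[:3]

def line_screw(e, p):
    """Zero-pitch screw (a line) with unit direction e through point p."""
    return np.concatenate([e, np.cross(p, e)])

p = np.array([1., 1., 0.])  # common point of the two axes
X1 = line_screw(np.array([1., 0., 0.]), p)
X2 = line_screw(np.array([0., 1., 0.]), p)
print(reciprocal_product(X1, X2))  # 0.0 -> the two line screws are reciprocal
```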
In screw theory wrench screws are called co-screws to distinguish them from motion screws and to indicate that a wrench acts on a motion screw (a twist) as a linear operator that returns work or power. As twists form the Lie algebra \(se\left(3\right)\), wrenches form the dual \(se^{*}\left(3\right)\).
## Appendix C Nomenclature
\begin{tabular}{l l} \(\mathcal{F}_{0}\) & - IFR \\ \(\mathcal{F}_{i}\) & - BFR of body \(i\) \\ \(\mathcal{J}_{i,i}\) & - JFR for joint \(i\) at body \(i\); joint \(i\) connects body \(i\) with its predecessor body \(i-1\) \\ \(\mathcal{J}_{i-1,i}\) & - JFR for joint \(i\) at body \(i-1\) \\ \({}^{i}\mathbf{r}\) & - Coordinate representation of a vector resolved in the BFR on body \(i\). \\ & The index is omitted if this is the IFR: \(\mathbf{r}\equiv{}^{0}\mathbf{r}\). \\ \(\mathbf{R}_{i}\) & - Rotation matrix from BFR \(\mathcal{F}_{i}\) at body \(i\) to IFR \(\mathcal{F}_{0}\) \\ \(\mathbf{R}_{i,j}\) & - Rotation matrix transforming coordinates resolved in BFR \(\mathcal{F}_{j}\) \\ & to coordinates resolved in \(\mathcal{F}_{i}\) \\ \(\mathbf{r}_{i}\) & - Position vector of the origin of BFR \(\mathcal{F}_{i}\) at body \(i\) resolved in IFR \(\mathcal{F}_{0}\) \\ \(\mathbf{r}_{i,j}\) & - Position vector from the origin of BFR \(\mathcal{F}_{i}\) to the origin of BFR \(\mathcal{F}_{j}\) \\ \(\widetilde{\mathbf{x}}\) & - Skew-symmetric matrix associated to the vector \(\mathbf{x}\in\mathbb{R}^{3}\) \\ \(C_{i}=\left(\mathbf{R}_{i},\mathbf{r}_{i}\right)\) & - Absolute configuration of body \(i\). This is denoted in matrix form with \(\mathbf{C}_{i}\) \\ \(\mathbf{C}_{i,j}=\mathbf{C}_{i}^{-1}\mathbf{C}_{j}\) & - Relative configuration of body \(j\) w.r.t. body \(i\) \\ \({}^{k}\mathbf{v}_{i}^{j}\) & - Translational velocity of body \(i\) measured at the origin of BFR \(\mathcal{F}_{j}\), resolved in BFR \(\mathcal{F}_{k}\) \\ \(\mathbf{v}_{i}^{\mathrm{b}}\equiv{}^{i}\mathbf{v}_{i}^{i}\) & - Body-fixed representation of the translational velocity of body \(i\) \\ \({}^{k}\boldsymbol{\omega}_{i}\) & - Angular velocity of body \(i\) measured and resolved in BFR \(\mathcal{F}_{k}\) \\ \(\boldsymbol{\omega}_{i}^{\mathrm{b}}\equiv{}^{i}\boldsymbol{\omega}_{i}\) & - Body-fixed representation of the angular velocity of body \(i\) \\ \(\boldsymbol{\omega}_{i}^{\mathrm{s}}\equiv{}^{0}\boldsymbol{\omega}_{i}\) & - Spatial representation of the angular velocity of body \(i\) \\ \({}^{k}\mathbf{V}_{i}^{j}\) & - Twist of (the RFR of) body \(i\) measured in \(\mathcal{F}_{j}\) and resolved in \(\mathcal{F}_{k}\) \\ \(\mathbf{V}_{i}^{\mathrm{b}}\equiv{}^{i}\mathbf{V}_{i}^{i}\) & - Body-fixed representation of the twist of body \(i\) \\ \(\mathbf{V}_{i}^{\mathrm{s}}\equiv{}^{0}\mathbf{V}_{i}^{0}\) & - Spatial representation of the twist of body \(i\) \\ \(\mathbf{V}_{i}^{\mathrm{h}}={}^{0}\mathbf{V}_{i}^{i}\) & - Hybrid form of the twist of body \(i\) \\ \(\mathsf{V}^{\mathrm{b}}\) & - Vector of system twists in body-fixed representation \\ \(\mathsf{V}^{\mathrm{s}}\) & - Vector of system twists in spatial representation \\ \(\mathsf{V}^{\mathrm{h}}\) & - Vector of system twists in hybrid representation \\ \(\mathsf{V}^{\mathrm{m}}\) & - Vector of system twists in mixed representation \\ \(\mathbf{W}_{i}^{\mathrm{b}}\) & - Applied wrench at body \(i\) in body-fixed representation \\ \(\mathbf{W}_{i}^{\mathrm{s}}\) & - Applied wrench at body \(i\) in spatial representation \\ \(\mathbf{W}_{i}^{\mathrm{h}}\) & - Applied wrench at body \(i\) in hybrid representation \\ \(\mathbf{M}_{i}^{\mathrm{b}}\) & - Inertia matrix of body \(i\) in body-fixed representation \\ \(\mathbf{M}_{i}^{\mathrm{s}}\) & - Inertia matrix of body \(i\) in spatial representation \\ \(\mathbf{M}_{i}^{\mathrm{h}}\) & - Inertia matrix of body \(i\) in hybrid representation \\ \(\mathbf{Ad}_{\mathbf{R}}\) & - Screw transformation associated with \(C=\left(\mathbf{R},\mathbf{0}\right)\) \\ \(\mathbf{Ad}_{\mathbf{r}}\) & - Screw transformation associated with \(C=\left(\mathbf{I},\mathbf{r}\right)\) \\ \(\mathbf{Ad}_{\mathbf{C}_{i,j}}\) & - Transformation matrix transforming screw coordinates represented in \(\mathcal{F}_{j}\) \\ & to screw coordinates represented in \(\mathcal{F}_{i}\) \\ \(\mathbf{ad}_{\mathbf{X}}\) & - Screw product matrix associated with screw coordinate vector \(\mathbf{X}\in\mathbb{R}^{6}\) \\ \(\left[\mathbf{X},\mathbf{Y}\right]\) & - Lie bracket of screw coordinate vectors \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{6}\). It holds \(\left[\mathbf{X},\mathbf{Y}\right]=\mathbf{ad}_{\mathbf{X}}\mathbf{Y}\). \\ \(\widehat{\mathbf{X}}\in se\left(3\right)\) & - \(4\times 4\) matrix associated with the screw coordinate vector \(\mathbf{X}\in\mathbb{R}^{6}\) \\ \(SE\left(3\right)\) & - Special Euclidean group in three dimensions - Lie group of rigid body motions \\ \(se\left(3\right)\) & - Lie algebra of \(SE\left(3\right)\) - algebra of screws \\ \(\mathbf{q}\in\mathbb{V}^{n}\) & - Joint coordinate vector \\ \(\mathbb{V}^{n}\) & - Configuration space \\ \end{tabular}
## Acknowledgement
The author acknowledges that this work has been partially supported by the Austrian COMET-K2 program of the Linz Center of Mechatronics (LCM).
# \(F\)-transforms determined by overlap and grouping maps over a complete lattice
###### Abstract
This paper studies \(F\)-transforms based on overlap and grouping maps, and on residual and co-residual implicators, over a complete lattice, from both constructive and axiomatic approaches. Further, the duality, basic properties, and the inverse of the proposed \(F\)-transforms are studied, and axiomatic characterizations of the proposed direct \(F\)-transforms are investigated.
**Keywords:** Complete lattice; Overlap map; Grouping map; Direct \(F\)-transforms; \(L\)-fuzzy transformation systems.
## 1 Introduction
The theory of the fuzzy transform (\(F\)-transform) was first introduced by Perfilieva [22], a notion that piqued the curiosity of many researchers. It has since been greatly expanded upon, and a new chapter in the theory of semi-linear spaces has been opened. The fundamental idea of the \(F\)-transform is to factorize (or fuzzify) the precise values of independent variables by using a proximity relationship, and to average the precise values of dependent variables to an approximate value (cf., [22, 23]). The notion has been extended from fuzzy sets to parametrized fuzzy sets [31] and from a single variable to two (or more) variables (cf., [2, 3, 4, 32]). Recently, several studies have begun to look into \(F\)-transforms based on an arbitrary \(L\)-fuzzy partition of an arbitrary universe (cf., [11, 14, 15, 17, 18, 19, 20, 25, 26, 35]), where \(L\) is a complete residuated lattice. Among these researches, the concept of a general transformation operator determined by a monadic relation was introduced in [14], the links between \(F\)-transforms and semimodule homomorphisms were examined in [17], while the connections between \(F\)-transforms and similarity relations were discussed in [20]. Further, a fascinating relationship of \(L\)-fuzzy topologies/co-topologies and \(L\)-fuzzy approximation operators (all of which are ideas employed in
the study of an operator-oriented perspective of rough set theory) with \(F\)-transforms was also discovered in [25], while the connection of \(L^{M}\)-valued \(F\)-transforms with \(L^{M}\)-valued fuzzy approximation operators and \(ML\)-graded topologies/co-topologies was discussed in [35]. Also, the concept of \(F\)-transforms and \(L\)-fuzzy pretopologies was examined in [26], in which it was shown that weaker closure and interior operators, named after Čech, may also be expressed by using \(F\)-transforms, implying that \(L\)-valued \(F\)-transforms could be utilized in parallel with closure and interior operators as their canonical representation. Also, classes of \(F\)-transforms taking into account three well-known classes of implicators, namely \(R\)-, \(S\)-, and \(QL\)-implicators, were discussed in [34]. Several studies in the subject of \(F\)-transform applications have been conducted, e.g., trend-cycle estimation [7], data compression [8], numerical solution of partial differential equations [10], scheduling [13], time series [21], data analysis [24], denoising [27], face recognition [30], neural network approaches [33] and trading [36].
### Motivation of our research
In contrast to the usual fuzzy logical connectives, i.e., \(t\)-norms and \(t\)-conorms, overlap and grouping maps can be regarded as a new realization of the classical intersection and union operations on the unit interval. Even though these maps are closely linked to \(t\)-norms and \(t\)-conorms, they do not have any nontrivial zero divisors. Recently, several researchers have examined construction techniques and properties of overlap and grouping maps over complete lattices and conducted extensive research. Qiao presented the concepts of overlap and grouping maps over complete lattices in [29] and provided two construction techniques. In [37], complete homomorphisms and complete \(0_{L},1_{L}\)-endomorphisms were used to examine construction techniques of overlap and grouping maps over complete lattices. Further, the ordinal sums of overlap and grouping maps were discussed in [38]. Also, overlap and grouping maps have been used in various practical application problems such as image processing [9], classification [5], and decision-making [1]. Specifically, these maps offer advantages over \(t\)-norms and \(t\)-conorms in dealing with some real issues. It seems that using the ideas of overlap and grouping maps in the \(F\)-transform may open new areas of application. Accordingly, the study of the theory of the \(F\)-transform using such maps is the theme of this paper.
### Main contributions
In this work, we present the theory of \(F\)-transforms based on overlap and grouping maps, and on residual and co-residual implicators, over complete lattices. Interestingly, under certain conditions, the \(F\)-transforms introduced in [22, 25, 34] are special cases of the proposed \(F\)-transforms. Further, we study \(F\)-transforms from both constructive and axiomatic approaches based on the above logic operations over complete lattices. The main findings are summarized below:
* we discuss the duality of the proposed direct \(F\)-transforms and investigate their basic properties;
* we introduce the inverse of the proposed \(F\)-transforms and discuss some basic properties; and
* we show a close connection between proposed \(F\)-transforms and \(L\)-fuzzy transformation systems and discuss the duality of \(L\)-fuzzy transformation systems.
The remainder of this paper is arranged in the following manner. In Section 2, we recall some key concepts that will be used throughout the main sections. We introduce and examine various classes of direct \(F\)-transforms determined by overlap and grouping maps over a complete lattice in Section 3. In Section 4, we introduce the inverse of the proposed direct \(F\)-transforms. In the next section, we characterize the proposed direct \(F\)-transforms from the axiomatic approach.
## 2 Preliminaries
Herein, we recall the basic ideas related to complete lattices, overlap and grouping maps, and \(L\)-fuzzy sets from [6, 12, 28, 29, 37, 38]. Throughout this paper, a complete lattice with the smallest element \(0\) and the largest element \(1\) is denoted by \(L\equiv(L,\vee,\wedge,0,1)\). We start with the following.
**Definition 2.1**: _Let \(X\) be a nonempty set. Then an \(L\)_**-fuzzy set** _in \(X\) is a map \(f:X\to L\)._
The family of all \(L\)-fuzzy sets in \(X\) is denoted by \(L^{X}\). For \(u\in L\), the **constant \(L\)-fuzzy set** \(\textbf{u}\in L^{X}\) is given by \(\textbf{u}(x)=u,\,\forall\,x\in X\). Also, the **core** of an \(L\)-fuzzy set \(f\) is given as the crisp set \(core(f)=\{x\in X:f(x)=1\}\). If \(core(f)\neq\emptyset\), then \(f\) is called a **normal \(L\)-fuzzy set**. For \(A\subseteq X\), the **characteristic map** of \(A\) is a map \(1_{A}:X\rightarrow\{0,1\}\) such that
\[1_{A}(x)=\begin{cases}1&\text{ if }x\in A,\\ 0&\text{ otherwise.}\end{cases}\]
In the following, we recall and introduce some basic concepts.
**Definition 2.2**: _An_ **overlap map** _on \(L\) is a map \(\theta:L\times L\to L\) such that for all \(u,v\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
* \(\theta(u,v)=\theta(v,u)\)_,_
* \(\theta(u,v)=0\) _iff_ \(u=0\) _or_ \(v=0\)_,_
* \(\theta(u,v)=1\) _iff_ \(u=1\) _and_ \(v=1\)_,_
* \(\theta(u,v)\leq\theta(u,w)\) _if_ \(v\leq w\)_, and_
* \(\theta(u,\bigvee_{i\in J}v_{i})=\bigvee_{i\in J}\theta(u,v_{i}),\theta(\bigwedge _{i\in J}u_{i},v)=\bigwedge_{i\in J}\theta(u_{i},v)\)_._
If \(\theta(1,u)=u,\,\forall\,u\in L\), we say that \(1\) is a neutral element of \(\theta\). Also, an overlap map is called
1. **deflation** if \(\theta(1,u)\leq u,\,\forall u\in L\),
2. **inflation** if \(u\leq\theta(1,u),\,\forall u\in L\), and
3. \(EP\)**-overlap map** if \(\theta(u,\theta(v,w))=\theta(v,\theta(u,w)),\,\forall\,u,v,w\in L\).
**Example 2.1**: _(i) Every continuous \(t\)-norm \(\mathcal{T}\) with no nontrivial zero divisors is an overlap map, (ii) \(\theta_{M}(u,v)=u\wedge v,\,\forall\,u,v\in L\) on a frame with the prime element \(0\) is an overlap map._
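As a quick numerical spot-check of Definition 2.2, the following sketch verifies axioms (i)-(iv) for the product map \(\theta(u,v)=uv\) (a continuous \(t\)-norm without nontrivial zero divisors, hence an overlap map by (i) of Example 2.1) on a finite grid of \([0,1]\); the grid is an arbitrary choice:

```python
import numpy as np

L = np.linspace(0.0, 1.0, 51)
theta = lambda u, v: u * v  # product t-norm, no nontrivial zero divisors

assert all(theta(u, v) == theta(v, u) for u in L for v in L)                         # (i)
assert all((theta(u, v) == 0) == (u == 0 or v == 0) for u in L for v in L)           # (ii)
assert all((theta(u, v) == 1) == (u == 1 and v == 1) for u in L for v in L)          # (iii)
assert all(theta(u, v1) <= theta(u, v2)
           for u in L for v1 in L for v2 in L if v1 <= v2)                           # (iv)
print("overlap axioms (i)-(iv) hold on the grid")
```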
**Definition 2.3**: \(A\) **grouping map** _on \(L\) is a map \(\eta:L\times L\to L\) such that for all \(u,v\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
1. \(\eta(u,v)=\eta(v,u)\)_,_
2. \(\eta(u,v)=0\) _iff_ \(u=0\) _and_ \(v=0\)_,_
3. \(\eta(u,v)=1\) _iff_ \(u=1\) _or_ \(v=1\)_,_
4. \(\eta(u,v)\leq\eta(u,w)\) _if_ \(v\leq w\)_, and_
5. \(\eta(u,\bigvee\limits_{i\in J}v_{i})=\bigvee\limits_{i\in J}\eta(u,v_{i}),\eta (\bigwedge\limits_{i\in J}u_{i},v)=\bigwedge\limits_{i\in J}\eta(u_{i},v)\)_._
If \(\eta(0,u)=u,\,\forall\,u\in L\), we say that \(0\) is a neutral element of \(\eta\). Also, a grouping map is called
1. **deflation** if \(\eta(0,u)\geq u,\,\forall u\in L\),
2. **inflation** if \(u\geq\eta(0,u),\,\forall u\in L\), and
3. \(EP\)**-grouping map** if \(\eta(u,\eta(v,w))=\eta(v,\eta(u,w)),\,\forall\,u,v,w\in L\).
**Example 2.2**: _(i) Every continuous \(t\)-conorm \(\mathcal{S}\) with no nontrivial zero divisors is a grouping map, (ii) \(\eta_{M}(u,v)=u\lor v,\,\forall\,u,v\in L\) on a frame with the prime element \(0\) is a grouping map._
**Definition 2.4**: \(A\) **negator** _on \(L\) is a decreasing map \(\mathbf{N}:L\to L\) such that \(\mathbf{N}(0)=1\) and \(\mathbf{N}(1)=0\)._
A negator \(\mathbf{N}\) is called **involutive** (strong), if \(\mathbf{N}(\mathbf{N}(u))=u,\,\forall\,u\in L\). In addition, a negator \(\mathbf{N}\) is called **strict**, if \(\mathbf{N}\) is strictly decreasing and continuous, i.e., involutive (as every involutive negator is strictly decreasing and continuous).
The negator \(\mathbf{N}_{S}(u)=1-u\) on \(L=[0,1]\) is usually regarded as the standard negator. For a given negator \(\mathbf{N}\), an overlap map \(\theta\) and a grouping map \(\eta\) are dual with respect to \(\mathbf{N}\) if \(\eta(\mathbf{N}(u),\mathbf{N}(v))=\mathbf{N}(\theta(u,v)),\theta(\mathbf{N}(u ),\mathbf{N}(v))=\mathbf{N}(\eta(u,v)),\,\forall\,u,v\in L\).
**Definition 2.5**: _Let \(\mathbf{N}\) be a negator, \(\theta\) be an overlap map and \(\eta\) be a grouping map. Then_
1. _the_ **residual implicator** _induced by an overlap map_ \(\theta\) _is a map_ \(\mathcal{I}_{\theta}:L\times L\to L\) _such that_ \(\mathcal{I}_{\theta}(u,v)=\bigvee\{w\in L:\theta(u,w)\leq v\},\,\forall\,u,v\in L\)_, and_
2. _the_ **co-residual implicator** _induced by a grouping map_ \(\eta\) _is a map_ \(\mathcal{I}_{\eta}:L\times L\to L\) _such that_ \(\mathcal{I}_{\eta}(u,v)=\bigwedge\{w\in L:\eta(u,w)\geq v\},\,\forall\,u,v\in L\)_._
**Example 2.3**: _Let \(L=[0,1],\theta=\theta_{M},\eta=\eta_{M}\). Then for all \(u,v\in L\)_
1. _the residual implicator_ \(\mathcal{I}_{\theta_{M}}\) _is given as_ \(\mathcal{I}_{\theta_{M}}(u,v)=\begin{cases}1&\text{ if }u\leq v,\\ v&\text{ otherwise},\,and\end{cases}\)__
2. _the co-residual implicator_ \(\mathcal{I}_{\eta_{M}}\) _is given as_ \(\mathcal{I}_{\eta_{M}}(u,v)=\begin{cases}0&\text{ if }u\geq v,\\ v&\text{ otherwise}.\end{cases}\)
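These closed forms can be cross-checked against Definition 2.5 by brute force over a finite sample of \(L=[0,1]\); a minimal sketch (the grid resolution is an arbitrary choice):

```python
import numpy as np

W = np.linspace(0.0, 1.0, 101)  # finite sample of L = [0, 1]

def I_theta(u, v):
    """Residual implicator of theta_M = min: sup{w : min(u, w) <= v}."""
    return W[np.minimum(u, W) <= v].max()

def I_eta(u, v):
    """Co-residual implicator of eta_M = max: inf{w : max(u, w) >= v}."""
    return W[np.maximum(u, W) >= v].min()

print(I_theta(0.7, 0.4), I_theta(0.3, 0.4))  # 0.4 and 1.0, as in (i)
print(I_eta(0.7, 0.4), I_eta(0.3, 0.4))      # 0.0 and 0.4, as in (ii)
```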
**Lemma 2.1**: _Let \(\theta\) and \(\eta\) be overlap and grouping maps, respectively. Then \(\theta\) and \(\mathcal{I}_{\theta}\), \(\eta\) and \(\mathcal{I}_{\eta}\) form two adjoint pairs, respectively, i.e., for all \(u,v,w\in L,\,\theta(u,v)\leq w\Leftrightarrow u\leq\mathcal{I}_{\theta}(v,w), \,\eta(u,v)\geq w\Leftrightarrow u\geq\mathcal{I}_{\eta}(v,w)\), respectively._
**Lemma 2.2**: _Let \(\theta\) be an overlap map. Then for all \(u,v,w\in L\)_
1. \(\mathcal{I}_{\theta}(0,0)=\mathcal{I}_{\theta}(1,1)=1,\mathcal{I}_{\theta}(1, 0)=0\)_,_
2. \(\mathcal{I}_{\theta}(u,w)\geq\mathcal{I}_{\theta}(v,w),\,\mathcal{I}_{\theta} (w,u)\leq\mathcal{I}_{\theta}(w,v)\) _if_ \(u\leq v\)_,_
3. \(\mathcal{I}_{\theta}\) _is an_ \(OP\)_,_ \(NP\)_-residual implicator, i.e.,_ \(u\leq v\Leftrightarrow\mathcal{I}_{\theta}(u,v)=1,\mathcal{I}_{\theta}(1,u)=u\)_, respectively iff_ \(1\) _is a neutral element of_ \(\theta\)_,_
4. \(\mathcal{I}_{\theta}\) _is an_ \(IP\)_-residual implicator, i.e.,_ \(\mathcal{I}_{\theta}(u,u)=1\) _iff_ \(\theta\) _is a deflation overlap map,_
5. \(\mathcal{I}_{\theta}\) _is an_ \(EP\)_-residual implicator, i.e.,_ \(\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta}(v,w))=\mathcal{I}_{\theta}(v, \mathcal{I}_{\theta}(u,w))\) _iff_ \(\theta\) _is an_ \(EP\)_-overlap map._
**Lemma 2.3**: _Let \(\theta\) be an overlap map. Then for all \(u,v,w\in L,\{u_{i}:i\in J\},\{v_{i}:i\in J\}\subseteq L\)_
1. \(\theta(u,\mathcal{I}_{\theta}(u,v))\leq v,\mathcal{I}_{\theta}(u,\theta(u,v))\geq v,\mathcal{I}_{\theta}(\theta(u,v),0)=\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta}(v,0))\)_,_
2. \(\mathcal{I}_{\theta}(u,\bigwedge\limits_{i\in J}v_{i})=\bigwedge\limits_{i\in J }\mathcal{I}_{\theta}(u,v_{i}),\mathcal{I}_{\theta}(\bigvee\limits_{i\in J}u_ {i},v)=\bigwedge\limits_{i\in J}\mathcal{I}_{\theta}(u_{i},v)\)_,_
3. \(\mathcal{I}_{\theta}(u,\bigvee\limits_{i\in J}v_{i})\geq\bigvee\limits_{i\in J }\mathcal{I}_{\theta}(u,v_{i})\)_,_
4. \(\theta\) _is an_ \(EP\)_-overlap map iff_ \(\mathcal{I}_{\theta}(\theta(u,v),w)=\mathcal{I}_{\theta}(u,\mathcal{I}_{\theta} (v,w))\)_._
If \(\theta\) and \(\eta\) are dual with respect to an involutive negator \(\mathbf{N}\), then \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are dual with respect to the involutive negator \(\mathbf{N}\), i.e., \(\mathcal{I}_{\eta}(\mathbf{N}(u),\mathbf{N}(v))=\mathbf{N}(\mathcal{I}_{ \theta}(u,v)),\mathcal{I}_{\theta}(\mathbf{N}(u),\mathbf{N}(v))\)\(=\mathbf{N}(\mathcal{I}_{\eta}(u,v)),\,\forall\,u,v\in L\). Then we have the following dual properties of \(\mathcal{I}_{\eta}\) by the properties of \(\mathcal{I}_{\theta}\) as follows:
1. \(\mathcal{I}_{\eta}(0,0)=\mathcal{I}_{\eta}(1,1)=0,\mathcal{I}_{\eta}(0,1)=1\),
2. \(\mathcal{I}_{\eta}(u,w)\geq\mathcal{I}_{\eta}(v,w)\), \(\mathcal{I}_{\eta}(w,u)\leq\mathcal{I}_{\eta}(w,v)\) if \(u\leq v\),
3. \(\mathcal{I}_{\eta}\) is \(OP\) and \(NP\)-co-residual implicator, i.e., \(u\geq v\Leftrightarrow\mathcal{I}_{\eta}(u,v)=0\) and \(\mathcal{I}_{\eta}(0,u)=u\), respectively iff \(0\) is a neutral element of \(\eta\),
4. \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator, i.e., \(\mathcal{I}_{\eta}(u,u)=0\) iff \(\eta\) is a deflation grouping map,
5. \(\mathcal{I}_{\eta}\) is an \(EP\)-co-residual implicator, i.e., \(\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,w))=\mathcal{I}_{\eta}(v,\mathcal{I }_{\eta}(u,w))\) iff \(\eta\) is an \(EP\)-grouping map,
6. \(\eta(u,\mathcal{I}_{\eta}(u,v))\geq v,\mathcal{I}_{\eta}(u,\eta(u,v))\leq v, \mathcal{I}_{\eta}(\eta(u,v),1)=\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,1))\),
7. \(\mathcal{I}_{\eta}(u,\bigvee\limits_{i\in J}v_{i})=\bigvee\limits_{i\in J} \mathcal{I}_{\eta}(u,v_{i}),\mathcal{I}_{\eta}(\bigwedge\limits_{i\in J}u_{i}, v)=\bigvee\limits_{i\in J}\mathcal{I}_{\eta}(u_{i},v)\),
8. \(\mathcal{I}_{\eta}(u,\bigwedge\limits_{i\in J}v_{i})\leq\bigwedge\limits_{i\in J }\mathcal{I}_{\eta}(u,v_{i})\),
9. \(\eta\) is an \(EP\)-grouping map iff \(\mathcal{I}_{\eta}(\eta(u,v),w)=\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(v,w))\).
For any \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), \(\mathbf{N}_{\mathcal{I}_{\theta}}(u)=\mathcal{I}_{\theta}(u,0)\) and \(\mathbf{N}_{\mathcal{I}_{\eta}}(u)=\mathcal{I}_{\eta}(u,1),\forall\,u\in L\) are called the negators induced by \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), respectively. Next, we introduce the following notations which are going to be used in subsequent sections.
Given an overlap map \(\theta\), a grouping map \(\eta\), a residual implicator \(\mathcal{I}_{\theta}\), a co-residual implicator \(\mathcal{I}_{\eta}\), a negator \(\mathbf{N}\), and \(L\)-fuzzy sets \(f,g\in L^{X}\), we define \(L\)-fuzzy sets \(\theta(f,g),\eta(f,g),\mathcal{I}_{\theta}(f,g),\mathcal{I}_{\eta}(f,g)\) and \(\mathbf{N}(f)\) as follows:
\[\theta(f,g)(x) = \theta(f(x),g(x)),\forall\,x\in X,\] \[\eta(f,g)(x) = \eta(f(x),g(x)),\forall\,x\in X,\] \[\mathcal{I}_{\theta}(f,g)(x) = \mathcal{I}_{\theta}(f(x),g(x)),\forall\,x\in X,\] \[\mathcal{I}_{\eta}(f,g)(x) = \mathcal{I}_{\eta}(f(x),g(x)),\forall\,x\in X,\text{ and}\] \[(\mathbf{N}(f))(x) = \mathbf{N}(f(x)),\forall\,x\in X.\]
## 3 Direct \(F\)-transforms
Herein, we consider that \(\theta\) and \(\eta\) are overlap and grouping maps, and these are dual with respect to an involutive negator \(\mathbf{N}\). Also, \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are residual and co-residual implicators induced by \(\theta\) and \(\eta\), respectively, introduced as in Section 2. The main content of this section is to present the concepts of the direct \(F\)-transforms of \(L\)-fuzzy sets with respect to the above logic operations. Further, we study and investigate their relationships and discuss their basic properties. We start with the definition of \(L\)-fuzzy partition from [25].
**Definition 3.1**: _A collection \(\mathcal{P}\) of normal \(L\)-fuzzy sets \(\{A_{j}:j\in J\}\) is called an \(L\)-fuzzy partition of a nonempty set \(X\) if the corresponding collection of ordinary sets \(\{core(A_{j}):j\in J\}\) is a partition of \(X\). The pair \((X,\mathcal{P})\) is called a_ **space with \(L\)-fuzzy partition**_._
For an \(L\)-fuzzy partition \(\mathcal{P}=\{A_{j}:j\in J\}\), it is possible to associate the onto index map \(k:X\to J\) such that \(k(x)=j\) iff \(x\in core(A_{j})\).
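For illustration, the following sketch constructs an \(L\)-fuzzy partition of a finite set in the sense of Definition 3.1, together with the index map \(k\) (the universe, cores, and membership decay are arbitrary illustrative choices):

```python
import numpy as np

X = np.arange(10)                                        # X = {0, 1, ..., 9}
cores = [np.arange(0, 4), np.arange(4, 7), np.arange(7, 10)]
# normal L-fuzzy sets A_j: value 1 exactly on core(A_j), decaying with distance
A = np.array([[max(0.0, 1.0 - 0.3 * np.abs(x - c).min()) for x in X]
              for c in cores])

print((A == 1.0).sum(axis=0))    # each x lies in exactly one core -> partition
k = (A == 1.0).argmax(axis=0)    # index map: k(x) = j iff x in core(A_j)
print(k)
```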
The following definition introduces the direct \(F\)-transforms computed with \(\theta\), \(\eta\), \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\), where \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) are the residual and co-residual implicators induced by the overlap and grouping maps \(\theta\) and \(\eta\), respectively. We begin with the following.
**Definition 3.2**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of a set \(X\) and \(f\in L^{X}\). Then_
1. _the_ **(direct \(\theta\)-upper)**__\(F^{\uparrow,\theta}\)**-transform** _of_ \(f\) _computed with an overlap map_ \(\theta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\uparrow,\theta}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\theta\)_-upper)_ \(F^{\uparrow,\theta}\)_-transform is given by_ \[F^{\uparrow,\theta}_{j}[f]=\bigvee_{x\in X}\theta(A_{j}(x),f(x)),\]
2. _the_ **(direct \(\eta\)-lower)**__\(F^{\downarrow,\eta}\)**-transform** _of_ \(f\) _computed with a grouping map_ \(\eta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\downarrow,\eta}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\eta\)_-lower)_ \(F^{\downarrow,\eta}\)_-transform is given by_ \[F^{\downarrow,\eta}_{j}[f]=\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),f(x)),\]
3. _the_ **(direct \(\mathcal{I}_{\eta}\)-upper)**__\(F^{\uparrow,\mathcal{I}_{\eta}}\)**-transform** _of_ \(f\) _computed with a co-residual implicator_ \(\mathcal{I}_{\eta}\) _induced by a grouping map_ \(\eta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\mathcal{I}_{\eta}\)_-upper)_ \(F^{\uparrow,\mathcal{I}_{\eta}}\)_-transform is given by_ \[F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]=\bigvee_{x\in X}\mathcal{I}_{\eta}( \mathbf{N}(A_{j}(x)),f(x)),\,and\]
4. _the_ **(direct \(\mathcal{I}_{\theta}\)-lower)**__\(F^{\downarrow,\mathcal{I}_{\theta}}\)**-transform** _of_ \(f\) _computed with a residual implicator_ \(\mathcal{I}_{\theta}\) _induced by an overlap map_ \(\theta\) _over the_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _is a collection of lattice elements_ \(\{F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]:j\in J\}\) _and the_ \(j^{th}\) _component of (direct_ \(\mathcal{I}_{\theta}\)_-lower)_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transform is given by_ \[F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\bigwedge_{x\in X}\mathcal{I}_{\theta }(A_{j}(x),f(x)).\]
The direct upper \(F\)-transform computed with a \(t\)-norm and the direct lower \(F\)-transform computed with an \(R\)-implicator proposed in [22, 25, 34] are special cases of the \(F^{\uparrow,\theta}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transforms, respectively. Also, the direct lower \(F\)-transform computed with an \(S\)-implicator proposed in [34] is a special case of the \(F^{\downarrow,\eta}\)-transform. Among the above-introduced direct \(F\)-transforms, the \(F^{\uparrow,\mathcal{I}_{\eta}}\)-transform is new.
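On \(L=[0,1]\) with \(\theta_{M}=\min\), \(\eta_{M}=\max\) and \(\mathbf{N}(u)=1-u\), the four direct \(F\)-transforms reduce to elementary componentwise operations. The following self-contained sketch evaluates them over an illustrative finite partition (the set \(X\), the partition, and \(f\) are arbitrary choices for demonstration):

```python
import numpy as np

X = np.arange(10)
cores = [np.arange(0, 4), np.arange(4, 7), np.arange(7, 10)]
A = np.array([[max(0.0, 1.0 - 0.3 * np.abs(x - c).min()) for x in X]
              for c in cores])
f = (np.sin(X) + 1.0) / 2.0                      # an arbitrary L-fuzzy set on X

I_theta = lambda u, v: 1.0 if u <= v else v      # residual implicator of min
I_eta   = lambda u, v: 0.0 if u >= v else v      # co-residual implicator of max

F_up_theta  = [np.max(np.minimum(Aj, f)) for Aj in A]          # theta-upper
F_lo_eta    = [np.min(np.maximum(1.0 - Aj, f)) for Aj in A]    # eta-lower
F_up_Ieta   = [max(I_eta(1.0 - a, v) for a, v in zip(Aj, f)) for Aj in A]
F_lo_Itheta = [min(I_theta(a, v) for a, v in zip(Aj, f)) for Aj in A]
print(np.round(F_up_theta, 3), np.round(F_lo_Itheta, 3))
```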
**Example 3.1**: _Let \(L=\{0,p,q,r,s,t,u,1\}\) be a complete lattice such that \(0<p<r<t<u<1,0<p<q<s<u<1\) and \(\{q,r\}\) and \(\{s,t\}\) are pairwise incomparable (Figure 1). Then \((X,\mathcal{P})\) is a space with an \(L\)-fuzzy partition \(\mathcal{P}\), where \(X=\{x_{1},x_{2},x_{3}\}\) and \(\mathcal{P}=\{A_{1},A_{2},A_{3}\}\) such that \(A_{1}=\frac{1}{x_{1}}+\frac{p}{x_{2}}+\frac{q}{x_{3}}\), \(A_{2}=\frac{s}{x_{1}}+\frac{1}{x_{2}}+\frac{u}{x_{3}}\), \(A_{3}=\frac{s}{x_{1}}+\frac{p}{x_{2}}+\frac{1}{x_{3}}\). Further, let \(f\in L^{X}\) such that \(f=\frac{p}{x_{1}}+\frac{q}{x_{2}}+\frac{u}{x_{3}}\) and \(\mathbf{N}\) be an involutive negator such that \(\mathbf{N}(0)=1,\mathbf{N}(p)=u,\mathbf{N}(q)=t,\mathbf{N}(r)=s,\mathbf{N}(s)=r,\mathbf{N}(t)=q,\mathbf{N}(u)=p,\mathbf{N}(1)=0\). Then the direct \(F\)-transforms with respect to \(\theta_{M},\eta_{M},\mathcal{I}_{\eta_{M}},\mathcal{I}_{\theta_{M}}\) are \(F^{\uparrow,\theta_{M}}[f]=\{F^{\uparrow,\theta_{M}}_{1}[f]=q,F^{\uparrow,\theta_{M}}_{2}[f]=u,F^{\uparrow,\theta_{M}}_{3}[f]=u\}\), \(F^{\downarrow,\eta_{M}}[f]=\{F^{\downarrow,\eta_{M}}_{1}[f]=p,F^{\downarrow,\eta_{M}}_{2}[f]=r,F^{\downarrow,\eta_{M}}_{3}[f]=r\}\), \(F^{\uparrow,\mathcal{I}_{\eta_{M}}}[f]=\{F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{1}[f]=u,F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{2}[f]=r,F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{3}[f]=u\}\), \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}[f]=\{F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{1}[f]=p,F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{2}[f]=p\}\)._
**Remark 3.1**: _(i) If \(L=[0,1]\), \(\mathbf{N}=\mathbf{N}_{S},\theta=\theta_{M},\eta=\eta_{M},\mathcal{I}_{\eta}= \mathcal{I}_{\eta_{M}}\) and \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\), then the \(j^{th}\) components of \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transforms become as follows:_
\[F^{\uparrow,\theta_{M}}_{j}[f] = \bigvee_{x\in X}(A_{j}(x)\wedge f(x)),\] \[F^{\downarrow,\eta_{M}}_{j}[f] = \bigwedge_{x\in X}((1-A_{j}(x))\lor f(x)),\] \[F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\eta_{M}}((1-A_{j}(x)),f(x)),\,\text{ and}\] \[F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\theta_{M}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously \(F^{\uparrow,\theta_{M}}\) and \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}\)-transforms coincide with the special cases of direct upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively. Also, \(F^{\downarrow,\eta_{M}}\)-transform coincides with the special case of the direct lower \(F\)-transform proposed in [34]._

Figure 1: Diagram for lattice \(L\)
_(ii) If_ \(L=[0,1],\theta=\theta_{M},\eta=\eta_{M},\mathcal{I}_{\eta}=\mathcal{I}_{\eta_{M}}\) _and_ \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\)_, then the_ \(j^{th}\) _components of_ \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transforms become as follows:_
\[F^{\uparrow,\theta_{M}}_{j}[f] = \bigvee_{x\in X}(A_{j}(x)\wedge f(x)),\] \[F^{\downarrow,\eta_{M}}_{j}[f] = \bigwedge_{x\in X}(\mathbf{N}(A_{j}(x))\lor f(x)),\] \[F^{\uparrow,\mathcal{I}_{\eta_{M}}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\eta_{M}}(\mathbf{N}(A_{j}(x)),f(x )),\,\text{and}\] \[F^{\downarrow,\mathcal{I}_{\theta_{M}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\theta_{M}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously_ \(F^{\uparrow,\theta_{M}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta_{M}}}\)_-transforms coincide with the special cases of direct upper and lower_ \(F\)_-transforms proposed in_ _[_22, 25, 34_]__, respectively. Also,_ \(F^{\downarrow,\eta_{M}}\)_-transform coincides with the special case of the direct lower_ \(F\)_-transform proposed in_ _[_34_]__._
_(iii) If_ \(L=[0,1],\theta=\mathcal{T}\) _and_ \(\eta=\mathcal{S}\)_, where_ \(\mathcal{T},\mathcal{S}\) _are continuous_ \(t\)_-norm,_ \(t\)_-conorm with no nontrivial zero divisors, respectively, then the_ \(j^{th}\) _components of_ \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transforms become as follows:_
\[F^{\uparrow,\mathcal{T}}_{j}[f] = \bigvee_{x\in X}\mathcal{T}(A_{j}(x),f(x)),\] \[F^{\downarrow,\mathcal{S}}_{j}[f] = \bigwedge_{x\in X}\mathcal{S}(\mathbf{N}(A_{j}(x)),f(x)),\] \[F^{\uparrow,\mathcal{I}_{S}}_{j}[f] = \bigvee_{x\in X}\mathcal{I}_{\mathcal{S}}(\mathbf{N}(A_{j}(x)),f( x)),\,\text{and}\] \[F^{\downarrow,\mathcal{I}_{\mathcal{T}}}_{j}[f] = \bigwedge_{x\in X}\mathcal{I}_{\mathcal{T}}(A_{j}(x),f(x)),\, \forall\,j\in J,f\in L^{X}.\]
_Obviously_ \(F^{\uparrow,\mathcal{T}}\) _and_ \(F^{\downarrow,\mathcal{I}_{\mathcal{T}}}\)_-transforms coincide with the direct upper and lower_ \(F\)_-transforms computed with a_ \(t\)_-norm and an_ \(R\)_-implicator proposed in_ _[_22, 25, 34_]__, respectively. Also, the_ \(F^{\downarrow,\mathcal{S}}\)_-transform coincides with the direct lower_ \(F\)_-transform computed with an_ \(S\)_-implicator proposed in_ _[_34_]__._
From the above, it is clear that several existing direct \(F\)-transforms are special cases of the proposed direct \(F\)-transforms: some of them coincide with the proposed direct \(F\)-transforms outright, while others coincide with special cases thereof. That is to say, the proposed direct \(F\)-transforms are more general than the existing ones.
**Proposition 3.1**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \(\mathbf{N}\). Then for all \(j\in J,f\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[f]=\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\uparrow,\theta}[f])=F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]\)_, and_
* \(F_{j}^{\downarrow,\eta}[f]=\mathbf{N}(F_{j}^{\uparrow,\theta}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,\theta}[\mathbf{N}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]) = \mathbf{N}(\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),(\mathbf{N }(f))(x)))\] \[= \mathbf{N}(\bigwedge_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\mathbf{N }(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\eta(\mathbf{N}(A_{j}(x)),\mathbf{N}( f(x))))\] \[= \bigvee_{x\in X}\theta(A_{j}(x),f(x))\] \[= F_{j}^{\uparrow,\theta}[f].\]
Thus \(F_{j}^{\uparrow,\theta}[f]=\mathbf{N}(F_{j}^{\downarrow,\eta}[\mathbf{N}(f)])\), or that \(\mathbf{N}(F_{j}^{\uparrow,\theta}[f])=F_{j}^{\downarrow,\eta}[\mathbf{N}(f)]\). Similarly, we can show that \(F_{j}^{\downarrow,\eta}[f]=\mathbf{N}(F_{j}^{\uparrow,\theta}[\mathbf{N}(f)])\), or that, \(\mathbf{N}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,\theta}[\mathbf{N}(f)]\).
**Proposition 3.2**: _Let \(\mathcal{I}_{\theta}\) and \(\mathcal{I}_{\eta}\) be dual with respect to an involutive negator \(\mathbf{N}\). Then for all \(j\in J,f\in L^{X}\)_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])=F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathbf{N}(f)]\)_, and_
* \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\mathbf{N}(F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)])\)_, i.e.,_ \(\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])=F_{j}^{\uparrow, \mathcal{I}_{\eta}}[\mathbf{N}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathbf{N}(f)]) = \mathbf{N}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),( \mathbf{N}(f))(x)))\] \[= \mathbf{N}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathbf{ N}(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\mathcal{I}_{\theta}(A_{j}(x),\mathbf{ N}(f(x))))\] \[= \bigvee_{x\in X}\mathbf{N}(\mathcal{I}_{\theta}(\mathbf{N}( \mathbf{N}(A_{j}(x))),\mathbf{N}(f(x))))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x))\] \[= F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f].\]
Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\mathbf{N}(F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[\mathbf{N}(f)])\), or that \(\mathbf{N}(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])=F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[\mathbf{N}(f)]\). Similarly, we can prove that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\mathbf{N}(F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)])\), or that, \(\mathbf{N}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])=F_{j}^{\uparrow,\mathcal{ I}_{\eta}}[\mathbf{N}(f)]\).
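For the minimum/maximum pair and the standard negator of the sketch above, \(\theta_{M}\) and \(\eta_{M}\) (and likewise \(\mathcal{I}_{\theta_{M}}\) and \(\mathcal{I}_{\eta_{M}}\)) are dual with respect to \(\mathbf{N}\), so Propositions 3.1 and 3.2 can be checked numerically, continuing the previous sketch; the small tolerance merely absorbs the floating-point round-off of \(\mathbf{N}\).

```python
# Numerical check of Propositions 3.1(i) and 3.2(i), continuing the
# previous sketch: theta/eta and I_theta/I_eta are dual w.r.t. N.
Nf = [N(v) for v in f]

for A in P:
    up_th   = max(theta(A[x], f[x])    for x in X)
    down_et = min(eta(N(A[x]), Nf[x])  for x in X)
    assert abs(up_th - N(down_et)) < 1e-9          # Proposition 3.1(i)

    up_Ie   = max(I_eta(N(A[x]), f[x]) for x in X)
    down_It = min(I_theta(A[x], Nf[x]) for x in X)
    assert abs(up_Ie - N(down_It)) < 1e-9          # Proposition 3.2(i)
```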
The above two propositions show that \(F_{j}^{\uparrow,\theta}\) and \(F_{j}^{\downarrow,\eta}\), as well as \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}\) and \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}\), are dual with respect to \(\mathbf{N}\). In general, the duality of \(F_{j}^{\uparrow,\theta}\) and \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}\), and of \(F_{j}^{\downarrow,\eta}\) and \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}\), does not hold with respect to \(\mathbf{N}\). They do, however, satisfy the following result, for which we assume \(\bigwedge\limits_{v\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(u,v),v)=u\), \(\forall\,u\in L\).
**Proposition 3.3**: _Let \(\mathbf{N}\) be an involutive negator, \(\theta\) and \(\eta\) be \(EP\)-overlap and \(EP\)-grouping maps, respectively. Then for \(j\in J,u\in L,\boldsymbol{u},f\in L^{X}\)_
1. \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{u\in L}\mathcal{I }_{\theta}(F_{j}^{\uparrow,\theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\)_,_ \(F_{j}^{\uparrow,\theta}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{ \downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\)_, and_
2. \(F_{j}^{\downarrow,\eta}[f]=\bigvee\limits_{u\in L}\mathcal{I}_{\eta}(F_{j}^{ \uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(f,\boldsymbol{u})],u)\)_,_ \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{u\in L}\mathcal{I}_{\eta}(F_{j }^{\downarrow,\eta}[\mathcal{I}_{\eta}(f,\boldsymbol{u})],u)\)_._
**Proof:** Let \(u\in L\) and \(f\in L^{X}\). Then from Definition 3.2
\[\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{\uparrow, \theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u) = \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigvee\limits_{x\in X }\theta(A_{j}(x),\mathcal{I}_{\theta}(f,\boldsymbol{u})(x)),u)\] \[= \bigwedge\limits_{u\in L}\bigwedge\limits_{x\in X}\mathcal{I}_{ \theta}(\theta(A_{j}(x),\mathcal{I}_{\theta}(f(x),u)),u)\] \[= \bigwedge\limits_{u\in L}\bigwedge\limits_{x\in X}\mathcal{I}_{ \theta}(A_{j}(x),\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(f(x),u),u))\] \[= \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\bigwedge \limits_{u\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta}(f(x),u),u))\] \[= \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),f(x))\] \[= F^{\downarrow,\mathcal{I}_{\theta}}[f].\]
Thus \(F^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{ \theta}(F_{j}^{\uparrow,\theta}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\) and
\[\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u) = \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathcal{I}_{\theta}(f,\boldsymbol{u})(x)),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathcal{I}_{\theta}(f(x),u)),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\bigwedge\limits_{ x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u),u)\] \[= \bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(\mathcal{I}_{\theta} (\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x)),u),u)\] \[= \bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\] \[= F^{\uparrow,\theta}[f].\]
Thus \(F^{\uparrow,\theta}[f]=\bigwedge\limits_{u\in L}\mathcal{I}_{\theta}(F_{j}^{ \downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(f,\boldsymbol{u})],u)\).
(ii) Let \(u\in L\) and \(f\in L^{X}\). Then from Definition 3.2 and Propositions 3.1 and 3.2
\[F^{\downarrow,\eta}[f] = {\bf N}(F^{\uparrow,\theta}[{\bf N}(f)])\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(F^{\downarrow,{\cal I} _{\theta}}_{j}[{\cal I}_{\theta}({\bf N}(f),{\bf u})],u))\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(\bigwedge_{x\in X}{ \cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\bf N}({\cal I}_{\theta}(\bigwedge_{x\in X}{\cal I }_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\mathbf{N}(\bigwedge_{x\in X}{ \cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\bf N}({\cal I} _{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\cal I}_{\eta}( {\bf N}(A_{j}(x)),{\bf N}({\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigvee_{x\in X}{\cal I}_{\eta}( {\bf N}(A_{j}(x)),{\cal I}_{\eta}(f,{\bf N}({\bf u}))(x)),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(F^{\uparrow,{\cal I}_{\eta}}[{ \cal I}_{\eta}(f,{\bf N}({\bf u}))],{\bf N}(u)).\]
Thus \(F^{\downarrow,\eta}[f]=\bigvee_{u\in L}{\cal I}_{\eta}(F^{\uparrow,{\cal I}_{ \eta}}[{\cal I}_{\eta}(f,{\bf N}({\bf u}))],{\bf N}(u))\) and
\[F^{\uparrow,{\cal I}_{\eta}}[f] = {\bf N}(F^{\downarrow,{\cal I}_{\theta}}[{\bf N}(f)])\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(F^{\uparrow,\theta}_{j}[{\cal I}_{\theta}({\bf N}(f),{\bf u})],u))\] \[= {\bf N}(\bigwedge_{u\in L}{\cal I}_{\theta}(\bigvee_{x\in X}\theta(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\bf N}({\cal I}_{\theta}(\bigvee_{x\in X}\theta(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x)),u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}({\bf N}(\bigvee_{x\in X}\theta(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigwedge_{x\in X}{\bf N}(\theta(A_{j}(x),{\cal I}_{\theta}({\bf N}(f),{\bf u})(x))),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(\bigwedge_{x\in X}\eta({\bf N}(A_{j}(x)),{\cal I}_{\eta}(f,{\bf N}({\bf u}))(x)),{\bf N}(u))\] \[= \bigvee_{u\in L}{\cal I}_{\eta}(F^{\downarrow,\eta}[{\cal I}_{\eta}(f,{\bf N}({\bf u}))],{\bf N}(u)).\]
Thus \(F^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}(F^{\downarrow,\eta}[\mathcal{I}_{\eta}(f,\mathbf{N}(\mathbf{u}))],\mathbf{N}(u))\).
From the above three results, we obtain the following result, which presents the connections between \(F^{\uparrow,\theta}\) and \(F^{\uparrow,\mathcal{I}_{\eta}}\), and between \(F^{\downarrow,\eta}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\).
**Proposition 3.4**: _Let \(\mathbf{N}\) be an involutive negator. Then for \(j\in J,u\in L,\boldsymbol{u},f\in L^{X}\)_
1. \(F^{\uparrow,\theta}_{j}[f]=\bigwedge_{u\in L}\mathcal{I}_{\theta}(\mathbf{N}(F^ {\uparrow,\mathcal{I}_{\eta}}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f),\mathbf{N} (\boldsymbol{u}))]),u)\)_,_
2. \(F^{\uparrow,\mathcal{I}_{\eta}}_{j}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}( \mathbf{N}(F^{\uparrow,\theta}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f),\mathbf{N }(\boldsymbol{u}))]),u)\)_,_
3. \(F^{\downarrow,\eta}_{j}[f]=\bigvee_{u\in L}\mathcal{I}_{\eta}(\mathbf{N}(F^{ \downarrow,\mathcal{I}_{\theta}}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f), \mathbf{N}(\boldsymbol{u}))]),u)\)_, and_
4. \(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\bigwedge_{u\in L}\mathcal{I}_{ \theta}(\mathbf{N}(F^{\downarrow,\eta}_{j}[\mathcal{I}_{\eta}(\mathbf{N}(f), \mathbf{N}(\boldsymbol{u}))]),u)\)_._
**Proof:** This follows from Propositions 3.1, 3.2 and 3.3.
The following are towards the duality of \(F_{j}^{\uparrow,\theta}\) and \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}\), and of \(F_{j}^{\downarrow,\eta}\) and \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}\), with respect to the involutive negators \(\mathbf{N}_{\mathcal{I}_{\theta}},\mathbf{N}_{\mathcal{I}_{\eta}}\), respectively.
**Proposition 3.5**: _Let \(\mathbf{N}_{\mathcal{I}_{\theta}}\) be an involutive negator such that \(\mathbf{N}_{\mathcal{I}_{\theta}}(.)=\mathcal{I}_{\theta}(.,0)\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F^{\uparrow,\theta}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\)_, i.e.,_ \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\uparrow,\theta}_{j}[f])=F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\)_, and_
2. \(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\uparrow,\theta}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\)_, i.e.,_ \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[f])=F^{\uparrow,\theta}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow,\mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]) = \mathbf{N}_{\mathcal{I}_{\theta}}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),(\mathbf{N}_{\mathcal{I}_{\theta}}(f))(x)))\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathbf{N}_{\mathcal{I}_{\theta}}(f(x))),0)\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\mathcal{I}_{\theta}(f(x),0)),0)\] \[= \mathcal{I}_{\theta}(\bigwedge_{x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),0),0)\] \[= \mathcal{I}_{\theta}(\mathcal{I}_{\theta}(\bigvee_{x\in X}\theta(A_{j}(x),f(x)),0),0)\] \[= \mathbf{N}_{\mathcal{I}_{\theta}}(\mathbf{N}_{\mathcal{I}_{\theta}}(\bigvee_{x\in X}\theta(A_{j}(x),f(x))))\] \[= \bigvee_{x\in X}\theta(A_{j}(x),f(x))\] \[= F^{\uparrow,\theta}_{j}[f].\]
Thus \(F^{\uparrow,\theta}_{j}[f]=\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)])\), or that \(\mathbf{N}_{\mathcal{I}_{\theta}}(F^{\uparrow,\theta}_{j}[f])=F^{\downarrow, \mathcal{I}_{\theta}}_{j}[\mathbf{N}_{\mathcal{I}_{\theta}}(f)]\).
(ii) Let \(j\in J\) and \(f\in L^{X}\). Then from Definition 3.2
\[{\bf N}_{{\cal I}_{\theta}}(F_{j}^{\uparrow,\theta}[{\bf N}_{{\cal I}_{\theta}}(f)]) = {\bf N}_{{\cal I}_{\theta}}(\bigvee_{x\in X}\theta(A_{j}(x),({\bf N}_{{\cal I}_{\theta}}(f))(x)))\] \[= {\cal I}_{\theta}(\bigvee_{x\in X}\theta(A_{j}(x),{\bf N}_{{\cal I}_{\theta}}(f(x))),0)\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),{\cal I}_{\theta}({\bf N}_{{\cal I}_{\theta}}(f(x)),0))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),{\bf N}_{{\cal I}_{\theta}}({\bf N}_{{\cal I}_{\theta}}(f(x))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(A_{j}(x),f(x))\] \[= F_{j}^{\downarrow,{\cal I}_{\theta}}[f].\]
Thus \(F_{j}^{\downarrow,{\cal I}_{\theta}}[f]={\bf N}_{{\cal I}_{\theta}}(F_{j}^{\uparrow,\theta}[{\bf N}_{{\cal I}_{\theta}}(f)])\), or that \({\bf N}_{{\cal I}_{\theta}}(F_{j}^{\downarrow,{\cal I}_{\theta}}[f])=F_{j}^{\uparrow,\theta}[{\bf N}_{{\cal I}_{\theta}}(f)]\).
**Proposition 3.6**: _Let \({\bf N}_{{\cal I}_{\eta}}\) be an involutive negator such that \({\bf N}_{{\cal I}_{\eta}}(.)={\cal I}_{\eta}(.,1)\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\downarrow,\eta}[f]={\bf N}_{{\cal I}_{\eta}}(F_{j}^{\uparrow,{\cal I}_ {\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(F_{j}^{\downarrow,\eta}[f])=F_{j}^{\uparrow,{\cal I} _{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_, and_
2. \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]={\bf N}_{{\cal I}_{\eta}}(F_{j}^{\downarrow,\eta}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(F_{j}^{\uparrow,{\cal I}_{\eta}}[f])=F_{j}^{\downarrow,\eta}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_._
**Proof:** Similar to that of Proposition 3.5.
Below, we discuss basic results of \(F_{j}^{\uparrow,\theta},F_{j}^{\downarrow,\eta},F_{j}^{\uparrow,{\cal I}_{\eta}}\) and \(F_{j}^{\downarrow,{\cal I}_{\theta}}\).
**Proposition 3.7**: _Let \({\cal P}=\{A_{j}:j\in J\},{\cal P}^{\prime}=\{B_{j^{\prime}}:j^{\prime}\in J\}\) be L-fuzzy partitions of \(X\) and \(A_{j}\leq B_{j^{\prime}},\,\forall\,j,j^{\prime}\in J\). Then for all \(f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[f]\leq F_{j^{\prime}}^{\uparrow,\theta}[f],F_{j}^{ \downarrow,\eta}[f]\geq F_{j^{\prime}}^{\downarrow,\eta}[f]\)_, and_
2. \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq F_{j^{\prime}}^{\uparrow,{\cal I}_{ \eta}}[f],F_{j}^{\downarrow,{\cal I}_{\theta}}[f]\geq F_{j^{\prime}}^{ \downarrow,{\cal I}_{\theta}}[f]\)_._
**Proof:** (i) Let \(j\in J\) and \(f\in L^{X}\). Then \(F_{j}^{\uparrow,\theta}[f]=\bigvee_{x\in X}\theta(A_{j}(x),f(x))\leq\bigvee_{x \in X}\theta(B_{j^{\prime}}(x),f(x))=F_{j^{\prime}}^{\uparrow,\theta}[f].\) Thus \(F_{j}^{\uparrow,\theta}[f]\leq F_{j^{\prime}}^{\uparrow,\theta}[f]\). Similarly, we can show \(F_{j}^{\downarrow,\eta}[f]\geq F_{j^{\prime}}^{\downarrow,\eta}[f]\).
(ii) Let \(j\in J\) and \(f\in L^{X}\). Then \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]=\bigvee_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq\bigvee_{x\in X}{\cal I}_{\eta}({\bf N}(B_{j^{\prime}}(x)),f(x))=F_{j^{\prime}}^{\uparrow,{\cal I}_{\eta}}[f],\) since \(\mathbf{N}(A_{j})\geq\mathbf{N}(B_{j^{\prime}})\) and \({\cal I}_{\eta}\) is decreasing in its first argument. Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq F_{j^{\prime}}^{\uparrow,{\cal I}_{\eta}}[f]\). Similarly, we can show \(F_{j}^{\downarrow,{\cal I}_{\theta}}[f]\geq F_{j^{\prime}}^{\downarrow,{\cal I}_{\theta}}[f]\).
**Proposition 3.8**: _Let \(\mathcal{P}\) be an L-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X}\) and \(x_{j}\in core(A_{j})\)_

* \(F_{j}^{\uparrow,\theta}[f]\geq f(x_{j}),F_{j}^{\downarrow,\eta}[f]\leq f(x_{j})\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\geq f(x_{j}),F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\leq f(x_{j})\)_._
**Proposition 3.9**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f,g\in L^{X}\) and \(f\leq g\)_
* \(F_{j}^{\uparrow,\theta}[f]\leq F_{j}^{\uparrow,\theta}[g],F_{j}^{\downarrow, \eta}[f]\leq F_{j}^{\downarrow,\eta}[g]\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\leq F_{j}^{\uparrow,\mathcal{I}_{\eta }}[g],F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\leq F_{j}^{\downarrow,\mathcal{ I}_{\theta}}[g]\)_._
**Proof:** (i) Let \(j\in J,f,g\in L^{X}\) and \(f\leq g\). Then \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\leq\bigvee\limits_{x\in X}\theta(A_{j}(x),g(x))=F_{j}^{\uparrow,\theta}[g].\) Thus \(F_{j}^{\uparrow,\theta}[f]\leq F_{j}^{\uparrow,\theta}[g]\). Similarly, we can show that \(F_{j}^{\downarrow,\eta}[f]\leq F_{j}^{\downarrow,\eta}[g]\).
(ii) Let \(j\in J,f,g\in L^{X}\) and \(f\leq g\). Then \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f(x))\leq\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),g(x))=F_{j}^{\uparrow,\mathcal{I}_{\eta}}[g].\) Thus \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[g]\). Similarly, we can show that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\leq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[g]\).
**Proposition 3.10**: _Let \(\theta\) and \(\eta\) be \(EP\)-overlap and \(EP\)-grouping maps, respectively. Then for all \(u\in L,\textbf{u},f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)]=\theta(u,F_{j}^{\uparrow,\theta}[f]),F_{j}^{\downarrow,\eta}[\eta(\textbf{u},f)]=\eta(u,F_{j}^{\downarrow,\eta}[f])\)_, and_
2. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(\textbf{u},f)]=\mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]),F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(\textbf{u},f)]=\mathcal{I}_{\theta}(u,F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])\)_._
**Proof:** (i) Let \(u\in L\) and \(\textbf{u},f\in L^{X}\). Then
\[F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)] = \bigvee_{x\in X}\theta(A_{j}(x),\theta(\textbf{u},f)(x))=\bigvee _{x\in X}\theta(A_{j}(x),\theta(u,f(x)))\] \[= \bigvee_{x\in X}\theta(u,\theta(A_{j}(x),f(x)))=\theta(u,\bigvee _{x\in X}\theta(A_{j}(x),f(x)))\] \[= \theta(u,F_{j}^{\uparrow,\theta}[f]).\]
Therefore \(F_{j}^{\uparrow,\theta}[\theta(\textbf{u},f)]=\theta(u,F_{j}^{\uparrow, \theta}[f])\). Similarly, we can show \(F_{j}^{\downarrow,\eta}[\eta(\textbf{u},f)]=\eta(u,F_{j}^{\downarrow,\eta}[f])\).
(ii) Let \(u\in L\) and \(\textbf{u},f\in L^{X}\). Then
\[F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(\textbf{u},f)] = \bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(\textbf{u},f)(x))=\bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(u,f(x)))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f(x)))=\mathcal{I}_{\eta}(u,\bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f(x)))\] \[= \mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]).\]
Therefore \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}(\textbf{u},f)]= \mathcal{I}_{\eta}(u,F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])\). Similarly, we can show \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\mathcal{I}_{\theta}(\textbf{u},f)]= \mathcal{I}_{\theta}(u,F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f])\).
**Proposition 3.11**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J\) and \(\{f_{k}:k\in J\}\subseteq L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{ \uparrow,\theta}[f_{k}],F_{j}^{\downarrow,\eta}[\bigwedge_{k\in J}f_{k}]= \bigwedge_{k\in J}F_{j}^{\downarrow,\eta}[f_{k}]\)_, and_
2. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_ {j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}],F_{j}^{\downarrow,\mathcal{I}_{ \theta}}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\mathcal{I} _{\theta}}[f_{k}]\)_._
**Proof:** (i) Let \(\{f_{k}:k\in J\}\subseteq L^{X}\). Then \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{x\in X}\theta(A_{j}(x),\bigvee_{k\in J}f_{k}(x))=\bigvee_{x\in X}\bigvee_{k\in J}\theta(A_{j}(x),f_{k}(x))=\bigvee_{k\in J}F_{j}^{\uparrow,\theta}[f_{k}]\). Therefore \(F_{j}^{\uparrow,\theta}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{\uparrow,\theta}[f_{k}]\). Similarly, we obtain \(F_{j}^{\downarrow,\eta}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\eta}[f_{k}]\).
(ii) Let \(\{f_{k}:k\in J\}\subseteq L^{X}\). Then \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{x\in X}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),\bigvee_{k\in J}f_{k}(x))=\bigvee_{x\in X}\bigvee_{k\in J}\mathcal{I}_{\eta}(\textbf{N}(A_{j}(x)),f_{k}(x))=\bigvee_{k\in J}F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}]\). Therefore \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\bigvee_{k\in J}f_{k}]=\bigvee_{k\in J}F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f_{k}]\). Similarly, we obtain \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\bigwedge_{k\in J}f_{k}]=\bigwedge_{k\in J}F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f_{k}]\).
**Proposition 3.12**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\theta(1,u),F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\textbf{u}]=\mathcal{I}_{\theta}(1,u)\)_, and_
* \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\)_. In addition, for a strict negator_ \(\mathbf{N}\)_,_ \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\)_._
**Proof:** (i) Let \(u\in L\) and \(\textbf{u}\in L^{X}\). Then \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\textbf{u}(x))=\theta(\bigvee\limits_{x\in X}A_{j}(x),u)\)
\(=\theta(1,u)\). Thus \(F_{j}^{\uparrow,\theta}[\textbf{u}]=\theta(1,u)\). Similarly, we can show \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[\textbf{u}]=\mathcal{I}_{\theta}(1,u)\).
(ii) Let \(u\in L\) and \(\textbf{u}\in L^{X}\). Then \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\textbf{u}(x))=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\). Now, let \(\mathbf{N}\) be a strict negator. Then we obtain \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)=\eta(\mathbf{N}(\bigvee\limits_{x\in X}A_{j}(x)),u)=\eta(\mathbf{N}(1),u)=\eta(0,u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\). Similarly, we can show that \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)\) and, for a strict negator \(\mathbf{N}\), \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\).
**Corollary 3.2**: _Let the conditions of Proposition 3.12 be fulfilled and \(1,0\) be neutral elements of \(\theta,\eta\), respectively. Then for all \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=u,F_{j}^{\downarrow,\eta}[\textbf{u}]=u\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=u,F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\textbf{u}]=u\)_._
**Proof:** Let \(1,0\) be neutral elements of \(\theta,\eta\), respectively. Then we have, \(\theta(1,u)=u,\eta(0,u)=u,\mathcal{I}_{\eta}(0,u)=u\) and \(\mathcal{I}_{\theta}(1,u)=u\). Also, from Proposition 3.12, we have
* \(F_{j}^{\uparrow,\theta}[\textbf{u}]=u,F_{j}^{\downarrow,\eta}[\textbf{u}]=u\), and
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=u,F_{j}^{\downarrow,\mathcal{I }_{\theta}}[\textbf{u}]=u\).
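On the running numerical example, Proposition 3.12 and Corollary 3.2 can be observed directly: for \(\theta_{M},\eta_{M}\) the neutral elements are \(1\) and \(0\), and \(\mathbf{N}_{S}\) is strict, so every direct transform maps a constant \(L\)-fuzzy set to its value. The constant \(u\) below is hypothetical.

```python
# Check of Proposition 3.12 / Corollary 3.2, continuing the earlier
# snippets: for min/max (neutral elements 1 and 0, strict negator N),
# all four direct transforms send the constant L-fuzzy set u to u.
u = 0.35
cu = [u for _ in X]
assert all(max(theta(A[x], cu[x])    for x in X) == u for A in P)
assert all(min(eta(N(A[x]), cu[x])   for x in X) == u for A in P)
assert all(max(I_eta(N(A[x]), cu[x]) for x in X) == u for A in P)
assert all(min(I_theta(A[x], cu[x])  for x in X) == u for A in P)
```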
From Proposition 3.12, we have the following.
**Proposition 3.13**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(u\in L,\textbf{u}\in L^{X}\)_
* \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\) _iff_ \(F_{j}^{\downarrow,\eta}[0_{X}]=0\)_, and_
* \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[\textbf{u}]=\mathcal{I}_{\eta}(0,u)\) _iff_ \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[1_{X}]=1\)_._
**Proof:** (i) Let \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\), \(\forall\,u\in L,\textbf{u}\in L^{X}\). Then by assuming \(\textbf{u}=0_{X}\), we have \(F_{j}^{\downarrow,\eta}[0_{X}]=\eta(0,0)=0\). Thus \(F_{j}^{\downarrow,\eta}[0_{X}]=0\). Conversely, from Proposition 3.12(ii), we have \(F_{j}^{\downarrow,\eta}[0_{X}]=0\)\(\Leftrightarrow\)\(\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),0)=0\)\(\Leftrightarrow\)\(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x))=0.\) Therefore \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(\bigwedge\limits_{x\in X}\mathbf{N}(A_{j}(x)),u)=\eta(0,u)\). Thus \(F_{j}^{\downarrow,\eta}[\textbf{u}]=\eta(0,u)\).
(ii) Let \(F_{j}^{\uparrow,{\cal I}_{\eta}}[{\bf u}]={\cal I}_{\eta}(0,u)\), \(\forall\,u\in L,{\bf u}\in L^{X}\). Then by assuming \({\bf u}=1_{X}\), we have \(F_{j}^{\uparrow,{\cal I}_{\eta}}[1_{X}]={\cal I}_{\eta}(0,1)=1.\) Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[1_{X}]=1\). Conversely, from Proposition 3.12(ii), we have \(F_{j}^{\uparrow,{\cal I}_{\eta}}[1_{X}]=1\Leftrightarrow{\cal I}_{\eta}(\bigwedge\limits_{x\in X}{\bf N}(A_{j}(x)),1)=1\Leftrightarrow\bigwedge\limits_{x\in X}{\bf N}(A_{j}(x))=0\). Therefore \(F_{j}^{\uparrow,{\cal I}_{\eta}}[{\bf u}]={\cal I}_{\eta}(\bigwedge\limits_{x\in X}{\bf N}(A_{j}(x)),u)={\cal I}_{\eta}(0,u)\). Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[{\bf u}]={\cal I}_{\eta}(0,u)\).
The following results characterize the components of the direct \(F\)-transforms of an original \(L\)-fuzzy set as its lower and upper mean values, which are the greatest and the least elements of certain sets, respectively.
**Proposition 3.14**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\) and \(f\in L^{X}\). Then_
* _the_ \(j^{th}\) _component of_ \(F^{\uparrow,\theta}\)_-transform of_ \(f\) _is the least element of the set_ \(U_{j}=\{u\in L:\theta(A_{j}(x),f(x))\leq u,\,\forall x\in X\},\,j\in J\)_, and_
* _the_ \(j^{th}\) _component of_ \(F^{\downarrow,\eta}\)_-transform of_ \(f\) _is the greatest element of the set_ \(V_{j}=\{v\in L:v\leq\eta({\bf N}(A_{j}(x)),f(x)),\,\forall x\in X\},\,j\in J\)_._
**Proof:** (i) To prove this, we need to show that \(F_{j}^{\uparrow,\theta}[f]\in U_{j}\) and \(F_{j}^{\uparrow,\theta}[f]\leq u\). It follows from Definition 3.2(i) that \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\geq \theta(A_{j}(x),f(x))\). Thus \(F_{j}^{\uparrow,\theta}[f]\in U_{j}\). Now, let \(u\in L,x\in X\). Then from the given condition \(\theta(A_{j}(x),f(x))\leq u\Rightarrow\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x))\leq u\Rightarrow F_{j}^{\uparrow,\theta}[f]\leq u\). Thus the \(j^{th}\) component of \(F^{\uparrow,\theta}\)-transform is the least element of the set \(U_{j}\).
(ii) To prove this, we need to show that \(F_{j}^{\downarrow,\eta}[f]\in V_{j}\) and \(v\leq F_{j}^{\downarrow,\eta}[f]\). It follows from Definition 3.2(ii) that \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta({\bf N}(A_{j}(x)),f(x)) \leq\eta({\bf N}(A_{j}(x)),f(x))\). Thus \(F_{j}^{\downarrow,\eta}[f]\in V_{j}\). Now, let \(v\in L,x\in X\). Then from the given condition \(v\leq\eta({\bf N}(A_{j}(x)),f(x))\Rightarrow v\leq\bigwedge\limits_{x\in X} \eta({\bf N}(A_{j}(x)),f(x))\Rightarrow v\leq F_{j}^{\downarrow,\eta}[f]\). Thus the \(j^{th}\) component of \(F^{\downarrow,\eta}\)-transform is the greatest element of the set \(V_{j}\).
**Proposition 3.15**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\) and \(f\in L^{X}\). Then_
* _the_ \(j^{th}\) _component of_ \(F^{\uparrow,{\cal I}_{\eta}}\)_-transform of_ \(f\) _is the least element of the set_ \(U_{j}=\{u\in L:{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u,\,\forall x\in X\}, \,j\in J\)_, and_
* _the_ \(j^{th}\) _component of_ \(F^{\downarrow,{\cal I}_{\theta}}\)_-transform of_ \(f\) _is the greatest element of the set_ \(V_{j}=\{v\in L:v\leq{\cal I}_{\theta}(A_{j}(x),f(x)),\,\forall x\in X\},\,j\in J\)_._
**Proof:** (i) To prove this, we need to show that \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\in U_{j}\) and \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq u\). It follows from Definition 3.2(i) that \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]=\bigvee\limits_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\geq{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\). Thus \(F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\in U_{j}\). Now, let \(u\in L,x\in X\). Then from the given condition \({\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u\Rightarrow\bigvee\limits_{x\in X}{\cal I}_{\eta}({\bf N}(A_{j}(x)),f(x))\leq u\Rightarrow F_{j}^{\uparrow,{\cal I}_{\eta}}[f]\leq u\). Thus the \(j^{th}\) component of \(F^{\uparrow,\mathcal{I}_{\eta}}\)-transform is the least element of the set \(U_{j}\).
(ii) To prove this, we need to show that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\in V_{j}\) and \(v\leq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\). It follows from Definition 3.2(ii) that \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I }_{\theta}(A_{j}(x),f(x))\leq\mathcal{I}_{\theta}(A_{j}(x),f(x))\). Thus \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\in V_{j}\). Now, let \(v\in L,x\in X\). Then from the given condition \(v\leq\mathcal{I}_{\theta}(A_{j}(x),f(x))\Rightarrow v\leq\bigwedge\limits_{x \in X}\mathcal{I}_{\theta}(A_{j}(x),f(x))\Rightarrow v\leq F_{j}^{\downarrow, \mathcal{I}_{\theta}}[f]\). Thus the \(j^{th}\) component of \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transform is the greatest element of the set \(V_{j}\).
**Proposition 3.16**: _Let conditions of Proposition 3.14 be fulfilled, \(\theta\) and \(\eta\) be deflation overlap and deflation grouping maps, respectively. Then for all \(u\in U_{j},v\in V_{j}\)_
1. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u)=1\) _and the_ \(j^{th}\) _component of_ \(F^{\uparrow,\theta}\)_-transform is the smallest such_ \(u\)_, and_
2. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(\eta(\mathbf{N}(A_{j}(x)),f(x)),v)=0\) _and the_ \(j^{th}\) _component of_ \(F^{\downarrow,\eta}\)_-transform is the greatest such_ \(v\)_._
**Proof:** (i) Let \(j\in J\). Then for all \(x\in X\), \(\theta(A_{j}(x),f(x))\leq u\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(\theta(A_{j}(x),f(x)),u)=1\), as \(\mathcal{I}_{\theta}\) is an \(IP\)-residual implicator.
(ii) Let \(j\in J\). Then for all \(x\in X\), \(\eta(\mathbf{N}(A_{j}(x)),f(x))\geq v\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(\eta(\mathbf{N}(A_{j}(x)),f(x)),v)=0\), as \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator.
**Proposition 3.17**: _Let conditions of Proposition 3.15 be fulfilled, \(\theta\) and \(\eta\) be deflation overlap and deflation grouping maps, respectively. Then for all \(u\in U_{j},v\in V_{j}\)_
1. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\mathbf{N}( A_{j}(x)),f(x)))=0\) _and_ \(j^{th}\) _component of_ \(F^{\uparrow,\mathcal{I}_{\eta}}\)_-transform is the smallest such_ \(u\)_, and_
2. \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(v,\mathcal{I}_{\theta}(A_{j}(x),f(x)))=1\) _and_ \(j^{th}\) _component of_ \(F^{\downarrow,\mathcal{I}_{\theta}}\)_-transform is the greatest such_ \(v\)_._
**Proof:** (i) Let \(j\in J\). Then for all \(x\in X\), \(\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x))\leq u\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\eta}(u,\mathcal{I}_{\eta}(\mathbf{N}( A_{j}(x)),f(x)))=0\), as \(\mathcal{I}_{\eta}\) is an \(IP\)-co-residual implicator.
(ii) Let \(j\in J\). Then for all \(x\in X\), \(\mathcal{I}_{\theta}(A_{j}(x),f(x))\geq v\), or that, \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(v,\mathcal{I}_{\theta}(A_{j}(x),f(x)))=1\), as \(\mathcal{I}_{\theta}\) is an \(IP\)-residual implicator.
## 4 Inverse \(F\)-transforms
In this section, we introduce the concepts of the inverse \(F\)-transforms computed with overlap and grouping maps, residual and co-residual implicators over \(L\), respectively. Further, we discuss their properties. Now, we begin with the following.
**Definition 4.1**: _Let \((X,\mathcal{P})\) be a space with an \(L\)-fuzzy partition \(\mathcal{P}\) and \(f\in L^{X}\), where \(\mathcal{P}=\{A_{j}\in L^{X}:j\in J\}\). Further, let \(F_{j}^{\uparrow,\theta}[f]\) and \(F_{j}^{\downarrow,\eta}[f]\) be the \(j^{th}\) components of \(F^{\uparrow,\theta}\)-transform of \(f\) computed with an overlap map \(\theta\) over \(\mathcal{P}\) and \(F^{\downarrow,\eta}\)-transform of \(f\) computed with a grouping map \(\eta\) over \(\mathcal{P}\), respectively. Then_
1. _the_ **inverse (upper)**__\(F^{\uparrow,\theta}\)**-transform** _of_ \(f\) _computed with a residual implicator_ \(\mathcal{I}_{\theta}\) _over a fuzzy partition_ \(\mathcal{P}\) _is an_ \(L\)_-fuzzy set_ \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}\in L^{X}\) _such that_ \[\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)=\bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),F_{j}^{\uparrow,\theta}[f]),\]
2. _the_ **inverse (lower)**__\(F^{\downarrow,\mathcal{I}_{\theta}}\)**-transform** _of_ \(f\) _computed with an overlap map_ \(\theta\) _over a fuzzy partition_ \(\mathcal{P}\) _is an_ \(L\)_-fuzzy set_ \(\hat{f}^{\downarrow,\theta}\in L^{X}\) _such that_ \[\hat{f}^{\downarrow,\theta}(x)=\bigvee_{j\in J}\theta(A_{j}(x),F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]),\]
3. _the_ **inverse (upper)**__\(F^{\uparrow,\mathcal{I}_{\eta}}\)**-transform** _of_ \(f\) _computed with a grouping map_ \(\eta\) _over a fuzzy partition_ \(\mathcal{P}\) _is an_ \(L\)_-fuzzy set_ \(\hat{f}^{\uparrow,\eta}\in L^{X}\) _such that_ \[\hat{f}^{\uparrow,\eta}(x)=\bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]),\,\,and\]
4. _the_ **inverse (lower)**__\(F^{\downarrow,\eta}\)**-transform** _of_ \(f\) _computed with a co-residual implicator_ \(\mathcal{I}_{\eta}\) _over a fuzzy partition_ \(\mathcal{P}\) _is an_ \(L\)_-fuzzy set_ \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}\in L^{X}\) _such that_ \[\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)=\bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),F_{j}^{\downarrow,\eta}[f]).\]
The inverse \(F\)-transforms computed with a \(t\)-norm and an \(R\)-implicator proposed in [22, 25, 34] are special cases of the proposed inverse \(F\)-transforms with respect to \(\theta\) and \(\mathcal{I}_{\theta}\). Among the above-introduced inverse \(F\)-transforms, \(\hat{f}^{\uparrow,\eta}\) and \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}\) are new definitions.
**Example 4.1**: _In continuation to Example 3.1, the inverse \(F\)-transforms with respect to \(\mathcal{I}_{\theta_{M}},\theta_{M},\eta_{M},\mathcal{I}_{\eta_{M}}\) are \(\hat{f}^{\uparrow,\mathcal{I}_{\theta_{M}}}=\frac{q}{x_{1}}+\frac{u}{x_{2}}+ \frac{u}{x_{3}},\,\,\hat{f}^{\downarrow,\theta_{M}}=\frac{p}{x_{1}}+\frac{p}{ x_{2}}+\frac{p}{x_{3}},\)\(\hat{f}^{\uparrow,\eta_{M}}=\frac{r}{x_{1}}+\frac{r}{x_{2}}+\frac{r}{x_{3}},\)\(\hat{f}^{\downarrow,\mathcal{I}_{\eta_{M}}}=\frac{0}{x_{1}}+\frac{u}{x_{2}}+\frac{t}{x_{3}}.\)_
**Remark 4.1**: _(i) If \(L=[0,1],\,\mathbf{N}=\mathbf{N}_{S},\theta=\theta_{M}\) and \(\eta=\eta_{M}\), then the inverse \(F\)-transforms \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,\mathcal{I}_{\theta_{M}}}(x) = \bigwedge_{j\in J}\mathcal{I}_{\theta_{M}}(A_{j}(x),F_{j}^{ \uparrow,\theta_{M}}[f]),\] \[\hat{f}^{\downarrow,\theta_{M}}(x) = \bigvee_{j\in J}(A_{j}(x)\wedge F_{j}^{\downarrow,\mathcal{I}_{ \theta_{M}}}[f]),\] \[\hat{f}^{\uparrow,\eta_{M}}(x) = \bigwedge_{j\in J}((1-A_{j}(x))\lor F_{j}^{\uparrow,\mathcal{I}_{ \eta_{M}}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,\mathcal{I}_{\eta_{M}}}(x) = \bigvee_{j\in J}\mathcal{I}_{\eta_{M}}((1-A_{j}(x)),F_{j}^{ \downarrow,\eta_{M}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(\hat{f}^{\uparrow,{\cal I}_{\theta_{M}}}\) and \(\hat{f}^{\downarrow,\theta_{M}}\) coincide with the special cases of the inverse upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively._
_(ii) If \(L=[0,1],\theta=\theta_{M}\) and \(\eta=\eta_{M}\), then the inverse transforms \(\hat{f}^{\uparrow,{\cal I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,{\cal I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,{\cal I}_{\theta_{M}}}(x) = \bigwedge_{j\in J}{\cal I}_{\theta_{M}}(A_{j}(x),F_{j}^{\uparrow, \theta_{M}}[f]),\] \[\hat{f}^{\downarrow,\theta_{M}}(x) = \bigvee_{j\in J}(A_{j}(x)\wedge F_{j}^{\downarrow,{\cal I}_{\theta _{M}}}[f]),\] \[\hat{f}^{\uparrow,\eta_{M}}(x) = \bigwedge_{j\in J}({\bf N}(A_{j}(x))\lor F_{j}^{\uparrow,{\cal I}_ {\eta_{M}}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,{\cal I}_{\eta_{M}}}(x) = \bigvee_{j\in J}{\cal I}_{\eta_{M}}({\bf N}(A_{j}(x)),F_{j}^{ \downarrow,\eta_{M}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(\hat{f}^{\uparrow,{\cal I}_{\theta_{M}}}\) and \(\hat{f}^{\downarrow,\theta_{M}}\) coincide with the special cases of the inverse upper and lower \(F\)-transforms proposed in [22, 25, 34], respectively._
_(iii) If \(L=[0,1],\theta={\cal T}\) and \(\eta={\cal S}\), where \({\cal T},{\cal S}\) are continuous \(t\)-norm, \(t\)-conorm with no nontrivial zero divisors, respectively, then the inverse transforms \(\hat{f}^{\uparrow,{\cal I}_{\theta}},\hat{f}^{\downarrow,\theta},\hat{f}^{ \uparrow,\eta}\) and \(\hat{f}^{\downarrow,{\cal I}_{\eta}}\) become as follows:_
\[\hat{f}^{\uparrow,{\cal I}_{\cal T}}(x) = \bigwedge_{j\in J}{\cal I}_{\cal T}(A_{j}(x),F_{j}^{\uparrow,{ \cal T}}[f]),\] \[\hat{f}^{\downarrow,{\cal T}}(x) = \bigvee_{j\in J}{\cal T}(A_{j}(x),F_{j}^{\downarrow,{\cal I}_{ \cal T}}[f]),\] \[\hat{f}^{\uparrow,{\cal S}}(x) = \bigwedge_{j\in J}{\cal S}({\bf N}(A_{j}(x)),F_{j}^{\uparrow,{ \cal I}_{\cal S}}[f]),\,\text{and}\] \[\hat{f}^{\downarrow,{\cal I}_{\cal S}}(x) = \bigvee_{j\in J}{\cal I}_{\cal S}({\bf N}(A_{j}(x)),F_{j}^{ \downarrow,{\cal S}}[f]),\,\forall\,x\in X,f\in L^{X}.\]
_Obviously \(\hat{f}^{\uparrow,{\cal I}_{\cal T}}\) and \(\hat{f}^{\downarrow,{\cal T}}\) coincide with the inverse upper and lower \(F\)-transforms computed with a \(t\)-norm and an \(R\)-implicator proposed in [22, 25, 34], respectively._
From the above, it is clear that several existing inverse \(F\)-transforms are special cases of the proposed inverse \(F\)-transforms: some of them coincide with the proposed inverse \(F\)-transforms outright, while others coincide with special cases thereof. That is to say, the proposed inverse \(F\)-transforms are more general than some existing ones.
The following two results show that the inverse \(F\)-transforms approximate the original \(L\)-fuzzy set from above and from below.
**Proposition 4.1**: _Let \({\cal P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(x\in X,f\in L^{X}\)_
1. \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x)\)_, and_
2. \(\hat{f}^{\downarrow,\theta}(x)\leq f(x)\)_._
**Proof:** (i) Let \(x\in X,f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x) = \bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),F_{j}^{\uparrow, \theta}[f])=\bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),\bigvee_{y\in X} \theta(A_{j}(y),f(y)))\] \[\geq \bigwedge_{j\in J}\mathcal{I}_{\theta}(A_{j}(x),\theta(A_{j}(x),f (x)))\geq f(x).\]
Thus \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x)\).
(ii) Let \(x\in X\) and \(f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\downarrow,\theta}(x) = \bigvee_{j\in J}\theta(A_{j}(x),F_{j}^{\downarrow,\mathcal{I}_{ \theta}}[f])=\bigvee_{j\in J}\theta(A_{j}(x),\bigwedge_{y\in X}\mathcal{I}_{ \theta}(A_{j}(y),f(y)))\] \[\leq \bigvee_{j\in J}\theta(A_{j}(x),\mathcal{I}_{\theta}(A_{j}(x),f( x)))\leq f(x).\]
Thus \(\hat{f}^{\downarrow,\theta}(x)\leq f(x)\).
**Proposition 4.2**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(x\in X,f\in L^{X}\)_
1. \(\hat{f}^{\uparrow,\eta}(x)\geq f(x)\)_, and_
2. \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\)_._
**Proof:** (i) Let \(x\in X,f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\uparrow,\eta}(x) = \bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f])=\bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),\bigvee_{y\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(y)),f(y)))\] \[\geq \bigwedge_{j\in J}\eta(\mathbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x)))\geq f(x).\]
Thus \(\hat{f}^{\uparrow,\eta}(x)\geq f(x)\).
(ii) Let \(x\in X\) and \(f\in L^{X}\). Then from Definition 4.1
\[\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x) = \bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),F_{j}^{\downarrow,\eta}[f])=\bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\bigwedge_{y\in X}\eta(\mathbf{N}(A_{j}(y)),f(y)))\] \[\leq \bigvee_{j\in J}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\eta(\mathbf{N}(A_{j}(x)),f(x)))\leq f(x).\]
Thus \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\).
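Continuing the numerical sketch of Section 3, the inverse transforms of Definition 4.1 can be assembled from the direct components computed there, and the approximation inequalities of Propositions 4.1 and 4.2 observed pointwise on the same hypothetical data.

```python
# Sketch of Definition 4.1 and a pointwise check of Propositions 4.1 and
# 4.2, continuing the earlier snippets (P, f and the components F_*).
J = range(len(P))

f_hat_up_Itheta  = [min(I_theta(P[j][x], F_up_theta[j])  for j in J) for x in X]
f_hat_down_theta = [max(theta(P[j][x], F_down_Itheta[j]) for j in J) for x in X]
f_hat_up_eta     = [min(eta(N(P[j][x]), F_up_Ieta[j])    for j in J) for x in X]
f_hat_down_Ieta  = [max(I_eta(N(P[j][x]), F_down_eta[j]) for j in J) for x in X]

for x in X:
    assert f_hat_up_Itheta[x] >= f[x] >= f_hat_down_theta[x]   # Proposition 4.1
    assert f_hat_up_eta[x]    >= f[x] >= f_hat_down_Ieta[x]    # Proposition 4.2
```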
Below, we show that the \(L\)-fuzzy set \(f\) and its inverse \(F\)-transforms have the same direct \(F\)-transforms. Therefore, the inverse \(F\)-transform of an inverse \(F\)-transform is again the same inverse \(F\)-transform. This easily follows from the following.
**Proposition 4.3**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{ \uparrow,\mathcal{I}_{\theta}}(x))\)_, and_
2. \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I} _{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\)_._
**Proof:** (i) From Proposition 4.1(i), \(\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)\geq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),f(x)) \leq \bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x))\text{ and }\] \[\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{I}_{\theta}}(x)) = \theta(A_{j}(x),\bigwedge\limits_{k\in J}\mathcal{I}_{\theta}(A_{k}(x),F_{k}^{\uparrow,\theta}[f]))\] \[\leq \theta(A_{j}(x),\mathcal{I}_{\theta}(A_{j}(x),F_{j}^{\uparrow,\theta}[f]))\] \[\leq F_{j}^{\uparrow,\theta}[f].\]
Thus \(\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{\uparrow,\mathcal{I}_{\theta}} (x))\leq F_{j}^{\uparrow,\theta}[f]\) or \(F_{j}^{\uparrow,\theta}[f]=\bigvee\limits_{x\in X}\theta(A_{j}(x),\hat{f}^{ \uparrow,\mathcal{I}_{\theta}}(x))\).
(ii) From Proposition 4.1(ii), \(\hat{f}^{\downarrow,\theta}(x)\leq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X }\mathcal{I}_{\theta}(A_{j}(x),f(x)) \geq \bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{ \downarrow,\theta}(x))\text{ and }\] \[\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x)) = \mathcal{I}_{\theta}(A_{j}(x),\bigvee\limits_{k\in J}\theta(A_{k}( x),F_{k}^{\downarrow,\mathcal{I}_{\theta}}[f]))\] \[\geq \mathcal{I}_{\theta}(A_{j}(x),\theta(A_{j}(x),F_{j}^{\downarrow, \mathcal{I}_{\theta}}[f]))\] \[\geq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f].\]
Thus \(\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\geq F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]\) or \(F_{j}^{\downarrow,\mathcal{I}_{\theta}}[f]=\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}(A_{j}(x),\hat{f}^{\downarrow,\theta}(x))\).
**Proposition 4.4**: _Let \(\mathcal{P}\) be an \(L\)-fuzzy partition of \(X\). Then for all \(j\in J,f\in L^{X}\)_
1. \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{ \eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\)_, and_
2. \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)), \hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\)_._
**Proof:** (i) From Proposition 4.2(i), \(\hat{f}^{\uparrow,\eta}(x)\geq f(x),\,\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X} \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),f(x)) \leq \bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)), \hat{f}^{\uparrow,\eta}(x))\text{ and }\] \[\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x )) = \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\bigwedge\limits_{k\in J} \eta(\mathbf{N}(A_{k}(x)),F_{k}^{\uparrow,\mathcal{I}_{\eta}}[f]))\] \[\leq \mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\eta(\mathbf{N}(A_{j}(x)),F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]))\] \[\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f].\]
Thus \(\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\leq F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]\) or \(F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f]=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x)),\hat{f}^{\uparrow,\eta}(x))\).
(ii) From Proposition 4.2(ii), \(\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x)\leq f(x)\), \(\forall\,x\in X\). It follows from Definition 3.2 that
\[F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{ N}(A_{j}(x)),f(x)) \geq \bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{ \downarrow,\mathcal{I}_{\eta}}(x))\text{ and }\] \[\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}} (x)) = \eta(\mathbf{N}(A_{j}(x)),\bigvee\limits_{k\in J}\mathcal{I}_{ \eta}(\mathbf{N}(A_{k}(x)),F_{k}^{\downarrow,\eta}[f]))\] \[\geq \eta(\mathbf{N}(A_{j}(x)),\mathcal{I}_{\eta}(\mathbf{N}(A_{j}(x) ),F_{j}^{\downarrow,\eta}[f]))\] \[\geq F_{j}^{\downarrow,\eta}[f].\]
Thus \(\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\geq F_{j}^{\downarrow,\eta}[f]\) or \(F_{j}^{\downarrow,\eta}[f]=\bigwedge\limits_{x\in X}\eta(\mathbf{N}(A_{j}(x)),\hat{f}^{\downarrow,\mathcal{I}_{\eta}}(x))\).
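The identities of Propositions 4.3 and 4.4 can likewise be checked on the running example: applying the direct transforms to the inverse transforms returns exactly the components computed from \(f\) itself.

```python
# Check of Propositions 4.3 and 4.4, continuing the previous snippets:
# f and its inverse transforms have the same direct F-transform components.
assert F_up_theta    == [max(theta(A[x], f_hat_up_Itheta[x])    for x in X) for A in P]
assert F_down_Itheta == [min(I_theta(A[x], f_hat_down_theta[x]) for x in X) for A in P]
assert F_up_Ieta     == [max(I_eta(N(A[x]), f_hat_up_eta[x])    for x in X) for A in P]
assert F_down_eta    == [min(eta(N(A[x]), f_hat_down_Ieta[x])   for x in X) for A in P]
```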
## 5 Axiomatic approaches of \(F\)-transforms
In [16], axiomatic approaches to the direct \(F\)-transforms computed with a \(t\)-norm and an \(R\)-implicator were studied in detail. This section focuses on the axiomatic characterization of the direct \(F\)-transforms computed with respect to \(\theta,\eta,\mathcal{I}_{\eta},\mathcal{I}_{\theta}\) by some independent axioms. We first present, for each direct \(F\)-transform, axioms that guarantee the existence of an \(L\)-fuzzy partition producing the same \(F\)-transform. Now, we begin with the following.
For any \(f\in L^{X}\) and for an \(L\)-fuzzy partition \(\mathcal{P}\), it can be seen that the direct \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\uparrow,\mathcal{I}_{\eta}}\) and \(F^{\downarrow,\mathcal{I}_{\theta}}\)-transforms induce the maps \(F_{\mathcal{P}}^{\uparrow,\theta},F_{\mathcal{P}}^{\downarrow,\eta},F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}},F_{\mathcal{P}}^{\downarrow,\mathcal{I}_{\theta}}:L^{X}\to L^{J}\) such that
\[F_{\mathcal{P}}^{\uparrow,\theta}[f](j) = F_{j}^{\uparrow,\theta}[f],\ F_{\mathcal{P}}^{\downarrow,\eta} [f](j)=F_{j}^{\downarrow,\eta}[f],\] \[F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[f](j) = F_{j}^{\uparrow,\mathcal{I}_{\eta}}[f],\ F_{\mathcal{P}}^{ \downarrow,\mathcal{I}_{\theta}}[f](j)=F_{j}^{\downarrow,\mathcal{I}_{\theta} }[f],\text{ respectively.}\]
Now, we introduce the concepts of \(L\)-fuzzy upper and lower transformation systems with respect to overlap and grouping maps \(\theta\) and \(\eta\) (co-residual and residual implicators \(\mathcal{I}_{\eta}\) and \(\mathcal{I}_{\theta}\) induced by grouping and overlap maps \(\eta\) and \(\theta\)), respectively.
**Definition 5.1**: _Let \(X\) be a nonempty set, \(\theta\) be an overlap map and \(\mathcal{I}_{\eta}\) be a co-residual implicator over \(L\). Then a system \(\mathcal{U}_{F}=(X,Y,u,U_{F})\), where \(F=\theta\) or \(\mathcal{I}_{\eta}\) and_
1. \(Y\) _is a nonempty set,_
2. \(u:X\to Y\) _is an onto map,_
3. \(U_{F}:L^{X}\to L^{Y}\) _is a map, where_ 1. _for all_ \(\{f_{k}:k\in J\}\subseteq L^{X}\)_,_ \(U_{F}[\bigvee\limits_{k\in J}f_{k}]=\bigvee\limits_{k\in J}U_{F}[f_{k}]\)_,_ 2. _for all_ \(\textbf{u},f\in L^{X}\)_,_ \(U_{F}[F(\textbf{u},f)]=F(\textbf{u},U_{F}[f])\)_, and_ 3. _for all_ \(x\in X,\,y\in Y\)_,_ \(U_{F}[1_{\{x\}}](y)=1\) _iff_ \(y=u(x)\)_,_
_is called an \(L\)_**-fuzzy upper transformation system** _on_ \(X\) _with respect to_ \(F\)_._
**Definition 5.2**: _Let \(X\) be a nonempty set, \(\eta,\mathcal{I}_{\theta}\) and \(\mathbf{N}\) be a grouping map, a residual implicator and a negator over \(L\), respectively. Then a system \(\mathcal{H}_{F}=(X,Y,v,H_{F})\), where \(F=\eta\) or \(\mathcal{I}_{\theta}\) and_
1. \(Y\) _is a nonempty set,_
2. \(v:X\to Y\) _is an onto map,_
3. \(H_{F}:L^{X}\to L^{Y}\) _is a map, where_ 1. _for all_ \(\{f_{k}:k\in J\}\subseteq L^{X}\)_,_ \(H_{F}[\bigwedge\limits_{k\in J}f_{k}](y)=\bigwedge\limits_{k\in J}H_{F}[f_{k}] (y),\)__ 2. _for all_ \(\textbf{u},f\in L^{X}\)_,_ \(H_{F}[F(\textbf{u},f)]=F(\textbf{u},H_{F}[f])\)_, and_ 3. _for_ \(y\in Y\) _and_ \(x\in X\)_,_ \((\mathbf{N}(H_{F}[\mathbf{N}(1_{\{x\}})]))(y)=1\) _iff_ \(y=v(x)\)_,_
_is called an \(L\)_**-fuzzy lower transformation system** _on \(X\) with respect to \(F\)._
The \(L\)-fuzzy upper transformation system with respect to a \(t\)-norm and the \(L\)-fuzzy lower transformation system with respect to an \(R\)-implicator proposed in [16, 34] are special cases of \(\mathcal{U}_{\theta}\) and \(\mathcal{H}_{\mathcal{I}_{\theta}}\), respectively. Also, the \(L\)-fuzzy lower transformation system with respect to an \(S\)-implicator proposed in [34] is a special case of \(\mathcal{H}_{\eta}\). The \(L\)-fuzzy upper transformation system \(\mathcal{U}_{\mathcal{I}_{\eta}}\) is a new definition.
**Example 5.1**: _Let \(X\) be a nonempty set and \(id:X\to X\) be an identity map. Now, we define maps \(U_{F},H_{F^{\prime}}:L^{X}\to L^{X}\) such that \(U_{F}[f](x)=f(x),H_{F^{\prime}}[f](x)=f(x),x\in X\), where \(F=\theta\) or \(\mathcal{I}_{\eta}\) and \(F^{\prime}=\eta\) or \(\mathcal{I}_{\theta}\). Then for all \(\{f_{k}:k\in J\}\subseteq L^{X}\), \(U_{F}[\bigvee\limits_{k\in J}f_{k}]=\bigvee\limits_{k\in J}U_{F}[f_{k}]\) and \(H_{F^{\prime}}[\bigwedge\limits_{k\in J}f_{k}]=\bigwedge\limits_{k\in J}H_{F^{\prime}}[f_{k}]\). Next, let \(\textbf{u},f\in L^{X}\). Then \(U_{F}[F(\textbf{u},f)]=F(\textbf{u},U_{F}[f])\) and \(H_{F^{\prime}}[F^{\prime}(\textbf{u},f)]=F^{\prime}(\textbf{u},H_{F^{\prime}}[f])\). Finally, let \(x,z\in X\). Then \(U_{F}[1_{\{x\}}](z)=1_{\{x\}}(z)=1\) iff \(x=z\) and \((\mathbf{N}(H_{F^{\prime}}[\mathbf{N}(1_{\{x\}})]))(z)=\mathbf{N}(\mathbf{N}(1_{\{x\}}(z)))=1\) iff \(x=z\). Thus \(U_{F}[1_{\{x\}}](z)=1,(\mathbf{N}(H_{F^{\prime}}[\mathbf{N}(1_{\{x\}})]))(z)=1\) iff \(z=id(x)\). Hence \(\mathcal{U}_{F}=(X,X,id,U_{F})\) and \(\mathcal{H}_{F^{\prime}}=(X,X,id,H_{F^{\prime}})\) are \(L\)-fuzzy upper and lower transformation systems on \(X\) with respect to \(F\) and \(F^{\prime}\), respectively._
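As a numerical companion to Definition 5.1, the following sketch (continuing the earlier snippets) checks, on sample data, that the map \(U[f](j)=F_{j}^{\uparrow,\theta}[f]\) induced by the hypothetical partition \(P\), together with \(Y=J\) and the index map \(k\) of Section 3, satisfies the three conditions imposed on \(U_{F}\); the random test functions are likewise hypothetical.

```python
# Sketch for Definition 5.1, continuing the earlier snippets: the map
# U[g](j) = F_j^{up,theta}[g] induced by the partition P, with Y = J and
# the index map k, is checked against the three conditions on U_F.
import random

J = range(len(P))

def k(x):                                  # index map: the j with x in core(A_j)
    return next(j for j in J if P[j][x] == 1.0)

def U(g):                                  # U : L^X -> L^J
    return [max(theta(P[j][x], g[x]) for x in X) for j in J]

random.seed(0)
fs = [[random.random() for _ in X] for _ in range(4)]

sup = [max(g[x] for g in fs) for x in X]   # pointwise join of the f_k
assert U(sup) == [max(col) for col in zip(*map(U, fs))]     # condition 1

c = 0.7                                    # value of the constant L-fuzzy set c
assert U([theta(c, v) for v in fs[0]]) == [theta(c, v) for v in U(fs[0])]  # condition 2

for x in X:                                # condition 3, with the onto map u = k
    one_x = [1.0 if y == x else 0.0 for y in X]
    assert [j for j, v in enumerate(U(one_x)) if v == 1.0] == [k(x)]
```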
**Remark 5.1**: _(i) If \(L=[0,1]\), \(\mathbf{N}=\mathbf{N}_{S}\), \(\theta=\theta_{M}\), \(\eta=\eta_{M}\), \(\mathcal{I}_{\eta}=\mathcal{I}_{\eta_{M}}\) and \(\mathcal{I}_{\theta}=\mathcal{I}_{\theta_{M}}\), then \(\mathcal{U}_{\theta_{M}}\) and \(\mathcal{H}_{\mathcal{I}_{\theta_{M}}}\) coincide with the special cases of the \(L\)-fuzzy upper and lower transformation systems proposed in [16, 34], respectively. Also, \(\mathcal{H}_{\eta_{M}}\) coincides with the special case of the \(L\)-fuzzy lower transformation system proposed in [34]._
_(ii) If \(L=[0,1]\), \(\theta=\mathcal{T}\) and \(\eta=\mathcal{S}\), where \(\mathcal{T},\mathcal{S}\) are a continuous \(t\)-norm and a \(t\)-conorm with no nontrivial zero divisors, respectively, then \(\mathcal{U}_{\mathcal{T}}\) and \(\mathcal{H}_{\mathcal{I}_{\mathcal{T}}}\) coincide with the \(L\)-fuzzy upper and lower transformation systems with respect to a \(t\)-norm and an \(R\)-implicator proposed in [16, 34], respectively. Also, \(\mathcal{H}_{\mathcal{S}}\) coincides with the \(L\)-fuzzy lower transformation system with respect to an \(S\)-implicator proposed in [34]._
From the above remark, it is clear that several existing \(L\)-fuzzy transformation systems are special cases of the proposed \(L\)-fuzzy transformation systems: some existing systems coincide with the proposed systems, and some proposed systems coincide with special cases of the existing ones. That is to say, the proposed \(L\)-fuzzy transformation systems are a more general form of several existing ones.
The following shows a close connection of the \(L\)-fuzzy transformation systems with the \(F\)-transforms. To do this, we need some results, which are given by the following proposition.
**Proposition 5.1**: _Let \(\mathbf{N}\) be a negator, and \(\theta,\eta\) be overlap and grouping maps with neutral elements \(1\) and \(0\), respectively. In addition, let \(\mathbf{N}_{\mathcal{I}_{\eta}},\mathbf{N}_{\mathcal{I}_{\theta}}\) be involutive negators. Then for all \(f\in L^{X}\)_
* \(f=\bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}}),f=\bigwedge\limits_{x \in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}))\)_, and_
* \(f=\bigvee\limits_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}( \textbf{f(x)}),1_{\{x\}}),f=\bigwedge\limits_{x\in X}\mathcal{I}_{\theta}( \mathbf{N}_{\mathcal{I}_{\theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{ \theta}}(1_{\{x\}}))\)_._
**Proof:** (i) Let \(y\in X,f\in L^{X}\). Then
\[f(y) = \bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})(y)= \bigvee\limits_{x\in X}\theta(f(x),1_{\{x\}}(y))\] \[= \theta(f(y),1_{\{y\}}(y))\vee\bigvee\limits_{x\neq y\in X}\theta (f(x),1_{\{x\}}(y))\] \[= \theta(f(y),1)=f(y).\]
Thus \(f=\bigvee\limits_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})\) and
\[f(y) = \bigwedge\limits_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}) )(y)=\bigwedge\limits_{x\in X}\eta(f(x),\mathbf{N}(1_{\{x\}})(y))\] \[= \eta(f(y),\mathbf{N}(1_{\{y\}}(y)))\wedge\bigwedge\limits_{x\neq y \in X}\eta(f(x),\mathbf{N}(1_{\{x\}}(y)))\] \[= \eta(f(y),0)=f(y).\]
Thus \(f=\bigwedge\limits_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x\}}))\).
(ii) Let \(y\in X,f\in L^{X}\). Then
\[f(y) = \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}( \textbf{f(x)}),1_{\{x\}})(y)=\bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{ \mathcal{I}_{\eta}}(f(x)),1_{\{x\}}(y))\] \[= \mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)),1_{\{y\} }(y))\vee\bigvee_{x\neq y\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{ \eta}}(f(x)),1_{\{x\}}(y))\] \[= \mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)),1)= \mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(f(y)))=f(y).\]
Thus \(f=\bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}( \textbf{f(x)}),1_{\{x\}})\) and
\[f(y) = \bigwedge_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{ \theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}))(y)= \bigwedge_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(x) ),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}(y)))\] \[= \mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)), \mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{y\}}(y)))\wedge\bigwedge_{x\neq y\in X }\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(x)),\mathbf{N}_{ \mathcal{I}_{\theta}}(1_{\{x\}}(y)))\] \[= \mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)),0)= \mathbf{N}_{\mathcal{I}_{\theta}}(\mathbf{N}_{\mathcal{I}_{\theta}}(f(y)))=f(y).\]
Thus \(f=\bigwedge_{x\in X}\mathcal{I}_{\theta}(\mathbf{N}_{\mathcal{I}_{\theta}}(\textbf{f(x)}),\mathbf{N}_{\mathcal{I}_{\theta}}(1_{\{x\}}))\).
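To make the decompositions in (i) concrete, the following sketch checks them numerically on \(L=[0,1]\) for the Gödel-style choices \(\theta=\theta_{M}=\min\) (neutral element \(1\)), \(\eta=\eta_{M}=\max\) (neutral element \(0\)) and the standard negator \(\mathbf{N}_{S}(a)=1-a\); only part (i) is checked, since these particular connectives need not make \(\mathbf{N}_{\mathcal{I}_{\eta}},\mathbf{N}_{\mathcal{I}_{\theta}}\) involutive. The finite set \(X\) and the sample \(f\) are illustrative assumptions.

```python
# A numerical sanity check of Proposition 5.1(i) on L = [0, 1].
# Assumed illustrative connectives: theta = min (overlap map, neutral 1),
# eta = max (grouping map, neutral 0), N(a) = 1 - a (standard negator).

theta = min          # overlap map theta_M
eta = max            # grouping map eta_M
N = lambda a: 1 - a  # standard negator

X = ["x1", "x2", "x3"]
f = {"x1": 0.2, "x2": 0.7, "x3": 1.0}  # an arbitrary L-fuzzy set on X

def singleton(x0):
    """The L-fuzzy singleton 1_{x0}."""
    return lambda x: 1.0 if x == x0 else 0.0

for y in X:
    # f(y) = sup_x theta(f(x), 1_{x}(y))
    join = max(theta(f[x], singleton(x)(y)) for x in X)
    # f(y) = inf_x eta(f(x), N(1_{x}(y)))
    meet = min(eta(f[x], N(singleton(x)(y))) for x in X)
    assert abs(join - f[y]) < 1e-12 and abs(meet - f[y]) < 1e-12

print("Proposition 5.1(i) verified for theta = min, eta = max.")
```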
Now, we have the following.
**Proposition 5.2**: _Let \(\theta\) be an overlap map over \(L\). Then the following statements are equivalent:_
* \(\mathcal{U}_{\theta}=(X,Y,u,U_{\theta})\) _is an_ \(L\)_-fuzzy upper transformation system on_ \(X\) _determined by an overlap map_ \(\theta\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\) _such that_ \(u(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\)_._
**Proof:** Let \(\mathcal{U}_{\theta}=(X,Y,u,U_{\theta})\) be an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\theta\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=U_{\theta}[1_{\{x\}}](y)\), \(x\in X\). Now, from Definition 5.1(iii), \(A_{u(x)}(x)=U_{\theta}[1_{\{x\}}](u(x))=1\), or that, \(x\in core(A_{u(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z}),U_{\theta}[1_{\{t\}}](y)=1=U_{ \theta}[1_{\{t\}}](z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=u(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\uparrow,\theta}[f](y) = \bigvee_{x\in X}\theta(A_{y}(x),f(x))\] \[= \bigvee_{x\in X}\theta(U_{\theta}[1_{\{x\}}](y),f(x))\] \[= \bigvee_{x\in X}\theta(f(x),U_{\theta}[1_{\{x\}}](y))\] \[= \bigvee_{x\in X}U_{\theta}[\theta(\textbf{f(x)},1_{\{x\}})](y)\] \[= U_{\theta}[\bigvee_{x\in X}\theta(\textbf{f(x)},1_{\{x\}})](y)\] \[= U_{\theta}[f](y).\]
Thus \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of a base set \(X\neq\emptyset\). Let us define a map \(u:X\to Y\) such that \(u(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\theta\) be an overlap map with neutral element \(1\) and \(U_{\theta}=F_{\mathcal{P}}^{\uparrow,\theta}\). Then for all \(y\in Y,x\in X\), \(U_{\theta}[1_{\{x\}}](y)=F_{\mathcal{P}}^{\uparrow,\theta}[1_{\{x\}}](y)=\bigvee_{z\in X}\theta(A_{y}(z),\ 1_{\{x\}}(z))=\theta(A_{y}(x),1)=A_{y}(x)\). Thus \(U_{\theta}[1_{\{x\}}](y)=1\) iff \(A_{y}(x)=1\) iff \(u(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,u,U_{\theta})\) is an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\theta\).
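As a quick illustration of the equivalence just proved, the sketch below builds a small \(L\)-fuzzy partition, forms \(F_{\mathcal{P}}^{\uparrow,\theta}\) with the product overlap map \(\theta(a,b)=ab\) (neutral element \(1\)), and checks the key identity \(U_{\theta}[1_{\{x\}}](y)=A_{y}(x)\) used in the proof; the partition and connective are illustrative assumptions.

```python
# Checks U_theta[1_{x}](y) = A_y(x) for F_P^{up,theta} (proof of Prop. 5.2).
# Assumed illustrative overlap map: theta(a, b) = a * b (neutral element 1).

theta = lambda a, b: a * b

X = [0, 1, 2, 3]
Y = ["y1", "y2"]
# An L-fuzzy partition P = {A_y}: each core(A_y) is nonempty and the cores
# are pairwise disjoint (A_y(x) = 1 marks membership of x in core(A_y)).
A = {
    "y1": {0: 1.0, 1: 0.6, 2: 0.0, 3: 0.3},
    "y2": {0: 0.0, 1: 0.4, 2: 1.0, 3: 0.7},
}

def upper_F_transform(f):
    """Direct upper F-transform F_P^{up,theta}[f](y) = sup_x theta(A_y(x), f(x))."""
    return {y: max(theta(A[y][x], f[x]) for x in X) for y in Y}

for x in X:
    one_x = {z: (1.0 if z == x else 0.0) for z in X}
    U = upper_F_transform(one_x)
    for y in Y:
        assert abs(U[y] - A[y][x]) < 1e-12  # U_theta[1_{x}](y) = A_y(x)

print("U_theta[1_{x}](y) = A_y(x) holds on the toy partition.")
```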
**Proposition 5.3**: _Let \(\mathcal{I}_{\eta}\) be an \(EP\)-co-residual implicator over \(L\) such that \(\mathbf{N}_{\mathcal{I}_{\eta}}\) is an involutive negator. Then the following statements are equivalent:_
* \(\mathcal{U}_{\mathcal{I}_{\eta}}=(X,Y,u,U_{\mathcal{I}_{\eta}})\) _is an_ \(L\)_-fuzzy upper transformation system on_ \(X\) _determined by a co-residual implicator_ \(\mathcal{I}_{\eta}\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\) _such that_ \(u(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\)_._
**Proof:** Let \(\mathcal{U}_{\mathcal{I}_{\eta}}=(X,Y,u,U_{\mathcal{I}_{\eta}})\) be an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\mathcal{I}_{\eta}\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)\), \(x\in X\). Now, from Definition 5.1(iii), \(A_{u(x)}(x)=U_{\mathcal{I}_{\eta}}[1_{\{x\}}](u(x))=1\), or that, \(x\in core(A_{u(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z})\), \(U_{\mathcal{I}_{\eta}}[1_{\{t\}}](y)=1=U_{\mathcal{I}_{\eta}}[1_{\{t\}}](z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=u(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[f](y) = \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(A_{y}(x)),f(x))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(A_{y}(x)),\mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(f( x))))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(f(x)),A_{y}(x))\] \[= \bigvee_{x\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta }}(f(x)),U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y))\] \[= \bigvee_{x\in X}U_{\mathcal{I}_{\eta}}[\mathcal{I}_{\eta}( \mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{f(x)}),1_{\{x\}})](y)\] \[= U_{\mathcal{I}_{\eta}}[\bigvee_{x\in X}\mathcal{I}_{\eta}( \mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{f(x)}),1_{\{x\}})](y)\] \[= U_{\mathcal{I}_{\eta}}[f](y).\]
Thus \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of a base set \(X\neq\emptyset\). Let us define a map \(u:X\to Y\) such that \(u(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\mathcal{I}_{\eta}\) be a co-residual implicator such that \(\mathbf{N}_{\mathcal{I}_{\eta}}(\cdot)=\mathcal{I}_{\eta}(\cdot,1)\) is an involutive negator, and \(U_{\mathcal{I}_{\eta}}=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}\). Then for all \(y\in Y,x\in X\), \(U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)=F_{\mathcal{P}}^{\uparrow,\mathcal{I}_{\eta}}[1_{\{x\}}](y)=\bigvee_{z\in X}\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(z)),1_{\{x\}}(z))=\mathcal{I}_{\eta}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(x)),1)=\mathbf{N}_{\mathcal{I}_{\eta}}(\mathbf{N}_{\mathcal{I}_{\eta}}(A_{y}(x)))=A_{y}(x)\). Thus \(U_{\mathcal{I}_{\eta}}[1_{\{x\}}](y)=1\) iff \(A_{y}(x)=1\) iff \(u(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,u,U_{\mathcal{I}_{\eta}})\) is an \(L\)-fuzzy upper transformation system on \(X\) determined by \(\mathcal{I}_{\eta}\).
**Proposition 5.4**: _Let \(\eta\) be an \(EP\)-grouping map with neutral element \(0\) over \(L\) such that \(\mathbf{N}\) is an involutive negator. Then the following statements are equivalent:_
* \(\mathcal{H}_{\eta}=(X,Y,v,H_{\eta})\) _is an_ \(L\)_-fuzzy lower transformation system on_ \(X\) _determined by_ \(\eta\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \(\mathcal{P}\) _of_ \(X\) _indexed by_ \(Y\)_, such that_ \(v(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\)_._
**Proof:** Let \(\mathcal{H}_{\eta}=(X,Y,v,H_{\eta})\) be an \(L\)-fuzzy lower transformation system on \(X\) determined by \(\eta\). Also, let \(\mathcal{P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})])(y)\), \(x\in X\). Now, from Definition 5.2(iii), \(A_{v(x)}(x)=(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})]))(v(x))=1\), or that, \(x\in core(A_{v(x)})\). Further, for \(y,z\in Y,t\in core(A_{y})\cap core(A_{z}),(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{ \{t\}})]))(y)=1=(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{t\}})]))(z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=v(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \(\mathcal{P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{\mathcal{P}}^{\downarrow,\eta}[f](y) = \bigwedge_{x\in X}\eta(\mathbf{N}(A_{y}(x)),f(x))\] \[= \bigwedge_{x\in X}\eta(H_{\eta}[\mathbf{N}(1_{\{x\}})](y),f(x))\] \[= \bigwedge_{x\in X}\eta(f(x),H_{\eta}[\mathbf{N}(1_{\{x\}})](y))\] \[= \bigwedge_{x\in X}H_{\eta}[\eta(\textbf{f(x)},\mathbf{N}(1_{\{x \}}))](y)\] \[= H_{\eta}[\bigwedge_{x\in X}\eta(\textbf{f(x)},\mathbf{N}(1_{\{x \}}))](y)\] \[= H_{\eta}[f](y).\]
Thus \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\). Conversely, let \(\mathcal{P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of a base set \(X\neq\emptyset\). Let us define a map \(v:X\to Y\) such that \(v(x)=y\) iff \(x\in core(A_{y})\). Further, let \(\eta\) be a grouping map with neutral element \(0\), \(\mathbf{N}\) be an involutive negator and \(H_{\eta}=F_{\mathcal{P}}^{\downarrow,\eta}\). Then for all \(y\in Y,x\in X\)
\[(\mathbf{N}(H_{\eta}[\mathbf{N}(1_{\{x\}})]))(y) = (\mathbf{N}(F_{\mathcal{P}}^{\downarrow,\eta}[\mathbf{N}(1_{\{x\}} )]))(y)\] \[= \mathbf{N}(F_{\mathcal{P}}^{\downarrow,\eta}[\mathbf{N}(1_{\{x \}})](y))\] \[= \mathbf{N}(\bigwedge_{z\in X}\eta(\mathbf{N}(A_{y}(z)),( \mathbf{N}(1_{\{x\}}))(z)))\] \[= \mathbf{N}(\bigwedge_{z\in X}\eta(\mathbf{N}(A_{y}(z)),\mathbf{N }(1_{\{x\}}(z))))\] \[= \mathbf{N}(\eta(\mathbf{N}(A_{y}(x)),0))\] \[= \mathbf{N}(\mathbf{N}(A_{y}(x)))\] \[= A_{y}(x).\]
Thus \(({\bf N}(H_{\eta}[{\bf N}(1_{\{x\}})]))(y)=1\) iff \(A_{y}(x)=1\) iff \(v(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,v,H_{\eta})\) is an \(L\)-fuzzy lower transformation system on \(X\) determined by \(\eta\).
**Proposition 5.5**: _Let \({\cal I}_{\theta}\) be an \(EP\)-residual implicator over \(L\) such that \({\bf N}_{{\cal I}_{\theta}}\) is an involutive negator. Then the following statements are equivalent:_
* \({\cal H}_{{\cal I}_{\theta}}=(X,Y,v,H_{{\cal I}_{\theta}})\) _is an_ \(L\)_-fuzzy lower transformation system on_ \(X\) _determined by_ \({\cal I}_{\theta}\) _and_ \(Y\subseteq X\)_._
* _There exists an_ \(L\)_-fuzzy partition_ \({\cal P}\) _of_ \(X\) _indexed by_ \(Y\)_, such that_ \(v(x)=y\) _iff_ \(x\in core(A_{y})\) _and_ \(H_{{\cal I}_{\theta}}=F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}\)_._
**Proof:** Let \({\cal H}_{{\cal I}_{\theta}}=(X,Y,v,H_{{\cal I}_{\theta}})\) be an \(L\)-fuzzy lower transformation system on \(X\) determined by \({\cal I}_{\theta}\). Also, let \({\cal P}=\{A_{y}:y\in Y\}\) such that for all \(y\in Y\), \(A_{y}\in L^{X}\) is given by \(A_{y}(x)=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y)\), \(x\in X\). Now, from Definition 5.2(iii), \(A_{v(x)}(x)=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(v(x))=1\), or that, \(x\in core(A_{v(x)})\). Further, for \(y,z\in Y\), \(t\in core(A_{y})\cap core(A_{z})\), and using the fact that \({\bf N}_{{\cal I}_{\theta}}(x)={\cal I}_{\theta}(x,0)\), we have \(({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{t\}})]))(y)=1=({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{t\}})]))(z)\), i.e., \(A_{y}(t)=1=A_{z}(t)\) iff \(y=v(t)=z\). Thus \(\{core(A_{y}):y\in Y\}\) is a partition of \(X\) and therefore \({\cal P}\) is an \(L\)-fuzzy partition of \(X\). Now, for all \(y\in Y\) and \(f\in L^{X}\)
\[F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}[f](y) = \bigwedge_{x\in X}{\cal I}_{\theta}(A_{y}(x),f(x))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I }_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y),f(x))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}({\bf N}_{{\cal I}_{\theta}}(H_ {{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y)),{\bf N}_{{ \cal I}_{\theta}}({\bf N}_{{\cal I}_{\theta}}(f(x))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}({\bf N}_{{\cal I}_{\theta}}(f(x )),{\bf N}_{{\cal I}_{\theta}}({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))))\] \[= \bigwedge_{x\in X}{\cal I}_{\theta}(({\bf N}_{{\cal I}_{\theta}}(f ))(x),H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))\] \[= \bigwedge_{x\in X}H_{{\cal I}_{\theta}}[{\cal I}_{\theta}({\bf N }_{{\cal I}_{\theta}}({\bf f(x)}),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))](y)\] \[= H_{{\cal I}_{\theta}}[\bigwedge_{x\in X}{\cal I}_{\theta}({\bf N }_{{\cal I}_{\theta}}({\bf f(x)}),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))](y)\] \[= H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}({\bf N}_{{\cal I }_{\theta}}(f))](y)\] \[= H_{{\cal I}_{\theta}}[f](y).\]
Thus \(H_{{\cal I}_{\theta}}=F_{{\cal P}}^{\downarrow,{\cal I}_{\theta}}\). Conversely, let \({\cal P}=\{A_{y}\in L^{X}:y\in Y\}\) be an \(L\)-fuzzy partition of a base set \(X\neq\emptyset\). Let us define a map \(v:X\to Y\) such that \(v(x)=y\) iff \(x\in core(A_{y})\). Further, let \({\cal I}_{\theta}\) be a residual implicator such that \({\bf N}_{{\cal I}_{\theta}}(\cdot)={\cal I}_{\theta}(\cdot,0)\) is an involutive negator, and let \(H_{{\cal I}_{\theta}}=F_{\cal P}^{\downarrow,{\cal I}_{\theta}}\). Then for all \(y\in Y,x\in X\)
\[({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I }_{\theta}}(1_{\{x\}})]))(y) = ({\bf N}_{{\cal I}_{\theta}}(F_{\cal P}^{\downarrow,{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})]))(y)\] \[= {\bf N}_{{\cal I}_{\theta}}(F_{\cal P}^{\downarrow,{\cal I}_{ \theta}}[{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}})](y))\] \[= {\bf N}_{{\cal I}_{\theta}}(\bigwedge_{z\in X}{\cal I}_{\theta}(A _{y}(z),({\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}))(z)))\] \[= {\bf N}_{{\cal I}_{\theta}}(\bigwedge_{z\in X}{\cal I}_{\theta}(A _{y}(z),{\bf N}_{{\cal I}_{\theta}}(1_{\{x\}}(z))))\] \[= {\bf N}_{{\cal I}_{\theta}}({\cal I}_{\theta}(A_{y}(x),0))=A_{y}( x).\]
Thus \(({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(1 _{\{x\}})]))(y)=1\) iff \(A_{y}(x)=1\) iff \(v(x)=y\). From Propositions 3.10 and 3.11, \((X,Y,v,H_{{\cal I}_{\theta}})\) is an \(L\)-fuzzy lower transformation system on \(X\) determined by \({\cal I}_{\theta}\).
Next, we have the following.
**Proposition 5.6**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \({\bf N}\), \({\cal U}_{\theta}=(X,Y,u,U_{\theta})\) and \({\cal H}_{\eta}=(X,Y,u,H_{\eta})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(U_{\theta}=F_{\cal P}^{\uparrow,\theta},\)\(H_{\eta}=F_{\cal P}^{\downarrow,\eta}\) iff for all \(f\in L^{X}\),_
* \(U_{\theta}[f]={\bf N}(H_{\eta}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(U_{\theta}[f])=H_{\eta}[{\bf N}(f)]\)_, and_
* \(H_{\eta}[f]={\bf N}(U_{\theta}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(H_{\eta}[f])=U_{\theta}[{\bf N}(f)]\)_._
**Proof:** From Proposition 3.1, it can easily be shown that conditions (i) and (ii) hold. Now, we only need to show the converse part. For this, let condition (i) hold. Further, let \(\{A_{1,y}:y\in Y\},\{A_{2,y}:y\in Y\}\subseteq L^{X}\) such that \(A_{1,y}(x)=U_{\theta}[1_{\{x\}}](y)\), \(A_{2,y}(x)={\bf N}(H_{\eta}[{\bf N}(1_{\{x\}})])(y)\), \(\forall\,x\in X,y\in Y\). Then from Propositions 5.2 and 5.4, it is clear that \(\mathcal{P}_{1}=\{A_{1,y}:y\in Y\}\) and \(\mathcal{P}_{2}=\{A_{2,y}:y\in Y\}\) are \(L\)-fuzzy partitions of \(X\) with \(U_{\theta}=F_{\mathcal{P}_{1}}^{\uparrow,\theta}\) and \(H_{\eta}=F_{\mathcal{P}_{2}}^{\downarrow,\eta}\). Now, from condition (i), we have \(U_{\theta}[f]={\bf N}(H_{\eta}[{\bf N}(f)])={\bf N}(F_{\mathcal{P}_{2}}^{\downarrow,\eta}[{\bf N}(f)])=F_{\mathcal{P}_{2}}^{\uparrow,\theta}[f]\). Thus \(F_{\mathcal{P}_{1}}^{\uparrow,\theta}=F_{\mathcal{P}_{2}}^{\uparrow,\theta}\) and \(A_{1,y}=A_{2,y},\,\forall\,y\in Y\). Similarly, we can show the claim when condition (ii) holds.
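A small numerical check of the duality conditions (i)-(ii) is sketched below for \(\theta=\min\) and \(\eta=\max\), which are dual with respect to the standard negator \(\mathbf{N}(a)=1-a\); the partition and test function are illustrative assumptions.

```python
# Numerical check of Prop. 5.6(i)-(ii): U_theta[f] = N(H_eta[N(f)]) and
# H_eta[f] = N(U_theta[N(f)]), with theta = min and eta = max dual w.r.t.
# the standard negator N(a) = 1 - a (illustrative choices).
import random

N = lambda a: 1 - a
X, Y = range(5), range(2)
random.seed(0)
A = {y: {x: random.random() for x in X} for y in Y}
for y in Y:
    A[y][y] = 1.0  # make core(A_y) nonempty and disjoint across y

def U_theta(f):  # F_P^{up,theta}[f](y) = sup_x min(A_y(x), f(x))
    return {y: max(min(A[y][x], f[x]) for x in X) for y in Y}

def H_eta(f):    # F_P^{down,eta}[f](y) = inf_x max(N(A_y(x)), f(x))
    return {y: min(max(N(A[y][x]), f[x]) for x in X) for y in Y}

f = {x: random.random() for x in X}
Nf = {x: N(f[x]) for x in X}
lhs_i, rhs_i = U_theta(f), {y: N(H_eta(Nf)[y]) for y in Y}
lhs_ii, rhs_ii = H_eta(f), {y: N(U_theta(Nf)[y]) for y in Y}
assert all(abs(lhs_i[y] - rhs_i[y]) < 1e-12 for y in Y)
assert all(abs(lhs_ii[y] - rhs_ii[y]) < 1e-12 for y in Y)
print("Duality of the upper and lower transformation systems verified.")
```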
**Proposition 5.7**: _Let \(\theta\) and \(\eta\) be dual with respect to an involutive negator \({\bf N}\), \({\cal U}_{{\cal I}_{\eta}}=(X,Y,u,U_{{\cal I}_{\eta}})\) and \({\cal H}_{{\cal I}_{\theta}}=(X,Y,u,H_{{\cal I}_{\theta}})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F_{\cal P}^{\uparrow,{\cal I}_{\eta}}=U_{{\cal I}_{\eta}},F_{\cal P}^{\downarrow,{\cal I}_{\theta}}=H_{{\cal I}_{\theta}}\) iff for all \(f\in L^{X}\)_
* \(U_{{\cal I}_{\eta}}[f]={\bf N}(H_{{\cal I}_{\theta}}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(U_{{\cal I}_{\eta}}[f])=H_{{\cal I}_{\theta}}[{\bf N}(f)]\)_, and_
* \(H_{{\cal I}_{\theta}}[f]={\bf N}(U_{{\cal I}_{\eta}}[{\bf N}(f)])\)_, i.e.,_ \({\bf N}(H_{{\cal I}_{\theta}}[f])=U_{{\cal I}_{\eta}}[{\bf N}(f)]\)_._
**Proof:** Similar to that of Proposition 5.6.
**Proposition 5.8**: _Let \({\bf N}_{{\cal I}_{\theta}}\) be an involutive negator, \({\cal U}_{\theta}=(X,Y,u,U_{\theta})\) and \({\cal H}_{{\cal I}_{\theta}}=(X,Y,u,H_{{\cal I}_{\theta}})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F_{\cal P}^{\uparrow,\theta}=U_{\theta},F_{\cal P}^{\downarrow,{\cal I}_{\theta}}=H_{{\cal I}_{\theta}}\) iff for all \(f\in L^{X}\)_
* \(U_{\theta}[f]={\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\theta}}(U_{\theta}[f])=H_{{\cal I}_{\theta}}[{\bf N}_{{\cal I}_{\theta}}(f)]\)_, and_
* \(H_{{\cal I}_{\theta}}[f]={\bf N}_{{\cal I}_{\theta}}(U_{\theta}[{\bf N}_{{\cal I}_{\theta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\theta}}(H_{{\cal I}_{\theta}}[f])=U_{\theta}[{\bf N}_{{\cal I}_{\theta}}(f)]\)_._
**Proof:** Similar to that of Proposition 5.6.
**Proposition 5.9**: _Let \({\bf N}_{{\cal I}_{\eta}}\) be an involutive negator, \({\cal U}_{{\cal I}_{\eta}}=(X,Y,u,U_{{\cal I}_{\eta}})\) and \({\cal H}_{\eta}=(X,Y,u,H_{\eta})\) be \(L\)-fuzzy upper and lower transformation systems, respectively. Then there exists an \(L\)-fuzzy partition \({\cal P}\) such that \(F^{\uparrow,{\cal I}_{\eta}}_{\cal P}=U_{{\cal I}_{\eta}},F^{\downarrow,\eta}_{\cal P}=H_{\eta}\) iff for all \(f\in L^{X}\)_
* \(U_{{\cal I}_{\eta}}[f]={\bf N}_{{\cal I}_{\eta}}(H_{\eta}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(U_{{\cal I}_{\eta}}[f])=H_{\eta}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_, and_
* \(H_{\eta}[f]={\bf N}_{{\cal I}_{\eta}}(U_{{\cal I}_{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)])\)_, i.e.,_ \({\bf N}_{{\cal I}_{\eta}}(H_{\eta}[f])=U_{{\cal I}_{\eta}}[{\bf N}_{{\cal I}_{\eta}}(f)]\)_._
**Proof:** Similar to that of Proposition 5.6.
## 6 Concluding remarks
In this contribution, we have presented the theory of direct \(F\)-transforms determined by overlap and grouping maps, residual and co-residual implicators, from both constructive and axiomatic approaches. Here, \(F^{\uparrow,\theta},F^{\downarrow,\eta},F^{\downarrow,{\cal I}_{\theta}}\) are extensions of the direct \(F\)-transforms introduced in [22, 25, 34], while \(F^{\uparrow,{\cal I}_{\eta}}\) is a new notion. The main contributions of this paper are listed as follows.
* We have shown the duality of the proposed direct \(F\)-transforms and established a connection among them. In addition, we have discussed the basic properties of these direct \(F\)-transforms.
* We have introduced the idea of the inverse of these \(F\)-transforms. Further, we have shown that the original \(L\)-fuzzy set and the inverse of its \(F\)-transform have the same \(F\)-transform under certain conditions.
* Further, we have given an axiomatic characterization of the proposed direct \(F\)-transforms.
* Finally, the duality of \(L\)-fuzzy transformation systems has been examined.
Both theories, viz. the theory of \(F\)-transforms and the theory of overlap and grouping maps, have already been shown to be helpful in practical applications. Accordingly, combining both ideas may provide us with new applications in data analysis and image processing problems.
This paper studies \(F\)-transforms based on overlap and grouping maps over a complete lattice, from both constructive and axiomatic approaches, using residual and co-residual implicators. Furthermore, the duality, basic properties, and inverses of the proposed \(F\)-transforms are studied. An axiomatic characterization of the proposed direct \(F\)-transforms is also investigated. |
2309.17116 | Sheaf Hypergraph Networks | Higher-order relations are widespread in nature, with numerous phenomena
involving complex interactions that extend beyond simple pairwise connections.
As a result, advancements in higher-order processing can accelerate the growth
of various fields requiring structured data. Current approaches typically
represent these interactions using hypergraphs. We enhance this representation
by introducing cellular sheaves for hypergraphs, a mathematical construction
that adds extra structure to the conventional hypergraph while maintaining
their local, higher-order connectivity. Drawing inspiration from existing
Laplacians in the literature, we develop two unique formulations of sheaf
hypergraph Laplacians: linear and non-linear. Our theoretical analysis
demonstrates that incorporating sheaves into the hypergraph Laplacian provides
a more expressive inductive bias than standard hypergraph diffusion, creating a
powerful instrument for effectively modelling complex data structures. We
employ these sheaf hypergraph Laplacians to design two categories of models:
Sheaf Hypergraph Neural Networks and Sheaf Hypergraph Convolutional Networks.
These models generalize classical Hypergraph Networks often found in the
literature. Through extensive experimentation, we show that this generalization
significantly improves performance, achieving top results on multiple benchmark
datasets for hypergraph node classification. | Iulia Duta, Giulia Cassarà, Fabrizio Silvestri, Pietro Liò | 2023-09-29T10:25:43 | http://arxiv.org/abs/2309.17116v1 | # Sheaf Hypergraph Networks
###### Abstract
Higher-order relations are widespread in nature, with numerous phenomena involving complex interactions that extend beyond simple pairwise connections. As a result, advancements in higher-order processing can accelerate the growth of various fields requiring structured data. Current approaches typically represent these interactions using hypergraphs. We enhance this representation by introducing cellular sheaves for hypergraphs, a mathematical construction that adds extra structure to the conventional hypergraph while maintaining their local, higher-order connectivity. Drawing inspiration from existing Laplacians in the literature, we develop two unique formulations of sheaf hypergraph Laplacians: linear and non-linear. Our theoretical analysis demonstrates that incorporating sheaves into the hypergraph Laplacian provides a more expressive inductive bias than standard hypergraph diffusion, creating a powerful instrument for effectively modelling complex data structures. We employ these sheaf hypergraph Laplacians to design two categories of models: Sheaf Hypergraph Neural Networks and Sheaf Hypergraph Convolutional Networks. These models generalize classical Hypergraph Networks often found in the literature. Through extensive experimentation, we show that this generalization significantly improves performance, achieving top results on multiple benchmark datasets for hypergraph node classification.
## 1 Introduction
The prevalence of relational data in real-world scenarios has led to rapid development and widespread adoption of graph-based methods in numerous domains [1; 2; 3; 4]. However, a major limitation of graphs is their inability to represent interactions that goes beyond pairwise relations. In contrast, real-world interactions are often complex and multifaceted. There is evidence that higher-order relations frequently occur in neuroscience [5; 6], chemistry [7], environmental science [8; 9] and social networks [10]. Consequently, learning powerful and meaningful representations for hypergraphs has emerged as a promising and rapidly growing subfield of deep learning [11; 12; 13; 14; 15; 16]. However, current hypergraph-based models struggle to capture higher-order relationships effectively. As described in [17], conventional hypergraph neural networks often suffer from the problem of over-smoothing. As we propagate the information inside the hypergraph, the representations of the nodes become uniform across neighbourhoods. This effect hampers the capability of hypergraph models to capture local, higher-order nuances.
More powerful and flexible mathematical constructs are required to better capture the complexity of real-world interactions. Sheaves provide a suitable enhancement for graphs that allows for more diverse and expressive representations. A cellular sheaf [18] enables attaching data to a graph, by associating vector spaces to the nodes, together with a mechanism for transferring the information along the
edges. This approach allows for richer data representation and enhances the ability to model complex interactions.
Motivated by the need for more expressive structures, we introduce a _cellular sheaf for hypergraphs_, which allows for the representation of more sophisticated dynamics while preserving the higher-order connectivity inherent to hypergraphs. We take on the non-trivial challenge of generalizing the two commonly used hypergraph Laplacians [19; 11] to incorporate the richer structure sheaves offer. Theoretically, we demonstrate that the diffusion process derived using the _sheaf hypergraph Laplacians_ that we propose induces a more expressive inductive bias than the classical hypergraph diffusion. Leveraging this enhanced inductive bias, we construct and test two powerful neural networks capable of inferring and processing hypergraph sheaf structure: the _Sheaf Hypergraph Neural Network_ (SheafHyperGNN) and the _Sheaf Hypergraph Convolutional Network_ (SheafHyperGCN).
The introduction of the cellular sheaf for hypergraphs expands the potential for representing complex interactions and provides a foundation for more advanced techniques. By generalizing the hypergraph Laplacians with the sheaf structure, we can better capture the nuance and intricacy of real-world data. Furthermore, our theoretical analysis provides evidence that the sheaf hypergraph Laplacians embody a more expressive inductive bias, essential for obtaining strong representations.
**Our main contributions** are summarised as follow:
1. We introduce the **cellular sheaf for hypergraphs**, a mathematical construct that enhances the hypergraphs with additional structure by associating a vector space with each node and hyperedge, along with linear projections that enable information transfer between them.
2. We propose both a **linear** and a **non-linear sheaf hypergraph Laplacian**, generalizing the standard hypergraph Laplacians commonly used in the literature. We also provide a theoretical characterization of the inductive biases generated by the diffusion processes of these Laplacians, showcasing the benefits of utilizing these novel tools for effectively modeling intricate phenomena.
3. The two sheaf hypergraph Laplacians are the foundation for **two novel architectures** tailored for hypergraph processing: **Sheaf Hypergraph Neural Network** and **Sheaf Hypergraph Convolutional Network**. Experimental findings demonstrate that these models achieve top results, surpassing existing methods on numerous benchmarking datasets.
## 2 Related work
**Sheaves on Graphs.** Utilizing graph structure in real-world data has improved various domains like healthcare [1], biochemistry [2], social networks [20], recommendation systems [3], traffic prediction [21], with graph neural networks (GNNs) becoming the standard for graph representations. However, in heterophilic setups, when nodes with different labels are likely to be connected, directly processing the graph structure leads to weak performance. In [22], they address this by attaching additional geometric structure to the graph, in the form of cellular sheaves [18].
A cellular sheaf on graphs associates a vector space with each node and each edge together with a linear projection between these spaces for each incident pair. To take into account this more complex geometric structure, SheafNN [23] generalised the classical GNNs [24; 25; 26] by replacing the graph Laplacian with a sheaf Laplacian [27]. Higher-dimensional sheaf-based neural networks are explored, with sheaves either learned from the graph [22] or deterministically inferred for efficiency [28]. Recent methods integrate attention mechanisms [29] or replace propagation with wave equations [30]. In recent developments, Sheaf Neural Networks have been found to significantly enhance the performance of recommendation systems, as they improve upon the limitations of graph neural networks [31].
In the domain of heterogeneous graphs, the concept of learning unique message functions for varying edges is well-established. However, there's a distinction in how sheaf-based methods approach this task compared to heterogeneous methods such as RGCN [32]. Unlike the latter, which learns individual parameters for each kind of incident relationship, sheaf-based methods dynamically predict projections for each relationship, relying on features associated with the node and hyperedge. As a result, the total parameters in sheaf networks do not escalate with an increase in the number of hyperedges. This difference underscores a fundamental shift in paradigm between the two methods.
**Hypergraph Networks.** Graphs, while useful, have a strong limitation: they represent only pairwise relations. Many natural phenomena involve complex, higher-order interactions [33; 34; 35; 9], requiring a more general structure like hypergraphs. Recent deep learning approaches have been developed for hypergraph structures. HyperGNN [11] expands each hyperedge of the hypergraph into a weighted clique and applies message passing similar to GCNs [24]. HNHN [36] improves this with non-linearities, while HyperGCN [37] connects only the most discrepant nodes using a non-linear Laplacian. Similar to the trend in GNNs, attention models have gained popularity in the hypergraph domain as well. HCHA [38] uses an attention-based incidence matrix, computed based on node-hyperedge similarity. Similarly, HERALD [39] uses a learnable distance to infer a soft incidence matrix. On the other hand, HEAT [15] creates messages by propagating information inside each hyperedge using Transformers [40].
Many hypergraph neural network (HNN) methods can be viewed as two-stage frameworks: 1) sending messages from nodes to hyperedges and 2) sending messages back from hyperedges to nodes. Thus, [41] proposes a general framework where the first step is the average operator, while the second stage could use any existing GNN module. Similarly, [42] uses either DeepSet functions [43] or Transformers [40] to implement the two stages, while [44] uses a GNN-like aggregator in both stages, with distinct messages for each (node, hyperedge) pair.
In contrast, we propose a novel model to improve the hypergraph processing by attaching a cellular sheaf to the hypergraph structure and diffusing the information inside the model according to it. We will first introduce the cellular sheaf for hypergraph, prove some properties for the associated Laplacians, and then propose and evaluate two architectures based on the sheaf hypergraph Laplacians.
## 3 Hypergraph Sheaf Laplacian
An undirected hypergraph is a tuple \(\mathcal{H}=(V,E)\) where \(V=\{1,2\ldots n\}\) is a set of nodes (also called vertices), and \(E\) is a set of hyperedges (also called edges when there is no confusion with the graph
Figure 1: Visual representation of linear and non-linear sheaf hypergraph Laplacian. **(Top)** In the linear case, the block matrix \((\mathcal{L}_{\mathcal{F}})_{uv}\) corresponding to the pair of nodes \((u,v)\) accumulates contributions from each hyperedge that simultaneously contains both nodes. **(Bottom)** In the non-linear version, for each hyperedge, we first select the two nodes that are the most dissimilar in the hyperedge stalk domain: \(u\sim_{e}v\) if \((u,v)=argmax_{u,v\in e}||\mathcal{F}_{u\unlhd e}x_{u}-\mathcal{F}_{v\unlhd e}x_{v}||_{2}^{2}\). Then, the block matrix \((\bar{\mathcal{L}}_{\mathcal{F}})_{uv}\) associated with the pair of nodes \((u,v)\) only accumulates contributions from a hyperedge \(e\) if \(u\sim_{e}v\). The two operators (linear and non-linear sheaf hypergraph Laplacian) represent the building blocks for the Sheaf Hypergraph Neural Network and Sheaf Hypergraph Convolutional Network, respectively, and we theoretically show that they exhibit a more expressive implicit bias compared to the traditional Hypergraph Networks, leading to better performance.
edges). Each hyperedge \(e\) is a subset of the nodes set \(V\). We denote by \(n=|V|\) the number of nodes in the hypergraph \(\mathcal{H}\) and by \(m=|E|\) the number of hyperedges. In contrast to graph structures, where each edge contains exactly two nodes, in a hypergraph an edge \(e\) can contain any number of nodes. The number of nodes in each hyperedge (\(|e|\)) is called the _degree of the hyperedge_ and is denoted by \(\delta_{e}\). In contrast, the number of hyperedges containing each node \(v\) is called the _degree of the node_ and is denoted by \(d_{v}\).
Following the same intuition from defining sheaves on graphs [23; 22], we will introduce the cellular sheaf associated with a hypergraph \(\mathcal{H}\).
**Definition 1**.: A _cellular sheaf_ \(\mathcal{F}\) associated with a hypergraph \(\mathcal{H}\) is defined as a triple \(\langle\mathcal{F}(v),\mathcal{F}(e),\mathcal{F}_{v\unlhd e}\rangle\), where:
1. \(\mathcal{F}(v)\) are _vertex stalks:_ vector spaces associated with each node \(v\);
2. \(\mathcal{F}(e)\) are _hyperedge stalks:_ vector spaces associated with each hyperedge \(e\);
3. \(\mathcal{F}_{v\unlhd e}:\mathcal{F}(v)\rightarrow\mathcal{F}(e)\) are _restriction maps:_ linear maps between each pair \(v\unlhd e\), if hyperedge \(e\) contains node \(v\).
In simpler terms, a sheaf associates a space with each node and each hyperedge in a hypergraph and also provides a linear projection that enables the movement of representations between nodes and hyperedges, as long as they are adjacent. Unless otherwise specified, we assign the same d-dimensional space for all vertex stalks \(\mathcal{F}(v)=\mathbb{R}^{d}\) and all hyperedge stalks \(\mathcal{F}(e)=\mathbb{R}^{d}\). We refer to \(d\) as the dimension of the sheaf.
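A minimal sketch of Definition 1 in code: a hypergraph with stalks \(\mathbb{R}^{d}\) and one \(d\times d\) restriction map per incident (node, hyperedge) pair, stored as a dictionary. The toy hypergraph and random maps are illustrative assumptions.

```python
# A minimal cellular sheaf on a toy hypergraph (Definition 1), assuming
# all vertex and hyperedge stalks are R^d and restriction maps are d x d.
import numpy as np

rng = np.random.default_rng(0)
d = 2                                   # sheaf dimension
V = [0, 1, 2, 3]                        # nodes
E = [(0, 1, 2), (1, 2, 3)]              # hyperedges as tuples of nodes

# One restriction map F_{v <| e} : F(v) -> F(e) per incidence (v, e).
restriction = {(v, e): rng.standard_normal((d, d))
               for e in E for v in e}

# Moving a vertex-stalk vector into a hyperedge stalk:
x_v = rng.standard_normal(d)            # a vector in the stalk of node 1
x_in_e = restriction[(1, E[0])] @ x_v   # its image in the stalk of hyperedge (0,1,2)
print(x_in_e.shape)                     # (d,)
```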
Previous works focused on creating hypergraph representations by relying on various methods of defining a Laplacian for a hypergraph. In this work, we will concentrate on two definitions: a linear version of the hypergraph Laplacian as used in [11], and a non-linear version of the hypergraph Laplacian as in [37]. We will extend both of these definitions to incorporate the hypergraph sheaf structure, analyze the advantages that arise from this, and propose two different neural network architectures based on each one of them. For a visual comparison between the two proposed sheaf hypergraph Laplacians, see Figure 1.
### Linear Sheaf Hypergraph Laplacian
**Definition 2**.: Following the definition of a cellular sheaf on hypergraphs, we introduce the _linear sheaf hypergraph Laplacian_ associated with a hypergraph \(\mathcal{H}\) as \((\mathcal{L}_{\mathcal{F}})_{vv}=\sum\limits_{e;v\in e}\frac{1}{\delta_{e}} \mathcal{F}_{v\unlhd e}^{T}\mathcal{F}_{v\unlhd e}\in\mathbb{R}^{d\times d}\) and \((\mathcal{L}_{\mathcal{F}})_{uv}=-\sum\limits_{e;u,v\in e}\frac{1}{\delta_{e} }\mathcal{F}_{u\unlhd e}^{T}\mathcal{F}_{v\unlhd e}\in\mathbb{R}^{d\times d}\), where \(\mathcal{F}_{v\unlhd e}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) represents the linear restriction maps guiding the flow of information from node \(v\) to hyperedge \(e\).
The linear sheaf Laplacian operator for node \(v\) applied on a signal \(x\in\mathbb{R}^{n\times d}\) can be rewritten as:
\[\mathcal{L}_{\mathcal{F}}(x)_{v}=\sum\limits_{e;v\in e}\frac{1}{\delta_{e}} \mathcal{F}_{v\unlhd e}^{T}(\sum\limits_{\begin{subarray}{c}u\in e\\ u\neq v\end{subarray}}(\mathcal{F}_{v\unlhd e}x_{v}-\mathcal{F}_{u\unlhd e}x _{u})). \tag{1}\]
When each hyperedge contains exactly two nodes (thus \(\mathcal{H}\) is a graph), the internal summation will contain a single term, and we recover the sheaf Laplacian for graphs as formulated in [22].
On the other hand, for the trivial sheaf, when the vertex and hyperedge stalks are both fixed to be \(\mathbb{R}\) and the restriction map is the identity \(\mathcal{F}_{v\unlhd e}=1\) we recover the usual linear hypergraph Laplacian [11; 45] defined as \(\mathcal{L}(x)_{v}=\sum_{e;v\in e}\frac{1}{\delta_{e}}\sum_{u\in e}(x_{v}-x_{ u})\). However, when we allow for higher-dimensional stalks \(\mathbb{R}^{d}\), the restriction maps for each adjacency pair \((v,e)\) become linear projections \(\mathcal{F}_{v\unlhd e}\in\mathbb{R}^{d\times d}\), enabling us to model more complex propagations, customized for each incident (node, hyperedge) pairs.
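The operator of Eq. 1 can be assembled directly as an \(nd\times nd\) block matrix; the sketch below accumulates the per-hyperedge terms and, with the trivial sheaf (\(d=1\), identity maps), reduces to the classical linear hypergraph Laplacian recalled above. The toy hypergraph is an illustrative assumption.

```python
# Assembling the linear sheaf hypergraph Laplacian as an (nd) x (nd) block
# matrix by accumulating the per-hyperedge terms of Eq. 1; with the trivial
# sheaf (d = 1, identity maps) it reduces to the classical linear hypergraph
# Laplacian. The toy hypergraph is an illustrative assumption.
import numpy as np

def linear_sheaf_laplacian(n, d, hyperedges, restriction):
    L = np.zeros((n * d, n * d))
    for e in hyperedges:
        delta_e = len(e)
        for v in e:
            Fv = restriction[(v, e)]
            for u in e:
                if u == v:
                    continue
                Fu = restriction[(u, e)]
                # (L_F x)_v accumulates F_v^T (F_v x_v - F_u x_u) / delta_e
                L[v*d:(v+1)*d, v*d:(v+1)*d] += (Fv.T @ Fv) / delta_e
                L[v*d:(v+1)*d, u*d:(u+1)*d] -= (Fv.T @ Fu) / delta_e
    return L

n, d = 4, 1
E_toy = [(0, 1, 2), (1, 2, 3)]
trivial = {(v, e): np.eye(d) for e in E_toy for v in e}
print(linear_sheaf_laplacian(n, d, E_toy, trivial))
```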
In the following sections, we will demonstrate the advantages of using this sheaf hypergraph diffusion instead of the usual hypergraph diffusion.
**Reducing energy via linear diffusion.** Previous work [17] demonstrates that diffusion using the classical symmetric normalised version of the hypergraph Laplacian \(\Delta=D^{-\frac{1}{2}}\mathcal{L}D^{-\frac{1}{2}}\), where \(D\) is a diagonal matrix containing the degrees of the vertices, reduces the following energy function:
\(E_{L_{2}}(x)=\frac{1}{2}\sum_{e}\frac{1}{\delta_{e}}\sum_{u,v\in e}||d_{v}^{- \frac{1}{2}}x_{v}-d_{u}^{-\frac{1}{2}}x_{u}||_{2}^{2}\). Intuitively, this means that applying diffusion using the _linear hypergraph Laplacian_ leads to similar representations for neighbouring nodes. While this is desirable in some scenarios, it may cause poor performance in others, a phenomenon known as over-smoothing [17]. In the following, we show that applying diffusion using the linear _sheaf_ hypergraph Laplacian addresses these limitations by implicitly minimizing a more expressive energy function. This allows us to model phenomena that were not accessible using the usual Laplacian.
**Definition 3**.: We define the _sheaf Dirichlet energy_ of a signal \(x\in\mathbb{R}^{n\times d}\) on a hypergraph \(\mathcal{H}\) as:
\[E_{L_{2}}^{\mathcal{F}}(x)=\frac{1}{2}\sum_{e}\frac{1}{\delta_{e}}\sum_{u,v\in e}||\mathcal{F}_{v\unlhd e}D_{v}^{-\frac{1}{2}}x_{v}-\mathcal{F}_{u\unlhd e}D_{u}^{-\frac{1}{2}}x_{u}||_{2}^{2},\]
where \(D_{v}=\sum\limits_{e:v\in e}\mathcal{F}_{v\unlhd e}^{T}\mathcal{F}_{v\unlhd e}\) is a normalisation term equivalent to the node degree \(d_{v}\) for the trivial sheaf and \(D=diag(D_{1},D_{2}\ldots D_{n})\) is the corresponding block diagonal matrix.
This quantity measures the discrepancy between neighbouring nodes in the hyperedge stalk domain, as opposed to the usual Dirichlet energy for hypergraphs, which instead measures this distance in the node feature domain. In the following, we show that applying hypergraph diffusion using the linear sheaf Laplacian implicitly reduces this energy.
**Proposition 1**.: _The diffusion process using a symmetric normalised version of the linear sheaf hypergraph Laplacian minimizes the sheaf Dirichlet energy of a signal \(x\) on a hypergraph \(\mathcal{H}\). Moreover, the energy decreases with each layer of diffusion._
Concretely, defining the diffusion process as \(Y=(I-\Delta^{\mathcal{F}})X\) where \(\Delta^{\mathcal{F}}=D^{-\frac{1}{2}}\mathcal{L}^{\mathcal{F}}D^{-\frac{1}{2}}\in\mathbb{R}^{nd\times nd}\) represents the symmetric normalised version of the linear sheaf hypergraph Laplacian, we have that \(E_{L_{2}}^{\mathcal{F}}(Y)<\lambda_{*}E_{L_{2}}^{\mathcal{F}}(X)\), with \(\{\lambda_{i}\}\) the non-zero eigenvalues of \(\Delta^{\mathcal{F}}\) and \(\lambda_{*}=\max_{i}\left\{(1-\lambda_{i})^{2}\right\}<1\). All the proofs are in the Supplementary Material.
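A sketch of Proposition 1 in action, reusing the `linear_sheaf_laplacian` helper from the previous snippet (an assumption of this illustration): it normalises \(\mathcal{L}^{\mathcal{F}}\), runs one diffusion step \(Y=(I-\Delta^{\mathcal{F}})X\), and evaluates the sheaf Dirichlet energy of Definition 3 before and after.

```python
# One diffusion step with the symmetric normalised linear sheaf Laplacian
# should decrease the sheaf Dirichlet energy (Proposition 1). Reuses the
# linear_sheaf_laplacian helper from the previous sketch; toy data assumed.
import numpy as np

rng = np.random.default_rng(1)
n, d, c = 4, 2, 3                                  # nodes, stalk dim, channels
E = [(0, 1, 2), (1, 2, 3), (0, 3)]
F = {(v, e): rng.standard_normal((d, d)) for e in E for v in e}

D = np.zeros((n * d, n * d))                       # block-diagonal degree D_v
for v in range(n):
    D[v*d:(v+1)*d, v*d:(v+1)*d] = sum(F[(v, e)].T @ F[(v, e)]
                                      for e in E if v in e)
w, Q = np.linalg.eigh(D)
D_inv_sqrt = (Q * w ** -0.5) @ Q.T                 # block-diagonal D^{-1/2}

Delta = D_inv_sqrt @ linear_sheaf_laplacian(n, d, E, F) @ D_inv_sqrt

def sheaf_dirichlet_energy(X):
    Xn = (D_inv_sqrt @ X).reshape(n, d, c)         # degree-normalised signal
    return 0.5 * sum(np.sum((F[(v, e)] @ Xn[v] - F[(u, e)] @ Xn[u]) ** 2)
                     / len(e) for e in E for u in e for v in e)

X = rng.standard_normal((n * d, c))
Y = (np.eye(n * d) - Delta) @ X
print(sheaf_dirichlet_energy(X), ">", sheaf_dirichlet_energy(Y))
```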
This result addresses some of the limitations of standard hypergraph processing. First, while classical diffusion using the hypergraph Laplacian brings the representations of the nodes closer in the node space (\(x_{v}\), \(x_{u}\)), our linear sheaf hypergraph Laplacian allows us to bring the representations of the nodes closer in the more complex space associated with the hyperedges (\(\mathcal{F}_{v\unlhd e}x_{v}\), \(\mathcal{F}_{u\unlhd e}x_{u}\)). This encourages a form of hyperedge agreement, while preventing the nodes from becoming uniform. Secondly, in the hyperedge stalks, each node can have a different representation for each hyperedge it is part of, leading to a more expressive processing compared to the classical methods. Moreover, in many Hypergraph Networks, the hyperedges uniformly aggregate information from all their components. Through the presence of a restriction map for each (node, hyperedge) pair, we enable the model to learn the individual contribution that each node sends to each hyperedge.
From an opinion dynamics perspective [46], when the hyperedges represent group discussions, the input space \(x_{v}\) can be seen as the private opinion, while the hyperedge stalk \(\mathcal{F}_{v\unlhd e}x_{v}\) can be seen as a public opinion (what an individual \(v\) decides to express in a certain group \(e\)). Minimizing the _Dirichlet energy_ creates private opinions that are in consensus inside each hyperedge, while minimizing the _sheaf Dirichlet energy_ creates an _apparent_ consensus, by only uniformizing the expressed opinions. Through our sheaf setup, each individual is allowed to express a varying opinion in each group it is part of, potentially different from their personal belief. This introduces a realistic scenario inaccessible in the original hypergraph diffusion setup.
### Non-Linear Sheaf Hypergraph Laplacian
Although the linear hypergraph Laplacian is commonly used to process hypergraphs, it falls short in fully preserving the hypergraph structure [47]. To address these shortcomings, [48] introduces the non-linear Laplacian, demonstrating that its spectral properties are more suited for higher-order processing compared to the linear Laplacian. For instance, compared to the linear version, the non-linear Laplacian leads to a more balanced partition in the minimum cut problem, a task known to be tightly related to the semi-supervised node classification. Additionally, while the linear Laplacian associates a clique for each hyperedge, the non-linear one offers the advantage of relying on a much sparser connectivity. We will adopt a similar methodology to derive the non-linear version of the sheaf hypergraph Laplacian and analyze the benefits of applying diffusion using this operator.
**Definition 4**.: We introduce the _non-linear sheaf hypergraph Laplacian_ of a hypergraph \(\mathcal{H}\) with respect to a signal \(x\) as follows:
1. For each hyperedge \(e\), compute \((u_{e},v_{e})=argmax_{u,v\in e}||\mathcal{F}_{u\unlhd e}x_{u}-\mathcal{F}_{v\unlhd e}x_{v}||\), the pair containing the nodes with the most discrepant features in the hyperedge stalk.
2. Build an undirected graph \(\mathcal{G}_{H}\) containing the same set of nodes as \(\mathcal{H}\) and, for each hyperedge \(e\), connect the most discrepant nodes \((u_{e},v_{e})\) (from now on we will write \(u\sim_{e}v\) if they are connected in the \(\mathcal{G}_{H}\) graph due to the hyperedge \(e\)). If multiple pairs have the same maximum discrepancy, we will randomly choose one of them.
3. Define the non-linear sheaf hypergraph Laplacian as: \[\bar{\mathcal{L}}_{\mathcal{F}}(x)_{v}=\sum\limits_{e;u\sim_{e}v}\frac{1}{\delta_{e}}\mathcal{F}_{v\unlhd e}^{T}\big{(}\mathcal{F}_{v\unlhd e}x_{v}-\mathcal{F}_{u\unlhd e}x_{u}\big{)}.\] (2)
Note that the sheaf structure impacts the non-linear diffusion in two ways: by shaping the graph structure creation (Step 1), where the two nodes with the greatest distance in the hyperedge stalk are selected rather than those in the input space; and by influencing the information propagation process (Step 3). When the sheaf is restricted to the trivial case (\(d=1\) and \(\mathcal{F}_{v\unlhd e}=1\)), this corresponds to the non-linear hypergraph Laplacian introduced in [48].
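The three steps of Definition 4 translate directly into code; the sketch below computes \(\bar{\mathcal{L}}_{\mathcal{F}}(x)\) for a toy sheaf (illustrative data, without the mediator variant discussed later).

```python
# Applying the non-linear sheaf hypergraph Laplacian of Definition 4:
# per hyperedge, find the most discrepant node pair in the hyperedge stalk,
# then accumulate only that pair's contribution. Toy data assumed.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 2
E = [(0, 1, 2), (1, 2, 3), (0, 3)]
F = {(v, e): rng.standard_normal((d, d)) for e in E for v in e}
x = rng.standard_normal((n, d))

def nonlinear_sheaf_laplacian(x):
    out = np.zeros_like(x)
    for e in E:
        # Step 1: most discrepant pair in the hyperedge stalk.
        u, v = max(itertools.combinations(e, 2),
                   key=lambda p: np.linalg.norm(F[(p[0], e)] @ x[p[0]]
                                                - F[(p[1], e)] @ x[p[1]]))
        # Steps 2-3: only the edge u ~_e v contributes, in both directions.
        diff = F[(u, e)] @ x[u] - F[(v, e)] @ x[v]
        out[u] += F[(u, e)].T @ diff / len(e)
        out[v] -= F[(v, e)].T @ diff / len(e)
    return out

print(nonlinear_sheaf_laplacian(x))
```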
**Reducing total variation via non-linear diffusion.** In the following discussion, we will demonstrate how transitioning from a linear to a non-linear sheaf hypergraph Laplacian alters the energy guiding the inductive bias. This phenomenon was previously investigated for the classical hypergraph Laplacian, with [48] revealing enhanced expressivity in the non-linear case.
**Definition 5**.: We define the _sheaf total variation_ of a signal \(x\in\mathbb{R}^{n\times d}\) on a hypergraph \(\mathcal{H}\) as:
\[\bar{E}_{TV}^{\mathcal{F}}(x)=\frac{1}{2}\sum\limits_{e}\frac{1}{\delta_{e}}\max\limits_{u,v\in e}||\mathcal{F}_{v\unlhd e}D_{v}^{-\frac{1}{2}}x_{v}-\mathcal{F}_{u\unlhd e}D_{u}^{-\frac{1}{2}}x_{u}||_{2}^{2},\]
where \(D_{v}=\sum\limits_{e:v\in e}\mathcal{F}_{v\unlhd e}^{T}\mathcal{F}_{v\unlhd e}\) is a normalisation term equivalent to the node's degree in the classical setup and \(D=diag(D_{1},D_{2}\ldots D_{n})\) is the corresponding block diagonal matrix.
This quantity generalises the total variation (TV) \(\bar{E}_{TV}(x)=\frac{1}{2}\sum\limits_{e}\frac{1}{\delta_{e}}\max_{u,v\in e}||d_{v}^{-\frac{1}{2}}x_{v}-d_{u}^{-\frac{1}{2}}x_{u}||_{2}^{2}\) minimized in the non-linear hypergraph label propagation [48; 49]. Unlike the TV, which gauges the highest discrepancy in the feature space, the sheaf total variation measures the highest discrepancy at the hyperedge level computed in the hyperedge stalk. We will explore the connection between the sheaf TV and our _non-linear sheaf hypergraph diffusion_.
**Proposition 2**: _The diffusion process using the symmetric normalised version of the non-linear sheaf hypergraph Laplacian minimizes the sheaf total variation of a signal \(x\) on a hypergraph \(\mathcal{H}\)._
Despite the change in the potential function being minimized, the overarching objective remains akin to that of the linear case: striving to achieve a coherent consensus among the representations within the hyperedge stalk space, rather than generating uniform features for each hyperedge in the input space. In contrast to the linear scenario, where a quadratic number of edges is required for each hyperedge, the non-linear sheaf hypergraph Laplacian associates a single edge with each hyperedge, thereby enhancing computational efficiency.
### Sheaf Hypergraph Networks
Popular hypergraph neural networks [45; 11; 37; 50] draw inspiration from a variety of hypergraph diffusion operators [47; 48; 51], giving rise to diverse message passing techniques. These techniques all involve the propagation of information from nodes to hyperedges and vice-versa. We will adopt a similar strategy and introduce the Sheaf Hypergraph Neural Network and Sheaf Hypergraph Convolutional Network, based on two message-passing schemes inspired by the sheaf diffusion mechanisms discussed in this paper.
Given a hypergraph \(\mathcal{H}=(V,E)\) with nodes characterised by a set of features \(X\in\mathbb{R}^{n\times f}\), we initially linearly project the input features into \(\tilde{X}\in\mathbb{R}^{n\times(df)}\) and then reshape them into \(\tilde{X}\in\mathbb{R}^{nd\times f}\). As a result, each node is represented in the vertex stalk as a matrix \(\mathbb{R}^{d\times f}\), where \(d\) denotes the dimension of the vertex stalk, and \(f\) indicates the number of channels.
A general layer of Sheaf Hypergraph Network is defined as:
\[Y=\sigma((I_{nd}-\overset{\bullet}{\Delta})(I_{n}\otimes W_{1})\tilde{X}W_{2}).\]
Here, \(\overset{\bullet}{\Delta}\) can be either \(\Delta^{\mathcal{F}}=D^{-\frac{1}{2}}\mathcal{L}^{\mathcal{F}}D^{-\frac{1}{2}}\) for the _linear_ sheaf hypergraph Laplacian introduced in Eq. 1 or \(\bar{\Delta}^{\mathcal{F}}=D^{-\frac{1}{2}}\bar{\mathcal{L}}^{\mathcal{F}}D^{-\frac{1}{2}}\) for the _non-linear_ sheaf hypergraph Laplacian introduced in Eq. 2. Both \(W_{1}\in\mathbb{R}^{d\times d}\) and \(W_{2}\in\mathbb{R}^{f\times f}\) are learnable parameters, while \(\sigma\) represents the ReLU non-linearity.
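A minimal PyTorch sketch of the layer above, assuming the (dense) normalised sheaf Laplacian \(\overset{\bullet}{\Delta}\) has already been built (e.g. as in the earlier snippets); tensor shapes, parameter initialisation and the dense materialisation are illustrative simplifications, not the authors' implementation.

```python
# A minimal Sheaf Hypergraph Network layer: Y = ReLU((I - Delta)(I (x) W1) X W2).
# Delta is assumed precomputed (dense, nd x nd); shapes are illustrative.
import torch
import torch.nn as nn

class SheafHyperLayer(nn.Module):
    def __init__(self, d, channels):
        super().__init__()
        self.W1 = nn.Parameter(torch.eye(d) + 0.01 * torch.randn(d, d))
        self.W2 = nn.Parameter(torch.eye(channels)
                               + 0.01 * torch.randn(channels, channels))

    def forward(self, X, Delta):
        n = X.shape[0] // self.W1.shape[0]
        I_W1 = torch.kron(torch.eye(n), self.W1)   # I_n (x) W1, block-diagonal
        I = torch.eye(X.shape[0])
        return torch.relu((I - Delta) @ I_W1 @ X @ self.W2)

n, d, c = 4, 2, 3
Delta = torch.randn(n * d, n * d)
Delta = 0.1 * Delta @ Delta.T                      # placeholder PSD Laplacian
X = torch.randn(n * d, c)
Y = SheafHyperLayer(d, c)(X, Delta)
print(Y.shape)                                     # torch.Size([8, 3])
```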
**Sheaf Hypergraph Neural Network** (SheafHyperGNN). This model utilizes the _linear_ sheaf hypergraph Laplacian \(\overset{\bullet}{\Delta}=\Delta^{\mathcal{F}}\). When the sheaf is trivial (\(d=1\) and \(\mathcal{F}_{v\unlhd e}=1\)), and \(W_{1}=\mathbf{I}_{d}\), the SheafHyperGNN is equivalent to the conventional HyperGNN architecture [11]. However, by increasing dimension \(d\) and adopting dynamic restriction maps, our proposed SheafHyperGNN becomes more expressive. For every adjacent node-hyperedge pair \((v,e)\), we use a \(d\times d\) block matrix to discern each node's contribution instead of a fixed weight that only stores the incidence relationship. The remaining operations are similar to those in HyperGNN [11]. More details on how the block matrices \(\mathcal{F}_{v\unlhd e}\) are learned can be found in the following subsection.
**Sheaf Hypergraph Convolutional Network** (SheafHyperGCN). This model employs the non-linear Laplacian \(\overset{\bullet}{\Delta}=\bar{\Delta}^{\mathcal{F}}\). Analogous to the linear case, when the sheaf is trivial and \(W_{1}=\mathbf{I}_{d}\) we obtain the classical HyperGCN architecture [37]. In our experiments, we will use an approach similar to that in [37] and adjust the Laplacian to include mediators. This implies that we will not only connect the two most discrepant nodes but also create connections between each node in the hyperedge and these two most discrepant nodes, resulting in a denser associated graph. For more information on this variation, please refer to [37] or Supplementary Material.
In summary, the models introduced in this work, SheafHyperGNN and SheafHyperGCN serve as generalisations of the classical HyperGNN [11] and HyperGCN [37]. These new models feature a more expressive implicit regularisation compared to their traditional counterparts.
**Learnable Sheaf Laplacian.** A key advantage of Sheaf Hypergraph Networks lies in attaching and processing a more complex structure (sheaf) instead of the original standard hypergraph. Different sheaf structures can be associated with a single hypergraph, and accurately modeling the most suitable structure is crucial for obtaining effective and meaningful representations. In our proposed models, we achieve this by designing learnable restriction maps. For a \(d\)-dimensional sheaf, we predict the restriction maps for each pair of incident (vertex \(v\), hyperedge \(e\)) as \(\mathcal{F}_{v\unlhd e}=\text{MLP}(x_{v}||h_{e})\in\mathbb{R}^{d^{2}}\), where \(x_{v}\) represents the node features of \(v\), and \(h_{e}\) represents the features of the hyperedge \(e\). This vector representation is then reshaped into a \(d\times d\) block matrix representing the linear restriction map for the \((v,e)\) pair. When hyperedge features \(h_{e}\) are not provided, any permutation-invariant operation can be applied to obtain hyperedge features from node-level features. We experiment with three types of \(d\times d\) block matrices: diagonal, low-rank and general matrices, with the diagonal version
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Name & Cora & Citeseer & Pubmed & Cora,CA & DBLP,CA & Senate & House & Congress \\ \hline HCHA & 79.14 \(\pm\) 1.02 & 72.42 \(\pm\) 1.42 & 86.41 \(\pm\) 0.36 & 82.55 \(\pm\) 0.97 & 90.92 \(\pm\) 0.22 & 48.62 \(\pm\) 4.41 & 61.36 \(\pm\) 2.53 & 90.43 \(\pm\) 1.20 \\ HNHN & 76.36 \(\pm\) 1.92 & 72.64 \(\pm\) 1.57 & 86.90 \(\pm\) 0.30 & 77.19 \(\pm\) 1.49 & 86.78 \(\pm\) 2.99 & 50.93 \(\pm\) 6.33 & 67.8 \(\pm\) 2.59 & 53.35 \(\pm\) 1.45 \\ AlSheafness & 76.88 \(\pm\) 1.80 & 70.83 \(\pm\) 1.63 & 75.03 \(\pm\) 5.03 & 81.97 \(\pm\) 1.50 & 52.17 \(\pm\) 0.27 & 48.17 \(\pm\) 5.67 & 68.24 \(\pm\) 2.40 & 91.80 \(\pm\) 1.53 \\ AllSetTransformers & 78.81 \(\pm\) 1.47 & 73.08 \(\pm\) 1.20 & 88.72 \(\pm\) 0.37 & 83.63 \(\pm\) 1.47 & 91.53 \(\pm\) 0.23 & 51.83 \(\pm\) 5.22 & 69.33 \(\pm\) 2.20 & 91.26 \(\pm\) 1.05 \\ UniGCNII & 78.81 \(\pm\) 1.05 & 73.05 \(\pm\) 2.21 & 88.25 \(\pm\) 0.33 & 83.60 \(\pm\) 1.14 & 91.69 \(\pm\) 0.19 & 49.30 \(\pm\) 2.45 & 67.25 \(\pm\) 2.57 & 94.81 \(\pm\) 0.81 \\ HyperGDN & 79.20 \(\pm\) 1.14 & 72.62 \(\pm\) 1.49 & 86.65 \(\pm\) 0.43 & 80.62 \(\pm\) 1.32 & 90.35 \(\pm\) 0.26 & 52.82 \(\pm\) 3.20 & 51.70 \(\pm\) 3.77 & 74.63 \(\pm\) 3.62 \\ ED-HNN & 80.31 \(\pm\) 1.35 & 73.70 \(\pm\) 1.38 & **80.93** \(\pm\) **0.53** & 83.97 \(\pm\) 1.55 & **91.09** \(\pm\) **0.19** & 64.79 \(\pm\) 1.54 & 72.45 \(\pm\) 2.28 & **95.00** \(\pm\) **0.99** \\ \hline HyperGCN\({}^{\dagger}\) & 78.36 \(\pm\) 2.01 & 71.01 \(\pm\) 2.21 & 80.81 \(\pm\) 1.24 & 79.50 \(\pm\) 2.11 & 89.42 \(\pm\) 0.16\({}^{\ddaggeragger}\) & 51.13 \(\pm\) 4.15 & 69.29 \(\pm\) 2.05 & 89.67 \(\pm\) 1.22 \\ SheafHyperGCN & 80.06 \(\pm\) 1.12 & 73.27 \(\pm\) 0.50 & 87.09 \(\pm\) 0.71 & 83.26 \(\pm\) 1.20 & 90.83 \(\pm\) 0.23 & 66.33 \(\pm\) 4.58 & 72.66 \(\pm\) 2.26 & 90.37 \(\pm\) 1.52 \\ \hline HyperGNN & 79.39 \(\pm\) 1.36 & 72.45 \(\pm\) 1.16 & 86.44 \(\pm\) 0.44 & 82.64 \(\pm\) 1.65 & 91.03 \(\pm\) 0.20 & 48.59 \(\pm\) 4.52 & 61.39 \(\pm\) 2.96 & 91.26 \(\pm\) 1.15 \\ SheafHyperGNN & **81.30 \(\pm\) 1.70** & **74.71** \(\pm\) **1.23** & 87.68 \(\pm\) 0.60 & **85.52** \(\pm\) **1.28** & 91.59 \(\pm\) 0.24 & **68.73** \(\pm\) **4.68** & **73.84** \(\pm\) **2.30** & 91.81 \(\pm\) 1.60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Performance on a collection of hypergraph benchmarks.** Our models using sheaf hypergraph Laplacians demonstrate a clear advantage over their counterparts using classical Laplacians (HyperGNN and HyperGCN), and attain state-of-the-art results on five of the datasets.
consistently outperforming the other two. These restriction maps are further used to define the sheaf hypergraph Laplacians (Def. 2 or 4) used in the final Sheaf Hypergraph Networks. Please refer to the Supplementary Material for more details on how we constrain the restriction maps.
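To make this construction concrete, the following is a minimal sketch, assuming PyTorch, of how the diagonal variant of the learnable restriction maps could be predicted; the class name, hidden width, and layer sizes are illustrative assumptions, not the authors' released code.

```python
# Sketch of predicting diagonal restriction maps F_{v <| e} = MLP(x_v || h_e);
# names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DiagRestrictionMaps(nn.Module):
    def __init__(self, node_dim: int, edge_dim: int, d: int, hidden: int = 64):
        super().__init__()
        # The diagonal variant only needs d values per (v, e) pair; the
        # general variant would output d * d values and reshape to (d, d).
        self.mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, x_v: torch.Tensor, h_e: torch.Tensor) -> torch.Tensor:
        # x_v: (num_pairs, node_dim), h_e: (num_pairs, edge_dim), one row per
        # incident (vertex, hyperedge) pair.
        diag = self.mlp(torch.cat([x_v, h_e], dim=-1))  # (num_pairs, d)
        return torch.diag_embed(diag)                   # (num_pairs, d, d)
```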
## 4 Experimental Analysis
We evaluate our model on eight real-world datasets that vary in domain, scale, and heterophily level and are commonly used for benchmarking hypergraphs. These include Cora, Citeseer, Pubmed, Cora-CA, DBLP-CA [37], House [52], Senate and Congress [53]. To ensure a fair comparison with the baselines, we follow the same training procedures used in [50] by randomly splitting the data into \(50\%\) training samples, \(25\%\) validation samples and \(25\%\) test samples, and running each model \(10\) times with different random splits. We report average accuracy along with the standard deviation.
Additionally, we conduct experiments on a set of synthetic heterophilic datasets inspired by those introduced by [50]. Following their approach, we generate a hypergraph using the contextual hypergraph stochastic block model [54; 55; 56], containing \(5000\) nodes: half belong to class \(0\) while the other half to class \(1\). We then randomly sample \(1000\) hyperedges with cardinality \(15\), each containing exactly \(\beta\) nodes from class \(0\). The heterophily level is computed as \(\alpha=\text{min}(\beta,15-\beta)\). Node features are sampled from a label-dependent Gaussian distribution with a standard deviation of 1. As the original dataset is not publicly available, we generate our own set of datasets by varying the heterophily level \(\alpha\in\{1\dots 7\}\) and rerun their experiments for a fair comparison.
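A minimal sketch of this generation procedure, assuming NumPy, is shown below; the feature dimension and random seed are illustrative assumptions, and the exact sampling details of the original benchmark may differ.

```python
# Sketch of the contextual hypergraph stochastic block model described above.
import numpy as np

rng = np.random.default_rng(0)
n, num_edges, card, beta = 5000, 1000, 15, 3   # beta: class-0 nodes per edge
labels = np.repeat([0, 1], n // 2)             # half class 0, half class 1
class0 = np.where(labels == 0)[0]
class1 = np.where(labels == 1)[0]

# Each hyperedge has cardinality 15 and contains exactly beta class-0 nodes.
hyperedges = [
    np.concatenate([rng.choice(class0, beta, replace=False),
                    rng.choice(class1, card - beta, replace=False)])
    for _ in range(num_edges)
]
alpha = min(beta, card - beta)                 # heterophily level
# Label-dependent Gaussian features with standard deviation 1 (dim assumed).
features = rng.normal(loc=labels[:, None], scale=1.0, size=(n, 16))
```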
The experiments are executed on a single NVIDIA Quadro RTX 8000 with 48GB of GPU memory. Unless otherwise specified, our results represent the best performance obtained by each architecture using hyper-parameter optimisation with random search. Details on all the model choices and hyper-parameters can be found in the Supplementary Material.
**Laplacian vs Sheaf Laplacian.** As demonstrated in the previous section, SheafHyperGNN and SheafHyperGCN are generalisations of the standard HyperGNN [11] and HyperGCN [37], respectively. They transition from the trivial sheaf (\(d=1\) and \(\mathcal{F}_{v\unlhd e}=1\)) to more complex structures (\(d\geq 1\) and \(\mathcal{F}_{v\unlhd e}\) a \(d\times d\) learnable projection). The results in Table 1 and Table 3 show that both models significantly outperform their counterparts on all tested datasets. Among our models, the one based on the linear Laplacian (SheafHyperGNN) consistently outperforms the model based on the non-linear Laplacian (SheafHyperGCN) across all datasets. This observation aligns with the performance of the models based on the standard hypergraph Laplacian, where HyperGCN is outperformed by HyperGNN on all but two real-world datasets, despite its theoretical advantage [48].
**Comparison to recent methods.** We also compare to several recent models from the literature such as HCHA [38], HNHN [36], AllDeepSets [42], AllSetTransformer [42], UniGCNII [57], HyperND [58], and ED-HNN [50]. Our models achieve competitive results on all real-world datasets, with state-of-the-art performance on five of them (Table 1). These results confirm the advantages of using sheaf Laplacians for processing hypergraphs. We also compare our models against a series of baselines on the synthetic heterophilic dataset. The results are shown in Table 3. Our best model, SheafHyperGNN, consistently outperforms the other models across all levels of heterophily. Note that our framework for enhancing classical hypergraph processing with a sheaf structure is not restricted to the two traditional models tested in this paper (HyperGNN and HyperGCN). Most of the recent state-of-the-art methods, such as ED-HNN, could be easily adapted to learn and process our novel cellular sheaf hypergraph instead of the standard hypergraph, leading to further advancement in the hypergraph field.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Name & Cora & Citeseer & Pubmed & Cora-CA & DBLP-CA & Senate & House & Congress \\ \hline Diag-SheafHyperGCN & 80.06 \(\pm\)1.12 & 73.27 \(\pm\)0.50 & 87.09 \(\pm\)0.71 & 83.26 \(\pm\)1.20 & 90.83 \(\pm\)0.23 & 66.33 \(\pm\)4.58 & 72.66 \(\pm\)2.26 & 90.37 \(\pm\)1.52 \\ LR-SheafHyperGCN & 78.70 \(\pm\)1.14 & 72.14 \(\pm\)1.09 & 86.99 \(\pm\)0.39 & 82.61 \(\pm\)1.28 & 90.84 \(\pm\)0.29 & 66.76 \(\pm\)4.58 & 70.70 \(\pm\)2.23 & 84.88 \(\pm\)2.31 \\ Gen-SheafHyperGCN & 79.13 \(\pm\)0.85 & 72.54 \(\pm\)2.32 & 86.90 \(\pm\)0.46 & 82.54 \(\pm\)2.08 & 90.57 \(\pm\)0.40 & 65.49 \(\pm\)5.17 & 71.05 \(\pm\)2.12 & 82.14 \(\pm\)2.81 \\ \hline Diag-SheafHyperGNN & 81.30 \(\pm\)1.70 & 74.71 \(\pm\)1.23 & 87.68 \(\pm\)0.60 & 85.52 \(\pm\)1.28 & 91.59 \(\pm\)0.24 & 68.73 \(\pm\)4.68 & 73.84 \(\pm\)2.30 & 91.81 \(\pm\)1.60 \\ LR-SheafHyperGNN & 76.65 \(\pm\)1.41 & 74.05 \(\pm\)1.34 & 87.09 \(\pm\)0.25 & 77.05 \(\pm\)1.00 & 85.13 \(\pm\)0.29 & 68.45 \(\pm\)2.46 & 73.84 \(\pm\)2.30 & 74.83 \(\pm\)2.32 \\ Gen-SheafHyperGNN & 76.82 \(\pm\)1.32 & 74.24 \(\pm\)1.05 & 87.35 \(\pm\)0.34 & 77.12 \(\pm\)1.14 & 84.99 \(\pm\)0.39 & 68.45 \(\pm\)4.98 & 69.47 \(\pm\)1.97 & 74.52 \(\pm\)1.27 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation study on Restriction Maps:** we explore three types of \(d\times d\) restriction maps: diagonal, low-rank and general. Diagonal matrices consistently achieve better accuracy on most of the datasets, demonstrating a superior balance between complexity and expressivity.
In the following sections, we conduct a series of ablation studies to gain a deeper understanding of our models. We will explore various types of restriction maps, analyze how performance changes when varying the network depth and study the importance of stalk dimension for final accuracy.
**Investigating the Restriction Maps.** Both the linear and non-linear sheaf hypergraph Laplacians rely on attaching a sheaf structure to the hypergraph. For a cellular sheaf \(\mathcal{F}\) with vertex stalks \(\mathcal{F}(v)=\mathbb{R}^{d}\) and hyperedge stalks \(\mathcal{F}(e)=\mathbb{R}^{d}\) as used in our experiments, this involves inferring the restriction maps \(\mathcal{F}_{v\unlhd e}\in\mathbb{R}^{d\times d}\) for each incidence pair \((v,e)\). We implement these as a function of the corresponding node and hyperedge features: \(\mathcal{F}_{v\unlhd e}=\text{MLP}(x_{v}||h_{e})\in\mathbb{R}^{d^{2}}\). Learning these matrices can be challenging; therefore, we experimented with adding constraints to the type of matrices used as restriction maps. In Table 2 we show the performance obtained by our models when constraining the restriction maps to be either diagonal (Diag-SheafHyperNN), low-rank (LR-SheafHyperNN) or general matrices (Gen-SheafHyperNN). We observe that the sheaves equipped with diagonal restriction maps perform better than the more general variations. We believe that the advantage of the diagonal restriction maps is due to easier optimization, which outweighs the loss in expressivity. More details about predicting constrained \(d\times d\) matrices can be found in the Supplementary Material.
**Importance of Stalk Dimension.** The standard hypergraph Laplacian corresponds to a sheaf Laplacian with \(d=1\) and \(\mathcal{F}_{v\circ e}=1\). Constraining the stalk dimension to be \(1\), but allowing the restriction maps to be dynamically predicted, becomes similar to an attention mechanism [38]. However, attention models are restricted to guiding information via a scalar probability, thus facing the same over-smoothing limitations as traditional HyperGNN in the heterophilic setup. Our d-dimensional restriction maps increase the model's expressivity by enabling more complex information transfer between nodes and hyperedges, tailored for each individual pair. We validate this experimentally on the synthetic heterophilic dataset, using the diagonal version of the models, which achieves the best performance in the previous ablation. In Figure 2, we demonstrate how performance significantly improves when allowing higher-dimensional stalks (\(d>1\)). These results are consistent for both linear sheaf Laplacian-based models (SheafHyperGNN) and non-linear ones (SheafHyperGCN).
**Influence of Depth.** It is well-known that stacking many layers in a hypergraph network can lead to a decrease in model performance, especially in the heterophilic setup. This phenomenon, called over-smoothing, is well-studied in both graph [59] and hypergraph literature [17]. To analyse the extent to which our model suffers from this limitation, we train a series of models on the most heterophilic version of the synthetic dataset (\(\alpha=7\)). For both SheafHyperGNN and its HyperGNN equivalent, we vary the number of layers between \(1\) and \(8\). In Figure 2, we observe that while HyperGNN exhibits a drop in performance when going beyond \(3\) layers, SheafHyperGNN's performance remains mostly constant. Similar results were observed for the non-linear version when comparing SheafHyperGCN with HyperGCN (results in Supplementary Material). These results indicate potential advantages of our models in the heterophilic setup by allowing the construction of deeper architectures.
**Investigating Features Diversity.** Our theoretical analysis shows that, while conventional Hypergraph Networks tend to produce similar features for neighbouring nodes, our Sheaf Hypergraph
Figure 2: **Impact of Depth and Stalk Dimension** evaluated on the heterophilic dataset (\(\alpha=7\)). SheafHyperGNN's performance is unaffected by increasing depth, and high-dimensional stalks are essential for achieving top performance. The Dirichlet energy shows that, while HyperGNN enforces the nodes to be similar, our SheafHyperGNN does not suffer from this limitation, encouraging feature diversity.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{7}{c}{Heterophily (\(\alpha\))} \\ Name & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline HyperGCN & 83.9 & 69.4 & 72.9 & 75.9 & 70.5 & 67.3 & 66.5 \\ HyperGNN & 98.4 & 83.7 & 79.4 & 74.5 & 69.5 & 66.9 & 63.8 \\ HCHA & 98.1 & 81.8 & 78.3 & 75.88 & 74.1 & 71.1 & 70.8 \\ ED-HNN & 99.9 & 91.3 & 88.4 & 84.1 & 80.7 & 78.8 & 76.5 \\ \hline SheafHyperGCN & **100** & 87.1 & 84.8 & 79.2 & 78.1 & 76.6 & 75.5 \\ SheafHyperGNN & **100** & **94.2** & **90.8** & **86.5** & **82.1** & **79.8** & **77.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Accuracy on Synthetic Datasets with Varying Heterophily Levels**: Across all levels of heterophily (\(\alpha\)), our sheaf-based methods SheafHyperGCN and SheafHyperGNN consistently outperform their counterparts. Additionally, they achieve top results for all heterophily levels, further demonstrating their effectiveness. For each experiment, the result represents the average accuracy over \(10\) runs.
Networks reduce the distance between neighbouring nodes in the more complex hyperedge stalk space. As a result, the nodes' features do not become uniform, preserving their individual identities. We empirically evaluate this by computing the Dirichlet energy for HyperGNN and SheafHyperGNN (shaded area in Figure 2) as a measure of similarity between neighbouring nodes. The results are aligned with the theoretical analysis: while increasing depth in HyperGNN creates uniform features, SheafHyperGNN does not suffer from this limitation, encouraging diversity between the nodes.
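As a reference for this diagnostic, the following is a minimal sketch, assuming NumPy, of the Dirichlet energy \(E(X)=\mathrm{trace}(X^{\top}LX)\) for a given Laplacian \(L\); whether \(L\) is the standard or the sheaf hypergraph Laplacian is exactly what distinguishes the two models, and building \(L\) itself is out of scope here.

```python
# Dirichlet energy as a measure of feature similarity between neighbours.
import numpy as np

def dirichlet_energy(X: np.ndarray, L: np.ndarray) -> float:
    """X: (n, f) node features; L: (n, n) positive semi-definite Laplacian.
    Small values indicate near-identical neighbouring features, i.e.,
    over-smoothing."""
    return float(np.trace(X.T @ L @ X))
```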
## 5 Conclusion
In this paper we introduce the cellular sheaf for hypergraphs, an expressive tool for modelling higher-order relations built upon the classical hypergraph structure. Furthermore, we propose two models capable of inferring and processing the sheaf hypergraph structure, based on the linear and non-linear sheaf hypergraph Laplacians, respectively. We prove that the diffusion processes associated with these models induce a more expressive implicit regularization, extending the energies associated with standard hypergraph diffusion. This novel architecture generalizes classical Hypergraph Networks, and we experimentally show that it outperforms existing methods on several datasets. Our technique of replacing the hypergraph Laplacian with a sheaf hypergraph Laplacian in both HyperGNN and HyperGCN establishes a versatile framework that can be employed to "sheafify" other hypergraph architectures. We believe that sheaf hypergraphs can contribute to further advancements in the rapidly evolving hypergraph community, extending far beyond the results presented in this work.
**Acknowledgment.** The authors would like to thank Ferenc Huszar for fruitful discussions and constructive suggestions during the development of the paper and Eirik Flandmark and Laura Brinkholm Justesen for fixing a minor issue in the original HyperGCN code, which led to improved results in the baselines. Iulia Duta is a PhD student funded by a Twitter scholarship. This work was also supported by PNRR MUR projects PE0000013-FAIR, SERICS (PE00000014), Sapienza Project FedSSL, and IR0000013-SoBigData.it.
| Higher-order relations are ubiquitous in nature, with numerous phenomena involving complex interactions that go beyond simple pairwise connections. As a result, advances in higher-order processing accelerate the growth of a range of fields that rely on structured data. Conventional methods commonly represent these interactions using hypergraphs. We enhance this representation by attaching a cellular sheaf to the hypergraph, a mathematical construction that increases the expressivity of the conventional hypergraph while maintaining its local higher-order connectivity. Drawing on existing ideas for Laplacians, we develop two kinds of sheaf hypergraph Laplacians: linear and non-linear. Our theoretical analysis shows that incorporating sheaves into the hypergraph Laplacian creates a more expressive inductive bias than standard hypergraph diffusion, … |
2309.16208 | Low-rank tensor completion via tensor joint rank with logarithmic
composite norm | Low-rank tensor completion (LRTC) aims to recover a complete low-rank tensor
from an incomplete observed tensor, attracting extensive attention in various
practical applications such as image processing and computer vision. However,
current methods often perform well only when there is a sufficient amount of observed
information, and they perform poorly or may fail when the observed information
is less than 5\%. In order to improve the utilization of observed information,
a new method called the tensor joint rank with logarithmic composite norm
(TJLC) method is proposed. This method simultaneously exploits two types of
tensor low-rank structures, namely tensor Tucker rank and tubal rank, thereby
enhancing the inherent correlations between known and missing elements. To
address the challenge of directly applying two significantly different tensor
ranks to LRTC, a new tensor Logarithmic composite norm is further proposed.
Subsequently, the TJLC model and algorithm for the LRTC problem are proposed.
Additionally, theoretical convergence guarantees for the TJLC method are
provided. Experiments on various real datasets demonstrate that the proposed
method outperforms state-of-the-art methods significantly. Particularly, the
proposed method achieves satisfactory recovery even when the observed
information is as low as 1\%, and the recovery performance improves
significantly as the observed information increases. | Hongbing Zhang | 2023-09-28T07:17:44 | http://arxiv.org/abs/2309.16208v2 | # Nonconvex third-order Tensor Recovery Based on Logarithmic Minimax Function
###### Abstract
Recent research has shown that low-rank tensor recovery based on non-convex relaxation has gained extensive attention. In this context, we propose a new Logarithmic Minimax (LM) function. A comparative analysis between the LM function and the Logarithmic, Minimax concave penalty (MCP), and Minimax Logarithmic concave penalty (MLCP) functions reveals that the proposed function can protect large singular values while imposing stronger penalization on small singular values. Based on this, we define a weighted tensor \(LM\)-norm as a non-convex relaxation for the tensor tubal rank. Subsequently, we propose the TLM-based low-rank tensor completion (LRTC) model and the TLM-based tensor robust principal component analysis (TRPCA) model, respectively. Furthermore, we provide theoretical convergence guarantees for the proposed methods. Comprehensive experiments were conducted on various real datasets, and a comparative analysis was made with the closely related EMLCP method. The results demonstrate that the proposed method outperforms the state-of-the-art methods.
keywords: Tensor recovery, Logarithmic Minimax (LM) function, low-rank tensor completion (LRTC), tensor robust principal component analysis (TRPCA).
## 1 Introduction
As the dimensionality of real data increases and its structure becomes more complex, tensors, as high-order generalizations of vectors and matrices, have received widespread attention from researchers. Currently, tensors play an increasingly important role in various applications such as image/video processing [1; 2], hyperspectral/multispectral image (HSI/MSI) processing [3; 4], background subtraction [5; 6], and magnetic resonance imaging (MRI) data recovery [7; 8].
In general, tensor recovery problems can be described as the process of reconstructing the original tensor from partially observed or corrupted data, which involves solving the problems of low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). Their corresponding models are as follows:
\[\min_{\mathcal{X}}\,rank(\mathcal{X})\;s.t.\;\mathcal{P}_{\Omega}(\mathcal{T} )=\mathcal{P}_{\Omega}(\mathcal{X}), \tag{1}\]
\[\min_{\mathcal{X},\mathcal{E}}\,rank(\mathcal{X})+\tau_{1}\|\mathcal{E}\|_{1}\,\,s.t. \,\,\mathcal{T}=\mathcal{X}+\mathcal{E}, \tag{2}\]
where \(\mathcal{T}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is the observed tensor; \(\mathcal{X}\) is the original tensor; \(\mathcal{E}\) is the sparse tensor; \(\mathcal{P}_{\Omega}(\mathcal{X})\) is a projection operator that keeps the entries of \(\mathcal{X}\) in \(\Omega\) and sets all others to zero. A crucial issue in tensor recovery is how to define the tensor rank. Different from the matrix rank, the definition of tensor rank is not unique. The mainstream definitions of tensor rank are the CANDECOMP/PARAFAC (CP) rank based on CP decomposition [9], the Tucker rank based on Tucker decomposition [10], and the tubal rank [11] induced by the tensor singular value decomposition (t-SVD) [12]. Nevertheless, directly computing the CP rank of a given tensor is NP-hard [13]. The calculation of the Tucker rank requires the data to be folded and unfolded, which causes structural damage to the data. Compared with the CP rank and Tucker rank, the tubal rank can better maintain the data structure. Therefore, the LRTC problem is mostly based on the tubal rank. Subsequently, Zhang et al. [14] defined the tensor nuclear norm (TNN) based on the t-SVD and the tensor tubal rank to solve the LRTC problem and obtained state-of-the-art tensor recovery results. Moreover, Lu et al. [15] performed TRPCA with the TNN based on the t-SVD by using the Fourier transform.
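For concreteness, the projection operator \(\mathcal{P}_{\Omega}\) appearing in models (1) and (2) can be realised as a simple mask; the following is a minimal sketch, assuming NumPy.

```python
# P_Omega keeps the entries of X in Omega and sets all others to zero.
import numpy as np

def P_Omega(X: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """mask: boolean array of the same shape as X, True on observed entries."""
    return np.where(mask, X, 0.0)
```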
In tensor recovery, larger singular values typically correspond to significant information such as contours, sharp edges, and smooth regions, while smaller singular values are mainly composed of noise or outliers [16]. Although convex relaxations are easier to solve, they produce biased estimates [17]. The TNN method, as a convex relaxation with a uniform penalty on all singular values, may excessively penalize large singular values, resulting in suboptimal tensor recovery. Therefore, moving beyond convex relaxation methods, with their inherent estimation bias, and adopting non-convex relaxation methods is crucial for further improving the accuracy of tensor recovery. Non-convex methods can impose smaller penalties on larger singular values and greater penalties on smaller singular values. Recently, the Logarithmic function [18, 19] and the Minimax concave penalty (MCP) function [20], as non-convex relaxations, have achieved good results in tensor recovery problems. The Logarithmic function penalizes small singular values effectively, but it is deficient in dealing with large singular values. The MCP function better protects the large singular values, but its penalty on small singular values is weak. To overcome the limitations of the Logarithmic and MCP functions, Zhang et al. [21] proposed the Minimax Logarithmic Concave Penalty (MLCP) function. The MLCP function can both protect the large singular values well and impose a strong penalty on the small singular values.
However, directly applying the MLCP function to tensor recovery problems may make it impossible to obtain explicit solutions, which is highly unfavorable for algorithm design. Additionally, it is challenging to strengthen the penalty on small singular values while protecting large singular values. To overcome the MLCP function's lack of a direct solution and to further enhance the penalty on small singular values, we propose the Logarithmic Minimax (LM) function in this paper. The LM function not only possesses the property of protecting large singular values like the MLCP function but also imposes stronger penalties on small singular values. Furthermore, we establish a theorem for the proximal operator of the LM function, which ensures that the LM function admits an explicit solution in tensor recovery problems, something the MLCP function lacks. Based on this, we further propose the weighted tensor \(LM\)-norm, which improves the flexibility of the LM function in handling different singular values of tensors.
The main contributions of this paper are summarized below:
Firstly, we propose the LM function, a new non-convex function. It possesses the property of protecting large singular values and imposes stronger penalties on small singular values compared to the MLCP function. The theorem on the proximal operator of the LM function introduced in this paper guarantees the direct solvability of the LM function in tensor recovery problems, which the MLCP function lacks. Based on this, the proposed weighted tensor \(LM\)-norm further enhances the flexibility of the LM function in handling tensor singular values.
Secondly, we construct the TLM-based LRTC model and the TLM-based TRPCA model for the two typical tensor recovery problems, namely the LRTC problem and the TRPCA problem. Solution algorithms for both models are provided using the alternating direction method of multipliers (ADMM). Furthermore, we prove that the proposed methods enjoy convergence guarantees under some assumptions.
Thirdly, we conduct experiments on various real-world datasets to evaluate the proposed methods. The LRTC experiments on MSI, MRI, and video, as well as the TRPCA experiment on HSI denoising, demonstrate the superior performance of the proposed methods. Additionally, we compare the proposed methods with the EMLCP method based on the MLCP function, validating that the LM function outperforms the MLCP function as a non-convex relaxation in tensor recovery.
The remainder of this article is organized as follows: In Section 2, some preliminary knowledge and background on tensors are given. The LM function, its properties, and the corresponding theorems are presented in Section 3. The main results, including the proposed models and algorithms, are shown in Section 4. Then, in Section 5, we study the convergence of the proposed methods. The results of extensive experiments and discussions are presented in Section 6. Conclusions are drawn in Section 7.
## 2 Preliminaries
In this section, we list some basic notations and briefly introduce some definitions used throughout the paper. Generally, a lowercase letter and an uppercase letter denote a vector \(x\) and a matrix \(X\), respectively. A third-order tensor is denoted by a calligraphic uppercase letter \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\mathcal{X}_{i_{1},i_{2},i_{3}}\) is its \((i_{1},i_{2},i_{3})\)-th element. The Frobenius norm of a tensor is defined as
\(\|\mathcal{X}\|_{F}=(\sum_{i_{1},i_{2},i_{3}}\mathcal{X}_{i_{1},i_{2},i_{3}}^{2})^{1/2}\). For a third-order tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the frontal slice \(\mathcal{X}(:,:,i)\) is denoted compactly as \(\mathcal{X}^{(i)}\), and \(\mathcal{X}(i_{1},i_{2},:)\) denotes the \((i_{1},i_{2})\)-th tube of \(\mathcal{X}\). \(\bar{\mathcal{X}}\) represents the fast Fourier transform (FFT) along the third dimension of tensor \(\mathcal{X}\), i.e., \(\bar{\mathcal{X}}=fft(\mathcal{X},[],3)\), and \(\mathcal{X}=ifft(\bar{\mathcal{X}},[],3)\). For a third-order tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the block circulation operation is defined as
\[bcirc(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}&\mathcal{X}^{(I_{3})}& \ldots&\mathcal{X}^{(2)}\\ \mathcal{X}^{(2)}&\mathcal{X}^{(1)}&\ldots&\mathcal{X}^{(3)}\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{X}^{(I_{3})}&\mathcal{X}^{(I_{3}-1)}&\ldots&\mathcal{X}^{(1)}\end{pmatrix} \in\mathbb{R}^{I_{1}I_{3}\times I_{2}I_{3}}.\]
The block diagonalization operation and its inverse operation are respectively determined by
\[bdiag(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}&&&&\\ &\mathcal{X}^{(2)}&&&\\ &&\ddots&\\ &&&\mathcal{X}^{(I_{3})}\end{pmatrix}\in\mathbb{R}^{I_{1}I_{3}\times I_{2}I_{3 }},\quad bdfold(bdiag(\mathcal{X})):=\mathcal{X}.\]
The block vectorization operation and its inverse operation are respectively defined as
\[bvec(\mathcal{X}):=\begin{pmatrix}\mathcal{X}^{(1)}\\ \mathcal{X}^{(2)}\\ \vdots\\ \mathcal{X}^{(I_{3})}\end{pmatrix}\in\mathbb{R}^{I_{1}I_{3}\times I_{2}}, \quad bvfold(bvec(\mathcal{X})):=\mathcal{X}.\]
**Definition 1** (t-product [12]).: Let \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\mathcal{B}\in\mathbb{R}^{I_{2}\times J\times I_{3}}\). Then the t-product \(\mathcal{A}*\mathcal{B}\) is defined to be a tensor of size \(I_{1}\times J\times I_{3}\),
\[\mathcal{A}*\mathcal{B}:=bvfold(bcirc(\mathcal{A})bvec(\mathcal{B})).\]
The **tensor conjugate transpose** of a tensor \(\mathcal{A}\in\mathbb{C}^{I_{1}\times I_{2}\times I_{3}}\) is the tensor \(\mathcal{A}^{H}\in\mathbb{C}^{I_{2}\times I_{1}\times I_{3}}\) obtained by conjugate transposing each of the frontal slices and then reversing the order of transposed frontal slices 2 through \(I_{3}\). The **identity tensor**\(\mathcal{I}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is the tensor whose first frontal slice is the \(I_{1}\times I_{1}\) identity matrix, and whose other frontal slices are all zeros. It is clear that \(bcirc(\mathcal{I})\) is the \(I_{1}I_{3}\times I_{1}I_{3}\) identity matrix. So it is easy to get \(\mathcal{A}*\mathcal{I}=\mathcal{A}\) and \(\mathcal{I}*\mathcal{A}=\mathcal{A}\). A tensor \(\mathcal{Q}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is **orthogonal tensor** if it satisfies \(\mathcal{Q}*\mathcal{Q}^{H}=\mathcal{Q}^{H}*\mathcal{Q}=\mathcal{I}.\) A tensor is called **f-diagonal** if each of its frontal slices is a diagonal matrix.
**Theorem 1** (t-SVD [15]).: _Let \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) be a third-order tensor, then it can be factored as_
\[\mathcal{X}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{H},\]
_where \(\mathcal{U}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) and \(\mathcal{V}\in\mathbb{R}^{I_{2}\times I_{2}\times I_{3}}\) are orthogonal tensors, and \(\mathcal{S}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is an f-diagonal tensor._
**Definition 2** (Tensor tubal-rank [11]): _The tubal-rank of a tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted as \(rank_{t}(\mathcal{X})\), is defined to be the number of non-zero singular tubes of \(\mathcal{S}\), where \(\mathcal{S}\) comes from the t-SVD of \(\mathcal{X}\), i.e., \(\mathcal{X}=\mathcal{U}\ast\mathcal{S}\ast\mathcal{V}^{H}\). That is_
\[rank_{t}(\mathcal{X})=\#\{i:\mathcal{S}(i,:,:)\neq 0\}. \tag{3}\]
**Definition 3** (Tensor nuclear norm (TNN) [14]): _The tensor nuclear norm of a tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted as \(\|\mathcal{X}\|_{TNN}\), is defined as the sum of the singular values of all the frontal slices of \(\bar{\mathcal{X}}\), i.e.,_
\[\|\mathcal{X}\|_{TNN}:=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{\mathcal{X}}^{( i)}\|_{\ast} \tag{4}\]
_where \(\bar{\mathcal{X}}^{(i)}\) is the \(i\)-th frontal slice of \(\bar{\mathcal{X}}\), with \(\bar{\mathcal{X}}=fft(\mathcal{X},[],3)\)._
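Definitions 2 and 3 translate directly into a small numerical routine: transform to the Fourier domain along the third mode and take per-slice SVDs. The following is a minimal sketch, assuming NumPy; the tolerance for declaring a singular tube zero is an illustrative assumption.

```python
# Tubal rank (Definition 2) and TNN (Definition 3) via FFT + per-slice SVD.
import numpy as np

def tnn_and_tubal_rank(X: np.ndarray, tol: float = 1e-10):
    Xbar = np.fft.fft(X, axis=2)                        # \bar{X} = fft(X,[],3)
    I3 = X.shape[2]
    # Singular values of every frontal slice of \bar{X}: shape (R, I3).
    svals = np.stack([np.linalg.svd(Xbar[:, :, i], compute_uv=False)
                      for i in range(I3)], axis=1)
    tnn = svals.sum() / I3                              # eq. (4)
    tubal_rank = int((svals.max(axis=1) > tol).sum())   # non-zero singular tubes
    return tnn, tubal_rank
```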
## 3 Logarithmic Minimax (LM) Function
In this section, we first give the definition of the Logarithmic Minimax (LM) function.
**Definition 4** (Logarithmic Minimax (LM) function): _Let \(\lambda>0,\gamma>0,\varepsilon>0\). The LM function \(f_{LM}:\mathbb{R}\rightarrow\mathbb{R}_{+}\) is defined as_
\[f_{LM}(x)=\left\{\begin{array}{ll}\log\left(\frac{\lambda|x|-\frac{|x|^{2}}{ 2\gamma}}{\varepsilon}+1\right),&|x|\leqslant\lambda\gamma,\\ \log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1\right),&|x|>\lambda\gamma. \end{array}\right. \tag{5}\]
_where \(\mathbb{R}_{+}\) denotes the domain of non-negative real numbers._
The LM function is a symmetric function, so we only discuss its functional properties on \([0,+\infty)\).
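Since \(f_{LM}\) is constant for \(|x|>\lambda\gamma\) and the two branches of (5) agree at \(|x|=\lambda\gamma\), the whole function can be evaluated by clamping \(|x|\) at \(\lambda\gamma\). The following is a minimal sketch, assuming NumPy.

```python
# The LM function of Definition 4, evaluated elementwise.
import numpy as np

def f_LM(x, lam: float, gamma: float, eps: float):
    # Clamping |x| at lam*gamma reproduces the constant branch, because the
    # inner branch evaluated at lam*gamma equals log(gamma*lam**2/(2*eps) + 1).
    ax = np.minimum(np.abs(x), lam * gamma)
    return np.log((lam * ax - ax**2 / (2.0 * gamma)) / eps + 1.0)
```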
**Proposition 1**: _The LM function defined in (5) satisfies the following properties: **(a)**: \(f_{LM}(x)\) is continuous, smooth and_
\[f_{LM}(0)=0,\lim_{x\rightarrow+\infty}\frac{f_{LM}(x)}{x}=0;\]
_(b): \(f_{LM}(x)\) is monotonically non-decreasing and concave on \([0,+\infty)\); **(c)**: \(f^{\prime}_{LM}(x)\) is non-negative and monotonically non-increasing on \([0,+\infty)\). Moreover, it is Lipschitz bounded, i.e., there exists a constant \(L(\ell)\) such that_
\[|f^{\prime}_{LM}(x)-f^{\prime}_{LM}(y)|\leq L(\ell)|x-y|;\]
_(d): In particular, the LM function is increasing in the parameter \(\gamma\), and_
\[\lim_{\gamma\rightarrow+\infty}f_{LM}(x)=\log(\frac{\lambda|x|}{\varepsilon} +1). \tag{6}\]
Proof. **(a)**: \(\lim_{x\to\lambda\gamma^{-}}f_{LM}(x)=\lim_{x\to\lambda\gamma^{+}}f_{LM}(x)=\log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1\right)\) and \(f^{\prime}_{LM}(\lambda\gamma^{-})=f^{\prime}_{LM}(\lambda\gamma^{+})=0\); thus \(f_{LM}\) is continuous and smooth. Finally, the conclusions \(f_{LM}(0)=0\) and \(\lim\limits_{x\rightarrow+\infty}\frac{f_{LM}(x)}{x}=0\) are easily verified from the formulas in (5).
**(b)**: This conclusion is direct from its first order and second order derivative function. Its first order and second order derivative functions are as follows:
\[\begin{array}{l}f^{\prime}_{LM}(x)=\left\{\begin{array}{l}\frac{2\lambda \gamma-2|x|}{2\lambda\gamma|x|-|x|^{2}+2\gamma\varepsilon},|x|\leqslant\lambda \gamma,\\ 0,\hskip 28.452756pt|x|>\lambda\gamma.\end{array}\right.\\ f^{\prime\prime}_{LM}(x)=\left\{\begin{array}{l}\frac{-2((|x|-\lambda\gamma )^{2}+\lambda^{2}\gamma^{2}+2\gamma\varepsilon)}{(2\lambda\gamma|x|-|x|^{2}+2 \gamma\varepsilon)^{2}},|x|\leqslant\lambda\gamma,\\ 0,\hskip 28.452756pt|x|>\lambda\gamma.\end{array}\right.\end{array} \tag{7}\]
We can find that the first-order derivative is non-negative and the second-order derivative is non-positive; thus \(f_{LM}(x)\) is concave and monotonically non-decreasing on \([0,+\infty)\).
**(c)**: The non-negativity and monotonicity of \(f^{\prime}_{LM}(x)\) follow directly from the formulas presented in (7). Next, we verify the Lipschitz bound. The proof mainly relies on the fact that \(f^{\prime\prime}_{LM}(x)\leqslant 0\) and \(f^{\prime\prime}_{LM}(x)\) is bounded on \((0,+\infty)\). Thus there exists a constant \(L(\ell):=\sup_{x\in(0,+\infty)}|f^{\prime\prime}_{LM}(x)|\) such that for any \(x,y\in(0,+\infty)\), we have
\[|f^{\prime}_{LM}(x)-f^{\prime}_{LM}(y)|\leq L(\ell)|x-y|.\]
**(d)**: Consider \(f_{LM}(x)\) is a function with respect to \(\gamma\) when \(x\), \(\lambda\) and \(\varepsilon\) are fixed, then its derivative functions is computed as follows:
\[\left\{\begin{array}{ll}\frac{\lambda^{2}}{\lambda^{2}\gamma+2\varepsilon}, &\gamma<\frac{|x|}{\lambda},\\ \frac{|x|^{2}}{2\gamma^{2}\lambda|x|-\gamma|x|^{2}+2\gamma^{2}\varepsilon},& \gamma\geqslant\frac{|x|}{\lambda}.\end{array}\right. \tag{9}\]
This demonstrates that the LM function is increasing in \(\gamma\), since its derivative is non-negative. Note that as \(\gamma\rightarrow+\infty\),
\[\log\left(\frac{\lambda|x|-\frac{|x|^{2}}{2\gamma}}{\varepsilon}+1\right) \rightarrow\log\left(\frac{\lambda|x|}{\varepsilon}+1\right).\]
Then the limit results follow easily. This completes the proof.
From Fig. 1, it can be seen that the LM function bears a strong resemblance to the MCP and MLCP functions. Furthermore, it can be observed from Fig. 1 that the LM function yields smaller values under the same set of parameters. This indicates that the LM function preserves large singular values while imposing stronger penalization on small singular values.
**Definition 5** (Tensor \(LM\)-norm).: The tensor \(LM\)-norm of \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), denoted by \(\|\mathcal{X}\|_{LM}\), is defined as follows:
\[\|\mathcal{X}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{ \mathcal{X}}^{(i)}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}f_{LM} (\sigma_{j}(\bar{\mathcal{X}}^{(i)})). \tag{10}\]
where \(R=\min(I_{1},I_{2})\).
Unlike the tensor nuclear norm penalty, the tensor \(LM\)-norm (10) does not satisfy the triangle inequality. Some vital properties of the tensor \(LM\)-norm are given below.
**Proposition 2**.: _For \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the Tensor \(LM\)-norm is defined in (10) satisfies the following properties:_
_(a) Non-negativity_: _The Tensor_ \(LM\)_-norm is non-negative, i.e.,_ \(\|\mathcal{X}\|_{LM}\geqslant 0\)_. The equality holds if and only if_ \(\mathcal{X}\) _is the null tensor._
_(b) Concavity_: \(\|\mathcal{X}\|_{LM}\) _is concave in the modulus of the elements of_ \(\mathcal{X}\)_._
_(c) Orthogonal invariance_: _The Tensor_ \(LM\)_-norm is orthogonal invariant, i.e.,_ \(\|\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\|_{LM}=\|\mathcal{X}\|_{LM}\)_, for any orthogonal tensor_ \(\mathcal{U}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) _and_ \(\mathcal{V}\in\mathbb{R}^{I_{2}\times I_{2}\times I_{3}}\)_._
Proof.: Let \(p(\mathcal{X})=\|\mathcal{X}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1 }^{R}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)}))\).
(a) Since \(p(\mathcal{X})\) is the sum of non-negative functions, \(\|\mathcal{X}\|_{LM}\geqslant 0\). The equality holds if \(\mathcal{X}=\mathbf{0}\).
(b) The function \(p(\mathcal{X})\) is separable of \(\mathcal{X}\), i.e.,
\[p(\mathcal{X})=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}p( \sigma_{j}(\bar{\mathcal{X}}^{(i)})).\]
Since \(p\) is a concave function in \(\sigma_{j}(\bar{\mathcal{X}}^{(i)}),\) we can write, for \(0\leqslant\alpha\leqslant 1,\) that
\[\|(\alpha\mathcal{X}_{1}+(1-\alpha)\mathcal{X}_{2})\|_{LM}\] \[=p(\alpha\mathcal{X}_{1}+(1-\alpha)\mathcal{X}_{2})\] \[\geqslant\alpha p(\mathcal{X}_{1})+(1-\alpha)p(\mathcal{X}_{2})\] \[=\alpha\|\mathcal{X}_{1}\|_{LM}+(1-\alpha)\|\mathcal{X}_{2}\|_{ LM}.\]
Hence, \(\|\mathcal{X}\|_{LM}\) is concave in the modulus of the singular values of \(\mathcal{X}.\)
(c) Suppose \(\mathcal{X}\) has t-SVD \(\mathcal{P}*\mathcal{S}*\mathcal{Q}^{H},\) where \(\mathcal{P},\mathcal{Q}\) are orthogonal and \(\mathcal{S}\) is f-diagonal, we have
\[\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}=\mathcal{U}*\mathcal{P}*\mathcal{S}* \mathcal{Q}^{H}*\mathcal{V}^{H}=(\mathcal{U}*\mathcal{P})*\mathcal{S}*( \mathcal{V}*\mathcal{Q})^{H}.\]
Since
\[(\mathcal{U}*\mathcal{P})*(\mathcal{U}*\mathcal{P})^{H}=\mathcal{U}*\mathcal{ P}*\mathcal{P}^{H}*\mathcal{U}^{H}=\mathcal{I},\]
then \(\mathcal{U}*\mathcal{P}\) is an orthogonal tensor. The same is true for \(\mathcal{V}*\mathcal{Q}\). Thus \((\mathcal{U}*\mathcal{P})*\mathcal{S}*(\mathcal{V}*\mathcal{Q})^{H}\) is the t-SVD of \(\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\). Therefore,
\[\|\mathcal{U}*\mathcal{X}*\mathcal{V}^{H}\|_{LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_ {3}}\sum_{j=1}^{R}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)}))=\|\mathcal{X}\| _{LM}.\]
**Definition 6** (Weighted tensor \(LM\)-norm).: The weighted tensor \(LM\)-norm of \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}},\) denoted by \(\|\mathcal{X}\|_{\omega-LM},\) is defined as follows:
\[\|\mathcal{X}\|_{\omega-LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\|\bar{\mathcal{X}}^{(i)}\|_{\omega-LM}=\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\sum_{j=1}^{R}\omega_{j,i}f_{LM}(\sigma_{j}(\bar{\mathcal{X}}^{(i)})). \tag{11}\]
where \(R=\min(I_{1},I_{2}).\)
**Theorem 2** (Proximal operator for the LM function).: _Consider the LM function given in (5). Its proximal operator, denoted by \(S_{LM}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) with \(\lambda>0\), \(\gamma>0\), \(\varepsilon>0\), and \(\rho>0\), is defined as follows:_
\[S_{LM}(y)=\arg\min_{x}\{\frac{\rho}{2}(x-y)^{2}+f_{LM}(x)\}, \tag{12}\]
_is given by_
\[S_{LM}(y)=\left\{\begin{array}{ll}x^{\star},\;x\leqslant\lambda\gamma,\\ y\;,\;\;x>\lambda\gamma.\end{array}\right. \tag{13}\]
Proof.: Let
\[g(x) =\frac{\rho}{2}(x-y)^{2}+f_{LM}(x),\] \[=\left\{\begin{array}{ll}\frac{\rho}{2}(x-y)^{2}+\log\left( \frac{\lambda x-\frac{x^{2}}{2\gamma}}{\varepsilon}+1\right),\;x\leqslant \lambda\gamma,\\ \frac{\rho}{2}(x-y)^{2}+\log\left(\frac{\gamma\lambda^{2}}{2\varepsilon}+1 \right)\;\;\;\;\;,\;x>\lambda\gamma.\end{array}\right.\]
According to the definition of \(g(x)\), when \(x>\lambda\gamma\), \(S_{LM}(y)=y\).
Next, we consider the case \(x\leqslant\lambda\gamma\). Let the derivatives of the objective function \(g(x)\) with respect to \(x\) be zero. Therefore, we have
\[\rho(x-y)+\frac{2\lambda\gamma-2x}{2\lambda\gamma x-x^{2}+2\gamma \varepsilon}=0. \tag{14}\]
Due to \(2\lambda\gamma x-x^{2}+2\gamma\varepsilon>0\) and \(\rho>0\), equation (14) can be transformed into the following form:
\[-x^{3}+(2\lambda\gamma+y)x^{2}+(-\frac{2}{\rho}+2\gamma\varepsilon -2\lambda\gamma y)x+(\frac{2\lambda\gamma}{\rho}-2\gamma\varepsilon y)=0. \tag{15}\]
Let \(a=-1,b=2\lambda\gamma+y,c=-\frac{2}{\rho}+2\gamma\varepsilon-2\lambda\gamma y,d=\frac{2\lambda\gamma}{\rho}-2\gamma\varepsilon y,A=b^{2}-3ac,B=bc-9ad,C=c^ {2}-3bd,\Delta=B^{2}-4AC\), \(x_{1},x_{2}\), and \(x_{3}\) are the three solutions of equation 15. These results are derived by considering different values of \(A,B\), and \(\Delta\).
1) Case-1: \(A=B=0\). The solutions to equation (15) in this case are \(x_{1}=x_{2}=x_{3}=-\frac{c}{b}\).
2) Case-2: \(\Delta>0\). The solutions to equation (15) in this case are as follows:
\[x_{1} =\frac{-b-(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})}{3a},\] \[x_{2} =\frac{-b+0.5(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})+0.5\sqrt{3}(\sqrt[3]{K_{1}}-\sqrt[3]{K_{2}})i}{3a},\] \[x_{3} =\frac{-b+0.5(\sqrt[3]{K_{1}}+\sqrt[3]{K_{2}})-0.5\sqrt{3}(\sqrt[3]{K_{1}}-\sqrt[3]{K_{2}})i}{3a},\]
where \(K_{1}=Ab+1.5a(-B+\sqrt{B^{2}-4AC}),\ K_{2}=Ab+1.5a(-B-\sqrt{B^{2}-4AC})\). Since problem (12) is posed over the real numbers, only the real root \(x_{1}\) is retained as the solution in this case.
3) Case-3: \(\Delta=0\). The solutions to equation (15) in this case are \(x_{1}=\frac{B}{A}-\frac{b}{a},\ x_{2}=x_{3}=-\frac{B}{2A}\).
4) Case-4: \(\Delta<0\). The solutions to equation (15) in this case are as follows:
\[x_{1} =\frac{-b-2\sqrt{A}\cos\frac{\theta}{3}}{3a},\] \[x_{2} =\frac{-b+\sqrt{A}(\cos\frac{\theta}{3}+\sqrt{3}\sin\frac{\theta}{3})}{3a},\] \[x_{3} =\frac{-b+\sqrt{A}(\cos\frac{\theta}{3}-\sqrt{3}\sin\frac{\theta}{3})}{3a},\]
where \(\theta=\arccos T,T=\frac{2Ab-3aB}{2A\sqrt{A}}\).
In addition, we need to further consider whether \(x_{i}\) (\(i=1,2,3\)) are within the domain, as well as what the optimal solution is. Therefore, we will take the following steps:
Step 1: \(x_{i}=\min\{\max\{x_{i},0\},\lambda\gamma\}\) (\(i=1,2,3\)). Step 2: \(g(x^{\star})=\min\{g(x_{i}),g(0),g(\lambda\gamma)\}\) (\(i=1,2,3\)).
The optimal solution at this moment is as follows:
\[x^{\star}=\left\{\begin{array}{l}x_{i},g(x^{\star})=g(x_{i}),\\ 0,g(x^{\star})=g(0),\\ y,g(x^{\star})=g(\lambda\gamma).\end{array}\right. \tag{16}\]
To sum up:
\[S_{LM}(y)=\left\{\begin{array}{l}x^{*},\;x\leqslant\lambda\gamma,\\ y\;,\;\;x>\lambda\gamma.\end{array}\right.\]
To observe \(S_{LM}(y)\) more clearly, Fig. 2 illustrates the variation of \(S_{LM}(y)\) with \(y\) under different parameters. The results in the figure indicate that \(S_{LM}(y)\) is significantly influenced by the parameters \(\lambda\) and \(\varepsilon\). Therefore, in experiments on the same data, the optimal values of \(\lambda\) and \(\varepsilon\) do not undergo substantial changes. On the other hand, the parameter \(\gamma\) has a relatively minor impact on \(S_{LM}(y)\); thus, even though \(\gamma\) varies greatly in the experiments, it does not have a significant effect on \(S_{LM}(y)\).
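Because the closed form in Theorem 2 involves several case distinctions, a dense one-dimensional search is a convenient numerical cross-check. The following is a minimal sketch, assuming NumPy and reusing the \(f_{LM}\) sketch above; it evaluates (12) by brute force for a non-negative scalar \(y\) and is not the closed-form operator itself.

```python
# Brute-force evaluation of the proximal operator in (12), for cross-checking.
import numpy as np

def prox_LM_bruteforce(y: float, rho: float, lam: float, gamma: float,
                       eps: float, num: int = 200001) -> float:
    xs = np.linspace(0.0, 1.5 * max(y, lam * gamma) + 1.0, num)
    g = 0.5 * rho * (xs - y) ** 2 + f_LM(xs, lam, gamma, eps)
    return float(xs[np.argmin(g)])
```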
**Theorem 3** (Proximal operator for the weighted tensor \(LM\)-norm).: _Consider the weighted tensor \(LM\)-norm given in (11). Its proximal operator, denoted by \(S:\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\rightarrow\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) with \(\lambda>0\), \(\gamma>0\), \(\varepsilon>0\), and \(R=\min\{I_{1},I_{2}\}\), is defined as follows:_
\[S(\mathcal{Y})=\arg\min_{\mathcal{L}}\{\frac{\rho}{2}\|\mathcal{L}-\mathcal{Y}\|_{F}^{2}+\|\mathcal{L}\|_{\omega-LM}\}, \tag{17}\]
_is given by_
\[S(\mathcal{Y})=\mathcal{U}*\mathcal{S}_{1}*\mathcal{V}^{H}, \tag{18}\]
_where \(\mathcal{U}\) and \(\mathcal{V}\) are derived from the t-SVD of \(\mathcal{Y}=\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\). More importantly, the \(i\)-th frontal slices of the DFTs of \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), i.e., \(\bar{\mathcal{S}}_{1}^{(i)}=\sigma(\bar{\mathcal{L}}^{(i)})\) and \(\bar{\mathcal{S}}_{2}^{(i)}=\sigma(\bar{\mathcal{Y}}^{(i)})\), satisfy the relationship \(\sigma_{j}(\bar{\mathcal{L}}^{(i)})=S_{LM}(\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))\)._
Figure 2: Visual comparison of \(S_{LM}(y)\) on different parameters.
Proof.: Let \(\mathcal{Y}=\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\) and \(\mathcal{L}=\mathcal{W}*\mathcal{S}_{1}*\mathcal{R}^{H}\) be the t-SVD of \(\mathcal{Y}\) and \(\mathcal{L}\), respectively. Consider
\[S(\mathcal{Y}) =\arg\min_{\mathcal{L}}\frac{1}{2}\|\mathcal{L}-\mathcal{Y}\|_{F}^ {2}+\|\mathcal{L}\|_{\omega-LM}\] \[=\arg\min_{\mathcal{L}}\frac{1}{2}\|\mathcal{W}*\mathcal{S}_{1}* \mathcal{R}^{H}-\mathcal{U}*\mathcal{S}_{2}*\mathcal{V}^{H}\|_{F}^{2}+\| \mathcal{L}\|_{\omega-LM}\] \[=\arg\min_{\mathcal{L}}\frac{1}{I_{3}}(\sum_{i=1}^{I_{3}}\frac{1 }{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}_{1}}^{(i)}*\bar{\mathcal{R}}^{ (i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}_{2}}^{(i)}*\bar{\mathcal{V}}^{ (i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}). \tag{19}\]
It can be found that (19) is separable and can be divided into \(I_{3}\) sub-problems. For the \(i\)th sub-problem:
\[\arg\min_{\mathcal{L}^{(i)}}\frac{1}{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}}_{1}^{(i)}*\bar{\mathcal{R}}^{(i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}}_{2}^{(i)}*\bar{\mathcal{V}}^{(i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\arg\min_{\bar{\mathcal{L}}^{(i)}}\frac{1}{2}Tr(\bar{\mathcal{S}}_{1}^{(i)}\bar{\mathcal{S}}_{1}^{(i)H})+\frac{1}{2}Tr(\bar{\mathcal{S}}_{2}^{(i)}\bar{\mathcal{S}}_{2}^{(i)H})-Tr(\bar{\mathcal{L}}^{(i)H}\bar{\mathcal{Y}}^{(i)})+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}.\]
Invoking von Neumann's trace inequality [22], we can write
\[\arg\min_{\mathcal{L}^{(i)}}\frac{1}{2}\|\bar{\mathcal{W}}^{(i)}*\bar{\mathcal{S}}_{1}^{(i)}*\bar{\mathcal{R}}^{(i)H}-\bar{\mathcal{U}}^{(i)}*\bar{\mathcal{S}}_{2}^{(i)}*\bar{\mathcal{V}}^{(i)H}\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[\geq\arg\min_{\bar{\mathcal{S}}_{1}^{(i)}}\frac{1}{2}Tr(\bar{\mathcal{S}}_{1}^{(i)}\bar{\mathcal{S}}_{1}^{(i)H})+\frac{1}{2}Tr(\bar{\mathcal{S}}_{2}^{(i)}\bar{\mathcal{S}}_{2}^{(i)H})-Tr(\bar{\mathcal{S}}_{2}^{(i)}\bar{\mathcal{S}}_{1}^{(i)H})+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\arg\min_{\sigma(\bar{\mathcal{L}}^{(i)})}\frac{1}{2}\|\sigma(\bar{\mathcal{L}}^{(i)})-\sigma(\bar{\mathcal{Y}}^{(i)})\|_{F}^{2}+\|\bar{\mathcal{L}}^{(i)}\|_{\omega-LM}\] \[=\sum_{j=1}^{R}\arg\min_{\sigma_{j}(\bar{\mathcal{L}}^{(i)})}\frac{1}{2\omega_{j,i}}(\sigma_{j}(\bar{\mathcal{L}}^{(i)})-\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))^{2}+f_{LM}(\sigma_{j}(\bar{\mathcal{L}}^{(i)})). \tag{20}\]
The equality holds when \(\bar{\mathcal{W}}^{(i)}=\bar{\mathcal{U}}^{(i)}\) and \(\bar{\mathcal{R}}^{(i)}=\bar{\mathcal{V}}^{(i)}\). Hence, the optimal solution to (19) is obtained by solving the problem below: \(\sigma_{j}(\bar{\mathcal{L}}^{(i)})=S_{LM}(\sigma_{j}(\bar{\mathcal{Y}}^{(i)}))\).
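In words, Theorem 3 says the operator \(S\) acts slice-by-slice in the Fourier domain, applying the scalar map \(S_{LM}\) to each singular value. The following is a minimal sketch, assuming NumPy, with uniform weights for brevity; the brute-force scalar solver from the previous sketch stands in for the closed form of Theorem 2.

```python
# Slice-wise proximal operator of the (weighted) tensor LM-norm (Theorem 3).
import numpy as np

def prox_tensor_LM(Y: np.ndarray, rho: float, lam: float, gamma: float,
                   eps: float) -> np.ndarray:
    Ybar = np.fft.fft(Y, axis=2)
    Lbar = np.empty_like(Ybar)
    for i in range(Y.shape[2]):
        U, s, Vh = np.linalg.svd(Ybar[:, :, i], full_matrices=False)
        s_new = np.array([prox_LM_bruteforce(float(sj), rho, lam, gamma, eps)
                          for sj in s])
        Lbar[:, :, i] = (U * s_new) @ Vh   # shrink singular values, rebuild
    return np.real(np.fft.ifft(Lbar, axis=2))
```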
## 4 TLM-based models and solving algorithms
In this section, we apply the weighted tensor \(LM\)-norm to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA) and propose two new TLM-based models.
### TLM-based LRTC model
Low-rank tensor completion aims at estimating the missing elements from an incomplete observation tensor. Considering a third-order tensor \(\mathcal{T}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the proposed TLM-based LRTC model is formulated as follows:
\[\min_{\mathcal{X}}~{}\|\mathcal{X}\|_{\omega-LM}~{}~{}s.t.~{}~{} \mathcal{P}_{\Omega}(\mathcal{T}-\mathcal{X})=\mathbf{0}. \tag{21}\]
First, we introduce an auxiliary tensor \(\mathcal{Z}=\mathcal{X}\) and transform optimization problem (21), in its augmented Lagrangian form, as follows:
\[L(\mathcal{X},\mathcal{Z},\mathcal{Q};\mu) =\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2}\|\mathcal{X}-\mathcal{ Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle \tag{22}\] \[s.t.~{}\mathcal{P}_{\Omega}(\mathcal{T}-\mathcal{X})=\mathbf{0},\]
where \(\mathcal{Q}\) is the Lagrangian multiplier and \(\mu>0\) is the augmented Lagrangian parameter. For brevity, we denote the variable updated at each iteration as \((\cdot)^{+}\) and omit the iteration index. The update equations are derived in the following.
**Update \(\mathcal{X}\)**: The closed form of \(\mathcal{X}\) can be derived by setting the derivative of (22) to zero. We can now update \(\mathcal{X}\) by the following equation:
\[\mathcal{X}^{+}=\mathcal{P}_{\Omega^{c}}(\mathcal{Z}-\frac{\mathcal{Q}}{\mu} )+\mathcal{P}_{\Omega}(\mathcal{T}). \tag{23}\]
**Update \(\mathcal{Z}\)**: Fix other variables, and the corresponding optimization is as follows:
\[\mathcal{Z}^{+} =\arg\min_{\mathcal{Z}}\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2} \|\mathcal{X}^{+}-\mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}^{+}-\mathcal{Z}, \mathcal{Q}\rangle. \tag{24}\] \[=\arg\min_{\mathcal{Z}}\|\mathcal{Z}\|_{\omega-LM}+\frac{\mu}{2} \|\mathcal{X}^{+}-\mathcal{Z}+\frac{\mathcal{Q}}{\mu}\|_{F}^{2}.\]
Recalling Theorem 3, the solution to the above optimization is given by:
\[\mathcal{Z}^{+}=S(\mathcal{X}^{+}+\frac{\mathcal{Q}}{\mu}), \tag{25}\]
where \(S\) denotes the proximal operator defined in (18).
**Update \(\mathcal{Q}\)**: Finally, multiplier \(\mathcal{Q}\) is updated as follows:
\[\mathcal{Q}^{+}=\mathcal{Q}+\mu(\mathcal{X}^{+}-\mathcal{Z}^{+}). \tag{26}\]
The optimization steps of the TLM-LRTC formulation are listed in Algorithm 1. The main cost lies in the update of \(\mathcal{Z}\), which requires computing the t-SVD. The per-iteration complexity is \(O(I_{1}I_{2}I_{3}[\log(I_{3})+\min(I_{1},I_{2})])\).
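As a concrete illustration of this scheme, the following is a minimal sketch, assuming NumPy, of the iteration (23)-(26); `prox_S` is a placeholder for the operator \(S\) of Theorem 3 (e.g., the slice-wise sketch above), and the multiplicative \(\mu\)-schedule and stopping rule are carried over from Algorithm 2 by analogy.

```python
# ADMM sketch for the TLM-based LRTC model (21).
import numpy as np

def tlm_lrtc(T, mask, prox_S, mu=1e-3, eta=1.1, K=200, tol=1e-6):
    X = np.where(mask, T, 0.0)
    Z = X.copy()
    Q = np.zeros_like(X)
    for _ in range(K):
        X_old = X
        X = np.where(mask, T, Z - Q / mu)        # eq. (23)
        Z = prox_S(X + Q / mu, mu)               # eq. (25), Theorem 3
        Q = Q + mu * (X - Z)                     # eq. (26)
        mu *= eta
        if np.linalg.norm(X - X_old) ** 2 <= tol * np.linalg.norm(X_old) ** 2:
            break
    return X
```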
### TLM-based TRPCA model
Tensor robust principal component analysis (TRPCA) aims to recover the tensor from grossly corrupted observations. Using the proposed weighted tensor \(LM\)-norm, we can get the following TLM-based TRPCA model:
\[\min_{\mathcal{X},\mathcal{E}}\|\mathcal{X}\|_{\omega-LM}+\tau_{1}\| \mathcal{E}\|_{1}\ s.t.\ \mathcal{T}=\mathcal{X}+\mathcal{E}, \tag{27}\]
Under the framework of the ADMM, the easy-to-implement optimization strategy could be provided to solve (27). We introduce tensor \(\mathcal{Z}=\mathcal{X}\) and transform optimization problem (27), in its augmented Lagrangian form, as follows:
\[L(\mathcal{X},\mathcal{Z},\mathcal{E},\mathcal{Q},\mathcal{G}; \mu,\rho) =\|\mathcal{Z}\|_{\omega-LM}+\tau_{1}\|\mathcal{E}\|_{1}+\frac{\mu}{2}\| \mathcal{X}-\mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle\] \[\quad+\frac{\rho}{2}\|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F }^{2}+\langle\mathcal{T}-(\mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{28}\]
where \(\mathcal{Q},\mathcal{G}\) are Lagrangian multipliers; \(\mu,\rho>0\) are the augmented Lagrangian parameters. Besides, variables \(\mathcal{X},\mathcal{Z},\mathcal{E},\mathcal{Q},\mathcal{G}\) are updated alternately in the order of \(\mathcal{X}\rightarrow\mathcal{Z}\rightarrow\mathcal{E}\rightarrow\mathcal{Q} \rightarrow\mathcal{G}\). Since the update of variable \(\mathcal{Z}\) is consistent with the TLM-based LRTC model, it is omitted here and will not be repeated. We denote the variable updated by the iteration as \((\cdot)^{+}\), and omit the specific number of iterations. The update equations are derived in the following.
**Update**\(\mathcal{X}\): Fix other variables, and the corresponding optimization are as follows:
\[\mathcal{X}^{+}=\arg\min_{\mathcal{X}}\frac{\mu}{2}\|\mathcal{X}- \mathcal{Z}\|_{F}^{2}+\langle\mathcal{X}-\mathcal{Z},\mathcal{Q}\rangle+\frac {\rho}{2}\|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F}^{2}+\langle\mathcal{T} -(\mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{29}\]
The closed form of \(\mathcal{X}\) can be derived by setting the derivative of (29) to zero. We can now update \(\mathcal{X}\) by the following equation:
\[\mathcal{X}^{+}=\frac{\mu\mathcal{Z}-\mathcal{Q}+\rho(\mathcal{T}- \mathcal{E})+\mathcal{G}}{\mu+\rho}, \tag{30}\]
**Update**\(\mathcal{E}\): We now solve for \(\mathcal{E}\). The minimization problem for \(\mathcal{E}\) is as follows:
\[\arg\min_{\mathcal{E}}\tau_{1}\|\mathcal{E}\|_{1}+\frac{\rho}{2} \|\mathcal{T}-(\mathcal{X}+\mathcal{E})\|_{F}^{2}+\langle\mathcal{T}-( \mathcal{X}+\mathcal{E}),\mathcal{G}\rangle. \tag{31}\]
Problem (31) has the following closed-form solution:
\[\mathcal{E}^{+}=S_{\frac{\tau_{1}}{\rho}}(\mathcal{T}-\mathcal{X}^{+}+\frac{ \mathcal{G}}{\rho}), \tag{32}\]
where \(S_{\lambda}(\cdot)\) is the soft thresholding operator [23]:
\[S_{\lambda}(x)=\left\{\begin{array}{ll}0,&if\quad|x|\leqslant \lambda,\\ sign(x)(|x|-\lambda),&if\quad|x|>\lambda.\end{array}\right. \tag{33}\]
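A minimal sketch of the soft-thresholding operator (33), assuming NumPy:

```python
import numpy as np

def soft_threshold(x, lam: float):
    # sign(x) * max(|x| - lam, 0), applied elementwise; zero when |x| <= lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```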
**Update \(\mathcal{Q}\) and \(\mathcal{G}\)**: Finally, multipliers \(\mathcal{Q}\) and \(\mathcal{G}\) are updated as follows:
\[\mathcal{Q}^{+} =\mathcal{Q}+\mu(\mathcal{X}^{+}-\mathcal{Z}^{+}). \tag{34}\] \[\mathcal{G}^{+} =\mathcal{G}+\rho(\mathcal{T}-\mathcal{X}^{+}-\mathcal{E}^{+}). \tag{35}\]
The optimization steps of the TLM-TRPCA formulation are listed in Algorithm 2. The main cost lies in the update of \(\mathcal{Z}\), which requires computing the t-SVD. The per-iteration complexity is \(O(I_{1}I_{2}I_{3}[\log(I_{3})+\min(I_{1},I_{2})])\).
```
Input: The corrupted observation tensor \(\mathcal{T}\), convergence criterion \(\epsilon\), maximum iteration number \(K\).
Initialization: \(\mathcal{X}^{0}=\mathcal{T}\), \(\mathcal{Z}^{0}=\mathcal{X}^{0}\), \(\rho>0\), \(\mu>0\), \(\eta>1\).
while not converged and \(k<K\) do
    Update \(\mathcal{X}^{k}\) via (30);
    Update \(\mathcal{Z}^{k}\) via (25);
    Update \(\mathcal{E}^{k}\) via (32);
    Update the multipliers \(\mathcal{Q}^{k}\) and \(\mathcal{G}^{k}\) via (34) and (35);
    \(\mu^{k}=\eta\mu^{k-1}\), \(\rho^{k}=\eta\rho^{k-1}\), \(k=k+1\);
    Check the convergence condition \(\|\mathcal{X}^{k+1}-\mathcal{X}^{k}\|_{F}^{2}/\|\mathcal{X}^{k}\|_{F}^{2}\leq\epsilon\).
end while
return \(\mathcal{X}^{k+1}\) and \(\mathcal{E}^{k+1}\).
Output: \(\mathcal{X}\) and \(\mathcal{E}\).
```
**Algorithm 2** TLM-TRPCA
## 5 Convergence analysis
The convergence analysis of Algorithm 1 is similar to that of Algorithm 2, so we provide only the latter. To prove the convergence of the proposed Algorithm 2, we first establish the following two lemmas.
**Lemma 1**: _The sequences \(\{\mathcal{Q}^{k}\}\) and \(\{\mathcal{G}^{k}\}\) are bounded._
Proof. First, the optimal \(\mathcal{Z}^{k+1}\) needs to satisfy the first-order optimality condition, that is,
\[0 \in\partial_{\mathcal{Z}}L(\mathcal{X}^{k+1},\mathcal{Z}^{k}, \mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k}-\mu^{k}(\mathcal{X}^{k+1}-\mathcal{Z}^{k+1})\] \[=\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k+1},\]
In terms of the analysis in Proposition 1, the derivative of \(f_{LM}\) is bounded, and thereby \(\partial(\|\mathcal{Z}\|_{\omega-LM})|_{\mathcal{Z}^{k+1}}\) is bounded. It then follows that \(\{\mathcal{Q}^{k}\}\) is bounded.
Then, the optimal \(\mathcal{E}^{k+1}\) needs to satisfy the first-order optimality condition, that is,
\[0 \in\partial_{\mathcal{E}}L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1}, \mathcal{E}^{k+1},\mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k}-\rho^{k}(\mathcal{T}-(\mathcal{X}^{k+1}+\mathcal{E}^{k+1}))\] \[=\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k+1},\]
It can easily be proved that \(\partial(\tau_{1}\|\mathcal{E}\|_{1})|_{\mathcal{E}^{k+1}}\) is bounded [24]; thus, \(\{\mathcal{G}^{k}\}\) is bounded.
**Lemma 2**.: _Sequences \(\{\mathcal{X}^{k}\},\{\mathcal{Z}^{k}\}\), and \(\{\mathcal{E}^{k}\}\) are bounded if \(\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) and \(\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\)._
Proof.: By simple manipulation, we can get,
\[L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[=L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\langle\mathcal{Q}^{k}-\mathcal{Q}^{ k-1},\mathcal{X}^{k}-\mathcal{Z}^{k}\rangle\] \[\quad+\frac{\mu^{k}-\mu^{k-1}}{2}\|\mathcal{X}^{k}-\mathcal{Z}^{ k}\|_{F}^{2}+\langle\mathcal{G}^{k}-\mathcal{G}^{k-1},\mathcal{T}-\mathcal{X}^{k}- \mathcal{E}^{k}\rangle\] \[\quad+\frac{\rho^{k}-\rho^{k-1}}{2}\|\mathcal{T}-\mathcal{X}^{k} -\mathcal{E}^{k}\|_{F}^{2}\] \[=L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{ k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\frac{\mu^{k}+\mu^{k-1}}{2(\mu^{k-1})^{2}}\| \mathcal{Q}^{k}-\mathcal{Q}^{k-1}\|_{F}^{2}\] \[\quad+\frac{\rho^{k}+\rho^{k-1}}{2(\rho^{k-1})^{2}}\|\mathcal{G} ^{k}-\mathcal{G}^{k-1}\|_{F}^{2}.\]
Then, it follows that,
\[L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{ Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[\quad\leqslant L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k}, \mathcal{Q}^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})\] \[\quad\leqslant L(\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k}, \mathcal{Q}^{k-1},\mathcal{G}^{k-1};\mu^{k-1},\rho^{k-1})+\frac{\mu^{k}+\mu^{k -1}}{2(\mu^{k-1})^{2}}\|\mathcal{Q}^{k}-\mathcal{Q}^{k-1}\|_{F}^{2}\] \[\quad+\frac{\rho^{k}+\rho^{k-1}}{2(\rho^{k-1})^{2}}\|\mathcal{G} ^{k}-\mathcal{G}^{k-1}\|_{F}^{2}\] \[\quad\leqslant L(\mathcal{X}^{1},\mathcal{Z}^{1},\mathcal{E}^{1}, \mathcal{Q}^{0},\mathcal{G}^{0};\mu^{0},\rho^{0})+\sum_{j=1}^{k}\frac{\mu^{j}+ \mu^{j-1}}{2(\mu^{j-1})^{2}}\|\mathcal{Q}^{j}-\mathcal{Q}^{j-1}\|_{F}^{2}\] \[\quad+\sum_{j=1}^{k}\frac{\rho^{j}+\rho^{j-1}}{2(\rho^{j-1})^{2}} \|\mathcal{G}^{j}-\mathcal{G}^{j-1}\|_{F}^{2}.\]
By the bounded property of \(\|\mathcal{Q}^{j}-\mathcal{Q}^{j-1}\|_{F}^{2}\) and \(\|\mathcal{G}^{j}-\mathcal{G}^{j-1}\|_{F}^{2}\), as well as under the given condition on \(\{\mu^{k}\}\) and \(\{\rho^{k}\}\), the right hand side of the inequality is bounded, so \(L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{Q}^{k}, \mathcal{G}^{k};\mu^{k},\rho^{k})\)
is bounded. Moreover, observe that
\[L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E}^{k+1},\mathcal{Q }^{k},\mathcal{G}^{k};\mu^{k},\rho^{k})+\frac{1}{2\mu^{k}}\|\mathcal{Q}^{k}\|_{F }^{2}+\frac{1}{2\rho^{k}}\|\mathcal{G}^{k}\|_{F}^{2}\] \[\quad=\|\mathcal{Z}^{k+1}\|_{\omega-LM}+\tau_{1}\|\mathcal{E}^{k+ 1}\|_{1}+\frac{\mu^{k}}{2}\|\mathcal{X}^{k+1}-\mathcal{Z}^{k+1}+\frac{\mathcal{ Q}^{k}}{\mu^{k}}\|_{F}^{2}\] \[\quad\quad+\frac{\rho^{k}}{2}\|\mathcal{T}-(\mathcal{X}^{k+1}+ \mathcal{E}^{k+1})+\frac{\mathcal{G}^{k}}{\rho^{k}}\|_{F}^{2}.\]
The terms on the right side of the equation are nonnegative and the terms on the left side of the equation are bounded, so \(\mathcal{Z}^{k}\) and \(\mathcal{E}^{k}\) are bounded. By observing the last regular term on the right side of the equation, \(\mathcal{X}^{k}\) is bounded. Therefore, \(\{\mathcal{X}^{k}\},\{\mathcal{Z}^{k}\}\), and \(\{\mathcal{E}^{k}\}\) are all bounded. The proof is completed.
**Theorem 4**.: _The sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) generated by Algorithm 2 has at least one accumulation point \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\), and \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star})\) is a stationary point of optimization problem (27), under the conditions that \(\lim\limits_{k\rightarrow\infty}\mu^{k}(\mathcal{Z}^{k+1}-\mathcal{Z}^{k})=0\), \(\lim\limits_{k\rightarrow\infty}\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k})=0\), \(\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) and \(\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\)._
Proof.: The sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) generated by Algorithm 2 is bounded, as proven in Lemma 2. By the Bolzano-Weierstrass theorem, the sequence has at least one accumulation point \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\). Without loss of generality (passing to a subsequence if necessary), we can assume that the sequence \((\mathcal{X}^{k},\mathcal{Z}^{k},\mathcal{E}^{k},\mathcal{Q}^{k},\mathcal{G}^{k})\) converges to \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\). Actually, since \(\sum_{j=1}^{\infty}\frac{1}{\rho^{j}}<\sum_{j=1}^{\infty}\frac{\rho^{j}+\rho^{j-1}}{(\rho^{j-1})^{2}}<\infty\), it follows from the update rule of \(\mathcal{G}^{k}\) that \(\lim\limits_{k\rightarrow\infty}\mathcal{T}-\mathcal{X}^{k}-\mathcal{E}^{k}=\lim\limits_{k\rightarrow\infty}(\mathcal{G}^{k}-\mathcal{G}^{k-1})/\rho^{k-1}=0\), that is, \(\mathcal{T}=\mathcal{X}^{\star}+\mathcal{E}^{\star}\). Similarly, \(\sum_{j=1}^{\infty}\frac{1}{\mu^{j}}<\sum_{j=1}^{\infty}\frac{\mu^{j}+\mu^{j-1}}{(\mu^{j-1})^{2}}<\infty\) also holds, so it follows from the update rule of \(\mathcal{Q}^{k}\) that \(\lim\limits_{k\rightarrow\infty}\mathcal{X}^{k}-\mathcal{Z}^{k}=\lim\limits_{k\rightarrow\infty}(\mathcal{Q}^{k}-\mathcal{Q}^{k-1})/\mu^{k-1}=0\), that is, \(\mathcal{X}^{\star}=\mathcal{Z}^{\star}\). Therefore, the feasibility conditions are satisfied.
First, we list the Karush-Kuhn-Tucker (KKT) conditions that \((\mathcal{X}^{\star},\mathcal{Z}^{\star},\mathcal{E}^{\star},\mathcal{Q}^{\star},\mathcal{G}^{\star})\) must satisfy,
\[\left\{\begin{array}{l}\mathcal{Q}^{\star}-\mathcal{G}^{\star}=0,\\ 0\in\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{ \mathcal{Z}^{\star}}-\mathcal{Q}^{\star},\\ 0\in\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{ \star}}-\mathcal{G}^{\star}.\end{array}\right. \tag{36}\]
For \(\mathcal{X}^{k+1}\), it is noted that,
\[\nabla_{\mathcal{X}}L(\mathcal{X},\mathcal{Z}^{k},\mathcal{E}^{ k},\mathcal{Q}^{k},\mathcal{G}^{k})|_{\mathcal{X}^{k+1}}\] \[\quad=\mathcal{Q}^{k}-\mathcal{G}^{k}+\mu^{k}(\mathcal{X}^{k+1}- \mathcal{Z}^{k})-\rho^{k}(\mathcal{T}-\mathcal{X}^{k+1}-\mathcal{E}^{k})\] \[\quad=\mathcal{Q}^{k+1}-\mathcal{G}^{k+1}+\mu^{k}(\mathcal{Z}^{k+1 }-\mathcal{Z}^{k})-\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k}).\]
Given the conditions \(\lim\limits_{k\rightarrow\infty}\mu^{k}(\mathcal{Z}^{k+1}-\mathcal{Z}^{k})=0\) and \(\lim\limits_{k\rightarrow\infty}\rho^{k}(\mathcal{E}^{k+1}-\mathcal{E}^{k})=0\), together with the boundedness of \(\mathcal{Q}^{k}\) and \(\mathcal{G}^{k}\), we obtain \(\mathcal{Q}^{\star}-\mathcal{G}^{\star}=0\).
Similarly, for \(\mathcal{Z}^{k+1}\), we have
\[\frac{\partial L(\mathcal{X}^{k+1},\mathcal{Z},\mathcal{E}^{k},\mathcal{Q}^{k}, \mathcal{G}^{k})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{k+1}}=\frac{\partial(\| \mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{k+1}}- \mathcal{Q}^{k+1}.\]
By the boundedness of \(\mathcal{Z}^{k+1}\), \(\mathcal{Q}^{k+1}\), and \(\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{*}}\), together with the form of \(\partial(\|\mathcal{Z}\|_{\omega-LM})\), we obtain \(0\in\frac{\partial(\|\mathcal{Z}\|_{\omega-LM})}{\partial\mathcal{Z}}|_{\mathcal{Z}^{*}}-\mathcal{Q}^{*}\).
Additionally, for \(\mathcal{E}^{k+1}\), we have
\[\frac{\partial L(\mathcal{X}^{k+1},\mathcal{Z}^{k+1},\mathcal{E},\mathcal{Q}^ {k},\mathcal{G}^{k})}{\partial\mathcal{E}}|_{\mathcal{E}^{k+1}}=\frac{ \partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{k+1}}- \mathcal{G}^{k+1}.\]
By the boundedness of \(\mathcal{E}^{k+1}\), \(\mathcal{G}^{k+1}\), and \(\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{*}}\), together with the form of \(\partial(\|\mathcal{E}\|_{1})\), we have \(0\in\frac{\partial(\|\mathcal{E}\|_{1})}{\partial\mathcal{E}}|_{\mathcal{E}^{*}}-\mathcal{G}^{*}\). Combining the above, \((\mathcal{X}^{*},\mathcal{Z}^{*},\mathcal{E}^{*},\mathcal{Q}^{*},\mathcal{G}^{*})\) satisfies the KKT conditions of problem (27). Therefore, \((\mathcal{X}^{*},\mathcal{Z}^{*},\mathcal{E}^{*})\) is a stationary point of optimization problem (27). This completes our proof.
## 6 Experiments
We evaluate the performance of the proposed TLM-based LRTC and TRPCA methods. We employ the peak signal-to-noise ratio (PSNR) value, the structural similarity (SSIM) value [25], the feature similarity (FSIM) value [26], and the erreur relative globale adimensionnelle de synthèse (ERGAS) value [27] to measure the quality of the recovered results. Higher PSNR, SSIM, and FSIM values indicate better quality, while a lower ERGAS value is better. All tests are implemented on the Windows 11 platform and MATLAB (R2019a) with a 13th Gen Intel Core i5-13600K 3.50 GHz and 32 GB of RAM.
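For reference, here is a minimal sketch (our code, not the paper's) of two of these metrics; note in particular that ERGAS has several conventions in the literature, and the band-wise form below is only one common choice:

```python
import numpy as np

def psnr(x_true, x_rec, peak=1.0):
    """Peak signal-to-noise ratio in dB, for data scaled to [0, peak]."""
    mse = np.mean((x_true - x_rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ergas(x_true, x_rec):
    """One common band-wise ERGAS variant for a 3-way tensor with the
    spectral bands along the last axis (lower is better)."""
    bands = x_true.shape[-1]
    acc = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((x_true[..., b] - x_rec[..., b]) ** 2))
        acc += (rmse / np.mean(x_true[..., b])) ** 2
    return 100.0 * np.sqrt(acc / bands)
```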
### Low-rank tensor completion
In this section, we test three kinds of real-world data: MSI, MRI, and Video. The data are sampled by purely random sampling. The comparison LRTC methods are HaLRTC [28], TNN [29], and PSTNN [30].
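A sketch of the purely random sampling protocol (our code; SR denotes the fraction of observed entries):

```python
import numpy as np

def random_mask(shape, sr, seed=0):
    """Purely random sampling: each entry is observed with probability sr."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < sr

# Example: observe 10% of a 256 x 256 x 31 MSI tensor.
# mask = random_mask((256, 256, 31), sr=0.10); observed = mask * X_true
```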
**MSI completion:** We test nine MSIs in the dataset CAVE1. All testing data are of size \(256\times 256\times 31\). In Fig.3, we show visual results for six of the nine MSIs at different sampling rates and bands. The individual MSI names and their corresponding bands are written in the caption of Fig.3. As shown in Fig.3, the visual results of the proposed method are better than those of the comparison methods at all three sampling rates. To further highlight the superiority of our method, the average quantitative results of the nine MSIs are listed in Table 1. It can be seen that the proposed method has a great improvement compared to the second-best method. The PSNR value at the 10% sampling rate is at least 3.1 dB higher than that of the second-best TNN method, and the gap even reaches 5 dB at the 5% sampling rate.
**MRI completion:** We test the performance of the proposed method and the comparison methods on MRI2 data of size \(181\times 217\times 181\). First, we demonstrate the visual results recovered from the MRI data at sampling rates of 5%, 10% and 20% in Fig.4. Our method is clearly superior to the comparison methods. Then, we list the average quantitative results of the frontal slices of the MRI restored by all methods at different sampling rates in Table 2. At the sampling rates of 5% and 10%, the PSNR value of the proposed method is at least 0.9 dB higher than that of the second-best PSTNN method, and the SSIM, FSIM, and ERGAS values are also better than those of the PSTNN method.
Footnote 2: [http://brainweb.bic.mni.mcgill.ca/brainweb/selection_normal.html](http://brainweb.bic.mni.mcgill.ca/brainweb/selection_normal.html)
**Video completion:** We test nine Videos3 (respectively named news, akiyo, hall, highway, foreman, container, coastguard, suzie, carphone) of size \(144\times 176\times 50\). Firstly, we demonstrate the visual results of our experiment in Fig.5. It is not hard to see from Fig.5 that our method recovers better visual quality. Furthermore, we list the average quantitative results of the nine Videos in Table 3. Here, the second-best method is PSTNN. When the sampling rate is 5%, the PSNR value of the proposed method is 0.8 dB higher than it. In addition, at the sampling rates of 10% and 20%, the PSNR value of the proposed method is at least 0.9 dB higher than
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 15.000 & 0.140 & 0.636 & 846.663 & 15.237 & 0.178 & 0.634 & 823.947 & 15.745 & 0.246 & 0.634 & 777.072 \\ HaLRTC & 25.415 & 0.756 & 0.823 & 288.187 & 29.649 & 0.824 & 0.875 & 196.784 & 34.459 & 0.900 & 0.931 & 122.055 \\ TNN & 27.158 & 0.742 & 0.837 & 243.140 & 33.530 & 0.880 & 0.918 & 129.112 & 39.012 & 0.954 & 0.966 & 69.571 \\ PSTNN & 21.313 & 0.582 & 0.725 & 458.662 & 31.542 & 0.835 & 0.884 & 188.951 & 39.986 & 0.951 & 0.963 & 64.395 \\ TLM & **31.724** & **0.825** & **0.882** & **153.363** & **36.151** & **0.908** & **0.935** & **97.124** & **41.043** & **0.960** & **0.970** & **57.365** \\ \hline \end{tabular}
\end{table}
Table 1: The average PSNR, SSIM, FSIM and ERGAS values for nine MSIs tested by observed and the four utilized LRTC methods.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 11.399 & 0.310 & 0.530 & 1021.103 & 11.632 & 0.323 & 0.565 & 994.042 & 12.145 & 0.350 & 0.613 & 937.038 \\ HaLRTC & 17.295 & 0.298 & 0.636 & 537.363 & 20.094 & 0.438 & 0.725 & 391.416 & 24.430 & 0.659 & 0.829 & 236.047 \\ TNN & 22.707 & 0.472 & 0.743 & 302.903 & 26.047 & 0.641 & 0.812 & 205.793 & 29.960 & 0.798 & 0.882 & 130.952 \\ PSTNN & 23.253 & 0.497 & 0.753 & 283.311 & 25.956 & 0.637 & 0.810 & 207.873 & 29.953 & 0.798 & 0.882 & 131.061 \\ TLM & **24.149** & **0.518** & **0.758** & **260.818** & **27.536** & **0.681** & **0.828** & **176.650** & **31.739** & **0.832** & **0.897** & **107.914** \\ \hline \end{tabular}
\end{table}
Table 2: The PSNR, SSIM, FSIM and ERGAS values for MRI tested by observed and the four utilized LRTC methods.
that of the PSTNN method.
### Tensor robust principal component analysis
In this section, we evaluate the performance of the proposed TRPCA method through HSI denoising. The comparison TRPCA methods are SNN [31] and TNN [15]. In this paper, salt-and-pepper noise is added at random locations, and its noise level is \(\nu\). We test the Pavia City Center
Figure 3: Visual results for MSI. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top two rows are 5%, middle two rows are 10% and last two rows are 20%. The rows of MSIs are in order: stuffed_toys, photo_and_face, glass_tiles, fake_and_real_strawberries, fake_and_real_beers, chart_and_stuffed_toy. The corresponding bands in each row are: 15, 15, 20, 20, 25, 25.
dataset and the Washington DC dataset. The Pavia City Center data size is \(200\times 200\times 80\), where the spatial resolution is \(200\times 200\) and the spectral resolution is 80. The Washington DC data size is \(256\times 256\times 150\), where the spatial resolution is \(256\times 256\) and the spectral resolution is 150. In Table 4, we list the quantitative numerical results for the Pavia City Center and Washington DC data under three noise levels (NL) of the salt-and-pepper noise. According to Table 4, it can be seen that for \(\nu=0.4\) the PSNR value of the TLM method is 3.3 dB higher than that of the TNN method for the Pavia City Center data. It can also be seen that for \(\nu=0.2\) the proposed method achieves a PSNR value that is 1.2 dB higher than that of the second-best TNN method for the Washington DC data. In Figs. 6-7, we display the visual results of the two datasets in the order of noise level. From the figures, it is easy to observe that our method outperforms the comparison methods in terms of denoising effectiveness.
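A sketch of how such corrupted test data can be generated (our code; the paper does not spell out its exact salt-and-pepper convention, so the 50/50 salt/pepper split below is an assumption):

```python
import numpy as np

def salt_and_pepper(x, nu, seed=0):
    """Corrupt a fraction nu of the entries of x (scaled to [0, 1]):
    half of the corrupted entries become 1 (salt), half become 0 (pepper)."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    corrupt = rng.random(x.shape) < nu
    salt = rng.random(x.shape) < 0.5
    y[corrupt & salt] = 1.0
    y[corrupt & ~salt] = 0.0
    return y
```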
Figure 4: Visual results for MRI. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top row is 5%, middle row is 10% and last row is 20%. MRI slice in the order: 30, 60, 90.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline SR & \multicolumn{4}{c|}{5\%} & \multicolumn{4}{c|}{10\%} & \multicolumn{4}{c}{20\%} \\ \hline Method & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS & PSNR & SSIM & FSIM & ERGAS \\ \hline Observed & 6.045 & 0.011 & 0.433 & 1167.724 & 6.280 & 0.018 & 0.425 & 1136.581 & 6.790 & 0.030 & 0.416 & 1071.805 \\ HaLRTC & 20.348 & 0.583 & 0.754 & 233.873 & 23.049 & 0.686 & 0.813 & 170.243 & 26.395 & 0.808 & 0.884 & 115.282 \\ TNN & 26.643 & 0.757 & 0.877 & 113.835 & 29.130 & 0.826 & 0.912 & 87.016 & 32.109 & 0.889 & 0.944 & 63.349 \\ PSTNN & 26.827 & 0.761 & 0.880 & 111.723 & 29.136 & 0.826 & 0.912 & 86.997 & 32.109 & 0.889 & 0.944 & 63.345 \\ TLM & **27.837** & **0.776** & **0.894** & **101.185** & **30.290** & **0.840** & **0.922** & **77.778** & **33.185** & **0.897** & **0.949** & **57.400** \\ \hline \end{tabular}
\end{table}
Table 3: The average PSNR, SSIM, FSIM and ERGAS values for nine videos tested by observed and the four utilized LRTC methods.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline & \multicolumn{2}{c|}{NL} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.4} \\ \hline & HSI & Method & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM \\ \hline & Observed & 11.813 & 0.125 & 0.565 & 10.055 & 0.074 & 0.479 & 8.797 & 0.048 & 0.422 \\ & SNN & 30.702 & 0.932 & 0.950 & 29.173 & 0.897 & 0.925 & 27.549 & 0.841 & 0.889 \\ Pavia City Center & TNN & 46.092 & 0.989 & 0.992 & 43.203 & 0.986 & 0.990 & 38.610 & 0.974 & 0.983 \\ & TLM & **52.118** & **0.992** & **0.994** & **46.456** & **0.989** & **0.992** & **41.949** & **0.982** & **0.987** \\ \hline & Observed & 11.429 & 0.122 & 0.553 & 9.664 & 0.073 & 0.467 & 8.418 & 0.048 & 0.412 \\ & SNN & 31.473 & 0.928 & 0.951 & 29.863 & 0.895 & 0.930 & 28.220 & 0.849 & 0.902 \\ Washington DC & TNN & 43.834 & 0.992 & 0.994 & 40.925 & 0.986 & 0.991 & 35.817 & 0.953 & 0.974 \\ & TLM & **45.952** & **0.995** & **0.996** & **42.556** & **0.989** & **0.993** & **38.740** & **0.977** & **0.985** \\ \hline \end{tabular}
\end{table}
Table 4: The PSNR, SSIM and FSIM values for 2 HSIs tested by observed and the three utilized TRPCA methods.
Figure 5: Visual results for videos. (a) Original image. (b) Observed image. (c) HaLRTC. (d) TNN. (e) PSTNN. (f) TLM. SR: top two rows are 5%, middle two rows are 10% and last two rows are 20%. Video frame in the order: 5, 10, 25, 30, 35, 40.
### Discussions
#### 6.3.1 Compared with EMLCP Method
To further highlight the advantages of the LM function, we compare it with the EMLCP method based on the MLCP function [21]. To ensure a fair comparison, we also modified the EMLCP method to use a tubal rank model instead of the original N-tubal rank model. First, we compared the EMLCP and TLM LRTC methods using video data. Table 5 presents the quantitative results for the news, foreman, and container videos. According to the results in the table, it can be observed that the TLM method outperforms the EMLCP method and exhibits a significant improvement in running time. Next, we compared the EMLCP and TLM TRPCA methods for HSI denoising. Table 6 presents the quantitative denoising results for the Pavia City Center data and Washington DC data. Under different noise levels, the proposed TLM method significantly outperforms the EMLCP method, achieving better results while requiring much less computation time than the EMLCP method. In conclusion, the newly proposed LM function not only outperforms the MLCP function in terms of singular value manipulation but also exhibits faster computational speed in the corresponding algorithms.
Figure 6: Visual results for Pavia City Center. NL: top row is 0.2, middle row is 0.3 and last row is 0.4. HSI band in the order: 20, 40, 60.
#### 6.3.2 Parameter Settings
For the proposed TLM-based LRTC method, Table 7 shows the parameters \(\lambda\), \(\gamma\), and \(\varepsilon\) for the different experiments. The weights are set to: \(\omega_{j,i}=\frac{1}{c+e^{-w_{N-j+1,i}}}\), where \(c=0.8,N=\min\{I_{1},I_{2}\}\), \(w_{N-j+1,i}=\frac{N\times\sigma_{j}(\bar{\mathcal{W}}^{(i)})}{m_{i}}\), \(\sigma_{j}(\bar{\mathcal{W}}^{(i)})\) is the \((j,j,i)\)-th singular value of \(\bar{\mathcal{W}}\), \(\mathcal{W}=\mathcal{X}+\frac{\mathcal{Q}}{\mu}\) and \(m_{i}=\max\{\sigma_{j}(\bar{\mathcal{W}}^{(i)}),j=1,2,\ldots,N\}\). Besides, \(\mu_{0}=1/100000,\eta=1.1\).
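In code, our reading of this weight rule is the following sketch (the function name, and the use of \(\mathcal{Q}/\mu\) inside \(\mathcal{W}\), are our assumptions about the paper's notation):

```python
import numpy as np

def lm_weights(X, Q, mu, c=0.8):
    """Sketch of omega_{j,i} = 1/(c + exp(-N * sigma_j / m_i)), computed from
    the frontal slices of the Fourier-transformed tensor W = X + Q/mu."""
    Wbar = np.fft.fft(X + Q / mu, axis=2)   # \bar{W}: FFT along the third mode
    I1, I2, I3 = X.shape
    N = min(I1, I2)
    omega = np.zeros((N, I3))
    for i in range(I3):
        # sigma_j(\bar{W}^{(i)}): singular values of the i-th frontal slice
        sigma = np.linalg.svd(Wbar[:, :, i], compute_uv=False)
        m_i = sigma.max()
        omega[:, i] = 1.0 / (c + np.exp(-N * sigma / m_i))
    return omega
```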
For the proposed TLM-based TRPCA method, Table 8 shows the parameters \(\lambda\), \(\gamma\), \(\varepsilon\), and
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c} \hline & SR & \multicolumn{3}{c|}{5\%} & \multicolumn{3}{c|}{10\%} & \multicolumn{3}{c}{20\%} \\ \hline Video & Method & Observed & EMLCP & TLM & Observed & EMLCP & TLM & Observed & EMLCP & TLM \\ \hline \multirow{4}{*}{akiyo} & PSNR & 7.601 & 30.996 & **31.691** & 7.836 & 34.078 & **34.553** & 8.344 & 37.567 & **37.903** \\ & SSIM & 0.014 & 0.903 & **0.920** & 0.023 & 0.949 & **0.953** & 0.037 & 0.975 & **0.976** \\ & FSIM & 0.466 & 0.950 & **0.961** & 0.454 & 0.972 & **0.976** & 0.434 & 0.986 & **0.987** \\ & ERGAS & 1076.219 & 73.178 & **67.788** & 1047.496 & 51.827 & **49.331** & 988.013 & 35.510 & **34.426** \\ \hline \multirow{4}{*}{container} & PSNR & 4.600 & 28.569 & **29.342** & 4.835 & 32.522 & **32.945** & 5.347 & 37.226 & **37.652** \\ & SSIM & 0.007 & 0.870 & **0.895** & 0.011 & 0.930 & **0.940** & 0.021 & 0.966 & **0.970** \\ \cline{1-1} & FSIM & 0.395 & 0.927 & **0.944** & 0.391 & 0.963 & **0.968** & 0.393 & 0.983 & **0.985** \\ \cline{1-1} & ERGAS & 1239.987 & 79.861 & **73.724** & 1206.866 & 53.018 & **51.186** & 1137.888 & 33.976 & **33.136** \\ \hline \end{tabular}
\end{table}
Table 5: The PSNR, SSIM, FSIM, ERGAS and TIME values for three videos tested by observed and the EMLCP and the TLM LRTC methods.
Figure 7: Visual results for Washington DC. NL: top row is 0.2, middle row is 0.3 and last row is 0.4. HSI band in the order: 40, 80, 120.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c} \hline & \multicolumn{2}{c|}{NL} & \multicolumn{2}{c|}{0.2} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.4} \\ \hline HSI & Method & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM & PSNR & SSIM & FSIM \\ \hline \multirow{4}{*}{Pavia City Center} & Observed & 11.813 & 0.125 & 0.565 & 10.055 & 0.074 & 0.479 & 8.797 & 0.048 & 0.422 \\ & EMLCP & 50.406 & 0.990 & 0.993 & 45.594 & 0.988 & 0.991 & 41.081 & 0.979 & 0.985 \\ & TLM & **52.118** & **0.992** & **0.994** & **46.456** & **0.989** & **0.992** & **41.949** & **0.982** & **0.987** \\ \hline \multirow{4}{*}{Washington DC} & Observed & 11.429 & 0.122 & 0.553 & 9.664 & 0.073 & 0.467 & 8.418 & 0.048 & 0.412 \\ & EMLCP & 45.458 & 0.994 & 0.996 & 41.474 & 0.986 & 0.991 & 37.063 & 0.966 & 0.977 \\ \cline{1-1} & TLM & **45.952** & **0.995** & **0.996** & **42.556** & **0.989** & **0.993** & **38.740** & **0.977** & **0.985** \\ \hline \end{tabular}
\end{table}
Table 6: The PSNR, SSIM and FSIM values for two HSIs tested by observed and the EMLCP and the TLM TRPCA methods.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(\lambda\)} & \multicolumn{3}{c|}{\(\gamma\)} & \multicolumn{3}{c}{\(\varepsilon\)} \\ \hline \multirow{2}{*}{Data SR} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} \\ & & & & & & & & & \\ \hline fake\_and\_real\_beers & 0.1 & 0.3 & 0.3 & 20000 & 10000 & 10000 & 5 & 5 & 5 \\ face & 0.1 & 0.3 & 0.3 & 60000 & 10000 & 10000 & 5 & 5 & 5 \\ egyptian\_statue & 0.1 & 0.3 & 0.3 & 60000 & 20000 & 10000 & 5 & 5 & 5 \\ cloth & 0.1 & 0.1 & 0.1 & 60000 & 20000 & 10000 & 25 & 10 & 5 \\ clay & 0.1 & 0.3 & 0.3 & 100000 & 20000 & 10000 & 5 & 5 & 5 \\ chart\_and\_stuffed\_toy & 0.1 & 0.1 & 0.3 & 60000 & 10000 & 10000 & 10 & 5 & 5 \\ beads & 0.1 & 0.1 & 0.1 & 100000 & 20000 & 20000 & 30 & 10 & 10 \\ balloons & 0.1 & 0.3 & 0.3 & 140000 & 10000 & 10000 & 5 & 5 & 5 \\ cd & 0.1 & 0.1 & 0.2 & 200000 & 10000 & 10000 & 10 & 10 & 5 \\ MRI & 0.1 & 0.1 & 0.1 & 900000 & 60000 & 20000 & 30 & 30 & 20 \\ news & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 40000 & 20 & 10 & 5 \\ akiyo & 0.1 & 0.1 & 0.1 & 80000 & 40000 & 20000 & 30 & 10 & 5 \\ hall & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 20 & 10 \\ highway & 0.1 & 0.1 & 0.1 & 100000 & 100000 & 20000 & 30 & 30 & 30 \\ foreman & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 30 & 25 \\ container & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 20 & 15 & 5 \\ coastguard & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 30 & 15 \\ suzie & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 30 & 20 & 15 \\ carphone & 0.1 & 0.1 & 0.1 & 100000 & 40000 & 20000 & 25 & 20 & 20 \\ \hline \end{tabular}
\end{table}
Table 7: Parameters under different experiments of LRTC.
\(\tau_{1}\) for the different experiments, where \(\tau_{\lambda}=\frac{1}{\sqrt{\max(I_{1},I_{2})I_{3}}}\). The weights are set to: \(\omega_{j,i}=\frac{1}{c+e^{-w_{N-j+1,i}}}\), where \(c=1.2,N=\min\{I_{1},I_{2}\}\), \(w_{N-j+1,i}=\frac{N\times\sigma_{j}(\bar{\mathcal{W}}^{(i)})}{m_{i}}\), \(\sigma_{j}(\bar{\mathcal{W}}^{(i)})\) is the \((j,j,i)\)-th singular value of \(\bar{\mathcal{W}}\), \(\mathcal{W}=\mathcal{X}+\frac{\mathcal{Q}}{\mu}\) and \(m_{i}=\max\{\sigma_{j}(\bar{\mathcal{W}}^{(i)}),j=1,2,\ldots,N\}\). Besides, \(\mu_{0}=1/10000,\eta=1.2\).
## 7 Conclusion
This paper proposes a new non-convex function called the LM function, which not only preserves large singular values but also further increases the penalty on small singular values. Based on this, we propose the weighted tensor \(LM\)-norm as the smooth relaxation for tensor rank approximation. Two main applications of tensor recovery are considered: the first is the low-rank tensor completion (LRTC) problem, and the second is the tensor robust principal component analysis (TRPCA) problem. For each application, we propose the TLM-based model along with corresponding solution algorithms.
Our main conclusions are:
\(\bullet\) The experiments demonstrate that the proposed methods achieve good visual results and high numerical performance on various datasets. The parameters of the proposed methods are influenced by the image data. As the sampling rate decreases or the noise level increases, the optimal parameters \(\lambda\) and \(\gamma\) generally need to be increased.
\(\bullet\) The selection of weights in the proposed weighted tensor LM norm is inversely proportional to the singular values, which enhances sensitivity to different singular values. Compared with other methods, our proposed weighted tensor LM norm method is more effective in approximating tensor rank.
\(\bullet\) The EMLCP method represents the state-of-the-art method, and further improving its performance is challenging. Through comparison with the results of the EMLCP method, we further validate that the LM function outperforms the MLCP function in handling singular values and demonstrates the efficiency of the proposed methods.
In the various experiments conducted in this paper, the selection of weights depends on the problem type, and the optimal parameter selection may depend on the characteristics of the test data. In fact,
\begin{table}
\begin{tabular}{c|c|c c c c} \hline HSI & NL & \(\lambda\) & \(\gamma\) & \(\varepsilon\) & \(\tau_{1}\) \\ \hline \multirow{4}{*}{Pavia City Center} & 0.2 & 0.02 & 4000 & 1 & 0.015\(\tau_{\lambda}\) \\ & 0.3 & 0.02 & 15000 & 1 & 0.014\(\tau_{\lambda}\) \\ & 0.4 & 0.02 & 50000 & 1 & 0.011\(\tau_{\lambda}\) \\ \hline \multirow{4}{*}{Washington DC} & 0.2 & 0.02 & 10000 & 1 & 0.016\(\tau_{\lambda}\) \\ & 0.3 & 0.02 & 13000 & 1 & 0.011\(\tau_{\lambda}\) \\ \cline{1-1} & 0.4 & 0.02 & 400000 & 1 & 0.008\(\tau_{\lambda}\) \\ \hline \end{tabular}
\end{table}
Table 8: Parameters under different experiments of TRPCA.
finding the optimal parameters for different datasets is a complex engineering task. Therefore, how to construct adaptive parameter selection methods becomes an interesting research topic. Moreover, in this paper we utilized the fast Fourier transform, and it is also worthwhile to explore the relevant theories and potential improvements under other transforms, such as the discrete cosine transform [32], unitary transform [33], framelet transform [34], group-tube transform [35] and even nonlinear transform [36]. In the future, we will focus on three aspects: developing methods for constructing adaptive parameters, investigating the performance of non-convex functions under different transforms, and exploring more effective approaches for handling singular values.
Low-rank tensor completion (LRTC) aims to recover a complete low-rank tensor from an incompletely observed tensor, and has attracted attention in various practical applications such as image processing and computer vision. However, current methods perform well only when a sufficient amount of observed information is available; when the observed information becomes scarce, their performance may degrade or they may fail. To improve the efficiency with which the observed information is used, a new method called tensor joint rank with logarithmic composite norm (TJLC) was proposed. This method simultaneously exploits two kinds of tensor low-rank structures, the tensor Tucker rank and the tubal rank, and strengthens the relationship between the known and missing entries. Since it is difficult to apply the two kinds of tensor ranks to LRTC directly, a new tensor logarithmic composite norm was proposed. Then, the TJLC model |
2309.11567 | On the Hagedorn behavior of the superstring propagating in a cosmological
time dependent background | In this work the LvN quantization of the type IIB superstring is carried out
in a time dependent plane wave background with a constant self-dual
Ramond-Ramond 5-form and a linear dilaton in the light-like direction. Such an
endeavour allows us to define an invariant density matrix and study important
issues in real time string thermodynamics. In particular, the Hagedorn
temperature is calculated as a function of the thermalization time. | Daniel Luiz Nedel | 2023-09-20T18:10:46 | http://arxiv.org/abs/2309.11567v1 | # On the Hagedorn behavior of the superstring propagating in a cosmological time dependent background
###### Abstract
In this work the LvN quantization of the type IIB superstring is carried out in a time dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. Such an endeavour allows us to define an invariant density matrix and study important issues in real time string thermodynamics. In particular, the Hagedorn temperature is calculated as a function of the thermalization time.
Keywords: Superstrings and Heterotic Strings, Sigma Model, Spacetime Singularities, Thermal Field Theory
ArXiv ePrint: 2309.11567
## 1 Introduction
The formulation of superstring theory at finite temperature and the study of the superstring sigma model in time dependent backgrounds are topics of constant and renewed interest in the literature, in view of the structural role superstring theory plays in the construction of theoretical frameworks for fundamental interactions and quantum gravity. In particular, the study of thermal effects when the string propagates in cosmological time dependent geometries can shed light on important questions related to quantum cosmology and may help to understand the nature of space-like singularities [1].
One outstanding feature of string theory at finite temperature is the exponential growth of the density of states as a function of energy. Due to this behavior, the partition function becomes ill defined for temperatures above the so-called Hagedorn temperature. If the Hagedorn behavior works in string theory as it works in hadron physics, then the true degrees of freedom of the theory at high temperature may be other than those of the perturbative string. However, in spite of many works on finite temperature string theory, a precise understanding of the Hagedorn temperature and the true degrees of freedom at higher temperatures is still lacking. Many of the advances made in understanding the Hagedorn temperature stem from a specific equilibrium finite temperature field theory formalism: the imaginary time formalism. In this case, the thermal state is described by compactifying Euclidean time on a circle, the thermal circle. The radius of the thermal circle is equal to the inverse temperature in natural units. For string theory applications this formalism entails two complications. The first one arises from the simple fact that string theory
contains gravity: for theories containing gravity, the radius of the thermal circle becomes a dynamical field, which makes the very notion of thermal equilibrium non-trivial [2]. In addition, for closed strings one needs to take into account the winding modes around the thermal circle. Above the Hagedorn temperature these modes become tachyonic, and it is precisely these tachyonic excitations that encode the Hagedorn divergence and the long/short string transition discussed in [3], [4], [5], [6]. However, when the superstring propagates in a time dependent background, the mass and coupling parameters of the superstring sigma model depend explicitly on time. Therefore, the study of thermal effects in this time dependent superstring sigma model requires a real time formalism. Actually, from a worldsheet perspective it is an open system, so a non equilibrium formalism must be taken into account.
In general, the non equilibrium quantization of a given system is carried out using the Schwinger-Keldysh formalism. In this formalism a closed time path integral is introduced to properly treat the non equilibrium evolution of quantum fields from their initial thermal equilibrium. Here another approach is used: the so called Liouville-von Neumann (LvN) approach [7; 8; 9].
The LvN approach is a canonical method that unifies the usual methodology to study the evolution of pure states, given by the functional Schrodinger equation, with the usual approach used to study the evolution of mixed states, described by the density matrix (which in turn obeys the LvN equation). Note that even though the density matrix depends on time, it still satisfies the LvN equation. Hence, the LvN method treats the time-dependent nonequilibrium system exactly in the same way as the time-independent one.
In the present work the LvN approach is used to study thermal effects in the light cone superstring propagating in a time dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. This background keeps sixteen supersymmetries and the sigma model was canonically quantized in [10], where it was shown that the Hamiltonian is time-dependent with vanishing zero-point energy and has a supersymmetric spectrum. As shown in [10] the background is geodesically incomplete and hence admits a null cosmology interpretation. However, the dilaton diverges close to the cosmological singularity and so one needs a non-perturbative description to study the string dynamics close to the null singularity. In the sigma model studied in [10], the sign that the theory is not well defined at the singularity appears as a divergence in the time-dependent Hamiltonian when it is evaluated close to the singularity. On the other hand, it was shown in [11] that as the string evolves towards the singularity, the vacuum seen by asymptotically flat observers is a left/right entanglement state. Hence a left/right superstring entanglement entropy appears, dynamically generated by the background. It was shown that, at the singularity, the left/right string entanglement is finite and thus could be a useful tool to probe the singularity. Furthermore, it was shown that, at the singularity, the left/right entanglement state is in fact a thermal state and the worldsheet entanglement entropy becomes the thermodynamic entropy of a 2d free supersymmetric gas, which implies that near the singularity the string thermalizes at a finite temperature. Here, in order to study more carefully the superstring thermalization in this background, the superstring canonical quantization is carried out in the Liouville picture,
which allows us to calculate the Hagedorn temperature in the adiabatic approximation as a function of time, where time means the time at which thermalization takes place. In fact, the Hagedorn temperature is shown to increase as the string evolves from the asymptotically flat region towards the singularity. The present work is organized as follows: in section 2 the LvN approach is presented. The time dependent background studied here is presented in section 3. In section 4 the bosonic and fermionic sectors of the light cone superstring time dependent sigma model are quantized in the LvN picture and the invariant creation/annihilation operators are constructed. The density matrix and the adiabatic approximation are discussed in section 5. As an application, the non equilibrium thermal two point function is calculated in the adiabatic approximation. Finally, in section 6 the Hagedorn temperature is calculated as a function of time.
## 2 The Liouville-von Neumann (LvN) method
The core of the LvN approach lies in the definition of invariant operators and the fact that the quantum LvN equation provides all the quantum and statistical information of non equilibrium systems. Given an operator \(O\) and a time dependent evolution operator \(U(t)\), an invariant operator \(O_{L}(t)\) is defined by \(O_{L}(t)=U(t)O_{S}U^{\dagger}(t)\), where \(O_{S}\) is the operator \(O\) in the Schrodinger picture. This relation also defines the so called Liouville picture. Compared to the Heisenberg picture, the operator \(O_{L}(t)\) evolves backward in time, in the same way as the density operator. So \(O_{L}\) also satisfies the LvN equation
\[i\frac{\partial O_{L}}{\partial t}+[O_{L},H]=0 \tag{2.1}\]
where the Hamiltonian \(H\) can be time dependent. The Lewis-Riesenfeld invariant theorem states that an operator satisfying (2.1) has time-dependent eigenstates and time-independent eigenvalues. So the spectrum of the invariant operators yields quantum states of the time dependent system.
In order to study the nonequilibrium evolution exactly in the same way as the equilibrium one, it is necessary to find an invariant operator \(O_{L}\) such that a time dependent density matrix satisfying LvN equation can be written as \(\rho_{L}(t)=Z^{-1}e^{-\beta O_{L}(t)}\), where Z is the trace of \(\rho_{L}(t)\). Here it is assumed that the system reaches a thermodynamic equilibrium point characterized by the temperature \(1/\beta\). At the time \(t_{0}\) at which equilibrium is reached, \(\rho_{L}(t_{0})\) is the usual equilibrium density matrix. As an example, suppose we have an oscillator interacting with a thermal bath and, as a consequence of this interaction, we have a time-dependent mass. The Hamiltonian will be time dependent and there will be modes creation(or particle creation in a quantum field scenario). One could naively construct a thermal density matrix defined by the time dependent Hamiltonian
\[\rho_{H}=\frac{1}{Z}e^{-\beta H(t)}\,. \tag{2.2}\]
This density matrix does not satisfy the quantum Liouville-von Neumann (LvN) equation and it is not possible to relate \(1/\beta\) to the equilibrium temperature. If the system starts in the initial thermal equilibrium state, its final state can be far away from the initial one.
Actually, owing to particle production, the final state can be unitarily inequivalent to the initial one. The strategy of the LvN approach is to define time dependent oscillators \(a_{L},a_{L}^{\dagger}\) that satisfy the equation (2.1):
\[i\frac{\partial a_{L}}{\partial\tau}+[a_{L},H]=0\,. \tag{2.3}\]
The linearity of the LvN equation allows us to use \(a_{L}(t)\) and \(a_{L}^{\dagger}(t)\) to construct operators that also satisfy equation (2.1); in particular, the number operator \(N_{L}=a_{L}^{\dagger}(t)a_{L}(t)\). By using the Lewis-Riesenfeld invariant theorem, one finds the Fock space consisting of the time dependent number states such that
\[N_{L}(t)|n,t\rangle=n|n,t\rangle\,. \tag{2.4}\]
With the invariant oscillators, a density matrix which satisfies the LvN equation can be defined as
\[\rho_{\rm T}=\frac{1}{Z_{N}}e^{-\beta\omega_{0}a^{\dagger}(t)a(t)}\,, \tag{2.5}\]
where \(\beta\) and \(\omega_{0}\) are free parameters and the trace that appears in the definition of \(Z_{N}\) is taken over the states defined in (2.4). Now, the system is characterized by the density matrix (2.5) in the same way as a time independent one. The key point is to find solutions of equation (2.3) such that, in the adiabatic regime, the density matrix (2.5) is equal to the density matrix (2.2) evaluated at the thermalization time. Thus, the unitary real time evolution of the system can be studied until it reaches thermal equilibrium, characterized by \(\beta\).
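To make the construction concrete, here is a minimal numerical illustration (ours, not from the paper) for a single mode with a time dependent frequency of the type encountered below, \(\omega^{2}(\tau)=n^{2}+\tilde{f}^{2}e^{-2\tau}\): any solution of \(\ddot{\phi}+\omega^{2}(\tau)\phi=0\) with adiabatic initial data keeps the Wronskian combination \(\dot{\phi}^{*}\phi-\dot{\phi}\phi^{*}\) pinned at \(i\), which is exactly what makes the invariant oscillators canonical at all times.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration (ours): a single mode with omega^2(t) = n^2 + f~^2 exp(-2t).
# Any solution of phi'' + omega^2(t) phi = 0 conserves W = phi'* phi - phi' phi*;
# with adiabatic initial data, W = i for all times.
n, ftil = 1.0, 0.1
omega2 = lambda t: n**2 + ftil**2 * np.exp(-2.0 * t)

def rhs(t, y):
    # y = [Re phi, Im phi, Re phi', Im phi']
    return [y[2], y[3], -omega2(t) * y[0], -omega2(t) * y[1]]

t0 = 5.0                                   # start in the nearly flat region
w0 = np.sqrt(omega2(t0))
phi0 = 1.0 / np.sqrt(2.0 * w0)
y0 = [phi0, 0.0, 0.0, -w0 * phi0]          # phi(t0), phi'(t0) = -i w0 phi(t0)
sol = solve_ivp(rhs, (t0, -5.0), y0, rtol=1e-10, atol=1e-12)

phi = sol.y[0] + 1j * sol.y[1]
dphi = sol.y[2] + 1j * sol.y[3]
W = np.conj(dphi) * phi - dphi * np.conj(phi)
print(np.abs(W - 1j).max())                # ~ 0: W stays equal to i
```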
## 3 The Background
In this section the time dependent background considered in this work is presented. Consider the following time dependent background with Ramond-Ramond flux
\[ds^{2}=-2dx^{+}dx^{-}-\lambda(x^{+})\,x_{I}^{2}\,dx^{+}dx^{+}+dx^{I}dx^{I}\,,\] \[\phi=\phi(x^{+})\,,\qquad(F_{5})_{+1234}=(F_{5})_{+5678}=2f, \tag{3.1}\]
where \(\phi\) is the dilaton and \(F_{5}\) the Ramond-Ramond field. As usual for a generic plane wave, the supersymmetry preserved by the background is reduced from maximal (32 supercharges) to sixteen supercharges. When the type IIB Green-Schwarz (GS) superstring propagates in this background, conformal invariance of the worldsheet demands
\[R_{\mu\nu}=-2D_{\mu}D_{\nu}\phi+\frac{1}{24}e^{2\phi}(F_{5}^{2})_{\mu\nu}\,, \tag{3.2}\]
and the only non zero component of the Ricci curvature tensor \(R_{\mu\nu}\) is
\[R_{++}=8\lambda(x^{+}). \tag{3.3}\]
Putting (3.1) into (3.2) gives
\[\lambda=-\frac{1}{4}\phi^{\prime\prime}+f^{2}e^{2\phi}\,. \tag{3.4}\]
In reference [10], a solution of (3.2) with non zero constant Ramond-Ramond field (\(f=f_{0}\)) is studied. It has the form
\[\phi=-cx^{+},\ \lambda=f_{0}^{2}e^{-2cx^{+}}, \tag{3.5}\]
for any constant \(c\). In this case, the metric admits a null cosmology interpretation and the cosmological singularity is located at \(x^{+}=-\infty\). Note that in this model the string coupling \(g=e^{\phi}\) diverges at the singularity. As discussed previously, the interaction of the string with this kind of background makes the parameters of the sigma model time-dependent. In the next sections the quantization of the superstring sigma model in this background is carried out in the Liouville picture.
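As a quick consistency check of (3.5) (a step worth making explicit), the linear dilaton has vanishing second derivative, so (3.4) immediately gives

\[\phi=-cx^{+}\;\Rightarrow\;\phi^{\prime\prime}=0\;\Rightarrow\;\lambda=-\frac{1}{4}\phi^{\prime\prime}+f_{0}^{2}e^{2\phi}=f_{0}^{2}e^{-2cx^{+}}\,,\]

in agreement with (3.5).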
## 4 Superstring in the LvN picture
In this section the Liouville-von Neumann method is used to study the quantum dynamics of the superstring propagating in the background (3.1). This implies defining a superstring Hilbert space constructed with creation/annihilation operators that are LvN invariant, that is, operators which satisfy equation (2.1).
### Bosonic Sector
Let us start with the bosonic sector. Although the gauge fixing has already been discussed in [10], it is useful to include a review here in order to fix notation. The bosonic part of the superstring sigma model for the background (3.1) is
\[S=\frac{1}{4\pi\alpha^{\prime}}\int d^{2}\sigma g^{ab}\ G_{\mu\nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu}\] \[=\frac{1}{4\pi\alpha^{\prime}}\int d^{2}\sigma g^{ab}\ \left(-2\partial_{a}X^{+}\partial_{b}X^{-}+\partial_{a}X^{I}\partial_{b}X^{I}-m^{2}(X^{+})X_{I}^{2}\partial_{a}X^{+}\partial_{b}X^{+}\right), \tag{4.1}\]
where \(g_{ab}\) is the worldsheet metric, \(\sigma^{a}=(\tau,\sigma)\) are the worldsheet coordinates, \(I=1,2,\cdots,8\) and \(m(X^{+})=f_{0}e^{-cX^{+}}\). As usual, the RR fluxes do not appear in the bosonic action. The bosonic worldsheet gauge symmetry is fixed using the light cone gauge
\[\sqrt{-g}g^{ab} = \eta^{ab}\,\ \ \ \ -\eta_{\tau\tau}=\eta_{\sigma\sigma}=1\] \[X^{+} = \alpha^{\prime}p^{+}\tau\,\ \ \ \ p^{+}>0. \tag{4.2}\]
In this gauge all the dynamics is determined by \(X^{I}\)'s through the constraints resulting from [12]
\[\frac{\delta{\cal L}}{\delta g_{\tau\sigma}}=0\,\ \ \ \frac{\delta{\cal L}}{ \delta g_{\tau\tau}}=\frac{\delta{\cal L}}{\delta g_{\sigma\sigma}}=0 \tag{4.3}\]
After setting \(-g_{\tau\tau}=g_{\sigma\sigma}=1\), the constraints (4.3) allow us to write \(\partial_{\sigma}X^{-}\) and \(\partial_{\tau}X^{-}\) in terms of \(X^{I}\)
\[\partial_{\sigma}X^{-}=\frac{1}{\alpha^{\prime}p^{+}}\ \partial_{ \sigma}X^{I}\partial_{\tau}X^{I}\, \tag{4.4}\] \[\partial_{\tau}X^{-}=\frac{1}{2\alpha^{\prime}p^{+}}\,\left( \partial_{\tau}X^{I}\partial_{\tau}X^{I}+\partial_{\sigma}X^{I}\partial_{ \sigma}X^{I}-(m(\tau)\alpha^{\prime}p^{+})^{2}X^{I}X^{I}\right). \tag{4.5}\]
Choosing \(c=\frac{1}{\alpha^{\prime}p^{+}}\), the light cone bosonic action can be written as
\[S^{bos.}_{l.c.}=\frac{1}{4\pi\alpha^{\prime}}\int d\tau\int_{0}^{2\pi\alpha^{ \prime}p^{+}}d\sigma\left[\partial_{\tau}X^{I}\partial_{\tau}X^{I}-\partial_ {\sigma}X^{I}\partial_{\sigma}X^{I}-m^{2}(\tau)X_{I}^{2}\right]\,, \tag{4.6}\]
where \(\tau\) and \(\sigma\) were re-scaled by \(\alpha^{\prime}p^{+}\) and \(m(\tau)=f_{0}e^{-\tau}\). Since the bosonic sector of the theory is \(SO(8)\) invariant, the \(I\) index will be frequently omitted.
In order to quantize the theory in the LvN approach, the string coordinate \(X(\sigma)\) and momentum density \(P(\sigma)=\frac{\dot{X}}{2\pi\alpha^{\prime}}\) are expanded as
\[X^{I}(\sigma) = x_{0}^{I}+\sqrt{2}\sum_{n=1}^{\infty}\left(x_{n}^{I}\cos\frac{n \sigma}{\alpha}+x_{-n}^{I}\sin\frac{n\sigma}{\alpha}\right)\,,\] \[P^{I}(\sigma) = \frac{1}{2\pi\alpha}\left[p_{0}^{I}+\sqrt{2}\sum_{n=1}^{\infty} \left(p_{n}^{I}\cos\frac{n\sigma}{\alpha}+p_{-n}^{I}\sin\frac{n\sigma}{\alpha} \right)\right]\,, \tag{4.7}\]
where \(\alpha=p^{+}\alpha^{\prime}\) was defined. In this notation (usual in pp-wave backgrounds) all of the string oscillations (left-movers, right-movers, and the zero modes) can be treated on an equal footing. Note that the form of the expansion for the worldsheet fields \(X^{I}(\sigma)\) and \(P^{I}(\sigma)\) allows us to associate the Fourier modes to Hermitian operators \(x_{n}\) and \(p_{n}\). This will be very useful for writing the thermal density matrix in the position representation. In general the Fourier mode operators \(x_{n}\) and \(p_{n}\) can be time dependent. However, in the LvN picture they are Schrodinger operators. They can be chosen such that the expansion (4.7) represents \(X\) and \(P\) at a given fixed time.
The normalization is chosen so that the canonical commutation relation
\[[X^{I}(\sigma),P^{J}(\sigma^{\prime})]=i\delta^{IJ}\delta(\sigma-\sigma^{ \prime}) \tag{4.8}\]
follows from imposing
\[[x_{m}^{I},p_{n}^{J}]=i\delta^{IJ}\delta_{mn}. \tag{4.9}\]
Next, the light cone Hamiltonian is written as
\[H^{bos.}_{l.c.}=\frac{1}{4\pi\alpha^{\prime}}\int_{0}^{2\pi\alpha^{\prime}p^{+ }}d\sigma\,\left[(2\pi\alpha^{\prime})^{2}P_{I}^{2}+(\partial_{\sigma}X^{I})^ {2}+m^{2}(\tau)X_{I}^{2}\right]. \tag{4.10}\]
In order to proceed with the LvN quantization, equation (4.7) is used to write the light cone Hamiltonian in terms of the Fourier mode operators (omitting \(SO(8)\) indices):
\[H^{bos.}_{l.c.}=\frac{1}{2\alpha}\sum_{n=-\infty}^{\infty}\left[p_{n}^{2}+\omega_{n}^{2}(\tau)x_{n}^{2}\right], \tag{4.11}\]
where \(\omega_{n}(\tau)=\sqrt{n^{2}+\alpha^{2}m^{2}(\tau)}\). Now the LvN invariant operators can be found. Following the LvN approach, a set of bosonic raising and lowering operators can be defined,
\[\left[\alpha_{n}^{I}(\tau),\alpha_{m}^{\dagger\,J}(\tau)\right]=\delta^{IJ}\delta_{nm} \tag{4.12}\]
satisfying the quantum light cone LvN equation
\[\frac{i}{\alpha^{\prime}p^{+}}\frac{\partial}{\partial\tau}\alpha_{n}^{I}(\tau)+[\alpha_{n}^{I}(\tau),H^{bos.}_{l.c.}]=0\,,\quad n\in\mathbb{Z}. \tag{4.13}\]
In order to find the invariant bosonic string oscillators, the operators \(\alpha_{n}(\tau)\),\(\alpha_{m}^{\dagger}(\tau)\) are defined in terms of the Fourier mode operators \(x_{n}\), \(p_{n}\)
\[\alpha_{n}^{I}(\tau) = i\left(\phi_{n}^{*}(\tau)p_{n}^{I}-\dot{\phi_{n}^{*}}(\tau)x_{n}^{I}\right)\] \[\alpha_{n}^{\dagger\,I}(\tau) = -i\left(\phi_{n}(\tau)p_{n}^{I}-\dot{\phi_{n}}(\tau)x_{n}^{I}\right)\,, \tag{4.14}\]
where \(\phi\) and \(\dot{\phi}\) must satisfy the Wronskian
\[\dot{\phi}_{n}^{*}(\tau)\phi_{n}(\tau)-\dot{\phi}_{n}(\tau)\phi_{n}^{*}(\tau)=i \tag{4.15}\]
to ensure that the relations (4.12) are satisfied. Now all the work boils down to finding the functions \(\phi_{n}(\tau)\) such that \(\alpha_{n}^{I}(\tau)\) satisfy (4.13). Plugging (4.14) into (4.13) results in the following equation for \(\phi_{n}(\tau)\)
\[\ddot{\phi}_{n}+\omega_{n}^{2}(\tau)\phi_{n}=0\,. \tag{4.16}\]
A solution that satisfies (4.15) can be written in terms of Bessel functions:
\[\phi_{n}(\tau)=\sqrt{\frac{\tilde{f}}{2}}\,\Gamma(1+in)J_{in}\left(z(\tau)\right)\,, \tag{4.17}\]
where \(\tilde{f}=\alpha f_{0}\), \(z(\tau)=\tilde{f}e^{-\tau}\) and \(J_{\nu}\) is a Bessel function of the first kind. The relations (4.15) follow from the Gamma and Bessel function properties
\[\Gamma(1+in)\,\Gamma(1-in)=\frac{n\pi}{\sinh n\pi}\,\] \[J_{\nu}(z)J_{-\nu}^{\prime}(z)-J_{-\nu}(z)J_{\nu}^{\prime}(z)=-\frac{2\sin\nu\pi}{\pi z}. \tag{4.18}\]
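To make explicit why Bessel functions of imaginary order appear (a step not spelled out above), change variables to \(z=\tilde{f}e^{-\tau}\), so that \(\partial_{\tau}=-z\partial_{z}\) and \(\partial_{\tau}^{2}=z^{2}\partial_{z}^{2}+z\partial_{z}\). Since \(\omega_{n}^{2}(\tau)=n^{2}+z^{2}\), equation (4.16) becomes

\[z^{2}\frac{d^{2}\phi_{n}}{dz^{2}}+z\frac{d\phi_{n}}{dz}+\left(z^{2}-(in)^{2}\right)\phi_{n}=0\,,\]

which is Bessel's equation of order \(\nu=in\), solved by \(J_{\pm in}(z(\tau))\); this is the origin of (4.17).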
Here we are interested in the adiabatic limit given by
\[\left|\frac{\dot{\omega}_{n}(\tau)}{\omega_{n}(\tau)}\right|=\frac{\tilde{f}^{2}e^{-2\tau}}{\tilde{f}^{2}e^{-2\tau}+n^{2}}\ll 1 \tag{4.19}\]
Note that, even close to the null singularity (\(\tau\rightarrow-\infty\)), the adiabatic regime is controlled by the Ramond-Ramond field. So, in the adiabatic regime (\(\alpha f_{0}\ll 1\)), the solution can be approximated by
\[\phi_{n}(\tau)\approx\phi_{n}^{ad}(\tau)=\frac{1}{\sqrt{2\omega_{n}(\tau)}}e^{-i\int^{\tau}\omega_{n}(\tau^{\prime})\,d\tau^{\prime}}\,, \tag{4.20}\]
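It is instructive to check in which sense (4.20) solves (4.16): a direct substitution (our worked step) gives

\[\ddot{\phi}_{n}^{ad}+\omega_{n}^{2}(\tau)\phi_{n}^{ad}=\left[\frac{3}{4}\left(\frac{\dot{\omega}_{n}}{\omega_{n}}\right)^{2}-\frac{1}{2}\frac{\ddot{\omega}_{n}}{\omega_{n}}\right]\phi_{n}^{ad}\,,\]

so the correction terms are suppressed in the adiabatic regime, where \(\dot{\omega}_{n}\ll\omega_{n}^{2}\) and \(\ddot{\omega}_{n}\ll\omega_{n}^{3}\).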
The adiabatic solution is important to study the thermalization process in the LvN approach. It will be shown that in this regime the invariant density matrix approaches the thermal equilibrium density matrix, calculated at the instant of time when the system enters thermodynamic equilibrium. Once the invariant creation and annihilation operators are defined, one uses the Lewis-Riesenfeld theorem and finds a basis for the bosonic Fock space defined by time-dependent number states:
\[N_{n}^{LvN}(\tau)|n,\tau\rangle_{b}=n|n,\tau\rangle_{b} \tag{4.21}\]
where \(N_{n}^{LvN}(\tau)\) is defined in the usual way:
\[N_{n}^{LvN}(\tau)=\delta^{IJ}\alpha_{n}^{\dagger I}(\tau)\alpha_{n}^{J}(\tau). \tag{4.22}\]
In the next section, the invariant number operator will be used to define an LvN invariant density matrix. Suppose the system thermalizes adiabatically at time \(\tau_{0}\). Then, close to \(\tau_{0}\) the Hamiltonian can be written as
\[H_{l.c.}^{bos.}\approx H_{LvN}^{bos.}=\frac{1}{\alpha}\left[\sum_{n=-\infty}^{\infty}\omega_{n}(\tau_{0})N_{n}^{LvN}(\tau)+4\right]. \tag{4.23}\]
The position and momentum mode operators that appear in equation (4.7) can also be defined in the Heisenberg representation. Let us define \(p_{n}(\tau)\) and \(x_{n}(\tau)\) as the momentum and position mode operators in the Heisenberg representation, and write the non invariant operators
\[a_{n}^{I}(\tau) = \frac{\sqrt{2\omega_{n}(\tau)}}{2}\left[\frac{p_{n}^{I}}{\omega_{n}}-ix_{n}^{I}\right]\] \[a_{n}^{\dagger I}(\tau) = \frac{\sqrt{2\omega_{n}(\tau)}}{2}\left[\frac{p_{n}^{I}}{\omega_{n}}+ix_{n}^{I}\right]\,, \tag{4.24}\]
which obey
\[[a_{m}^{I}(\tau),a_{n}^{\dagger J}(\tau)]=\delta_{mn}\delta^{IJ}. \tag{4.25}\]
In terms of the non-invariant creation and annihilation operators, the time dependent bosonic Hamiltonian may be written as
\[H_{l.c.}^{bos.}(\tau)=\frac{1}{\alpha}\left[\sum_{n=-\infty}^{\infty}\omega_{n}(\tau)a_{n}^{\dagger}(\tau)a_{n}(\tau)+4\right]\,, \tag{4.26}\]
which is identical to the bosonic part of the Hamiltonian derived in [10]. The Hamiltonian is diagonal; however, it cannot be used to define the thermal density matrix since it is not LvN invariant.
### Fermionic Sector
In this subsection the quantization of the fermionic sector is worked out in the LvN approach. As in the bosonic case, for the sake of self-containedness a review of the light cone gauge fixing process, without further details, will be carried out. The fermionic part of the type IIB superstring in this background can be written as
\[S^{fer.}=-\frac{i}{2\pi\alpha^{\prime}}\int d^{2}\sigma(\sqrt{-g}g^{ab}\delta_{ AB}-\epsilon^{ab}\sigma_{3AB})\,\partial_{a}x^{\mu}\,\bar{\theta}^{A}\Gamma_{\mu}( \hat{D}_{b}\theta)^{B}+{\cal O}(\theta^{3})\,, \tag{4.27}\]
\[\sigma_{3}={\rm diag}(1,-1)\,,\] \[\hat{D}_{b}=\partial_{b}+\Omega_{\nu}\,\partial_{b}x^{\nu}\,, \tag{4.28}\]
with \(\hat{D}_{b}\) being the pull-back of the covariant derivative to the worldsheet. The indices \(a,b\) are worldsheet indices; \(A,B=1,2\) and \(\mu\) is the spacetime index. The spin connection \(\Omega_{\nu}\) is defined by
\[\Omega_{-} = 0,\] \[\Omega_{\,I} = \frac{ie^{\phi}}{4}f\,\Gamma^{+}(\Pi+\Pi^{\prime})\,\Gamma_{I}\, \sigma_{2},\] \[\Omega_{+} = -\frac{1}{2}\lambda\,x^{I}\Gamma^{+I}{\bf 1}+\frac{ie^{\phi}}{4 }f\,\Gamma^{+}(\Pi+\Pi^{\prime})\,\Gamma_{+}\sigma_{2}\,, \tag{4.29}\]
where \(\Gamma^{\pm}=(\Gamma^{0}\pm\Gamma^{9})/\sqrt{2}\), \(\sigma_{2}\) is the Pauli matrix and \(\Pi\) is symmetric, traceless and squares to one1. The fermionic fields \(\theta^{A}\) are 10d spinors and in equation (4.27) the spacetime spinor indices were omitted (actually \(\theta^{A}=\theta^{A}_{\alpha}\) with \(\alpha=1,2,\ldots,16\), and \(A=1,2\)). Higher orders in \(\theta\) will not be taken into account because they do not contribute in the light-cone gauge [13],[14]. The representation of \(\Gamma\)-matrices chosen is such that \(\Gamma^{0}\) is the 10d charge conjugation; therefore, the components of \(\theta^{A}\) are all real. The gauge symmetries are fixed choosing the light-cone gauge:
Footnote 1: The following representation will be used: \(\Pi=\Gamma^{1}\Gamma^{2}\Gamma^{3}\Gamma^{4}={\rm diag}({\bf 1}_{4},-{\bf 1}_{4})\), \(\Pi^{\prime}=\Gamma^{5}\Gamma^{6}\Gamma^{7}\Gamma^{8}\)
\[x^{+}=\alpha^{\prime}p^{+}\tau\,,\ \ p^{+}>0\;.\] \[\Gamma^{+}\theta^{A}=0\;, \tag{4.30}\]
The kappa symmetry (\(\Gamma^{+}\theta^{A}=0\)) implies
\[(\theta^{A})^{T}\Gamma^{I}\theta^{B}=0,\ \ \forall A,B\,,\] \[(\Omega_{I})^{A}_{\ B}\theta^{B}=0\,,\] \[\Pi\theta^{A}=\Pi^{\prime}\theta^{A}\,. \tag{4.31}\]
After fixing the kappa symmetry the ten dimensional fermions are reduced to an \(SO(8)\) representation. In ref. [10] the light cone fermionic action is written in terms of the real fields
\(\theta_{a}^{1}\) and \(\theta_{a}^{2}\); here complex fields will be used. Since \(\theta_{a}^{1}\) and \(\theta_{a}^{2}\) have the same chirality, we can define the complex positive chirality SO(8) spinor (\(\theta^{a}\), \(a=1,\ldots,8\)) by
\[\theta_{a}=e^{-i\frac{\tau}{4}}\left(\theta_{a}^{1}+i\theta_{a}^{2}\right),\ \ \bar{\theta}_{a}=e^{i\frac{\tau}{4}}\left(\theta_{a}^{1}-i\theta_{a}^{2}\right) \tag{4.32}\]
From this point onwards the \(SO(8)\) spinor indices will be often omitted. Finally, the light cone fermionic action can be written as
\[S_{l.c.}^{fer.}=\frac{1}{4\pi\alpha^{\prime}}\int d\tau\int_{0}^{2\pi\alpha}d\sigma\ [i(\bar{\theta}\partial_{\tau}\theta+\theta\partial_{\tau}\bar{\theta})-\theta\partial_{\sigma}\theta+\bar{\theta}\partial_{\sigma}\bar{\theta}-2m(\tau)\bar{\theta}\Pi\theta]. \tag{4.33}\]
where \(\tau\) and \(\sigma\) were re-scaled as in the bosonic case. The last term in the action is a time dependent mass term resulting from the RR five-form flux. The time dependent mass \(m(\tau)\) is the same as in the bosonic sector. Again, in order to quantize the theory using the LvN approach, we expand \(\theta\) and its conjugate momentum \(\Lambda\equiv\frac{i}{2\pi\alpha^{\prime}}\bar{\theta}\) as
\[\theta(\sigma) = \vartheta_{0}+\frac{1}{\sqrt{2}}\sum_{n\neq 0}(\vartheta_{|n|} -ie(n)\vartheta_{-|n|})e^{in\sigma/\alpha^{\prime}p^{+}},\] \[\Lambda(\sigma) = \frac{i}{2\pi\alpha}\left[\lambda_{0}+\frac{1}{\sqrt{2}}\sum_{n \neq 0}(\lambda_{|n|}-ie(n)\lambda_{-|n|})e^{in\sigma/\alpha^{\prime}p^{+}} \right], \tag{4.34}\]
such that the anticommutation relation
\[\{\theta^{a}(\sigma),\Lambda^{b}(\sigma^{\prime})\}=i\delta^{ab}\delta( \sigma-\sigma^{\prime}) \tag{4.35}\]
follows from
\[\{\vartheta_{m}^{a},\lambda_{n}^{b}\}=\delta^{ab}\delta_{mn}. \tag{4.36}\]
In equation (4.34), \(e(n)\) is the sign of \(n\). Note that the Fourier modes satisfy \(\lambda_{n}=\frac{\alpha^{\prime}p^{+}}{2}\bar{\vartheta_{n}}\). For the sake of simplicity, let us define \(\lambda=-i\Lambda\). In terms of \(\lambda\), the fermionic Hamiltonian is
\[H_{l.c.}^{fer.}=\frac{1}{2}\int_{0}^{2\pi\alpha^{\prime}p^{+}}d\sigma\ \left[-4\pi\lambda\partial_{\sigma}\lambda+\frac{1}{4\pi}\theta\partial_{ \sigma}\theta+2m(\tau)(\lambda\Pi\theta)\right]. \tag{4.37}\]
In terms of the Fourier mode operators, we have
\[H_{l.c.}^{fer.}=\frac{1}{2}\sum_{n=-\infty}^{\infty}\left[\frac{n}{2}\left(\frac{4}{(\alpha^{\prime}p^{+})^{2}}\lambda_{-n}\lambda_{n}-\vartheta_{-n}\vartheta_{n}\right)+2m(\tau)\lambda_{n}\Pi\vartheta_{n}\right]. \tag{4.38}\]
If this Hamiltonian is used to solve the LvN equation in order to find the invariant fermionic operators, a set of coupled equations that are difficult to solve will emerge. The equations become simpler if we perform the following Bogoliubov transformation
\[\lambda_{n} = \frac{\sqrt{\alpha}}{2}\left[\hat{\lambda}_{n}+e(-n)\hat{\vartheta }_{-n}\right],\ n\neq 0\] \[\vartheta_{n} = \frac{1}{\sqrt{\alpha}}\left[\hat{\vartheta}_{n}+e(-n)\hat{ \lambda}_{-n}\right],\ n\neq 0\] \[\lambda_{0} = \hat{\lambda}_{0},\ \ \vartheta_{0}=\hat{\vartheta}_{0}, \tag{4.39}\]
such that
\[\{\hat{\lambda}_{n},\hat{\vartheta}_{m}\}=\delta_{nm},\,\,\,n\in\mathbb{Z} \tag{4.40}\]
In terms of the hatted operators, the Hamiltonian is written as
\[H^{fer.}_{l.c.}=H^{fer.}_{0}+\sum_{n=1}^{\infty}\left[\frac{n}{\alpha}\left(\hat{\vartheta}_{-n}\hat{\lambda}_{-n}-\hat{\lambda}_{n}\hat{\vartheta}_{n}\right)+m(\tau)\left(\hat{\lambda}_{-n}\Pi\hat{\lambda}_{n}+\hat{\vartheta}_{n}\Pi\hat{\vartheta}_{-n}\right)\right], \tag{4.41}\]
where
\[H^{fer.}_{0}=\tilde{f}^{2}e^{-2\tau}\hat{\lambda}_{0}\Pi\hat{\vartheta}_{0}. \tag{4.42}\]
Now a set of fermionic LvN invariant operators can be defined, satisfying
\[\{\beta_{m}(\tau),\beta_{n}^{\dagger}(\tau)\}=\delta_{mn}\,,n\in\mathbb{Z}\,. \tag{4.43}\]
Let us write \(\beta_{n}(\tau)\) and \(\beta_{n}^{\dagger}(\tau)\) as
\[\beta_{n}(\tau)=F(\tau)\hat{\lambda}_{n}+G(\tau)\hat{\vartheta}_ {-n}\] \[\beta_{n}^{\dagger}(\tau)=F(\tau)^{*}\hat{\vartheta}_{n}+G(\tau) ^{*}\hat{\lambda}_{-n}, \tag{4.44}\]
where the functions \(F(\tau)\) and \(G(\tau)\) must satisfy
\[|F(\tau)|^{2}+|G(\tau)|^{2}=1. \tag{4.45}\]
The equations (2.1) for \(\beta_{n}\) result in the following system of coupled first order equations
\[i\dot{F}+nF+\alpha m(\tau)\Pi G = 0\] \[i\dot{G}-nG+\alpha m(\tau)\Pi F = 0. \tag{4.46}\]
By using \(\Pi^{2}=1\), equations (4.46) result in the following decoupled second order equations:
\[\ddot{G}+\dot{G}+(n^{2}+in+\tilde{f}^{2}e^{-2\tau})G = 0\] \[\ddot{F}+\dot{F}+(n^{2}-in+\tilde{f}^{2}e^{-2\tau})F = 0 \tag{4.47}\]
By defining again \(z(\tau)=\tilde{f}e^{-\tau}\), the solutions that satisfy the conditions (4.45) are
\[F(\tau) = \sqrt{\frac{z(\tau)}{2}}\Gamma(\frac{1}{2}+in)J_{\frac{1}{2}+in} \left(z(\tau)\right)\] \[G(\tau) = \sqrt{\frac{z(\tau)}{2}}\Gamma(\frac{1}{2}+in)J_{-\frac{1}{2}+in }\left(z(\tau)\right), \tag{4.48}\]
where the following properties were used
\[\Gamma\left(\frac{1}{2}+in\right)\,\Gamma\left(\frac{1}{2}-in \right)=\frac{\pi}{\cosh n\pi}\,\,, \tag{4.49}\] \[J_{-\frac{1}{2}+in}(z)J_{-\frac{1}{2}-in}(z)+J_{\frac{1}{2}+in} (z)J_{\frac{1}{2}-in}(z)=\frac{2\cosh n\pi}{\pi z}. \tag{4.50}\]
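The Bessel form of (4.48) can be checked in the same way as in the bosonic sector. Writing \(G(\tau)=\sqrt{z}\,u(z)\) with \(z=\tilde{f}e^{-\tau}\), the first equation in (4.47) reduces (our worked step) to

\[z^{2}u^{\prime\prime}+zu^{\prime}+\left(z^{2}-\left(\tfrac{1}{2}-in\right)^{2}\right)u=0\,,\]

which is Bessel's equation of order \(\nu=\pm(\tfrac{1}{2}-in)\), solved by \(u=J_{-\frac{1}{2}+in}(z)\); the equation for \(F\) works analogously with \(\nu=\pm(\tfrac{1}{2}+in)\), giving \(F\propto\sqrt{z}\,J_{\frac{1}{2}+in}(z)\).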
An adiabatic solution can be found and it has the same structure as the one found in the bosonic case. So, we have constructed a set of fermionic raising and lowering invariant operators that can be used to define the fermionic number operators in the usual way. The superstring vacuum is defined by
\[\alpha^{I}_{n}(\tau)|0,\tau\rangle=0,\;\;\;\beta_{n}(\tau)|0,\tau\rangle=0 \tag{4.51}\]
The states created from \(\alpha^{\dagger}_{n},\beta^{\dagger}_{n}\) in general depend on time, but the Lewis-Riesenfeld theorem guarantees that their eigenvalues (occupation numbers) do not depend on time.
As in the bosonic case, a set of non-invariant time dependent fermionic raising and lowering operators can also be found, satisfying
\[\{b_{m}(\tau),b^{\dagger}_{n}(\tau)\}=\delta_{mn}\,,\quad n\in\mathbb{Z}. \tag{4.52}\]
However, the change of basis is far more complicated than the one used in the bosonic sector. Going back to the basis (4.34), let us write \(\vartheta_{n}\) and \(\lambda_{n}\) as the fermionic mode operators in the Heisenberg representation. The following time dependent operators can be defined:
\[b_{n}(\tau) = \frac{1}{2}\left[\sqrt{\alpha}A^{+}_{n}(\tau)\vartheta_{n}+\frac{2}{\sqrt{\alpha}}e(n)A^{-}_{n}(\tau)\lambda_{-n}\right]\] \[b^{\dagger}_{n}(\tau) = \frac{1}{2}\left[\frac{2}{\sqrt{\alpha}}A^{+}_{n}(\tau)\lambda_{n}+\sqrt{\alpha}e(n)A^{-}_{n}(\tau)\vartheta_{-n}\right], \tag{4.53}\]
where the time dependent matrices \(A^{\pm}_{n}(\tau)\) are defined by
\[A^{\pm}_{n}(\tau)=\frac{1}{1+\gamma_{n}(\tau)}\left(1\pm\gamma_{n}(\tau)\Pi\right),\;\gamma_{n}(\tau)=\frac{\omega_{n}(\tau)-|n|}{\alpha m(\tau)}. \tag{4.54}\]
The change of basis is similar to the one used in the time-independent case [15]. Note that the relations (4.53) also break the \(SO(8)\) symmetry to \(SO(4)\times SO(4)\). The matrices \(A^{\pm}_{n}(\tau)\) were chosen in such a way that, in terms of \(b_{n}\), the (non-invariant) fermionic time-dependent Hamiltonian is diagonal and takes the simple form
\[H^{ferm.}_{l.c.}(\tau)=\frac{1}{\alpha^{\prime}p^{+}}\left[\sum_{n=-\infty}^{\infty}\omega_{n}(\tau)\left(b^{\dagger}_{n}b_{n}-4\right)\right], \tag{4.55}\]
such that the total time-dependent superstring Hamiltonian, written in terms of \(a_{n}\), \(b_{n}\), has the same form as the one found in [10] using a different method. Note that the zero-point energy of the non-invariant Hamiltonian exactly cancels between the bosons and the fermions.
## 5 The invariant string density matrix
In the previous section the superstring was quantized in the LvN picture, which made it possible to find LvN invariant creation/annihilation operators. These invariant operators can be used to define a density matrix that satisfies the light cone LvN equation:
\[\frac{i}{\alpha}\frac{\partial}{\partial\tau}\rho_{LvN}+[\rho_{LvN},H_{l.c.}(\tau)]=0\,, \tag{5.1}\]
where \(H_{l.c.}=H_{l.c.}^{bos.}+H_{l.c.}^{fer.}\). Note that, in terms of the non invariant oscillators, the Hamiltonian is:
\[H_{l.c.}(\tau)=\frac{1}{\alpha}\sum_{n=-\infty}^{\infty}\omega_{n}(\tau)\left(a_{n}^{\dagger}(\tau)a_{n}(\tau)+b_{n}^{\dagger}(\tau)b_{n}(\tau)\right). \tag{5.2}\]
This Hamiltonian is diagonal and has a time-dependent supersymmetric spectrum. However, as explained before, it cannot be used to define a thermal density matrix because it is not LvN invariant. The main goal of this section is to show that, if the thermalization occurs adiabatically at a time \(\tau_{0}\), this Hamiltonian can be used to define an instantaneous thermal density matrix at \(\tau_{0}\). To this end, a density matrix that satisfies equation (5.1) is defined first, and it is then shown that, in the adiabatic limit, this density matrix approaches the one obtained with the instantaneous Hamiltonian \(H_{l.c.}(\tau_{0})\). With this result, we can calculate the thermal partition function at an equilibrium temperature T, defined at \(\tau_{0}\) in terms of the instantaneous diagonal Hamiltonian. As an application of the invariant density matrix, the non-equilibrium worldsheet two-point function is calculated. For the sake of simplicity, only the bosonic sector is going to be taken into account; the generalization to the fermionic sector is straightforward.
In order to calculate the invariant light cone density matrix, we need to take into account the timelike Killing vectors. In flat space, the timelike Killing vector is \(\frac{1}{\sqrt{2}}\left[\frac{\partial}{\partial x^{+}}+\frac{\partial}{\partial x^{-}}\right]\). Here, owing to the nontrivial dilaton dependence on \(x^{+}\), the timelike Killing vector is just \(\frac{\partial}{\partial x^{-}}\). However, we want to define an invariant density matrix such that, in the asymptotically flat limit, it reduces to the standard expression for the light cone flat space string density matrix. So, the invariant density matrix is defined as
\[\rho_{LvN}=\frac{1}{Z_{LvN}}e^{-\tilde{\beta}\left(p^{+}+H_{LvN}\right)}\,, \tag{5.3}\]
where \(\tilde{\beta}=\frac{\beta}{\sqrt{2}}\) and \(H_{LvN}\) is given in terms of the invariant creation/annihilation operators
\[H_{LvN}=\frac{1}{\alpha^{\prime}p^{+}}\sum_{n=1}^{\infty}\tilde{\omega}_{n}\left(\delta_{IJ}\alpha_{n}^{I\dagger}\alpha_{n}^{J}+\delta_{ab}\beta_{n}^{a\dagger}\beta_{n}^{b}\right). \tag{5.4}\]
The normalization factor \(Z_{LvN}\) (the string partition function) is the trace
\[Z_{LvN}=\mathrm{Tr}\,e^{-\tilde{\beta}\left(p^{+}+H_{LvN}\right)}. \tag{5.5}\]
In general, \(\tilde{\omega}_{n}\) and \(\beta\) are free parameters. It is assumed that the system thermalizes at a time \(\tau_{0}\), with equilibrium temperature \(1/\beta\). The parameter \(\tilde{\omega}_{n}\) will be related to \(\omega_{n}(\tau_{0})\), as will become clear soon.
By using the Lewis-Riesenfeld invariant theorem, the Hamiltonian can be written in a time-dependent number basis and the density matrix can be written as
\[\rho_{LvN}(\beta,\tau)=\frac{1}{Z_{LvN}}\sum_{\{n_{i}^{I}\}}\exp\left(-\tilde{\beta}\left[\sum_{I,i}\tilde{\omega}_{i}n_{i}^{I}+p^{+}\right]\right)\lvert\{n_{i}^{I}\},p^{+},\tau\rangle\langle\{n_{i}^{I}\},p^{+},\tau\rvert, \tag{5.6}\]
where \(\{n_{i}^{I}\}=\{n_{i}^{I}\}_{i=-\infty}^{\infty}=n_{-\infty}^{I},\ldots,n_{\infty} ^{I}\). In order to simplify the notation, space-time indices will not be taken into account in the next steps. As the background is symmetrical with respect to the different transverse coordinates, one can calculate the contribution of one dimension and then take into account the other transverse dimensions. Now let us take advantage of the notation used in (4.7) to write the density matrix in position representation. In terms of the Fourier modes it takes the form
\[\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{ LvN}=\frac{1}{Z_{LvN}}\langle x_{1},x_{2},...|\rho_{LvN}(\tilde{\beta})|x_{1}^ {\prime},x_{2}^{\prime},...\rangle, \tag{5.7}\]
where the index \(1\) in the density matrix indicates that the contribution of only one transverse dimension is being taken into account. To simplify the notation, \(\rho^{1}(x,x^{\prime},\beta)_{LvN}\) will be used instead of \(\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}\). The next step is to write the string number state in the position representation. By writing \(\alpha_{n}(\tau)\) in the position representation, the LvN vacuum state is defined by
\[i\left[\phi_{n}^{*}\frac{\partial}{i\,\partial x_{n}}-\dot{\phi}_{n}^{*}x_{n}\right]\Psi_{0}=0,\ \ n\in\mathbb{Z}\,. \tag{5.8}\]
The normalized solution is
\[\Psi_{0}=\prod_{j\in\mathbb{Z}}\left(\frac{1}{2\phi_{j}}\right)^{1/4}e^{\frac{i}{2}\frac{\dot{\phi}_{j}^{*}}{\phi_{j}^{*}}x_{j}^{2}}\,. \tag{5.9}\]
The other states are constructed in the usual way by applying \(\alpha_{n}^{\dagger}(\tau)\). So, the string number state is written in the coordinate representation as
\[\Psi_{n}=\prod_{j}\frac{1}{\sqrt{2\pi\phi_{j}^{*}\phi_{j}}}\frac{1}{\sqrt{2^{n_{j}}n_{j}!}}\left(\frac{\phi_{j}}{\phi_{j}^{*}}\right)H_{n_{j}}(q_{j})e^{\frac{i}{2}\frac{\dot{\phi}_{j}^{*}}{\phi_{j}^{*}}x_{j}^{2}}\,, \tag{5.10}\]
where \(H_{n_{j}}(q_{j})\) are the Hermite polynomials and
\[q_{j}=\frac{x_{j}}{\sqrt{2\phi_{j}^{*}\phi_{j}}}\,. \tag{5.11}\]
Using (5.10), the density matrix for one transverse coordinate \(\rho^{1}(x,x^{\prime},\beta)\) is given by
\[\frac{e^{-\tilde{\beta}p^{+}}}{Z_{LvN}}\sum_{\{n_{j}\}}\prod_{j\in\mathbb{Z}}\frac{H_{n_{j}}(q_{j})H_{n_{j}}(q_{j}^{\prime})}{2\pi\phi_{j}^{*}\phi_{j}2^{n_{j}}n_{j}!}e^{-\tilde{\beta}\tilde{\omega}_{j}(n_{j}+\frac{1}{2})}e^{\frac{i}{2}\left[\frac{\dot{\phi}_{j}^{*}}{\phi_{j}^{*}}x_{j}^{2}-\frac{\dot{\phi}_{j}}{\phi_{j}}x_{j}^{\prime}{}^{2}\right]}\,. \tag{5.12}\]
Following the method developed in [16], the density matrix can be simplified using the following integral representation for the Hermite polynomials:
\[H_{n_{j}}(q_{j})=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(-2iz)^{n_{j}}e^{- (z_{j}+iq_{j})^{2}}dz. \tag{5.13}\]
Using this identity for each mode, one has
\[\rho^{1}(x,x^{\prime},\beta)=\frac{e^{-\tilde{\beta}p^{+}}}{Z}\sum_{\{n_{j}\}}\prod_{j\in\mathbb{Z}}\frac{1}{\sqrt{2\pi\phi_{j}^{*}\phi_{j}}}e^{-\tilde{\beta}\frac{\tilde{\omega}_{j}}{2}}I_{n_{j}}\;, \tag{5.14}\]
where
\[I_{n_{j}}=\frac{e^{-\tilde{\beta}\tilde{\omega}_{j}n_{j}}}{\pi 2^{n_{j}}n_{j}!}e^{q_{j}^{2}+q_{j}^{\prime 2}}\int\int dz_{j}dw_{j}(2iz_{j})^{n_{j}}(2iw_{j})^{n_{j}}e^{-z_{j}^{2}+2iz_{j}x_{j}}e^{-w_{j}^{2}+2iw_{j}x_{j}^{\prime}}. \tag{5.15}\]
Now, by defining the following matrices
\[A_{j}=2\begin{bmatrix}1&e^{-\tilde{\beta}\tilde{\omega}_{j}}\\ e^{-\tilde{\beta}\tilde{\omega}_{j}}&1\end{bmatrix},\,\,Y_{j}=\begin{bmatrix}z_{j}\\ w_{j}\end{bmatrix},\,\,B_{j}=-2\begin{bmatrix}x_{j}^{\prime}\\ x_{j}\end{bmatrix}, \tag{5.16}\]
after summing over each \(n_{j}\), the density matrix is
\[\rho^{1}(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}=\frac{e^{-\tilde{\beta}p^{+}}}{Z}\prod_{j\in\mathbb{Z}}\frac{1}{\sqrt{2\pi^{2}\phi_{j}^{*}\phi_{j}}}e^{q_{j}^{2}+q_{j}^{\prime 2}}\int\!\!\int dz_{j}\,dw_{j}\,\exp\left[-\frac{1}{2}Y_{j}^{\dagger}\mathbf{A}_{j}Y_{j}+iB_{j}^{\dagger}Y_{j}\right]. \tag{5.17}\]
Finally, one can use the result
\[\int\!\!\int dz\,dw\;e^{\left[-\frac{1}{2}Y^{\dagger}\mathbf{A}Y+iB^{\dagger}Y\right]}=\frac{2\pi}{\sqrt{\det\mathbf{A}}}\,e^{\left[-\frac{1}{2}B^{\dagger}\mathbf{A}^{-1}B\right]} \tag{5.18}\]
to write the density matrix in the form (after some algebra, and taking into account the eight transverse dimensions)
\[\begin{split}\rho(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}&=\frac{e^{-\tilde{\beta}p^{+}}}{Z_{LvN}}\prod_{n\in\mathbb{Z}}\left[\frac{1}{2\pi\phi_{n}^{*}(\tau)\phi_{n}(\tau)\sinh\tilde{\beta}\tilde{\omega}_{n}}\right]^{4}\\ &\quad\times\exp\left[4i\sum_{n\in\mathbb{Z}}\frac{d}{d\tau}\ln\left(\phi_{n}^{*}(\tau)\phi_{n}(\tau)\right)\left({x_{n}^{\prime}}^{2}-x_{n}^{2}\right)\right]\\ &\quad\times\exp\left[-\sum_{n\in\mathbb{Z}}\frac{1}{\phi_{n}^{*}(\tau)\phi_{n}(\tau)}\left\{(x_{n}^{\prime}+x_{n})^{2}\tanh\Big(\frac{\tilde{\beta}\tilde{\omega}_{n}}{2}\Big)+(x_{n}^{\prime}-x_{n})^{2}\coth\Big(\frac{\tilde{\beta}\tilde{\omega}_{n}}{2}\Big)\right\}\right].\end{split}\tag{5.19}\]
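The Gaussian formula (5.18) is easily checked numerically. A minimal sketch, assuming scipy; the matrix \(A\) and vector \(B\) below are arbitrary illustrative choices, with \(A\) symmetric and positive definite so that the integral converges:

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.6], [0.6, 1.5]])  # symmetric, positive definite (illustrative)
B = np.array([0.4, -0.3])               # illustrative real vector

def integrand(w, z):
    Y = np.array([z, w])
    # real part of exp(-Y.A.Y/2 + i B.Y); the imaginary part integrates to zero
    return np.exp(-0.5 * Y @ A @ Y) * np.cos(B @ Y)

numeric, _ = dblquad(integrand, -12, 12, lambda z: -12, lambda z: 12)
closed = 2 * np.pi / np.sqrt(np.linalg.det(A)) * np.exp(-0.5 * B @ np.linalg.inv(A) @ B)
print(numeric, closed)  # the two numbers should agree
```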
One can now compare this density matrix with the density matrix obtained with the time dependent Hamiltonian (5.2) defined at a time \(\tau_{0}\)
\[\rho_{T}(\tau_{0})=\frac{e^{-\tilde{\beta}\left(p^{+}+H_{lc}^{bos}(\tau_{0}) \right)}}{Z_{T}(\tau_{0})}, \tag{5.20}\]
where \(T=\frac{1}{\beta}\) is the equilibrium temperature and \(Z_{T}\) is the thermal partition function
\[Z_{T}(\tau_{0})=\mathrm{Tr}\,e^{-\tilde{\beta}\left(p^{+}+H_{lc}^{bos}(\tau_{0})\right)}\,. \tag{5.21}\]
It is easy to see that when \(\tilde{\omega}_{n}=\omega_{n}(\tau_{0})\), \(Z_{T}=Z_{LvN}\). Following the same steps as before, the instantaneous density matrix is written in the position representation as
\[\begin{split}\rho_{T}(\tau_{0})&=\frac{e^{-\tilde{\beta}p^{+}}}{Z_{T}(\tau_{0})}\prod_{j\in\mathbb{Z}}\left[\frac{\omega_{j}(\tau_{0})}{2\pi\sinh(\tilde{\beta}\omega_{j}(\tau_{0}))}\right]^{4}\\ &\quad\times\exp\left[-2\sum_{n\in\mathbb{Z}}\omega_{n}(\tau_{0})\left\{(x_{n}^{\prime}+x_{n})^{2}\tanh\Big(\frac{\tilde{\beta}\omega_{n}(\tau_{0})}{2}\Big)+(x_{n}^{\prime}-x_{n})^{2}\coth\Big(\frac{\tilde{\beta}\omega_{n}(\tau_{0})}{2}\Big)\right\}\right].\end{split}\tag{5.22}\]
In the adiabatic regime
\[\phi_{n}\phi_{n}^{*}\approx\frac{1}{2\omega_{n}},\ \left|\frac{\dot{\omega}_{n}}{\omega_{n}}\right|\ll 1, \tag{5.23}\]
one has
\[\rho(x_{1},x_{2},...,x_{1}^{\prime},x_{2}^{\prime}...,\beta)_{LvN}\approx\rho_{ T}(\tau_{0}), \tag{5.24}\]
if one sets \(\tilde{\omega}_{j}=\omega_{j}(\tau_{0})\). So, close to \(\tau_{0}\), the non-equilibrium string thermal state is given by (5.6) and (5.19), and (5.21) can be used to calculate, for example, the Hagedorn temperature as a function of \(\tau_{0}\). Before doing so, as a direct application of the invariant density matrix, let's calculate the worldsheet time-dependent two-point function at finite temperature, which is an important object for the study of non-equilibrium phenomena.
### The real-time thermal two-point function
As an application of the invariant density matrix, the time-dependent two-point function at finite temperature can be calculated. Let's start by using the LvN invariant operators to evaluate the two-point function at equal times and zero temperature, taking the expectation value with respect to the vacuum state \(|0,\tau\rangle\), which is annihilated by \(\alpha_{n}^{I}(\tau)\),
\[g^{IJ}(\sigma,\sigma^{\prime})=\langle 0,\tau|X^{I}(\sigma,\tau)X^{J}(\sigma^{ \prime},\tau)|0,\tau\rangle\,. \tag{5.25}\]
This is computed by inverting the relations (4.14):
\[x_{n}^{I}=\alpha_{n}^{I}(\tau)\phi_{n}(\tau)+\alpha_{n}^{I\dagger}(\tau)\phi_{n}^{*}(\tau). \tag{5.26}\]
In the adiabatic approximation one gets
\[g^{IJ}(\sigma,\sigma^{\prime},\tau)=\alpha^{\prime}\delta^{IJ}\sum_{n\in\mathbb{Z}}\frac{e^{in(\sigma-\sigma^{\prime})}}{\omega_{n}(\tau)}. \tag{5.27}\]
The time dependent finite temperature two-point function is calculated by taking the expectation value with respect to the thermal state, defined by the invariant density matrix
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\mathrm{Tr}\left[\rho_{LvN}X^{I}(\sigma,\tau)X^{J}(\sigma^{\prime},\tau)\right]. \tag{5.28}\]
The trace must be taken over the physical states of the closed string. Although the light-cone gauge solves the two worldsheet reparametrization constraints, one single consistency condition remains to be imposed to calculate the trace above, which is related to the circle isometry of the closed string. In order to fix this isometry on the Fock space, the bosonic physical states \(|\Psi\rangle_{b}\) must be annihilated by the \(\sigma\) translation generator \(\mathcal{P}_{b}\), which implies the level-matching constraint
\[\mathcal{P}_{b}|\Psi\rangle_{b}=0 \tag{5.29}\]
where
\[\mathcal{P}_{b}=\sum_{n\in\mathbb{Z}}n\left[\delta_{IJ}\alpha_{n}^{I\dagger}( \tau)\alpha_{n}^{J}(\tau)\right]. \tag{5.30}\]
Then, to ensure that the trace is taken over the physical states, one introduces the projector
\[\int_{-1/2}^{1/2}d\lambda\,e^{(2\pi i\lambda{\cal P}_{b})}, \tag{5.31}\]
such that the invariant thermal two point function is
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{1}{Z_{LvN}}\int dp^{+}e^{-\tilde{\beta}p^{+}}\int_{-1/2}^{1/2}d\lambda\sum_{\{n_{j}\}}\langle\{n_{j}(\tau)\}|e^{-\tilde{\beta}H_{LvN}}e^{2\pi i\lambda{\cal P}_{b}}X^{I}(\sigma,\tau)X^{J}(\sigma^{\prime},\tau)|\{n_{j}(\tau)\}\rangle, \tag{5.32}\]
where \(H_{LvN}\) is given in (5.4). By defining
\[k_{j}=\tilde{\beta}\omega_{j}(\tau_{0}),\ \ b_{j}=2\pi\lambda j, \tag{5.33}\]
the two point function can be written as
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{1}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda\sum_{\{n_{j}\}}\langle\{n_{j}\}|\exp\left[-\sum_{j\in\mathbb{Z}}(k_{j}+ib_{j})n_{j}\right]X(\sigma,\tau)X(\sigma^{\prime},\tau)|\{n_{j}\}\rangle. \tag{5.34}\]
After some algebra, the two point function is written as a zero temperature two point function plus a thermal contribution
\[G^{IJ}(\sigma,\sigma^{\prime},\tau)_{T}=\frac{\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda\,|\eta_{m}(\beta,\lambda)|^{8}\left[g^{IJ}(\sigma,\sigma^{\prime},\tau)+2\alpha^{\prime}\delta^{IJ}g(\sigma,\sigma^{\prime},\tau)_{T}\right] \tag{5.35}\]
where \(\eta_{m}(\beta,\lambda)\) is the "massive" eta function
\[\eta_{m}(\beta,\lambda)=\prod_{n\in\mathbb{N}}\frac{1}{1-e^{-\tilde{\beta}\omega_{n}(\tau_{0})+2\pi i\lambda n}} \tag{5.36}\]
and
\[\begin{split}g(\sigma,\sigma^{\prime},\tau)_{T}&=\sum_{n=0}^{\infty}\frac{1}{\omega_{n}}\left[\frac{e^{-(k_{n}-ib_{n})}}{1-e^{-(k_{n}-ib_{n})}}\cos n\sigma\cos n\sigma^{\prime}+\frac{e^{-(k_{n}+ib_{n})}}{1-e^{-(k_{n}+ib_{n})}}\sin n\sigma\sin n\sigma^{\prime}\right]\\ &=\sum_{p=1}^{\infty}\sum_{n=0}^{\infty}\frac{e^{-pk_{n}}}{\omega_{n}}\left[e^{ipb_{n}}\cos n\sigma\cos n\sigma^{\prime}+e^{-ipb_{n}}\sin n\sigma\sin n\sigma^{\prime}\right]\end{split}\tag{5.37}\]
is the thermal correction. In general, in finite temperature quantum field theories the \(\eta\) term in the numerator does not appear because it is just the \(Z\) factor of the denominator. Here these terms are left over due to the integrals over \(p^{+}\) and \(\lambda\) (note that \(\omega_{n}\) depends on \(p^{+}\)).
Let's focus on \(g(\sigma,\sigma^{\prime},\tau)_{T}\). The parity of the eta function with respect to \(\lambda\) can be used to rewrite the thermal contribution to the two-point function as
\[g(\sigma,\sigma^{\prime},\tau)_{T}=\sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}}\frac{e^{-pk_{n}}}{2\omega_{n}}\left[e^{in(2\pi p\lambda+(\sigma-\sigma^{\prime}))}+e^{in(2\pi p\lambda-(\sigma-\sigma^{\prime}))}\right]. \tag{5.38}\]
In order to investigate the leading short-distance behaviour, the Poisson resummation formula is used:
\[\sum_{n\in\mathbb{Z}}F(n)=\sum_{l\in\mathbb{Z}}\int_{-\infty}^{\infty}e^{2\pi iyl}F(y)dy, \tag{5.39}\]
along with the following representation of the modified Bessel function,
\[\int_{0}^{\infty}\frac{e^{-\beta\sqrt{x^{2}+\gamma^{2}}}}{\sqrt{x^{2}+\gamma^{2}}}\cos(bx)\,dx=K_{0}\left(\gamma\sqrt{b^{2}+\beta^{2}}\right), \tag{5.40}\]
to rewrite the finite temperature two point function as
\[\begin{split}G^{IJ}(\sigma,\sigma^{\prime},\tau)&=2\alpha^{\prime}K_{0}(fe^{-\tau}|\sigma-\sigma^{\prime}|)\\ &\quad+\frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda\,|\eta_{m}(\beta,\lambda)|^{8}\sum_{n\neq 0}K_{0}(fe^{-\tau}|2\pi n\alpha+\sigma-\sigma^{\prime}|)\\ &\quad+\frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda\,|\eta_{m}(\beta,\lambda)|^{8}\sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}}K_{0}\left(fe^{-\tau}\sqrt{(\beta\alpha p)^{2}+b_{n}^{+}(\sigma,\sigma^{\prime},\lambda)}\right)\\ &\quad+\frac{2\alpha^{\prime}}{Z}\int dp^{+}\int_{-1/2}^{1/2}d\lambda\,|\eta_{m}(\beta,\lambda)|^{8}\sum_{p=1}^{\infty}\sum_{n\in\mathbb{Z}}K_{0}\left(fe^{-\tau}\sqrt{(\beta\alpha p)^{2}+b_{n}^{-}(\sigma,\sigma^{\prime},\lambda)}\right),\end{split}\tag{5.41}\]
where \(b_{n}^{\pm}(\sigma,\sigma^{\prime},\lambda)=\left[2\pi\alpha(n+\lambda)\pm(\sigma-\sigma^{\prime})\right]^{2}\). We can see that the only term that has singularities when \(\sigma\rightarrow\sigma^{\prime}\) is the first one. In particular, the finite temperature contribution is finite at short distances. Let's analyze the behaviour of the two-point function in two different limits. In the limit \(fe^{-\tau}|\sigma-\sigma^{\prime}|\ll 1\), one can expand the Bessel function as
\[K_{0}\left(z\right)=-\left(\ln\left(\tfrac{1}{2}z\right)+\gamma\right)I_{0}\left(z\right)+\frac{\tfrac{1}{4}z^{2}}{(1!)^{2}}+(1+\tfrac{1}{2})\frac{(\tfrac{1}{4}z^{2})^{2}}{(2!)^{2}}+(1+\tfrac{1}{2}+\tfrac{1}{3})\frac{(\tfrac{1}{4}z^{2})^{3}}{(3!)^{2}}+\cdots, \tag{5.42}\]
where \(I_{0}(z)\) is the modified Bessel function of the first kind (\(I_{0}(0)=1\)) and \(\gamma\) is the Euler constant. So, the leading short-distance behavior of the two-point function is
\[\alpha^{\prime}\ln\frac{1}{fe^{-\tau}|\sigma-\sigma^{\prime}|}\,, \tag{5.43}\]
which has the same leading short-distance logarithmic behavior as the flat space one. On the other hand, in the limit \(fe^{-\tau}|\sigma-\sigma^{\prime}|\gg 1\), the Bessel function has the following asymptotic expansion
\[K_{0}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\sum_{k=0}^{\infty}\frac{\Gamma(k+1/2)}{k!\,\Gamma(1/2-k)}(2z)^{-k}\,. \tag{5.44}\]
In this limit, the thermal two point function has an exponential damping behavior. In particular, the two point function goes to zero near the null singularity. This may corroborate the idea raised in [17], where it was argued that the string gets highly excited and breaks up into bits propagating independently near the singularity.
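Both Bessel-function formulas used above can be checked numerically. A minimal sketch, assuming scipy; the parameter values are arbitrary illustrative choices:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# Check of (5.40): Fourier-type integral representation of K_0
beta, gamma, b = 1.3, 0.8, 0.5  # illustrative values with Re(beta) > 0
f_int = lambda x: np.exp(-beta * np.sqrt(x**2 + gamma**2)) / np.sqrt(x**2 + gamma**2) * np.cos(b * x)
numeric, _ = quad(f_int, 0, np.inf, limit=400)
print(numeric, k0(gamma * np.sqrt(b**2 + beta**2)))  # should agree

# Check of (5.44): truncated large-z asymptotic expansion of K_0
def k0_asym(z, kmax=4):
    s = sum(math.gamma(k + 0.5) / (math.factorial(k) * math.gamma(0.5 - k)) * (2 * z) ** (-k)
            for k in range(kmax + 1))
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z) * s

for z in (5.0, 10.0, 20.0):
    print(z, k0(z), k0_asym(z))  # agreement improves as z grows
```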
## 6 The light cone superstring partition function and Hagedorn behavior
In this section the light cone superstring partition function will be calculated. As previously shown, in the adiabatic regime the density matrix \(\rho_{T}(\tau)\) constructed with the non-invariant operators approaches the invariant density matrix as \(\tau\) approaches \(\tau_{0}\), where \(\tau_{0}\) is the time at which \(\rho_{T}(\tau)\) is evaluated. This result allows us to use the diagonal Hamiltonian (5.2) evaluated at \(\tau_{0}\) to calculate the partition function and, consequently, the Hagedorn temperature as a function of \(\tau_{0}\).
The light cone superstring partition function at time \(\tau_{0}\) is the trace
\[Z(\beta,\tau_{0}) = \text{Tr}e^{-\tilde{\beta}\left(p^{+}+H_{l.c.}(\tau_{0})\right)}. \tag{6.1}\]
Again, in order to fix the \(S^{1}\) isometry on the superstring Fock space and to ensure that the trace is taken over the physical states, one introduces the projector
\[\int d\lambda e^{(2\pi i\lambda\mathcal{P})}, \tag{6.2}\]
where the superstring sigma translation generator \(\mathcal{P}\) is
\[\mathcal{P}=\sum_{n\in\mathbb{Z}}n\left[\delta_{IJ}a_{n}^{I\dagger}a_{n}^{J}+ \delta_{ab}b_{n}^{\dagger a}b_{n}^{b}\right] \tag{6.3}\]
and \((a_{n}^{I\dagger},\,a_{n}^{J},\,b_{n}^{\dagger a},b_{n}^{b})\) are the operators (4.24) defined at time \(\tau_{0}\). The light cone superstring thermal partition function can be written as
\[Z(\beta,\tau_{0})=\int dp^{+}\int d\lambda\,e^{-\tilde{\beta}p^{+}}\text{Tr}\,e^{\left(-\tilde{\beta}H_{l.c.}(\tau_{0})+2i\pi\lambda\mathcal{P}\right)}. \tag{6.4}\]
The integrand of this partition function has interesting modular properties, which become apparent by defining the following complex parameter
\[\tau^{\prime}=\lambda+i\frac{\tilde{\beta}}{2\pi\alpha^{\prime}p^{+}}=\tau_{1 }+i\tau_{2}, \tag{6.5}\]
such that the partition function can be written as
\[Z(\beta,\tau_{0})=\int dp^{+}e^{-\beta p^{+}}\text{Tr}\left[e^{-2\pi\tau_{2}H _{l.c.}(\tau_{0})}e^{i2\pi\tau_{1}\mathcal{P}}\right]. \tag{6.6}\]
It is well known that the thermal partition function of the closed string can be written as a functional integral on the torus [18]. Let's recall here how the torus appears in the density matrix formalism that we are using. Note that the operator \(e^{-2\pi\tau_{2}H_{l.c.}}\) propagates the closed superstring through the imaginary light cone time \(-2\pi\tau_{2}\). In turn, the operator \(e^{i2\pi\tau_{1}\mathcal{P}}\) rotates the closed string by an angle \(2\pi\tau_{1}\). So, the trace taken over matrix elements of the form \(\langle i|e^{-2\pi\tau_{2}H_{l.c.}(\tau_{0})}e^{i2\pi\tau_{1}\mathcal{P}}|f\rangle\) can be represented as a path integral on a torus by gluing the ends of the cylinder of length \(2\pi\tau_{2}\) with a relative twist \(2\pi\tau_{1}\). Actually the twist is related to the Dehn twist associated with one of the cycles [19]. We then conclude that \(\tau^{\prime}\) is indeed the modulus of a torus represented by the parallelogram defined in the complex
plane with vertices at \(0\), \(\tau^{\prime}\), \(1\), \(\tau^{\prime}+1\) and identified opposite sides. Furthermore, the thermal density matrix allows observing a kind of torus generalization of the KMS condition that is a consequence of the closed string torus topology [20].
In order to explore the torus modular properties of the partition function, the integral over \(p^{+}\) is rewritten as an integral over the modulus \(\tau_{2}\), such that the UV asymptotic behavior of the partition function is recovered in the limit \(\tau_{2}\to 0\). Finally, after taking the trace over the bosonic and fermionic number states, the partition function is
\[Z(\beta,\tau_{0})=\frac{\tilde{\beta}}{2\pi\alpha^{\prime}}\int_{0}^{\infty}\frac{d\tau_{2}}{{\tau_{2}}^{2}}\int d\tau_{1}\exp\left(-\frac{\tilde{\beta}^{2}}{2\pi\,\alpha^{\prime}\,\tau_{2}}\right)z_{lc}(\tau^{\prime},\tau_{0}), \tag{6.7}\]
where
\[z_{lc}(\tau^{\prime},\tau_{0})=z_{lc}^{bos.}(\tau^{\prime},\tau_{0})\,z_{lc}^{ferm.}(\tau^{\prime},\tau_{0}) \tag{6.8}\]
is the product of the bosonic and fermionic contributions:
\[\begin{split}z_{lc}^{bos.}(\tau^{\prime},\tau_{0})&=\exp\left[-16\pi\tau_{2}\left(\frac{\tilde{f}e^{-\tau_{0}}}{2}+\sum_{n=1}^{\infty}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}\right)\right]\\ &\quad\times\left[\prod_{n\in\mathbb{Z}}\left(1-\exp[2\pi(-\tau_{2}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}+i\tau_{1}n)]\right)\right]^{-8},\end{split}\tag{6.9}\]
\[\begin{split}z_{lc}^{ferm.}(\tau^{\prime},\tau_{0})&=\exp\left[16\pi\tau_{2}\left(\frac{\tilde{f}e^{-\tau_{0}}}{2}+\sum_{n=1}^{\infty}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}\right)\right]\\ &\quad\times\left[\prod_{n\in\mathbb{Z}}\left(1+\exp[2\pi(-\tau_{2}\sqrt{n^{2}+\tilde{f}^{2}e^{-2\tau_{0}}}+i\tau_{1}n)]\right)\right]^{8}.\end{split}\tag{6.10}\]
As usual in pp waves, the partition function is written in terms of generalized "massive" modular functions. Note that the contribution from the Ramond field now depends on the torus modulus \(\tau_{2}\) through \(p^{+}\), so
\[\tilde{f}=\frac{\tilde{\beta}}{2\pi\tau_{2}}f_{0}. \tag{6.11}\]
This does not happen in the time dependent pp wave model studied in [21], hence the UV behavior of the partition functions of the two models is completely different. Actually, owing to the scale invariance of the metric, the partition function of the model studied in [21] has the same UV behavior as the string partition function in flat space.
We can now study the behavior of the partition function for each time \(\tau_{0}\) at which the thermalization occurs. In particular, the UV behavior can be studied for each \(\tau_{0}\). Before doing so, let's assume that the thermalization occurs close to the null singularity and try to extrapolate this result to the strong coupling region. As can be seen, there are no divergences in the partition function (in addition to those that we should have in the UV limit) due to the singularity. This can be easily proven by performing a sequence of steps, which will also be useful for analyzing the UV behavior. Let's start by taking the logarithm of \(z_{lc}(\tau^{\prime},\tau_{0})\)
\[\begin{split}\ln z_{lc}(\tau^{\prime},\tau_{0})&=8\sum_{n\in\mathbb{Z}}\left[\log\left(1+e^{-2\pi\tau_{2}\sqrt{m^{2}+n^{2}}+2\pi i\tau_{1}n}\right)-\log\left(1-e^{-2\pi\tau_{2}\sqrt{m^{2}+n^{2}}+2\pi i\tau_{1}n}\right)\right]\\ &=\sum_{n\in\mathbb{Z}}\sum_{p=1}^{\infty}\frac{1}{p}\,e^{-2\pi p\tau_{2}\sqrt{m^{2}+n^{2}}}\,F(n,p,\tau_{1}),\end{split}\tag{6.12}\]
where \(m=m(\tau_{0})=\tilde{f}e^{-\tau_{0}}\) and
\[F(n,p,\tau_{1})=8\,e^{2\pi inp\tau_{1}}(1-\cos\pi p). \tag{6.13}\]
Next, by making the replacement \(r=p^{2}s\) and using the identity
\[e^{-z}=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}dr\,r^{-1/2}e^{-r-\frac{z^{2}}{4r}}\,, \tag{6.14}\]
equation (6.12) becomes
\[\begin{split}\ln z_{lc}(\tau^{\prime},\tau_{0})&=\frac{1}{\sqrt{\pi}}\sum_{n\in\mathbb{Z}}\sum_{p=1}^{\infty}\int_{0}^{\infty}ds\,s^{-1/2}\,e^{-p^{2}s-(\pi\tau_{2})^{2}\left(\tilde{f}^{2}e^{-2\tau_{0}}+n^{2}\right)/s}\,F(n,p,\tau_{1})\\ &=2\sum_{n\in\mathbb{Z}}\sum_{p=1}^{\infty}\sqrt{\frac{\tau_{2}}{p}}\left(\tilde{f}^{2}e^{-2\tau_{0}}+n^{2}\right)^{1/4}K_{1/2}\left(2\pi p\tau_{2}\sqrt{\tilde{f}^{2}e^{-2\tau_{0}}+n^{2}}\right)F(n,p,\tau_{1})\,,\end{split}\tag{6.15}\]
where the following integral representation of the modified Bessel function was used:
\[\int_{0}^{\infty}ds\,s^{\nu-1}e^{-bs-\frac{a}{s}}=2\left(\frac{a}{b}\right)^{\nu/2}K_{\nu}\left(2\sqrt{ab}\right)\,. \tag{6.16}\]
Now, just by using the asymptotic behavior of the Bessel function
\[\lim_{x\to\infty}K_{\nu}(x)\approx\sqrt{\frac{\pi}{2x}}e^{-x}\,, \tag{6.17}\]
it can easily be seen that there are no divergences in the partition function arising from the singular behavior of the metric as \(\tau_{0}\to-\infty\). This is not surprising, since the genus-one partition function does not depend on the string coupling.
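This exponential suppression can be made explicit numerically: \(K_{1/2}\) is elementary, \(K_{1/2}(x)=\sqrt{\pi/2x}\,e^{-x}\), so every term of (6.15) dies off as \(\tau_{0}\to-\infty\). A minimal sketch, assuming scipy, with arbitrary illustrative values of \(\tau_{2}\) and \(f\), and bounding \(|F(n,p,\tau_{1})|\leq 16\):

```python
import numpy as np
from scipy.special import kv

tau2, f = 0.7, 0.5  # illustrative values

def ln_z_bound(tau0, nmax=200, pmax=200):
    """Bound on |ln z_lc| from (6.15), using |F(n, p, tau_1)| <= 16."""
    n = np.arange(-nmax, nmax + 1)[:, None]
    p = np.arange(1, pmax + 1)[None, :]
    w = np.sqrt(f**2 * np.exp(-2 * tau0) + n**2)
    terms = 2 * np.sqrt(tau2 / p) * np.sqrt(w) * kv(0.5, 2 * np.pi * p * tau2 * w) * 16
    return terms.sum()

for tau0 in (0.0, -2.0, -5.0):
    print(tau0, ln_z_bound(tau0))  # decreases: no divergence as tau0 -> -infinity
```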
Next, the UV behavior of the partition function will be studied. The product that appears in the light cone partition function \(z_{lc}(\tau^{\prime},\tau_{0})\) is a massive generalization of the Theta functions. The modular properties of this kind of "generalized" Theta functions were studied in [22; 23; 24]. Here, the thermalization time \(\tau_{0}\) plays a role in the modular transformations.2 Consider the following generalized modular function
Footnote 2: For the time-independent case, in references [23; 24] the modular properties are studied keeping \(m\) independent of \(\tau_{2}\), while in [22] the dependence of \(m\) on \(\tau_{2}\) is taken into account.
\[Z_{a,b}(\tau^{\prime},\tau_{0})=\left|\prod_{n=-\infty}^{\infty}(1-e^{-2\pi\tau_{2}\sqrt{m^{2}(\tau_{0})+(n+b)^{2}}+2\pi i\tau_{1}(n+b)+2\pi ia})\right|^{8} \tag{6.18}\]
such that \(z_{lc}(\tau^{\prime},\tau_{0})\) can be written as
\[z_{lc}(\tau^{\prime},\tau_{0})=\frac{Z_{1/2,0}(\tau^{\prime},\tau_{0})}{Z_{0,0}(\tau^{\prime},\tau_{0})}\,. \tag{6.19}\]
Following a strategy similar to the one used in [23; 24], which in fact consists of the same steps developed in (6.12), (6.14) and (6.15), together with the Poisson resummation formula, it can be shown that
\[\frac{\ln z_{lc}(\tau^{\prime},\tau_{0})}{8} = \ln Z_{0,\frac{1}{2}}(\frac{\tau^{\prime}}{|\tau^{\prime}|^{2}}, \tau_{0}-\ln|\tau^{\prime}|)-\ln Z_{0,0}(\frac{\tau^{\prime}}{|\tau^{\prime}|^{ 2}},\tau_{0}-\ln|\tau^{\prime}|) \tag{6.20}\] \[+ 2\pi\frac{\tau_{2}}{|\tau^{\prime}|^{2}}\left[\Delta_{\frac{1}{2 }}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})-\Delta_{ 0}(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}})\right]\]
where \(\Delta_{1/2}\!\left(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}}\right)\) and \(\Delta_{0}\!\left(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}}\right)\) are defined by3
Footnote 3: Actually, \(\Delta_{b}(m)\) corresponds to the zero-point energy of a 2D massive complex scalar boson \(\phi\) with twisted boundary condition \(\phi(\tau,\sigma+\pi)=e^{2\pi ib}\phi(\tau,\sigma)\).
\[\Delta_{b}(m)=-\frac{1}{2\pi^{2}}\sum_{p=1}^{\infty}\cos(2\pi bp)\int_{0}^{ \infty}ds\ e^{-p^{2}s-\frac{\pi^{2}m^{2}}{s}}=-\frac{m}{\pi}\sum_{p=1}^{\infty }\frac{\cos(2\pi bp)}{p}K_{1}\left(2\pi mp\right) \tag{6.21}\]
and \(K_{1}\) is a modified Bessel function of the second kind. Using (6.20) and setting \(\tau_{1}=0\), the leading behavior of \(Z(\beta,\tau_{0})\) as \(\tau_{2}\to 0\) is
\[\exp\left\{-\frac{\tilde{\beta}^{2}}{2\pi\alpha^{\prime}\tau_{2}}+\frac{16\pi}{\tau_{2}}\left[\Delta_{\frac{1}{2}}\!\left(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}}\right)-\Delta_{0}\!\left(\frac{\tilde{\beta}fe^{-\tau_{0}}|\tau^{\prime}|}{2\pi\tau_{2}}\right)\right]\right\} \tag{6.22}\]
Thus, the partition function starts to diverge when the exponent above is zero. Hence, the Hagedorn temperature satisfies the following equation
\[\frac{\beta_{H}^{2}}{4\pi\alpha^{\prime}}=16\pi\left[\Delta_{\frac{1}{2}}\!\left(\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}}\right)-\Delta_{0}\!\left(\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}}\right)\right]. \tag{6.23}\]
In the asymptotically flat region, \(fe^{-\tau_{0}}\ll 1\), one gets
\[T_{H}=\frac{1}{2\pi\sqrt{2\alpha^{\prime}}}\left(1+2\sqrt{\alpha^{\prime}}fe^{ -\tau_{0}}+...\right), \tag{6.24}\]
where the dots mean higher order terms in the Ramond field. The first term is just the flat space result for the Hagedorn temperature. Note that, as time goes from \(\infty\) to \(-\infty\), the Hagedorn temperature increases as the thermalization time moves towards the singularity. It can be shown that this happens at all instants of time, and not only in the approximation used in (6.24). Let's rewrite the difference that appears in (6.23) as
\[\Delta_{\frac{1}{2}}\!\left(\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}}\right)-\Delta_{0}\!\left(\frac{\beta_{H}fe^{-\tau_{0}}}{2\pi\sqrt{2}}\right)=\frac{1}{2\pi^{2}}\sum_{p=1}^{\infty}[1-(-1)^{p}]\int_{0}^{\infty}ds\ e^{-p^{2}s-\frac{\beta_{H}^{2}f^{2}e^{-2\tau_{0}}}{8s}}, \tag{6.25}\]
so, the derivative of \(\beta_{H}\) with respect to \(-\tau_{0}\) is
\[-(2f\beta_{H}e^{-\tau_{0}})^{2}\left[\sum_{p=1}^{\infty}[1-(-1)^{p}]\int_{0}^{\infty}\frac{ds}{s}\ e^{-p^{2}s-\frac{\beta_{H}^{2}f^{2}e^{-2\tau_{0}}}{8s}}\right]<0 \tag{6.26}\]
and one concludes that the Hagedorn temperature \(T_{H}=\frac{1}{\beta_{H}}\) increases as \(\tau_{0}\) goes to \(-\infty\), that is, as the string approaches the null singularity.
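To illustrate this monotonic behavior, one can solve (6.23) numerically for \(\beta_{H}\) at several thermalization times. A minimal sketch, assuming scipy, with illustrative values \(\alpha^{\prime}=1\) and \(f=0.3\); note that in the \(m\to 0\) limit \(\Delta_{0}\to-1/12\) and \(\Delta_{1/2}\to 1/24\), so the bracket in (6.23) tends to \(1/8\) and \(T_{H}\) reduces to the flat-space value of (6.24):

```python
import numpy as np
from scipy.special import k1
from scipy.optimize import brentq

alpha_p, f = 1.0, 0.3  # illustrative values in string units

def delta(b, m, pmax=20000):
    """Delta_b(m) from the Bessel-sum form of (6.21)."""
    p = np.arange(1, pmax + 1)
    return -(m / np.pi) * np.sum(np.cos(2 * np.pi * b * p) / p * k1(2 * np.pi * m * p))

print(delta(0.0, 1e-4), delta(0.5, 1e-4))  # close to -1/12 and 1/24

def hagedorn(beta, tau0):
    # left-hand side minus right-hand side of (6.23)
    m = beta * f * np.exp(-tau0) / (2 * np.pi * np.sqrt(2))
    return beta**2 / (4 * np.pi * alpha_p) - 16 * np.pi * (delta(0.5, m) - delta(0.0, m))

for tau0 in (3.0, 0.0, -3.0):
    beta_H = brentq(hagedorn, 1e-3, 50.0, args=(tau0,))
    print(tau0, 1.0 / beta_H)  # T_H increases as tau0 decreases toward the singularity
```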
The Hagedorn behavior close to the null singularity can now be cleared up just by analyzing the asymptotic behavior of \(\Delta_{b}(x)\) defined in (6.21). This can be done using the method of steepest descents (see, for example, chapter 12 of [25]), or one can just use (6.16) to represent \(\Delta_{0}(\tau_{0})\) and \(\Delta_{1/2}(\tau_{0})\) as modified Bessel functions and then use (6.17). As the higher values of \(p\) are exponentially suppressed, the most relevant term in the series (6.25) is given by taking \(p=1\),
\[\lim_{\tau_{0}\rightarrow-\infty}\beta_{H}(\tau_{0})=\left[32\pi\alpha^{\prime}\sqrt{\frac{f\sqrt{2}}{\pi}}\right]^{4/3}\lim_{\tau_{0}\rightarrow-\infty}e^{-\frac{2\tau_{0}}{3}}\exp\left(-\frac{4\beta_{H}fe^{-\tau_{0}}}{3\sqrt{2}}\right)=0\,. \tag{6.27}\]
So, as the singularity is approached, the Hagedorn temperature is pushed toward infinity. It is tempting to conclude that there is no Hagedorn transition at any finite temperature close to the singularity. This point will be brought up again in the conclusions.
## 7 Conclusions
In the present work, the LvN formulation was used to define an invariant thermal density matrix for a superstring propagating in a time dependent plane wave background with a constant self-dual Ramond-Ramond 5-form and a linear dilaton in the light-like direction. The metric has a cosmological singularity at \(\tau\rightarrow-\infty\) and it is asymptotically flat at \(\tau\rightarrow\infty\).
In the formulation used here, it is assumed that the system enters thermodynamic equilibrium adiabatically at a certain time \(\tau_{0}\). The adiabatic approximation is controlled by the Ramond field. With this assumption, it was possible to use the density matrix to calculate the Hagedorn temperature as a function of \(\tau_{0}\). It has been shown that the Hagedorn temperature increases as the string propagates from the asymptotically flat region towards the singularity. In particular, the calculation shows that the Hagedorn temperature diverges at the singularity, which could indicate that, in this background, there is no Hagedorn behavior near the singularity. However, we need to be careful in extrapolating the result found here to this region, because it is the region of strong coupling, owing to the dependence of the dilaton on time. It is important to keep in mind that the time that appears in the Hagedorn temperature is the time at which thermalization occurs. So, in addition to the fact that a free string gas cannot be defined in this region, the very notion of thermodynamic equilibrium is not so simple to assume in the strong coupling
limit. This is because in non-interacting thermodynamics one always starts with a weakly interacting gas, and the coupling is then adiabatically turned down until the free gas is reached. If one does not start with the interacting gas, the equilibration process cannot occur. It is clear how to picture this process in the small dilaton region, but in the strong coupling limit it is not. Note that the Ramond field also plays a role here, acting as the controller of the adiabatic dynamics.
On the other hand, in string theory the perturbative string sometimes offers a window into the non-perturbative sector of the theory. This is the case, for example, of D-branes seen as boundary states in the closed string channel. Perhaps, in view of [11; 26], the entanglement entropy may play this role in the present case. Let's clear up this point. It was shown in reference [11] that the left/right entanglement entropy is finite when evaluated at the singularity and thus can be used to probe the null singularity. This entropy is actually the entropy related to the vacuum state as seen by asymptotic observers; this state is in effect a boundary state. Indeed, in reference [26], for the time dependent pp wave studied in [27], it was shown that the vacuum state, as seen by asymptotic observers, actually represents a D-brane described in the closed string channel. However, for that model, the dilaton remained small near the singularity. It would be interesting to verify this point for the model studied here. Also, it would be interesting to calculate the Hagedorn temperature as a function of the dilaton. To this end, the finite temperature string field theory formulation developed in [28] will be extremely useful. This is work in progress.
Finally, the invariant density matrix also allowed us to calculate the thermal two-point function in real time. It was shown that the real-time thermal two-point function can be written in terms of generalized Theta functions. The modular properties of these functions can be used to study non-equilibrium thermodynamic quantities, such as transport coefficients and the scrambling time. This is left for future work.
| |
2309.08022 | Empowering Visually Impaired Individuals: A Novel Use of Apple Live
Photos and Android Motion Photos | Numerous applications have been developed to assist visually impaired
individuals that employ a machine learning unit to process visual input.
However, a critical challenge with these applications is the sub-optimal
quality of images captured by the users. Given the complexity of operating a
camera for visually impaired individuals, we advocate for the use of Apple Live
Photos and Android Motion Photos technologies. In this study, we introduce a
straightforward methodology to evaluate and contrast the efficacy of
Live/Motion Photos against traditional image-based approaches. Our findings
reveal that both Live Photos and Motion Photos outperform single-frame images
in common visual assisting tasks, specifically in object classification and
VideoQA. We validate our results through extensive experiments on the ORBIT
dataset, which consists of videos collected by visually impaired individuals.
Furthermore, we conduct a series of ablation studies to delve deeper into the
impact of deblurring and longer temporal crops. | Seyedalireza Khoshsirat, Chandra Kambhamettu | 2023-09-14T20:46:35 | http://arxiv.org/abs/2309.08022v1 | Empowering Visually Impaired Individuals: A Novel Use of Apple Live Photos and Android Motion Photos
###### Abstract
Numerous applications have been developed to assist visually impaired individuals that employ a machine learning unit to process visual input. However, a critical challenge with these applications is the sub-optimal quality of images captured by the users. Given the complexity of operating a camera for visually impaired individuals, we advocate for the use of Apple Live Photos and Android Motion Photos technologies. In this study, we introduce a straightforward methodology to evaluate and contrast the efficacy of Live/Motion Photos against traditional image-based approaches. Our findings reveal that both Live Photos and Motion Photos outperform single-frame images in common visual assisting tasks, specifically in object classification and VideoQA. We validate our results through extensive experiments on the ORBIT dataset, which consists of videos collected by visually impaired individuals. Furthermore, we conduct a series of ablation studies to delve deeper into the impact of deblurring and longer temporal crops.
**Keywords:** Live Photo, Motion Photo, Deep Learning, Visually Impaired
## 1 Introduction
_Live Photos_ and _Motion Photos_, technologies from Apple and Android, allow a single photo to function as a still image and, when activated, as a short video with motion and sound. These technologies leverage a background feature that continuously captures images when the Camera app is opened, regardless of whether the shutter button is pressed. When a Live/Motion Photo is taken, the device records this continuous stream of photos, capturing moments before and after the shutter press. These images are stitched into a three-second animation, complemented by optional audio recorded during the same span. Live/Motion Photos surpass video clips in their ease of capture and standardized format. Figure 1 depicts the three main components of a Live/Motion Photo, and Figure 5 shows screenshots of the Apple iOS environment for capturing and working with Live Photos.
People with visual impairments often rely on assistive devices that provide insights about their surroundings. For instance, people with low vision often rely on magnification tools to better observe the content of interest, or those with low vision and no vision rely on on-demand technologies [1, 23, 13] that deliver answers to submitted visual questions. Two fundamental computer vision tasks in these aids are object classification and video question answering (VideoQA). Object classification, though basic, is a key component of more advanced methods [13]. In contrast, VideoQA accurately responds to inquiries about any video, empowering visually impaired people to access information about real-world or online videos [14].
A significant problem with the current visual assisting technologies is the limitation of the visually impaired people to capture the desired image for these technologies. The images taken by blind people have different quality flaws, such as blurriness, brightness, darkness, obstruction, and so on [12]. Image quality issues may make it difficult for humans and machine learning systems to recognize image content, causing the system to provide set responses, such as "unanswerable". Prior research has indicated that this can
be frustrating for people with visual impairments using accessible applications, requiring extra time and effort to determine what is going wrong and get an answer [2]. Figure 2 shows a recorded video by a visually impaired user where half of the frames cover only a small portion of the object.
We posit that the additional contextual information provided by Live/Motion Photos can significantly enhance the ability of the assistance systems to accurately interpret and analyze the content of the images. Not only does this approach provide multiple frames for analysis, which could increase the chances of capturing a clear shot of the subject, but it also offers temporal information that can be critical for understanding dynamic scenarios. Through the course of this paper, we will present empirical evidence demonstrating how the use of Live/Motion Photos can mitigate the challenges faced by visually impaired individuals in capturing clear images.
Our contributions are as follows:
* We introduce a straightforward approach for comparing Live/Motion Photos to images.
* We evaluate state-of-the-art methods on Live/Motion Photos and images for object classification and VideoQA tasks.
* We conduct ablation studies on the impact of deblurring and varying temporal crop lengths.
## 2 Related Work
A plethora of commercial systems have been developed to empower individuals with visual impairments. These commercial systems are categorized into two distinct types: human-in-the-loop systems and end-to-end (E2E) automated systems. Human-in-the-loop systems are designed to bridge the gap between visually impaired individuals and sighted volunteers or staff members. Through these systems, users can make inquiries or seek assistance with visual tasks. Some notable examples of human-in-the-loop platforms are BeMyEyes, BeSpecular, and Aira [1, 2]. Contrary to human-in-the-loop systems, end-to-end systems rely on artificial intelligence and cloud computing to provide visual assistance to users. These systems do not involve human intermediaries. Examples of E2E systems include TapTapSee and Microsoft's Seeing AI.
A critical factor that determines the efficacy of these systems is the clarity and relevance of the content within the images that are sent for analysis. Given that visually impaired individuals might face challenges in capturing well-composed images, ensuring that the subject matter of the image is clear and discernible is not a trivial task. In this paper, we introduce an innovative approach to alleviate this challenge by utilizing Live Photos or Motion Photos.
Figure 1: Apple Live Photo structure. A Live/Motion Photo consists of a key photo, a three-second-long video, and the optional corresponding audio. The key photo is the middle frame of the video, but it can be changed to another frame.
## 3 Method
Studying Live/Motion Photos poses a significant challenge due to the absence of existing datasets. The process of creating a comprehensive dataset solely from visually impaired users is laborious and complex [16]. To address this issue, we leverage pre-existing video datasets collected by visually impaired individuals or those containing content relevant to the daily experiences of the blind. By extracting three-second temporal crops from these videos, we simulate Live/Motion Photos for tasks such as object classification and VideoQA. This enables us to evaluate and compare the effectiveness of different methods on both simulated Live/Motion Photos and standard images.
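A minimal sketch of this simulation step (plain numpy; the function name and the array-based video representation are our own illustrative choices, not code from any of the cited libraries):

```python
import numpy as np

def simulate_live_photo(frames, fps, clip_seconds=3.0, rng=None):
    """Cut a random clip of `clip_seconds` from a video, mimicking a Live/Motion Photo.

    `frames` has shape (T, H, W, C). Returns the clip and its middle (key) frame,
    mirroring the key-photo convention of Figure 1.
    """
    rng = rng or np.random.default_rng()
    clip_len = min(len(frames), int(round(clip_seconds * fps)))
    start = int(rng.integers(0, len(frames) - clip_len + 1))
    clip = frames[start:start + clip_len]
    key_frame = clip[len(clip) // 2]
    return clip, key_frame

# toy usage: a 10-second video at 30 fps with 64x64 RGB frames
video = np.zeros((300, 64, 64, 3), dtype=np.uint8)
clip, key = simulate_live_photo(video, fps=30)
print(clip.shape, key.shape)  # (90, 64, 64, 3) (64, 64, 3)
```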
### Object Classification
To demonstrate the impact of Live/Motion Photos on object classification accuracy, we conduct experiments using the ORBIT dataset [16], a collection of 3,822 videos spanning 486 object categories, recorded on their cell phones by 77 people who are blind or low-vision. Each video is meant to capture one main object, although the object may not be visible in all frames. The videos vary in length from one second to two minutes.
To simulate Live/Motion Photos, we create short video clips with the same length as Live/Motion Photos from ORBIT and compare the performance of different image classifiers to video classifiers on these clips. To this aim, we train each image classifier on image frames of the videos and report the average classification accuracy of the frames. To evaluate the video classifiers, we train and test each method on random temporal crops of three seconds. We choose the top-performing image and video classifiers; specifically, ResNet [11], MViTv2 [12], and EfficientNetV2 [13] for image classification, and ViViT [1] and MViTv2 [12] for video classification. We use the same hyper-parameters and setup as in the original implementations, and the input size is fixed across all the methods. Following [16], we use frame accuracy as the evaluation metric for the frame-by-frame classification and video accuracy for the holistic video classification. Frame accuracy is the average number of correct predictions per frame divided by the total number of frames in a video. Video accuracy is the number of correct video-level predictions divided by the total number of videos.
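The two metrics can be sketched as follows (an illustrative implementation with integer class labels; the function names are our own):

```python
import numpy as np

def frame_accuracy(frame_preds, labels):
    """frame_preds: one 1-D array of per-frame predictions per video; labels: one int per video."""
    correct = sum((p == y).sum() for p, y in zip(frame_preds, labels))
    total = sum(len(p) for p in frame_preds)
    return correct / total

def video_accuracy(video_preds, labels):
    """video_preds: one predicted class per video."""
    return float(np.mean([p == y for p, y in zip(video_preds, labels)]))

# toy usage: two videos with ground-truth classes 3 and 2
preds = [np.array([3, 3, 1, 3]), np.array([0, 2, 2])]
print(frame_accuracy(preds, [3, 2]))   # 5/7
print(video_accuracy([3, 0], [3, 2]))  # 0.5
```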
Table 1 reports the object classification accuracy. The highest accuracy using images is 70.9% and achieved by EfficientNetV2-L. The results show that video classification approaches outperform frame-by-frame classification. More specifically, for Live/Motion Photos (videos of three seconds long), MViTv2 achieves an accuracy of 77.1% which is an improvement of 6.2% over EfficientNetV2-L. Since MViTv2 is designed for both image and video classification, it exhibits the benefit of using video clips over images better than other methods. Similarly, ViViT reaches an accuracy of 74.9% which is higher than EfficientNetV2-L by a margin of 4.0%. This
Figure 2: A visually impaired user trying to record a video of a keyboard [16]. Adjusting the camera field of view to cover a whole object is a challenging task for blind users. The frames are uniformly sampled, and the total video length is five seconds.
result strongly supports the effectiveness of Live/Motion Photos over single images.
### Video Question Answering
We investigate the effectiveness of Live/Motion Photos in the VideoQA task. We compare the performance of multiple VQA methods on image frames to the performance of VideoQA methods on video clips with the same length as Live/Motion Photos. While there are numerous video question answering datasets, we choose the ActivityNet-QA dataset [23] since it contains video clips similar to the day-to-day life of people with visual impairments. The ActivityNet-QA dataset adds question-answer pairs to a subset of the videos of the ActivityNet dataset [1]. It contains 5,800 videos with 58,000 human-annotated question-answer pairs, divided as 3,200/1,800/800 videos for the train/val/test splits. The dataset covers 200 different types of daily human activities, which makes it suitable for visual assisting applications.
We train image-based methods on randomly drawn frames with their corresponding question-answer pairs from the ActivityNet-QA dataset. Similarly, we train video-based methods on random temporal crops with the same length as Live/Motion Photos. We employ mPLUG [11] and BEiT-3 [27] as the image-based methods and Just Ask [27] and Singularity [10] as the video-based methods for Live/Motion Photos. These methods achieve state-of-the-art accuracy in the VQA and VideoQA tasks, and their implementation code is publicly available. For each method, we re-use the original hyper-parameters that achieve the best results.
As for the evaluation criteria, we use accuracy, a commonly used criterion to measure the performance of classification tasks. For the QA pairs in the test set with size \(N\), given any testing question \(\mathbf{q}_{i}\in Q\) and its corresponding ground-truth answer \(\mathbf{y}_{i}\in Y\), we denote the predicted answer from the model by \(\mathbf{a}_{i}\). \(\mathbf{a}_{i}\) and \(\mathbf{y}_{i}\) correspond to a sentence that can be seen as a set of words. The accuracy measure is defined as:
\[Accuracy=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}[\mathbf{a}_{i}=\mathbf{y}_{i}] \tag{1}\]
where \(\mathbf{1}[\cdot]\) is an indicator function such that its output is one only if \(\mathbf{a}_{i}\) and \(\mathbf{y}_{i}\) are identical, and zero otherwise [23]. We follow previous evaluation protocols for open-ended settings [27, 23, 24] and use a fixed vocabulary of training answers.
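A direct sketch of Eq. (1), assuming the exact-match convention above:

```python
def qa_accuracy(predicted, ground_truth):
    """Exact-match accuracy of Eq. (1): a prediction scores 1 iff it is identical
    to the ground-truth answer."""
    assert len(predicted) == len(ground_truth)
    hits = sum(a == y for a, y in zip(predicted, ground_truth))
    return hits / len(ground_truth)

print(qa_accuracy(["a dog", "red", "two"], ["a dog", "blue", "two"]))  # 0.666...
```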
Table 2 reveals the results of our experiments for the VideoQA task. The highest accuracy for image-based approaches is 30.1% and achieved by BEiT-3. Both VideoQA methods outperform the VQA methods. More specifically, using Live/Motion Photos, Singularity achieves the highest accuracy of 38.6%, which is more than 8% higher than BEiT-3 accuracy. Similarly, Just Ask reaches an accuracy of 34.9% which is 4.8% higher than BEiT-3.
The outcomes of our experiments in object classification and VideoQA confirm the benefit of using Live/Motion Photos over images.
\begin{table}
\begin{tabular}{l|c} Method & Accuracy \\ \hline ResNet-152 [11] & 69.2 \\ MViTv2-B [11] & 70.7 \\ EfficientNetV2-L [26] & 70.9 \\ \hline ViViT [2] & 74.9 \\ MViTv2-B [11] & 77.1 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of frame-by-frame to holistic object classification methods on the ORBIT test set. The top three methods use images, and the bottom two use Live/Motion Photos.
\begin{table}
\begin{tabular}{l|c} Method & Accuracy \\ \hline mPLUG [11] & 28.9 \\ BEiT-3 [27] & 30.1 \\ \hline Just Ask [27] & 34.9 \\ Singularity [10] & 38.6 \\ \hline \end{tabular}
\end{table}
Table 2: Results of image-based and video-based methods for the VideoQA task on the ActivityNet-QA test set. mPLUG and BEiT-3 use images, while Just Ask and Singularity use Live/Motion Photos.
## 4 Deblurring Impact
Blurring is a prevalent issue in images and videos captured by individuals with visual impairments [11], and this issue can adversely affect the efficacy of assistive technologies. In this section, we undertake a systematic investigation to discern the potential benefits of deblurring on the accuracy of object classification and VideoQA. For the deblurring process, we employ the FGST method [14], a state-of-the-art video deblurring algorithm that amalgamates Transformer modules with optical flow estimation. We then proceed to apply FGST on two datasets, namely ORBIT and ActivityNet-QA, to deblur the visual content. With the deblurred datasets, we replicate the experiments as outlined in Section 3.1 and Section 3.2.
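Schematically, the deblurring stage simply wraps the evaluation data, as in the sketch below; `deblur_clip` is a placeholder callable standing in for the actual deblurring network, since FGST's real interface is not reproduced here:

```python
import numpy as np

def deblur_dataset(videos, deblur_clip):
    """Apply a video-deblurring model to every clip before training/evaluation.

    `videos` is an iterable of (T, H, W, C) uint8 arrays; `deblur_clip` is a
    hypothetical callable mapping a clip to its deblurred version.
    """
    return [deblur_clip(v) for v in videos]

# placeholder model: identity (a real model, e.g. FGST, would be plugged in here)
identity = lambda clip: clip
clips = [np.zeros((90, 64, 64, 3), dtype=np.uint8)]
deblurred = deblur_dataset(clips, identity)
print(len(deblurred), deblurred[0].shape)
```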
The outcomes of this investigation are tabulated in Table 3. The table segregates the results into two categories - the upper portion presents the outcomes for object classification, while the lower portion provides the results for VideoQA. The empirical findings demonstrate that the maximum enhancement in accuracy is 2.6%, which is attained by the Just Ask method, whereas the minimum improvement is documented at 1.7% by the MViTv2-B method. Furthermore, for a more illustrative understanding, Figure 3 showcases a selection of frames along with the corresponding model outputs prior to and subsequent to the deblurring process. This visualization facilitates a comparison of the quality and detail in the frames. Additionally, Figure 4 presents a compilation of frames extracted from a deblurred video, providing a visual representation of the enhancements achieved through the deblurring process.
Figure 4: Ten uniformly sampled frames from a random video in ORBIT, before and after deblurring. **Top:** Original video. **Bottom:** After deblurring. Deblurring tends to provide greater benefits to frames containing smaller objects.
Figure 3: Sample video frames from ORBIT dataset with their corresponding model output. **Top:** Original frame. **Bottom:** After deblurring. Deblurring enhances the precision of model predictions.
## 5 Temporal Length Impact
Although Live/Motion Photos are limited to three seconds, other applications could implement the same technology without the capture-length limitation. Therefore, in this section we study the effect of video length on accuracy for the object classification and VideoQA tasks. To this aim, we evaluate the video-based methods on three temporal crop size ranges, as shown in the sketch after this paragraph: the 'Short' range consists of random crops shorter than 15 seconds, the 'Medium' range of crops between 15 and 30 seconds, and the 'Long' range of crops longer than 30 seconds. Videos that are shorter than a targeted crop size are not included in that group; in particular, videos shorter than the required minimum length are excluded from the Medium and Long ranges. Additionally, we evaluate the methods on the whole dataset using all the available frames in the videos. We use the same setup as in Sections 3.1 and 3.2.
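The grouping by temporal crop length can be sketched as follows (illustrative code; the names and frame-index conventions are our own assumptions):

```python
import numpy as np

def sample_crop(num_frames, fps, crop_range, rng=None):
    """Draw a random temporal crop whose length (in seconds) falls in `crop_range`.

    Examples: (0, 15) for 'Short', (15, 30) for 'Medium', (30, None) for 'Long'.
    Returns (start, end) frame indices, or None if the video is too short for
    the requested range and must be excluded from that group.
    """
    rng = rng or np.random.default_rng()
    lo = int(crop_range[0] * fps)
    hi = num_frames if crop_range[1] is None else min(num_frames, int(crop_range[1] * fps))
    if hi <= lo:
        return None  # video excluded from this group
    length = int(rng.integers(max(lo, 1), hi + 1))
    start = int(rng.integers(0, num_frames - length + 1))
    return start, start + length

print(sample_crop(num_frames=900, fps=30, crop_range=(15, 30)))  # a 'Medium' crop
```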
For object classification, we employ ViViT [1] and MViTv2-B [11] and evaluate them on the ORBIT dataset [16]. The top two methods in Table 4 report the results for object classification using different video lengths. For MViTv2-B, the lowest accuracy is 77.1%, achieved by using Live/Motion Photos, and the highest accuracy is 79.0%, achieved using the longest video crops. For both methods, adding more frames helps improve the accuracy. The accuracy of using all the frames gets slightly worse due to the addition of shorter videos. Since having more frames reveals more data about an object, the longer crops reach higher accuracies.
For VideoQA, we employ Just Ask [23] and Singularity [11] and train and test them on the ActivityNet-QA dataset [22]. The bottom two methods in Table 4 report the results for different video lengths in VideoQA. The lowest accuracy for Singularity is 38.6% by using Live/Motion Photos, and 41.1% is the highest accuracy by using all the frames.
The findings from our ablation study reveal that, while there is a positive correlation between the length of the video clips and the accuracy, the additional accuracy attained by longer clips over Live/Motion Photos is small compared to the gain of Live/Motion Photos over single images. This implies that Live/Motion Photos, although constrained to a duration of three seconds, already furnish an improvement in accuracy that is sufficient for a majority of applications.
\begin{table}
\begin{tabular}{l|c c c c c} & & \multicolumn{3}{c}{Accuracy} \\ Method & Live/Motion Photo & Short & Medium & Long & \\ & =3s & \textless{}15s & 15s\textgreater{} and \textless{}30s & \textgreater{}30s & All Frames \\ \hline ViViT [1] & 74.9 & 75.8 & 76.6 & 77.1 & 76.7 \\ MViTv2-B [11] & 77.1 & 77.9 & 78.4 & 79.0 & 78.5 \\ \hline Just Ask [23] & 34.9 & 36.0 & 36.9 & 37.8 & 37.0 \\ Singularity [11] & 38.6 & 39.6 & 40.4 & 41.1 & 40.6 \\ \hline \end{tabular}
\end{table}
Table 4: The results of top-performing methods with different temporal crop lengths. **Top:** Object classification on the ORBIT test set. **Bottom:** VideoQA on the ActivityNet-QA test set. Videos shorter than a targeted crop size are not included in that group.
\begin{table}
\begin{tabular}{c|c c} \hline Method & Without Deblurring & With Deblurring \\ \hline ViViT [1] & 74.9 & 76.9 (+2.0) \\ MViTv2-B [11] & 77.1 & 78.8 (+1.7) \\ \hline Just Ask [23] & 34.9 & 37.5 (+2.6) \\ Singularity [11] & 38.6 & 40.9 (+2.3) \\ \hline \end{tabular}
\end{table}
Table 3: The impact of deblurring Live/Motion Photos. **Top:** Object classification on the ORBIT test set. **Bottom:** VideoQA on the ActivityNet-QA test set.
## 6 Conclusion and Future Directions
Despite significant recent developments, visual assistance applications are still in need of improvement. Current machine learning methods designed to help visually impaired people suffer from the low quality of the images taken by the end users.
In this paper, we made multiple contributions to improving existing methods for visual assisting. We introduced a simple way to evaluate the performance of Live/Motion Photos compared to single images. We employed this approach to show that Live/Motion Photos achieve higher accuracy in common visual assisting tasks. Our experiment revealed that Live/Motion Photos perform better than images in object classification and VideoQA tasks. In addition, we further studied the effect of longer temporal crops and showed how deblurring can improve accuracy.
In future research, it is essential to carry out user studies involving visually impaired individuals. This information will guide us in refining our method, ensuring it is not only technically robust but also practically beneficial for the intended users.
| Numerous applications have been developed to assist visually impaired people, using machine learning units to process visual input. However, a challenge for these applications is the suboptimal quality of the images captured by their users. Given how complex camera operation is for visually impaired people, the Apple Live Photos and Android Motion Photos technologies should be adopted. In this work, we introduce a straightforward and simple method to evaluate the effectiveness of Live/Motion Photos in comparison with conventional image-based approaches. Our findings show that Live Photos and Motion Photos outperform single-frame images in common visual assistance tasks, in particular object classification and VideoQA. To validate these results, we conduct experiments on the ORBIT dataset, which consists of videos collected by visually impaired people. Furthermore, we performed in-depth studies such as applying deblurring and extending the temporal range. |
2301.13546 | Joint Task Offloading and Cache Placement for Energy-Efficient Mobile
Edge Computing Systems | This letter investigates a cache-enabled multiuser mobile edge computing
(MEC) system with dynamic task arrivals, taking into account the impact of
proactive cache placement on the system's overall energy consumption. We
consider that an access point (AP) schedules a wireless device (WD) to offload
computational tasks while executing the tasks of a finite library in the
\emph{task caching} phase, such that the nearby WDs with the same task request
arriving later can directly download the task results in the \emph{task arrival
and execution} phase. We aim for minimizing the system's weighted-sum energy
over a finite-time horizon, by jointly optimizing the task caching decision and
the MEC execution of the AP, and local computing as well as task offloading of
the WDs at each time slot, subject to caching capacity, task causality, and
completion deadline constraints. The formulated design problem is a
mixed-integer nonlinear program. Under the assumption of fully predicable task
arrivals, we first propose a branch-and-bound (BnB) based method to obtain the
optimal offline solution. Next, we propose two low-complexity schemes based on
convex relaxation and task-popularity, respectively. Finally, numerical results
show the benefit of the proposed schemes over existing benchmark schemes. | Jingxuan Liang, Hong Xing, Feng Wang, Vincent K. N. Lau | 2023-01-31T10:47:59 | http://arxiv.org/abs/2301.13546v1 | # Joint Task Offloading and Cache Placement for Energy-Efficient Mobile Edge Computing Systems
###### Abstract
This letter investigates a cache-enabled multiuser mobile edge computing (MEC) system with dynamic task arrivals, taking into account the impact of proactive cache placement on the system's overall energy consumption. We consider that an access point (AP) schedules a wireless device (WD) to offload computational tasks while executing the tasks of a finite library in the _task caching_ phase, such that the nearby WDs with the same task request arriving later can directly download the task results in the _task arrival and execution_ phase. We aim for minimizing the system's weighted-sum energy over a finite-time horizon, by jointly optimizing the task caching decision and the MEC execution of the AP, and local computing as well as task offloading of the WDs at each time slot, subject to caching capacity, task causality, and completion deadline constraints. The formulated design problem is a mixed-integer nonlinear program. Under the assumption of fully predicable task arrivals, we first propose a branch-and-bound (BnB) based method to obtain the optimal offline solution. Next, we propose two low-complexity schemes based on convex relaxation and task-popularity, respectively. Finally, numerical results show the benefit of the proposed schemes over existing benchmark schemes.
Mobile edge computing, proactive cache placement, computation offloading, branch-and-bound, optimization.
## I Introduction
Various computation-intensive internet of things (IoT) applications (such as extended reality, auto-driving, and tactile networks) call for low-latency communication and computation [1]. By deploying dedicated edge servers at the network edge, mobile edge computing (MEC) has been recognized as an enabling technology to meet the stringent requirements of these delay-sensitive services while addressing the computation/communication resource limitation issue of these wireless devices (WDs) [2, 3, 4]. Leveraging the storage resources of the MEC servers to proactively cache computational tasks for possible reuse, the computation performance of the MEC system can be further enhanced.
Compared to the conventional MEC system designs without caching capabilities [2, 3, 4], cache-enabled MEC system designs encounter several new technical challenges. First, the task caching and offloading decisions need to be jointly made so as to make the best use of the limited caching and computation resources. Second, the task caching and offloading strategies need to be adaptive to task dynamics and the WDs' mobility. Finally, to improve the energy efficiency of the cache-enabled MEC system, it is imperative to jointly optimize the system's computation, caching, and communication resources. In the literature, there exist several works investigating cache-enabled MEC system designs [5, 6, 7, 8, 9]. For example, an adaptive task offloading and caching scheme was proposed to provide high-quality video services to vehicular users [5]. Based on a two-stage dynamic game strategy, [6, 7] investigated joint computation offloading and resource allocation design for cache-enabled MEC systems. The works [8] and [9] proposed joint service caching and task offloading designs in the dense cellular network and single-user scenarios, respectively. Note that most of the above existing works [5, 6, 7, 8, 9] failed to consider the benefit of proactive caching for overall multiuser MEC systems with the WDs' dynamic task arrivals over time slots.
In this letter, we investigate an energy-efficient cache-enabled multiuser MEC system with dynamic task arrivals over a finite-time horizon. The finite-time horizon consists of the _task caching_ phase and the _task arrival and execution_ phase. We consider that the MEC server selects and proactively caches the result of several tasks from a finite library in the task caching phase; at the task arrival and execution phase, the WDs can directly download the task results if their requested tasks have been cached by the MEC server, and perform local computing and task offloading otherwise. We jointly optimize the task cache placement decision and remote computing of the AP, and task offloading as well as local computing of the WDs at each time slot, so as to minimize the system's weighted-sum energy consumption over the horizon. For obtaining a lower-bound benchmark for practical design schemes with dynamic task arrivals but imperfect prediction, we assume that the computational task sequence of each WD is fully predictable. We employ the branch-and-bound (BnB) method to obtain the optimal offline solution. Next, to facilitate the cache-enabled MEC design with low computational complexity, we propose a convex-relaxation based scheme and a task-popularity based scheme, respectively. Finally, numerical results show the benefits of our proposed schemes over existing benchmarks.
## II System Model and Problem Formulation
We consider a cache-enabled multiuser MEC system, which consists of an AP (integrated with an MEC server) and a set \(\mathcal{K}\triangleq\{1,...,K\}\) of single-antenna WDs. These \(K\) WDs need to compute the randomly arrived tasks within a given completion deadline. Denote by \(\mathcal{T}\triangleq\{T_{1},T_{2},...,T_{L}\}\) the computational task set to be possibly processed by the \(K\) WDs. We consider a block-by-block cache-enabled MEC system, where each transmission block is divided into Phase I which is _MEC server's task caching_ and Phase II which is _WDs' task arrival and execution_. Phase I and phase II are, respectively, further composed of \(N_{p}\) and \(N\) equal-duration time slots each
with length \(\tau\). Without loss of generality, we focus on cache-enabled MEC design within one block as shown in Fig. 1. To guarantee a high efficiency of this cache-enabled MEC system, it is assumed that \(N_{p}<N\). For the tasks which are not cached at the AP, the WDs need to perform local computing and/or to offload the tasks to the MEC server for remote execution, i.e., task offloading.
### _Phase I: MEC Server's Task Caching_
#### II-A1 Task Cache Placement
Let \(\alpha_{\ell}\in\{0,1\}\) denote the caching decision for task \(T_{\ell}\) at the MEC server, where the task \(T_{\ell}\) is cached if \(\alpha_{\ell}=1\), and \(\alpha_{\ell}=0\) otherwise, \(\forall\ell=1,...,L\).\({}^{1}\) By denoting \(D^{\max}\) the caching capacity of the MEC server, the MEC caching needs to satisfy
Footnote 1: The caching decision variables \(\{\alpha_{\ell}\}_{\ell=1}^{L}\) will be specified by the solution to an optimization problem (P1) detailed in Section III.
\[\sum_{\ell=1}^{L}\alpha_{\ell}D_{\ell}\leq D^{\max}, \tag{1}\]
where \(D_{\ell}\) denotes the number of input-bits for task \(T_{\ell}\).
#### II-A2 Task Offloading for Caching
For facilitating the cache-enabled multiuser MEC system design, we consider the MEC server's cached tasks are all generated and offloaded from one selected WD with the smallest pathloss for task offloading to the AP. Denote by WD-\(k_{o}\) the selected WD, where \(k_{o}\in\mathcal{K}\). In order to spare the MEC server sufficient time to execute the cached tasks in the task caching phase, WD-\(k_{o}\) needs to fully offload a number \(\sum_{\ell=1}^{L}\alpha_{\ell}D_{\ell}\) of task input-bits by the end of the \((N_{p}-1)\)th slot. Within the task caching phase, denote by \(\tilde{d}_{k_{o},i}^{\text{off}}\) the number of task input-bits offloaded from WD-\(k_{o}\) to the AP at the \(i\)th slot, where \(i=1,...,N_{p}-1\). Hence, we have
\[\sum_{i=1}^{N_{p}-1}\tilde{d}_{k_{o},i}^{\text{off}}=\sum_{\ell=1}^{L}\alpha_ {\ell}D_{\ell}. \tag{2}\]
During the task caching phase, the amount of energy consumption of WD-\(k_{o}\) due to task offloading is \(\tilde{E}_{k_{o}}^{\text{off}}=\sum_{i=1}^{N_{p}-1}\frac{\tau\sigma^{2}\big(2^{\tilde{d}_{k_{o},i}^{\text{off}}/(\tau B_{k_{o},i})}-1\big)}{|h_{k_{o},i}|^{2}}\), where \(h_{k_{o},i}\) and \(B_{k_{o},i}\) denote the complex-valued channel coefficient and the system bandwidth for task offloading from WD-\(k_{o}\) to the AP at the \(i\)th slot of the task caching phase, respectively, and \(\sigma^{2}\) denotes the additive white Gaussian noise (AWGN) power at the AP receiver.
#### II-A3 Cached Task Execution
The AP executes all cached tasks to proactively obtain their results for further reuse. Due to the causality of task execution, the total number of task input-bits to be executed by the MEC server until the \(i\)th slot of the task caching phase _cannot_ exceed that offloaded by WD-\(k_{o}\) before the \((i-1)\)th slot, where \(i=1,...,N_{p}\). Denote by \(\tilde{d}_{i}^{\text{mec}}\) the number of task input-bits executed by the MEC server at the \(i\)th slot of the task caching phase. Accordingly, the task causality constraints in the task caching phase are
\[\sum_{j=1}^{i}\tilde{d}_{j}^{\text{mec}}\leq\sum_{j=1}^{i-1}\tilde{d}_{k_{o},j }^{\text{off}},\ \forall i=1,...,N_{p}, \tag{3}\]
where \(\tilde{d}_{1}^{\text{mec}}=0\) due to the fact that there exists no task to execute yet at the first slot of the task caching phase. In addition, the computation of the offloaded tasks needs to be completed by the MEC server within the task caching phase. Hence, we have the task completion constraint as
\[\sum_{j=1}^{N_{p}}\tilde{d}_{j}^{\text{mec}}=\sum_{j=1}^{N_{p}-1}\tilde{d}_{k_ {o},j}^{\text{off}}. \tag{4}\]
In addition, the amount of energy consumption of the MEC server within the task caching phase is \(\tilde{E}^{\text{mec}}=\sum_{i=1}^{N_{p}}\zeta_{0}C_{0}\tilde{d}_{i}^{\text{mec}}(\tilde{f}_{i}^{\text{mec}})^{2}=\sum_{i=1}^{N_{p}}\frac{\zeta_{0}C_{0}^{3}(\tilde{d}_{i}^{\text{mec}})^{3}}{\tau^{2}}\), where \(\tilde{f}_{i}^{\text{mec}}=\frac{C_{0}\tilde{d}_{i}^{\text{mec}}}{\tau}\) denotes the required CPU rate for task execution by the MEC server at the \(i\)th slot of Phase I, \(C_{0}\) denotes the number of required CPU cycles per task input-bit, and \(\zeta_{0}\) denotes the CPU architecture capacitance coefficient of the MEC server.
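As a quick numerical illustration of this cubic energy model, consider the following minimal sketch. It uses the Section IV parameters (\(\zeta_{0}=10^{-29}\), \(C_{0}=10^{3}\) CPU-cycles/bit, \(\tau=0.1\) s); the bit allocation itself is ours and purely illustrative.

```python
# Worked instance of the caching-phase energy model E~mec above.
zeta0, C0, tau = 1e-29, 1e3, 0.1  # Section IV parameters

def caching_phase_energy(d_mec):
    """E~mec = sum_i zeta0 * C0^3 * (d_i)^3 / tau^2, cubic (convex) in d_i."""
    return sum(zeta0 * C0**3 * d**3 / tau**2 for d in d_mec)

# Spreading 60 Kbits evenly over 4 slots costs 16x less than executing them
# in a single slot, which is why optimal schedules smooth execution in time.
print(caching_phase_energy([15e3] * 4))       # ~1.35e-5 J
print(caching_phase_energy([60e3, 0, 0, 0]))  # ~2.16e-4 J
```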
### _Phase II: WDs' Task Arrival and Execution_
Within this phase, if the results of the task arriving at the beginning of a slot for WD-\(k\) have been cached by the MEC server during Phase I, WD-\(k\) will download the results directly\({}^{2}\). Otherwise, this task needs to be executed by local computing at WD-\(k\) and/or task offloading to the MEC server. Let \(\mathbf{s}_{k}\triangleq\{s_{k,1},...,s_{k,N}\}\) denote the sequence of computation tasks for each WD-\(k\), where each task \(s_{k,n}\in\mathcal{T}\) arrives at WD-\(k\) at the beginning of the \(n\)th slot and \(n\in\mathcal{N}\triangleq\{1,...,N\}\).\({}^{3}\) Since the arrived tasks are randomly sampled from the task set \(\mathcal{T}\), it is possible that some tasks in the sequence \(\mathbf{s}_{k}\) may be repeated. Therefore, we need to retrieve the task-arrival set from each WD-\(k\)'s task sequence \(\mathbf{s}_{k}\).
Footnote 2: We assume that the number of task-output bits is significantly smaller than that of task-input bits, and therefore the incurred energy cost at the MEC server is negligible [1, 2, 3].
Footnote 3: We assume that the sequence of each WD’s computational tasks is fully predicted by exploiting the historical data _a priori_[4, 5, 6]. Hence, the proposed solution is offline, serving as a performance lower bound for online solutions considering (partially) unknown dynamic task arrivals.
**Definition 1** (Causality Task Set): _For each WD-\(k\), we define \(\mathcal{S}_{k,n}^{\text{CTS}}=\{s_{k,i}\in\mathcal{T}\ |\ i\in\{1,...,n\}\}\) as WD-\(k\)'s causality task set (CTS) till the \(n\)th slot. It follows that \(\mathcal{S}_{k,1}^{\text{CTS}}=\{s_{k,1}\}\) and \(\mathcal{S}_{k,i}^{\text{CTS}}\subseteq\mathcal{S}_{k,j}^{\text{CTS}}\) for \(i<j\in\mathcal{N}\)._
We consider _partial offloading_ policy [11], such that each WD-\(k\) can arbitrarily divide each task into two parts for local computing and computation offloading, respectively.
#### II-B1 Local Computing and Task Offloading of WDs
Fig. 1: Timeline of the cache-enabled MEC protocol within one block.

Let \(d_{k,n}^{\text{loc}}\geq 0\) and \(d_{k,n}^{\text{off}}\geq 0\) denote the number of task input-bits for local computing and computation offloading for each WD-\(k\) at the \(n\)th slot, respectively. For WD-\(k\), the total number of task input-bits executed by both local computing and offloading until the \(n\)th slot must be smaller than those arriving until the \(n\)th slot, where \(n\in\mathcal{N}\). Therefore, we have the task computation causality constraints as [11]
\[\sum_{j=1}^{n}d_{k,j}^{\text{loc}}+\sum_{j=1}^{n}d_{k,j}^{\text{off}}\leq\sum_{\ell=1}^{L}\mathbbm{1}_{T_{\ell}\in\mathcal{S}_{k,n}^{\text{CTS}}}(1-\alpha_{\ell})D_{\ell},\ n\in\mathcal{N}, \tag{5}\]
where \(k\in\mathcal{K}\), and \(\mathbbm{1}_{A}\) denotes the indicator function with \(\mathbbm{1}_{A}=1\) if the statement \(A\) is true, and \(\mathbbm{1}_{A}=0\) otherwise.
Note that the WDs need to obtain the computed results of the arrived tasks before the end of the \(N\)th slot. Therefore, we have the task computation deadline constraint as
\[\sum_{j=1}^{N}d_{k,j}^{\text{loc}}+\sum_{j=1}^{N}d_{k,j}^{\text{off}}=\sum_{\ell=1}^{L}\mathbbm{1}_{T_{\ell}\in\mathcal{S}_{k,N}^{\text{CTS}}}(1-\alpha_{\ell})D_{\ell}, \tag{6}\]
where \(k\in\mathcal{K}\). Note that \(d_{k,N}^{\text{off}}=0\), since there is no time for the MEC server's remote execution at the end of the \(N\)th slot.
Denote by \(C_{k}\) the number of CPU cycles for executing one task input-bit by the local computing of WD-\(k\). We consider that these CPU cycles are locally executed by WD-\(k\) using an identical CPU frequency at the \(n\)th slot, which is determined as \(f_{k,n}=\frac{C_{k}d_{k,n}^{\text{loc}}}{\tau}\), \(\forall k\in\mathcal{K}\), \(n\in\mathcal{N}\)[1, 2]. For WD-\(k\), we assume that the CPU frequency \(f_{k,n}\) is always smaller than the allowable maximum CPU frequency. Denote by \(E_{k}^{\text{loc}}\) the total amount of energy consumption of WD-\(k\) for local computing. Therefore, we have \(E_{k}^{\text{loc}}=\sum_{n=1}^{N}\zeta_{k}C_{k}d_{k,n}^{\text{loc}}f_{k,n}^{2}=\sum_{n=1}^{N}\frac{\zeta_{k}C_{k}^{3}(d_{k,n}^{\text{loc}})^{3}}{\tau^{2}}\), where \(\zeta_{k}\) denotes the CPU architecture capacitance coefficient of WD-\(k\).
Let \(p_{k,n}>0\), \(h_{k,n}\in\mathbb{C}\), and \(B_{k,n}>0\) denote the transmit power, the channel coefficient, and the system bandwidth for task offloading from WD-\(k\) to the AP at the \(n\)th slot of Phase II, respectively. The channel state information \(\{h_{k,n}\}\) is assumed to be perfectly obtained based on channel estimation methods in this letter. As WD-\(k\) needs to offload a number \(d_{k,n}^{\text{off}}\) of task input-bits to the MEC server, the data rate for offloading from WD-\(k\) to the AP at the \(n\)th slot is \(r_{k,n}=d_{k,n}^{\text{off}}/\tau\), where \(r_{k,n}\triangleq B_{k,n}\log_{2}(1+\frac{p_{k,n}|h_{k,n}|^{2}}{\sigma^{2}})\). Hence, the amount of energy consumption for WD-\(k\)'s task offloading in Phase II is given by \(E_{k}^{\text{off}}=\sum_{n=1}^{N-1}p_{k,n}\tau=\sum_{n=1}^{N-1}\frac{\tau\sigma^{2}(2^{d_{k,n}^{\text{off}}/(\tau B_{k,n})}-1)}{|h_{k,n}|^{2}}\).
As a result, the total energy consumption \(E_{k}\) for WD-\(k\) in Phase II is expressed as \(E_{k}=E_{k}^{\text{loc}}+E_{k}^{\text{off}}\), \(\forall k\in\mathcal{K}\).
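The offloading energy above follows from inverting the rate expression at the required per-slot rate \(d_{k,n}^{\text{off}}/\tau\). A minimal sketch (with purely illustrative parameter values) is:

```python
# Sketch of the per-slot offloading energy: invert the Shannon-type rate
# r = B*log2(1 + p|h|^2/sigma^2) at the required rate d_off/tau, then E = p*tau.
def offload_energy(d_off, B, h_gain, tau=0.1, sigma2=1e-8):
    """Energy (J) to offload d_off bits in one slot over bandwidth B (Hz),
    with channel power gain |h|^2 = h_gain; values here are illustrative."""
    p = sigma2 * (2 ** (d_off / (tau * B)) - 1) / h_gain
    return p * tau

# Energy is convex and exponential in the per-slot load, so spreading the
# offloaded bits over slots (subject to causality) saves energy:
print(offload_energy(20e3, B=1e6, h_gain=1e-9))      # one loaded slot
print(2 * offload_energy(10e3, B=1e6, h_gain=1e-9))  # two lighter slots
```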
#### II-B2 Task Execution of MEC Server
The MEC server needs to execute the offloaded tasks from the \(K\) WDs. Denote by \(d_{n}^{\text{mec}}\) the number of task input-bits executed by the MEC server at the \(n\)th slot. Due to the task causality conditions, the total number of task input-bits executed by the MEC server until the \(n\)th slot cannot exceed those offloaded from the \(K\) WDs until the previous \((n-1)\)th slot. Therefore, the task causality constraints at the MEC server are expressed as
\[\sum_{j=1}^{n}d_{j}^{\text{mec}}\leq\sum_{j=1}^{n-1}\sum_{k=1}^{K}d_{k,j}^{\text{off}},\ \forall n\in\mathcal{N}\setminus\{N\}. \tag{7}\]
Note that \(d_{1}^{\text{mec}}=0\), since there exist no offloaded tasks available at the MEC server at the first slot. Again, the computation of these offloaded tasks needs to be completed before the end of the \(N\)th slot of Phase II. Thus, the task computation deadline constraint at the MEC server is
\[\sum_{j=1}^{N}d_{j}^{\text{mec}}=\sum_{j=1}^{N-1}\sum_{k=1}^{K}d_{k,j}^{\text{off}}. \tag{8}\]
Let \(f_{n}^{\text{mec}}\) denote the CPU frequency of the MEC server at the \(n\)th slot, which is determined as \(f_{n}^{\text{mec}}=\frac{C_{0}d_{n}^{\text{mec}}}{\tau}\). The amount of energy consumption for the MEC server to execute a total of \(\sum_{n=1}^{N}C_{0}d_{n}^{\text{mec}}\) CPU cycles within the \(N\) slots is expressed as \(E^{\text{mec}}=\sum_{n=1}^{N}\frac{\zeta_{0}C_{0}^{3}(d_{n}^{\text{mec}})^{3}}{\tau^{2}}\).
### _Problem Formulation_
In this letter, we are interested in minimizing the weighted-sum energy consumption of a block for the cache-enabled multiuser MEC system, subject to the MEC server's caching capacity constraint, the task causality constraints, and the task completion deadline constraints. Accordingly, by defining \(\boldsymbol{x}\triangleq(\{\alpha_{\ell}\}_{\ell=1}^{L},\{\tilde{d}_{k_{o},i}^{\text{off}}\}_{i=1}^{N_{p}-1},\{\tilde{d}_{i}^{\text{mec}}\}_{i=1}^{N_{p}},\{d_{k,n}^{\text{loc}},d_{k,n}^{\text{off}}\}_{k\in\mathcal{K},n\in\mathcal{N}},\{d_{n}^{\text{mec}}\}_{n=1}^{N})\), the cache-enabled MEC design problem is formulated as
\[\text{(P1)}:\ \underset{\boldsymbol{x}}{\text{minimize}}\ w_{0}(\tilde{E}^{\text{mec}}+E^{\text{mec}})+w_{1}(\tilde{E}_{k_{o}}^{\text{off}}+\sum_{k=1}^{K}E_{k})\] (9a) subject to \[\text{(1)--(8)},\ \alpha_{\ell}\in\{0,1\},\ \forall\ell=1,...,L \tag{9b}\] \[\tilde{d}_{k_{o},i}^{\text{off}}\geq 0,\ \tilde{d}_{i}^{\text{mec}}\geq 0,\ \forall i=1,...,N_{p}\] (9c) \[d_{k,n}^{\text{loc}}\geq 0,\ d_{k,n}^{\text{off}}\geq 0,\ d_{n}^{\text{mec}}\geq 0,\ \forall k,\forall n, \tag{9d}\]
where \(w_{0}\geq 0\) and \(w_{1}\geq 0\) denote the energy weights such that \(w_{0}+w_{1}=1\). Note that (P1) is a mixed-integer nonlinear programming (MINLP) problem, which is NP-hard [12, 13].
## III Proposed Offline Solutions to Problem (P1)
In this section, we first employ the BnB method to obtain the optimal offline solution to (P1), and then introduce two low-complexity schemes based on task-popularity and convex relaxation, respectively.
### _Optimal Offline Solution Based on BnB Algorithm_
The BnB method is an efficient and powerful tree-search algorithm that maintains provable upper and lower bounds on the optimal objective value and terminates with an \(\epsilon\)-optimal solution [13]. Hence, in order to obtain a globally optimal benchmark for practical cache-enabled MEC design schemes, we employ the BnB method to solve problem (P1) in this subsection.
To start with, we define the sets \(\mathcal{L}_{0}\subseteq\mathcal{L}\) and \(\mathcal{L}_{1}\subseteq\mathcal{L}\), where \(\mathcal{L}\triangleq\{1,...,L\}\). Consider an optimization problem as
\[\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}):\ \underset{\boldsymbol{x}}{\text{minimize}}\ w_{0}(\tilde{E}^{\text{mec}}+E^{\text{mec}})+w_{1}(\tilde{E}_{k_{o}}^{\text{off}}+\sum_{k=1}^{K}E_{k})\] subject to \[\text{(1)--(8)},\text{(9c)},\text{(9d)}\] \[\alpha_{\ell}\in\{0,1\},\ \forall\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1}),\]
where \(\alpha_{\ell}=0\) for \(\ell\in\mathcal{L}_{0}\) and \(\alpha_{\ell}=1\) for \(\ell\in\mathcal{L}_{1}\). If the sets satisfy \(\mathcal{L}_{0}\cup\mathcal{L}_{1}\neq\mathcal{L}\), then \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) is a mixed Boolean convex problem [13]. Following the BnB approach, we establish a
binary tree with root as \(\text{P}(\emptyset,\emptyset)\), and \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) corresponds to a node at depth \(m\) in the tree, where a number \(|\mathcal{L}_{0}|+|\mathcal{L}_{1}|=m\) of Boolean variables are specified and \(0\leq m\leq L\). Specifically, we obtain a global upper bound and a global lower bound in each iteration of the BnB method, where the optimal value of problem (P1) is guaranteed to be always within the range of the global upper and lower bounds. The detailed BnB procedure is described as follows.
* _Bounding:_ By solving \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) with the Boolean variables being relaxed as continuous variables, we obtain a lower bound of the optimal value of \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\). By rounding \(\alpha_{\ell}\), \(\forall\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1})\), to be zero or one, we obtain an upper bound of the optimal value of \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\).
* _Branching:_ By selecting one task index \(\ell\in\mathcal{L}\setminus(\mathcal{L}_{0}\cup\mathcal{L}_{1})\), we obtain two sub-problems as \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) and \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\). Letting the Boolean variables of \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) (or \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\)) be relaxed and fixed, respectively, we obtain a lower and an upper bound of the optimal value of \(\text{P}(\mathcal{L}_{0}\cup\{\ell\},\mathcal{L}_{1})\) (or \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1}\cup\{\ell\})\)). Then, we update the global lower and upper bounds.
* _Pruning:_ At each iteration, we remove the nodes with lower bounds larger than the current global upper bound from the tree.
The proposed BnB method maintains provable upper and lower bounds on the optimal objective value, and it returns an \(\epsilon\)-optimal solution for problem (P1) [13], where \(\epsilon>0\) denotes the tolerable error. Specifically, a number of \(\mathcal{O}\big((2^{L+2}-1)\sqrt{N_{p}+KN+N}\log\big(\frac{(N_{p}+KN+N)/t^{(0)}}{\epsilon}\big)\big)\) Newton iterations in the worst case is required to solve (P1), where \(t^{(0)}>0\) denotes the initial barrier parameter of the interior-point method used to obtain the lower and upper bounds for each problem \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) [12]. This is practically prohibitive in terms of computational complexity, especially when the task library size \(L\) is large. Hence, we propose in the sequel two computationally-efficient solutions by separating the caching and computation decisions.
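For illustration, the following Python sketch mirrors the bounding, branching and pruning steps above. The oracle `solve_relaxed` is an assumption of the sketch (it is not specified in this letter): it stands for solving \(\text{P}(\mathcal{L}_{0},\mathcal{L}_{1})\) with the undecided \(\alpha_{\ell}\) relaxed to \([0,1]\) via a convex solver, returning the optimal value and the relaxed \(\alpha\) (with value \(+\infty\) if infeasible).

```python
# Schematic BnB over the Boolean caching variables alpha_l, l = 0..L-1.
# `solve_relaxed(L0, L1)` is assumed to return (value, alpha) of the convex
# relaxation with alpha_l fixed to 0 on L0 and 1 on L1 (+inf if infeasible).
def bnb(L, solve_relaxed, eps=1e-3):
    best_ub, best_alpha = float("inf"), None
    nodes = [(frozenset(), frozenset())]            # root node P(empty, empty)
    while nodes:
        L0, L1 = nodes.pop()
        lb, alpha = solve_relaxed(L0, L1)           # bounding via relaxation
        if lb >= best_ub:                           # pruning
            continue
        # Rounding the relaxed alphas gives a candidate point / upper bound.
        rounded = {l: 0 if l in L0 else 1 if l in L1 else round(alpha[l])
                   for l in range(L)}
        ub, _ = solve_relaxed(frozenset(l for l in range(L) if rounded[l] == 0),
                              frozenset(l for l in range(L) if rounded[l] == 1))
        if ub < best_ub:
            best_ub, best_alpha = ub, rounded
        free = [l for l in range(L) if l not in L0 and l not in L1]
        if free and best_ub - lb > eps:             # branching on one alpha_l
            l = free[0]
            nodes += [(L0 | {l}, L1), (L0, L1 | {l})]
    return best_ub, best_alpha
```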
### _Suboptimal Solution with Task-Popularity Caching Policy_
In this subsection, we present a task-popularity caching based design scheme. First, based on the task-popularity scores of the total \(L\) tasks and the MEC server's caching capacity, we determine the task cache placement decision for the task-caching phase. Next, given the cache-placement decisions, we jointly optimize the \(K\) WDs' task offloading decisions and local/remote CPU frequencies within Phase II.
For task \(T_{\ell}\), its task-popularity score \(t_{\ell}\) is defined as the number of occurrences in the \(K\) WDs' task sequences [9, 10], i.e., \(t_{\ell}=\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbb{1}_{s_{k,n}=T_{\ell}}\), where \(s_{k,n}\in\mathcal{S}_{k,N}^{\text{CTS}}\). Based on the popularity scores, these \(L\) tasks are ordered as \(t_{\pi(1)}\geq t_{\pi(2)}\geq...\geq t_{\pi(L)}\), where \(\mathbf{\pi}=[\pi(1),...,\pi(L)]^{T}\) is a permutation of the sequence \(\{1,...,L\}\). Under the caching capacity constraint of the MEC server, we select a number of \(1\leq M\leq L\) tasks with the highest-\(M\) popularity scores\({}^{4}\), i.e., \(\{T_{\pi(1)},...,T_{\pi(M)}\}\), to be cached in the MEC server, such that \(\sum_{m=1}^{M}D_{\pi(m)}\leq D^{\max}\) and \(\sum_{m=1}^{M+1}D_{\pi(m)}>D^{\max}\). Accordingly, the sets \(\mathcal{L}_{0}^{\text{pop}}=\{\pi(M+1),...,\pi(L)\}\) and \(\mathcal{L}_{1}^{\text{pop}}=\{\pi(1),...,\pi(M)\}\) are determined, and we have \(\alpha_{i}^{\text{pop}}=0\) for \(i\in\mathcal{L}_{0}^{\text{pop}}\) and \(\alpha_{j}^{\text{pop}}=1\) for \(j\in\mathcal{L}_{1}^{\text{pop}}\). Next, given the determined \(\mathcal{L}_{0}^{\text{pop}}\) and \(\mathcal{L}_{1}^{\text{pop}}\), we solve the convex problem \(\text{P}(\mathcal{L}_{0}^{\text{pop}},\mathcal{L}_{1}^{\text{pop}})\) to obtain its optimal solution \(((\tilde{d}_{k_{o},i}^{\text{off}})^{\text{pop}},(\tilde{d}_{i}^{\text{mec}})^{\text{pop}},(d_{k,n}^{\text{off}})^{\text{pop}},(d_{k,n}^{\text{loc}})^{\text{pop}},(d_{n}^{\text{mec}})^{\text{pop}})\). Together with \(\{\alpha_{\ell}^{\text{pop}}\}_{\ell=1}^{L}\), this constitutes the task-popularity caching based solution for (P1); a minimal sketch of this caching policy is given below.
Footnote 4: Note that when multiple tasks have the same popularity score, the MEC server selects the task with as large number of task input-bits as possible for energy saving, subject to the MEC server’s cache capacity constraint. In the case when the equally-popular tasks have the same task input-bits, the MEC server equiprobably selects one of these tasks.
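A minimal Python sketch of the caching policy follows; here `sequences` maps each WD to its task sequence \(\mathbf{s}_{k}\) and `D` maps each task index to its input-bit size \(D_{\ell}\) (these data structures are our assumptions, not the letter's notation).

```python
# Sketch of the task-popularity caching policy, including the tie-breaking
# of footnote 4: among equally popular tasks prefer the larger one, and pick
# uniformly at random among equal sizes.
from collections import Counter
import random

def popularity_caching(sequences, D, D_max):
    scores = Counter(t for seq in sequences.values() for t in seq)
    # Order by popularity (desc), then size (desc), then a random tie-break.
    order = sorted(D, key=lambda l: (-scores[l], -D[l], random.random()))
    cached, used = set(), 0
    for l in order:
        if used + D[l] > D_max:
            break            # highest-M prefix under the capacity constraint
        cached.add(l)
        used += D[l]
    return cached            # tasks T_l with alpha_l^pop = 1
```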
### _Suboptimal Solution Based on Convex Relaxation_
In this subsection, we present a convex relaxation based design scheme. Specifically, by relaxing the binary task cache decision variables \(\{\alpha_{\ell}\}_{\ell=1}^{L}\) into continuous ones (i.e., \(0\leq\alpha_{\ell}\leq 1\), \(\forall\ell=1,...,L\)), problem (P1) is transformed into a convex optimization problem, whose optimal solution can thus be efficiently obtained by off-the-shelf convex solvers, e.g., the CVX toolbox [12]. Denote by \((\alpha_{\ell}^{*},\tilde{d}_{k_{o},i}^{\text{off}*},\tilde{d}_{i}^{\text{mec}*},d_{k,n}^{\text{off}*},d_{k,n}^{\text{loc}*},d_{n}^{\text{mec}*})\) the optimal solution to the convex-relaxed problem (P1). We determine the sets \(\mathcal{L}_{0}^{\text{rel}}=\{\ell\mid 0\leq\alpha_{\ell}^{*}\leq 0.5,\ell\in\mathcal{L}\}\) and \(\mathcal{L}_{1}^{\text{rel}}=\{\ell\mid 0.5<\alpha_{\ell}^{*}\leq 1,\ell\in\mathcal{L}\}\). Hence, we have \(\alpha_{i}^{\text{rel}}=0\) for \(i\in\mathcal{L}_{0}^{\text{rel}}\) and \(\alpha_{j}^{\text{rel}}=1\) for \(j\in\mathcal{L}_{1}^{\text{rel}}\), and re-solving the convex problem \(\text{P}(\mathcal{L}_{0}^{\text{rel}},\mathcal{L}_{1}^{\text{rel}})\) yields the solution \(((\tilde{d}_{k_{o},i}^{\text{off}})^{\text{rel}},(\tilde{d}_{i}^{\text{mec}})^{\text{rel}},(d_{k,n}^{\text{off}})^{\text{rel}},(d_{k,n}^{\text{loc}})^{\text{rel}},(d_{n}^{\text{mec}})^{\text{rel}})\) for Phase II; the rounding step is sketched below.
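The rounding step of this scheme amounts to thresholding the relaxed caching variables at \(0.5\) and then re-solving the resulting convex problem with the \(\alpha_{\ell}\) fixed; a minimal sketch of the thresholding is:

```python
# Threshold the relaxed optimum alpha* at 0.5 to recover Boolean decisions.
def round_caching(alpha_star):
    """alpha_star: dict mapping task index l to the relaxed optimum in [0,1]."""
    L0 = {l for l, a in alpha_star.items() if a <= 0.5}   # not cached
    L1 = {l for l, a in alpha_star.items() if a > 0.5}    # cached
    return L0, L1
```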
## IV Numerical Results
In this section, we evaluate the effectiveness of the proposed schemes. In simulations, we set \(K=20\), \(N_{p}=5\), \(N=30\), and \(\tau=0.1\) second. The CPU architecture capacitance coefficients are set as \(\zeta_{k}=10^{-28}\) and \(\zeta_{0}=10^{-29}\); the number of CPU cycles for WD-\(k\)'s local computing and the MEC server's execution of one task input-bit is \(C_{k}=3\times 10^{3}\) and \(C_{0}=10^{3}\) CPU-cycles/bit, \(k\in\mathcal{K}\), respectively; the energy weights of the AP and the WDs are set as \(w_{0}=0.1\) and \(w_{1}=0.9\), respectively. Denote by \(d_{k}\in[500,1000]\) meters (m) the distance between WD-\(k\) and the AP, where \(d_{k}=500+\frac{500(k-1)}{K-1}\) m, \(k\in\mathcal{K}\). We consider the Rician fading channel model [2]: \(h_{k,n}=\sqrt{\frac{K_{R}\Omega_{0}d_{k}^{-\alpha}}{1+K_{R}}}h_{0}+\sqrt{\frac{\Omega_{0}d_{k}^{-\alpha}}{1+K_{R}}}\tilde{h}\), \(\forall k,n\), where \(K_{R}=3\) denotes the Rician factor, \(h_{0}=1\) is the line-of-sight (LoS) component, \(\Omega_{0}=-32\) dB corresponds to the pathloss at a reference distance of one meter, \(\alpha=3\) denotes the pathloss exponent, and \(\tilde{h}\sim\mathcal{CN}(0,1)\) denotes the non-LoS (NLoS) fading component (a sketch of this channel generation is given after the benchmark list below). The benchmark schemes include:
* _Full local computing scheme:_ Each WD only locally executes its tasks, which corresponds to solving (P1) by setting \(d_{k,n}^{\text{eff}}=0\), \(\forall k,n\).
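The channel coefficients above can be generated as follows (a minimal NumPy sketch under the stated parameters; function and variable names are ours):

```python
# Rician fading draw for WD-k at distance d_k, per the simulation setup:
# K_R = 3, Omega_0 = -32 dB at 1 m, pathloss exponent alpha = 3, h ~ CN(0,1).
import numpy as np

def rician_channel(d_k, K_R=3.0, Omega0_dB=-32.0, alpha=3.0, rng=None):
    rng = rng or np.random.default_rng()
    pathloss = 10 ** (Omega0_dB / 10) * d_k ** (-alpha)
    h0 = 1.0                                                  # LoS component
    h_nlos = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    return (np.sqrt(K_R * pathloss / (1 + K_R)) * h0
            + np.sqrt(pathloss / (1 + K_R)) * h_nlos)

# Channel power gain |h_{k,n}|^2 for a WD at 500 m:
print(abs(rician_channel(500.0)) ** 2)
```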
Fig. 2 shows the average weighted-sum energy performance versus the caching capacity \(D^{\max}\), where the noise power is \(\sigma^{2}=10^{-8}\) Watt (W). Except for the benchmark scheme in which the MEC server cannot cache tasks, the system weighted-sum energy consumption of the other five schemes decreases with \(D^{\max}\). In Fig. 2(a), compared to the _Full offloading_ scheme, the proposed _Relaxation_ scheme achieves performance closer to the BnB optimal scheme for small \(D^{\max}\) values (e.g., \(D^{\max}\leq 60\) Kbits), but not for large \(D^{\max}\) values. This implies the importance of exploiting both offloading and local computing capabilities for energy saving in the case of a small caching capacity. In Fig. 2(a), the _Task-popularity caching_ scheme performs inferiorly to both the _Relaxation_ scheme and the _Full offloading_ scheme, but outperforms the _Full local computing_ scheme. In Fig. 2(b), the _Task-popularity caching_ scheme outperforms the _Full offloading_ scheme for small caching capacity values (e.g., \(D^{\max}\leq 75\) Kbits), but not for larger caching capacity values. This shows the merit of the _Task-popularity caching_ scheme for energy saving with a large task set size \(L\). Finally, all the schemes consume more energy in Fig. 2(b) than in Fig. 2(a). This is because the causality task set size increases with the task set size \(L\).
Fig. 3 shows the energy consumption performance of the task caching and task arrival/execution phases, respectively, where the task set size is \(L=40\). It is observed that all five schemes with MEC caching capability consume almost the same energy during the task caching phase, but not during the task arrival/execution phase. This is because the MEC server prefers to cache as many computational tasks as possible for energy saving. In Fig. 3, the _Task-popularity caching_ scheme performs inferiorly to the _Relaxation_ scheme, and a substantial performance gap is observed between the _BnB_ and _Relaxation_ schemes. The _Task-popularity caching_ scheme outperforms the _Full offloading_ scheme in Fig. 3(b), but not in Fig. 3(a). This demonstrates that the energy consumption for task offloading becomes dominant in the case with a high noise power.
## V Conclusion
In this letter, we investigated a joint task cache placement and offloading design for cache-enabled MEC systems with dynamic task arrivals. With the objective of minimizing the system's weighted-sum energy consumption in both the task caching and task arrival/execution phases, we jointly optimized the task cache placement, the MEC server's task execution, and local computing as well as task offloading of the WDs, subject to the caching capacity, task causality, and task completion deadline constraints. We first employed the BnB method to obtain the optimal offline solution to characterize a performance lower bound for online schemes considering (partially) unknown dynamic task arrivals, and then proposed two low-complexity caching strategies based on task-popularity and convex relaxation, respectively. As future work, it is worth investigating robust task offloading and caching designs against prediction errors in the task sequences, as well as reinforcement learning (RL) based joint designs for scenarios of partially predictable and fully unknown task-arrival sequences, respectively.
| This letter investigates a cache-enabled multiuser mobile edge computing (MEC) system with dynamic task arrivals, analyzing the impact of proactive cache placement on the system. Here, an access point (AP) schedules a wireless device (WD) to offload computational tasks while executing tasks from a finite library during the task caching phase; as a result, nearby WDs whose requests for the same task arrive later can directly download the task results in the task arrival and execution phase. To minimize the system's weighted-sum energy consumption, the task caching decision and the AP's MEC execution, together with the WDs' local computing and task offloading at each time slot, are jointly optimized. |
2306.17787 | Subgroups of $E$-unitary and $R_1$-injective special inverse monoids | We continue the study of the structure of general subgroups (in particular
maximal subgroups, also known as group $\mathcal{H}$-classes) of special
inverse monoids. Recent research of the authors has established that these can
be quite wild, but in this paper we show that if we restrict to special inverse
monoids which are $E$-unitary (or have a weaker property we call
$\mathcal{R}_1$-injectivity), the maximal subgroups are strongly governed by
the group of units. In particular, every maximal subgroup has a finite index
subgroup which embeds in the group of units. We give a construction to show
that every finite group can arise as a maximal subgroup in an
$\mathcal{R}_1$-injective special inverse monoid with trivial group of units.
It remains open whether every combination of a group $G$ and finite index
subgroup $H$ can arise as maximal subgroup and group of units. | Robert D. Gray, Mark Kambites | 2023-06-30T16:41:37 | http://arxiv.org/abs/2306.17787v2 | # Subgroups of \(E\)-unitary and \(\mathcal{R}_{1}\)-injective special inverse monoids
###### Abstract.
We continue the study of the structure of general subgroups (in particular maximal subgroups, also known as group \(\mathcal{H}\)-classes) of special inverse monoids. Recent research of the authors has established that these can be quite wild, but in this paper we show that if we restrict to special inverse monoids which are \(E\)_-unitary_ (or have a weaker property we call \(\mathcal{R}_{1}\)_-injectivity_), the maximal subgroups are strongly governed by the group of units. In particular, every maximal subgroup has a finite index subgroup which embeds in the group of units. We give a construction to show that every finite group can arise as a maximal subgroup in an \(\mathcal{R}_{1}\)-injective special inverse monoid with trivial group of units. It remains open whether every combination of a group \(G\) and finite index subgroup \(H\) can arise as maximal subgroup and group of units.
Key words and phrases:special inverse monoid, maximal subgroup, group \(\mathcal{H}\)-class, \(E\)-unitary, \(\mathcal{R}_{1}\)-injective 2020 Mathematics Subject Classification: 20M18, 20M05 \({}^{1}\)School of Mathematics, University of East Anglia, Norwich NR4 7TJ, England. Email Robert.D.Gray@uea.ac.uk.
\({}^{2}\)Department of Mathematics, University of Manchester, Manchester M13 9PL, England. Email Mark.Kambites@manchester.ac.uk.
This research was supported by the EPSRC-funded projects EP/N033353/1 'Special inverse monoids: subgroups, structure, geometry, rewriting systems and the word problem' and EP/V032003/1 'Algorithmic, topological and geometric aspects of infinite groups, monoids and inverse semigroups', and by a London Mathematical Society Research Reboot Grant.
(non-inverse) monoids it is known that the monoid is quite strongly governed in this way: for example all the maximal subgroups are isomorphic to the group of units [11]. Early results about special inverse monoids suggested that the same kind of relationship might hold, but more recent work has led to the realisation that things are more complex. We recently studied the possible maximal subgroups (also known as _group \(\mathcal{H}\)-classes_) which can arise in finitely presented special inverse monoids, answering a question of the first author and Ruskuc by showing that the possible groups of units are exactly the finitely generated recursively presented groups, and more generally that the possible group \(\mathcal{H}\)-classes are exactly the (not necessarily finitely generated) recursively presented groups. This implies in particular that (unlike in the special non-inverse case) the group \(\mathcal{H}\)-classes are _not_ all isomorphic to the group of units. However, in the examples we constructed it turns out that the group \(\mathcal{H}\)-classes all _embed_ in the group of units, and it is natural to ask if this is always true.
In the present paper we show that this is also not the case: indeed we construct special inverse monoids with trivial group of units and arbitrary finite groups arising as group \(\mathcal{H}\)-classes. However, our main theorem is that under a relatively mild (weaker than \(E\)-unitarity) assumption called \(\mathcal{R}_{1}\)-_injectivity_, every \(\mathcal{H}\)-class is _virtually_ embeddable in the group of units, in other words, has a finite index subgroup which embeds in the group of units.
In addition to this introduction the paper is divided into six sections. Section 2 fixes notation and collects some basic facts about (mostly special) inverse monoids, some of which are folklore but some new and potentially of independent interest. Section 3 introduces and studies the new property of \(\mathcal{R}_{1}\)-injectivity. Section 4 shows that the Schutzenberger graphs of \(\mathcal{R}_{1}\)-injective special inverse monoids all admit a certain kind of _block decomposition_, which makes it relatively straightforward to understand their structure modulo the Schutzenberger graph of right units. In Section 5 we apply the block decomposition to study maximal subgroups of \(\mathcal{R}_{1}\)-injective special inverse monoids, in particular establishing that they are constrained to admit a finite index subgroup which embeds in the group of units. Section 6 goes some way towards proving the previous theorem "sharp", by constructing a family of examples with trivial group of units but arbitrary finite groups arising as maximal subgroups. Finally, Section 7 digresses slightly to note an interesting fact about special inverse monoids with a generator which is neither a right nor a left unit: in such monoids _every_ finite subgroup of the group of units arises as the maximal subgroup around some idempotent.
## 2. Special Inverse Monoids
In this section we fix notation and collect together some preliminary results about special inverse monoids; some of these are folklore known to experts but hard to find in the literature, while others (most notably Theorem 2.2) are new and likely to be of independent interest.
Let \(A\) be a (typically, but not necessarily, finite) alphabet, and let \(A^{\pm 1}\) denote the union of \(A\) with a disjoint alphabet \(\{a^{-1}\mid a\in A\}\). We extend the inverse operation to be an involution on \(A^{\pm 1}\) by defining \((a^{-1})^{-1}=a\), and to words over \(A^{\pm 1}\) by \((a_{1}\dots a_{n})^{-1}=a_{n}^{-1}\dots a_{1}^{-1}\). For brevity we will
often write \(w^{\prime}\) instead of \(w^{-1}\). Where \(A\) is viewed as a choice of generators for an inverse monoid \(M\), we will sometimes write \(\overline{w}\) to denote the element of \(M\) represented by a word \(w\in A^{\pm 1}\).
Recall that the inverse monoid defined by the presentation \(\langle A\mid R\rangle\), where \(R\subseteq(A^{\pm 1})^{*}\times(A^{\pm 1})^{*}\), is the quotient of the free inverse monoid on \(A\) by the congruence generated by \(R\). All presentations in this paper will be inverse monoid presentations unless stated otherwise. An inverse monoid presentation is called _special_ if all relations have the form \(w=1\), and an inverse monoid is called special if it admits a special inverse monoid presentation. We shall assume familiarity with (special) inverse monoids, as well as standard ideas in the field such as Green's relations, Schutzenberger graphs, Stephen's procedure, \(E\)-unitarity and the maximal group image. The reader unfamiliar with these is directed to [8] for the classical theory of inverse monoids in general, and [2] for the more recent theory of special inverse monoids.
If \(m\in M\) we write \(S\Gamma(m)\) for the (right) Schutzenberger graph of \(m\), and \(\mathcal{H}_{m}\), \(\mathcal{R}_{m}\) and so forth for the equivalences classes of \(M\) under Green's various relations. The following is well known and easy to prove directly from the definitions.
**Proposition 2.1**.: _Let \(w\) be a word over the generators for an inverse monoid \(M\). Then \(w\) represents:_
* _an idempotent element if and only if every path it labels in every Schutzenberger graph is a closed path;_
* _an element of_ \(\mathcal{J}_{1}\) _if and only if it labels a path somewhere in_ \(S\Gamma(1)\)_;_
* _an element of_ \(\mathcal{R}_{1}\) _(a right unit) if and only if it labels a path in_ \(S\Gamma(1)\) _starting at the identity;_
* _an element of_ \(\mathcal{L}_{1}\) _(a left unit) if and only if it labels a path in_ \(S\Gamma(1)\) _ending at the identity; and_
* _an element of_ \(\mathcal{H}_{1}\) _if and only if it labels a path in_ \(S\Gamma(1)\) _starting at the identity and a path in_ \(S\Gamma(1)\) _ending at the identity._
Notably missing from Proposition 2.1 is a description of words representing elements of \(\mathcal{D}_{1}\). For inverse monoids in general there is no easy way to describe these but, surprisingly, in special inverse monoids there is a very nice description akin to those above:
**Theorem 2.2**.: _Let \(M\) be a special inverse monoid generated by \(X\), and \(w\) a word over \(X^{\pm 1}\). Then the following are equivalent:_
* \(w\) _represents an element of_ \(\mathcal{D}_{1}\)_;_
* \(w\) _labels a path in_ \(S\Gamma(1)\) _which passes through (or starts or ends at) the vertex_ \(1\)_; and_
* _there is a decomposition_ \(w=uv\) _(as words, possibly empty) where_ \(u\) _represents a left unit and_ \(v\) _represents a right unit._
Proof.: The equivalence of (ii) and (iii) is immediate from the characterisation of left units and right units given by Proposition 2.1. If (iii) holds then since \(u\) represents a left unit we have \(\overline{u}^{\prime}\ \overline{u}=1\) so that
\[\overline{w}\ \mathcal{L}\ \overline{u}^{\prime}\ \overline{u}\ \overline{v}\ =\ \overline{v}\ \mathcal{R}\ 1\]
and (i) holds.
What remains, which is the main burden of the proof and the only part which does not hold for inverse monoids in general, is to show that (i) implies (ii) or (iii). We do this by showing first that every element in the \(\mathcal{D}_{1}\) has _some_ representative word with a decomposition of the type in (iii), and then that the property that a word \(w\) has such a decomposition is preserved under the insertion and removal of relators, and the basic inverse semigroup manipulations of commuting idempotents and replacing \(u\) with \(uu^{\prime}u\) and vice versa.
For the first step, if \(s\in\mathcal{D}_{1}\) then there exists \(t\in M\) with \(s\mathcal{L}t\mathcal{R}1\). Because \(s\mathcal{L}t\) we have \(s^{\prime}s=t^{\prime}t\), so that \(s=ss^{\prime}s=st^{\prime}t=(st^{\prime})t\). Now \(t\) is a right unit by assumption, and this means we have \((st^{\prime})^{\prime}(st^{\prime})=ts^{\prime}st^{\prime}=tt^{\prime}tt^{ \prime}=1\), so that \(st^{\prime}\) is a left unit. Thus, if we choose words representing \(st^{\prime}\) and \(t\) respectively, concatenating them will yield a word of the required form representing \((st^{\prime})t=s\).
Now suppose \(w\) is any word which factorises as in (iii), or equivalently that it can be read along some path \(\pi\) in \(S\Gamma(1)\) which visits \(1\). First note that if we replace a factor of the form \(x\) with \(xx^{\prime}x\) or vice versa, or replace a factor \(uu^{\prime}vv^{\prime}\) with \(vv^{\prime}uu^{\prime}\) then the resulting word can be read along a path in \(S\Gamma(1)\) starting and ending in the same place as \(\pi\) and traversing the same set of edges as \(\pi\) (possibly different numbers of times and in a different order). So in particular the resulting word can be read along a path in \(S\Gamma(1)\) which visits \(1\). If we insert some relator into \(w\) then, since every relator can be read around a closed path at every vertex of \(S\Gamma(1)\), the resulting path can still be read along a path in \(S\Gamma(1)\) which visits \(1\).
Finally, suppose we remove a factor of \(w\) which is a relator, say \(w=prq\) where \(r\) is a relator and we obtain the word \(pq\). Since all paths labelled by relators are closed, we may remove a closed subpath from \(\pi\) to obtain a path labelled \(pq\). If this path still visits \(1\) then we are done. If it does not visit \(1\) then the closed subpath we removed from \(\pi\) must do so, which means there must be a factorisation \(r=ab\) (as words) such that \(pa\) labels a path ending at \(1\) (so \(pa\) represents a left unit) and \(bq\) labels a path beginning at \(1\) (so \(bq\) represents a right unit). Now \(a\) represents both a right unit (because it is a prefix of a relator) and a left unit (because it can be read along a path ending at \(1\)), so \(a\) is a unit. But this means \(p=(pa)a^{\prime}\) in \(M\), so \(p\) represents a left unit, and by Proposition 2.1 can be read along a path ending at \(1\). By a dual argument, \(b\) also represents a unit, so \(q=b^{\prime}(bq)\) in \(M\) and \(q\) is a right unit and can be read along a path starting at \(1\). Thus, the resulting word \(pq\) can be read along a path which visits \(1\).
**Corollary 2.3**.: _A generator in a special inverse monoid presentation represents an element of \(\mathcal{D}_{1}\) if and only if it represents an element of \(\mathcal{R}_{1}\cup\mathcal{L}_{1}\)._
Proof.: If \(x\) is a generator and \(x\mathcal{D}1\) then by Theorem 2.2 it must label some path in \(S\Gamma(1)\) passing through \(1\); since the label is a single letter such a path must have length \(1\), and therefore must start and/or end at \(1\), which means that \(x\) is \(\mathcal{R}\)-related and/or \(\mathcal{L}\)-related to \(1\).
**Remark 2.4**.: While it is a general and well-known fact about inverse monoids (which is implicitly proved in the third paragraph of the proof of Theorem 2.2) that every element of \(\mathcal{D}_{1}\) can be decomposed in the monoid
as the product of a left and a right unit, it is far more unusual and surprising that every _word_ representing such an element can be decomposed _as a word_ into words representing a left and a right unit. This behaviour is very particular to special inverse presentations, and can fail even for non-special presentations of special inverse monoids. For example the non-standard presentation \(\operatorname{Inv}\langle p,q,r\mid pq=1,qp=r\rangle\) for the (special, bisimple) bicyclic monoid contains a generator \(r\) representing an element of \(\mathcal{D}_{1}\setminus(\mathcal{L}_{1}\cup\mathcal{R}_{1})\); if the theorem held then \(r\) would have to label a path through \(1\), which since it is a single letter means a path starting or ending at \(1\), but in this case it would represent an element of \(\mathcal{L}_{1}\) or \(\mathcal{R}_{1}\).
**Proposition 2.5**.: _Any special inverse monoid in which \(\mathcal{R}_{1}=\mathcal{H}_{1}\) (or \(\mathcal{L}_{1}=\mathcal{H}_{1}\)) decomposes as the free product of a group with a free inverse monoid._
Proof.: Suppose \(M\) is a special inverse monoid in which \(\mathcal{R}_{1}=\mathcal{H}_{1}\), the case \(\mathcal{L}_{1}=\mathcal{H}_{1}\) being dual.
We claim that in fact \(\mathcal{J}_{1}=\mathcal{H}_{1}\). Indeed, suppose not for a contradiction. Notice first that there must be a generator in \(\mathcal{J}_{1}\setminus\mathcal{H}_{1}\); indeed if not then every word either contains a generator outside \(\mathcal{J}_{1}\) (in which case it does not represent an element of \(\mathcal{J}_{1}\), since \(M\setminus\mathcal{J}_{1}\) is an ideal) or has all generators in \(\mathcal{H}_{1}\) (in which case it represents an element of \(\mathcal{H}_{1}\), since \(\mathcal{H}_{1}\) is a subgroup). Clearly in order to be in \(\mathcal{J}_{1}\) this generator must appear in a relator, \(r\) say. Write \(r=uxv\) where \(x\) is the leftmost generator not in \(\mathcal{H}_{1}\). Then the factor \(u\) represents a right unit, which since \(\mathcal{R}_{1}=\mathcal{H}_{1}\) means it represents a unit. Now in the monoid we have \(xv=u^{-1}uxv=u^{-1}1=u^{-1}\), so \(xv\) represents a unit. But this means \(x\) is right invertible, so \(x\in\mathcal{R}_{1}=\mathcal{H}_{1}\), giving a contradiction.
Now it is easy to see that the generating set can be split into generators in \(\mathcal{J}_{1}=\mathcal{H}_{1}\), which generate the group \(\mathcal{H}_{1}\), and generators not in \(\mathcal{J}_{1}\) which do not appear in any relation and hence generate a free factor.
As a consequence we obtain a very simple proof of the following well-known fact:
**Corollary 2.6**.: _Every finite special inverse monoid is a finite group._
Proof.: It is well known that finite monoids satisfy \(\mathcal{R}_{1}=\mathcal{H}_{1}\). Indeed, given \(x\in\mathcal{R}_{1}\) write \(xy=1\); then since the monoid is finite we have \(x^{i}=x^{i+j}\) for some \(i,j\geq 1\) which, right multiplying by \(y^{i}\), gives \(x^{j}=1\) for some \(j\geq 1\). Hence \(x\in\mathcal{H}_{1}\). So by the above any finite special inverse monoid is a free product of a group with a free inverse monoid. Since non-trivial free inverse monoids are infinite, the free inverse monoid must be trivial and hence the given monoid is a (necessarily finite) group.
We shall need the following fact, which is established by the argument in the proof of [7, Proposition 4.2].
**Proposition 2.7**.: _In a special inverse monoid the (non-inverse) submonoid \(\mathcal{R}_{1}\) [respectively \(\mathcal{L}_{1}\)] is generated (under multiplication only) by the set of elements represented by the proper prefixes [suffixes] of the defining relators._
We shall also need the following lemma, which is proved by a similar technique to [2, Lemma 3.3].
**Lemma 2.8**.: _Let \(M=\langle A\mid R\rangle\) be a special inverse monoid and suppose \(\Omega\) is a rooted deterministic \(A\)-labelled graph such that some word \(w\in(A^{\pm 1})^{*}\) can be read from the root, and every defining relation of \(M\) can be read around a closed path at every vertex of \(\Omega\). Then there is a morphism from \(S\Gamma(w)\) to \(\Omega\), taking the root to the root._
Proof.: Let \(T\) be the (non-deterministic) infinite graph constructed iteratively by starting with a line labelled \(w\), and adding a cycle labelled by \(r\) at every vertex for every \(r\in R\), but not performing any edge folding. We view each of these cycles as oriented in such a way that the word \(r\) is the label of the path given by reading the cycle clockwise. By a _proper subpath of a cycle of \(T\)_ we mean a path \(\pi\) with initial vertex being the vertex at which the cycle was attached in the construction of \(T\), and such that \(\pi\) is a simple path which traverses the cycle clockwise but does not visit every vertex of the cycle, i.e. the end vertex of \(\pi\) is not equal to the start vertex of \(\pi\). Note that if \(r\in R\) is the label of a cycle in \(T\) then any proper subpath of this cycle is labelled by a proper prefix of the word \(r\). From the construction it follows that for every vertex \(u\) of \(T\) there is a unique sequence \((\pi_{0},\pi_{1},\pi_{2},\ldots,\pi_{k})\) where \(\pi_{0}\) is a simple path starting at the root and traversing part of the line labelled \(w\), each \(\pi_{i}\) for \(i\geq 1\) is a proper subpath of a cycle and \(\pi_{0}\pi_{1}\ldots\pi_{k}\) is a path from the root of \(T\) to \(u\). We define a map from the vertex set of \(T\) to vertices in \(\Omega\) where the vertex \(u\) with corresponding sequence \((\pi_{0},\pi_{1},\ldots,\pi_{k})\) of proper subpaths of cycles maps to the vertex in \(\Omega\) obtained by following the path labelled by \(p_{0}p_{1}\ldots p_{k}\) starting at the root of \(\Omega\), where \(p_{i}\) is the label of the path \(\pi_{i}\) for \(0\leq i\leq k\). This gives a well-defined (by uniqueness of the sequences of proper subpaths of cycles) map from the vertices of \(T\) to the vertices of \(\Omega\). As a consequence of the assumptions that \(\Omega\) is deterministic and that in \(\Omega\) every relator from \(R\) can be read from every vertex, this map extends uniquely to a morphism of graphs which maps edges of \(T\) to the edges of \(\Omega\). Let us use \(\phi\) to denote this graph morphism from \(T\) to \(\Omega\).
It follows from Stephen's procedure that \(S\Gamma(w)\) is obtained by determinising \(T\). We claim that \(\phi\) induces a well-defined graph morphism from \(S\Gamma(w)\) to \(\Omega\). To see this note that two vertices \(v\) and \(u\) of \(T\) are identified in \(S\Gamma(w)\) if and only if there is a path in \(T\) between these vertices labelled by a word that freely reduces to the empty word in the free group. Since \(\phi\) is a morphism it follows that there is a path in \(\Omega\) between \(\phi(v)\) and \(\phi(u)\) labelled by the same word that freely reduces to the empty word in the free group. Since the graph \(\Omega\) is deterministic it follows that \(\phi(v)=\phi(u)\). Hence \(\phi\) induces a well-defined map from the vertices of \(S\Gamma(w)\) to the vertices of \(\Omega\). Two edges \(e\) and \(f\) of \(T\) are identified in \(S\Gamma(w)\) if and only if they have the same label, say \(a\in A\), and their start vertices \(v\) and \(u\) are identified in \(S\Gamma(w)\). But we have already seen that this means that \(\phi(v)=\phi(u)\) which, since \(\Omega\) is deterministic, means that both \(e\) and \(f\) must be mapped to the unique edge in \(\Omega\) with start vertex \(\phi(v)=\phi(u)\) and labelled by \(a\). This shows that \(\phi\) induces a well-defined map from the edges of \(S\Gamma(w)\) to the edges of \(\Omega\).
It remains to verify that \(\phi\) induces a morphism of graphs from \(S\Gamma(w)\) to \(\Omega\). Let \(e\) be an edge in \(S\Gamma(w)\). Choose an edge \(f\) in \(T\) such that \(f\) is equal to \(e\) when \(T\) is determinised, that is, \(f\) is a member of the equivalence class of edges that represented \(e\). Since \(\phi\) is a morphism from \(T\) to \(\Omega\) it follows
that the start vertex of \(f\) in \(T\) maps to the start vertex of \(\phi(f)\) in \(\Omega\), and the end vertex of \(f\) in \(T\) maps to the end vertex of \(\phi(f)\) in \(\Omega\). But by definition \(\phi(e)=\phi(f)\) and \(\phi\) maps the start vertex \(e\) to the same place as the start vertex of \(f\), and similarly for the end vertices. It follows that \(\phi\) induces a morphism of graphs from \(S\Gamma(w)\) to \(\Omega\).
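For readers less familiar with Stephen's procedure, the determinisation ("folding") step used above can be sketched as follows. This is an illustrative Python fragment, not from the paper: it folds only in the forward direction (merging targets of equally-labelled edges with a common source); the inverse direction is handled symmetrically in the full procedure.

```python
# Fold a labelled graph until it is deterministic: whenever two edges share
# a source and a label, identify their target vertices and restart the scan.
# Edges are directed labelled triples (u, a, v); inverse edges are implicit.
def fold(edges):
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        by_source = {}
        for (u, a, v) in edges:
            w = by_source.get((u, a))
            if w is None:
                by_source[(u, a)] = v
            elif w != v:
                # Identify v with w everywhere; each merge drops one vertex,
                # so the loop terminates.
                edges = {(x if x != v else w, b, y if y != v else w)
                         for (x, b, y) in edges}
                changed = True
                break
    return edges

# Two a-edges out of vertex 0 force their endpoints to be identified
# (which endpoint survives depends on iteration order):
print(fold({(0, "a", 1), (0, "a", 2), (1, "b", 3)}))
```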
## 3. \(\mathcal{R}_{1}\)-injectivity
In this section we define a new property called _\(\mathcal{R}_{1}\)-injectivity_ which is weaker than \(E\)-unitarity, establish some of its basic properties and give examples to show that it encompasses many special inverse monoids of interest which are not \(E\)-unitary.
**Definition 3.1**.: We say that an inverse monoid is _\(\mathcal{R}_{1}\)-injective_ if the morphism to the maximal group image is injective when restricted to the \(\mathcal{R}\)-class of the identity.
The following result lists a number of equivalent characterisations; as well as being useful later, we hope they help to convince the reader that the definition is natural.
**Proposition 3.2**.: _Let \(M\) be an inverse monoid generated by a subset \(X\). Then the following are equivalent:_
(i) \(M\) _is_ \(\mathcal{R}_{1}\)_-injective;_

(ii) \(S\Gamma(1)\) _naturally embeds in the Cayley graph of the maximal group image_ \(M/\sigma\)_;_

(iii) _for some_ \(x\in\mathcal{D}_{1}\)_, the morphism from_ \(M\) _to_ \(M/\sigma\) _is injective when restricted to_ \(\mathcal{R}_{x}\)_;_

(iv) _for every_ \(y\in\mathcal{D}_{1}\)_, the morphism from_ \(M\) _to_ \(M/\sigma\) _is injective when restricted to_ \(\mathcal{R}_{y}\)_;_

(v) \(\sigma^{-1}(1)\cap\mathcal{D}_{1}=E(M)\cap\mathcal{D}_{1}\)_;_

(vi) _every path in_ \(S\Gamma(1)\) _labelled by a word representing the identity in_ \(M/\sigma\) _is closed;_

(vii) _the morphism to the maximal group image is injective when restricted to the_ \(\mathcal{L}\)_-class of the identity;_

(viii) _for some_ \(x\in\mathcal{D}_{1}\)_, the morphism from_ \(M\) _to_ \(M/\sigma\) _is injective when restricted to_ \(\mathcal{L}_{x}\)_;_

(ix) _for every_ \(y\in\mathcal{D}_{1}\)_, the morphism from_ \(M\) _to_ \(M/\sigma\) _is injective when restricted to_ \(\mathcal{L}_{y}\)_._
Proof.: The equivalence of (i) and (ii) and the fact that (i) implies (iii) are immediate from the definitions.
Suppose (iii) holds for some \(x\in\mathcal{D}_{1}\) and let \(y\in\mathcal{D}_{1}\). Then \(x\mathcal{D}y\) so there exists \(z\) with \(x\mathcal{R}z\mathcal{L}y\). In particular we may write \(z=qy\) for some \(q\in M\), and by Green's lemma there is a bijection
\[\lambda_{q}:\mathcal{R}_{y}\to\mathcal{R}_{qy}=\mathcal{R}_{z}=\mathcal{R}_{x},\ \ i\mapsto qi.\]
Now if \(a,b\in\mathcal{R}_{y}\) with \(\sigma(a)=\sigma(b)\) then \(\sigma(qa)=\sigma(qb)\) where \(qa,qb\in\mathcal{R}_{x}\), so \(qa=qb\), which since \(\lambda_{q}\) is a bijection means that \(a=b\). Thus, (iv) holds.
Now suppose (iv) holds. If \(e\in E(M)\) then clearly \(\sigma(e)=1\). Now if \(s\in\mathcal{D}_{1}\) with \(\sigma(s)=1\) then \(s\mathcal{R}e\) for some idempotent \(e\), but now \(\sigma(e)=1\)
and by assumption \(\sigma\) is injective on \(\mathcal{R}_{s}\) so we must have \(s=e\) and \(s\) is idempotent. Thus, (v) holds.
Suppose (v) holds, and let \(x,y\in\mathcal{R}_{1}\) be such that \(\sigma(x)=\sigma(y)\). Then we have \(x^{\prime}y\mathcal{R}x^{\prime}\mathcal{L}1\) so that \(x^{\prime}y\mathcal{D}1\). Moreover, \(\sigma(x^{\prime}y)=\sigma(x)^{\prime}\sigma(y)=\sigma(x)^{\prime}\sigma(x)=1\). Thus we may deduce from (v) that \(x^{\prime}y\) is idempotent. Now we have
\[1=(xx^{\prime})(yy^{\prime})=x(x^{\prime}y)y^{\prime}=x(x^{\prime}y)^{2}y^{ \prime}=(xx^{\prime})(yx^{\prime})(yy^{\prime})=yx^{\prime}\]
so that \(yx^{\prime}=1\). Hence \(x=(yx^{\prime})x=y(x^{\prime}x)\), so \(x\leq y\) in the natural partial order; since also \(xx^{\prime}=1\) this gives \(x=xx^{\prime}y=y\). Thus, (i) holds.
If (i) holds then (vi) follows from the fact that paths in a group Cayley graph labelled by words representing the identity are necessarily closed.
Now suppose (vi) holds and let \(x,y\in\mathcal{R}_{1}\) with \(\sigma(x)=\sigma(y)\). Choose words \(w_{x},w_{y}\) representing \(x\) and \(y\) respectively. Then by Proposition 2.1 there are paths in \(S\Gamma(1)\) starting at \(1\) labelled \(w_{x}\) and \(w_{y}\), so there is a path in \(S\Gamma(1)\) from vertex \(x\) to vertex \(y\) labelled \(w_{x}^{-1}w_{y}\). Clearly in \(M/\sigma\) the word \(w_{x}^{-1}w_{y}\) represents \(\sigma(x^{\prime}y)=\sigma(x)^{\prime}\sigma(y)=1\), so by (vi) we have that the path from \(x\) to \(y\) is closed, which must mean \(x=y\). Thus, (i) holds.
The equivalence of (i) and (vii) follows from the facts that they are left/right dual, and that condition (v) and the hypotheses are left/right symmetric. The equivalence of (vii), (viii) and (ix) is then dual to the equivalence of (i), (iii) and (iv).
**Remark 3.3**.: There are also, of course, further equivalent conditions which are left/right duals to conditions (ii) and (vi). We omit these as stating them would first require the notion of a left Schutzenberger graph, for which we have no further need here.
**Remark 3.4**.: The equivalence of conditions (i), (iii) and (iv) in Proposition 3.2 can also be deduced from the fact that Schutzenberger graphs of \(\mathcal{R}\)-classes in the same \(\mathcal{D}\)-class are isomorphic [13, Theorem 3.4(a)] together with the fact that group Cayley graphs are homogeneous.
**Remark 3.5**.: Notwithstanding the equivalent characterisations given by Proposition 3.2, we do not expect \(\mathcal{R}_{1}\)-injectivity to be a useful or interesting property for inverse monoids in general, since it gives information only about the \(\mathcal{D}\)-class of \(1\), and there is no reason to suppose that this has any influence on the wider structure of the monoid. For example, the condition would be trivially satisfied in any inverse monoid whose identity has been adjoined, since there \(\mathcal{D}_{1}=\{1\}\). But in special inverse monoids, where the structure of the whole monoid is more strongly influenced by the right and left units, it seems to be a very powerful property, as we shall see later.
In view of condition (v) in Proposition 3.2 it is natural to ask if \(\mathcal{R}_{1}\)-injectivity is also equivalent to the condition \(\sigma^{-1}(1)\cap\mathcal{R}_{1}=E(M)\cap\mathcal{R}_{1}\), in other words \(\sigma^{-1}(1)\cap\mathcal{R}_{1}=\{1\}\). In fact while this condition is self-evidently necessary for \(\mathcal{R}_{1}\)-injectivity, it is not sufficient even in special inverse monoids, as the following example shows:
**Example 3.6**.: Consider the special inverse monoid
\[\langle a,b,c,d\mid acb=adb=cc^{\prime}=dd^{\prime}=1\rangle.\]
By Proposition 2.7 the right units are generated by the proper prefixes of the relators, which since \(ac=ad\) in the monoid means by the set
\(\{a,ac,c,d\}\). The maximal group image is the group with the same presentation, which is equivalent as a group presentation to \(\langle a,b,c,d\mid d=c,b=(ac)^{-1}\rangle\), in other words, a free group generated by \(a\) and \(c\) (with \(d\) mapping to \(c\) and \(b\) to \((ac)^{-1}\)). Clearly no non-empty positive word over \(\{a,c\}\) represents the identity in this group, so no non-idempotent right unit of the monoid maps to \(1\) in the maximal group image; in other words, we have \(\sigma^{-1}(1)\cap\mathcal{R}_{1}=E(M)\cap\mathcal{R}_{1}\). On the other hand, the right units \(c\) and \(d\) get identified in the maximal group image, but it can be seen (for example by constructing \(S\Gamma(1)\); see Figure 1) that they are distinct in the monoid, so the monoid cannot be \(\mathcal{R}_{1}\)-injective. Note that the element \(c^{\prime}d\) is a non-idempotent in \(\mathcal{D}_{1}\) which maps to \(1\) in the maximal group image, witnessing the failure of condition (v) in Proposition 3.2.
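To make the last point concrete, one can run the computation from the proof that (v) implies (i) in Proposition 3.2 with \(x=c\) and \(y=d\): since \(cc^{\prime}=dd^{\prime}=1\), if \(c^{\prime}d\) were idempotent we would have
\[1=(cc^{\prime})(dd^{\prime})=c(c^{\prime}d)d^{\prime}=c(c^{\prime}d)^{2}d^{\prime}=(cc^{\prime})(dc^{\prime})(dd^{\prime})=dc^{\prime},\]
forcing \(c=d\), which contradicts Figure 1; meanwhile \(\sigma(c^{\prime}d)=\sigma(c)^{-1}\sigma(d)=c^{-1}c=1\) in the free group image.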
One might also ask if \(\mathcal{R}_{1}\)-injectivity is equivalent to the stronger (than condition (v) in Proposition 3.2) condition that \(\sigma^{-1}(1)\cap\mathcal{J}_{1}=E(M)\cap\mathcal{J}_{1}\); we shall see in Example 3.9 below that this is not the case.
**Proposition 3.7**.: _Every \(E\)-unitary inverse monoid is \(\mathcal{R}_{1}\)-injective._
Proof.: It is well-known (indeed, sometimes even taken as a definition) that an inverse monoid (or semigroup) is E-unitary if and only if the pre-image of the identity in the maximal group image is exactly the set of idempotents [6, Proposition 5.9.1], and it follows immediately that every E-unitary monoid satisfies condition (v) of Proposition 3.2.
The converse of Proposition 3.7 is very far from being true, even in very restricted cases. The following examples show that it can fail even for positive, special, finite presentations, and for one-relator special presentations.
Figure 1. An illustration of \(S\Gamma(1)\) for the inverse monoid \(\langle a,b,c,d\mid acb=adb=cc^{\prime}=dd^{\prime}=1\rangle\) considered in Example 3.6. The full graph is obtained by repeatedly gluing this building block freely at every vertex.

**Example 3.8**.: The inverse monoid \(\langle a,b,c,d\mid acb=adb=1\rangle\) is shown in [7, Section 3] not to be \(E\)-unitary, but it is \(\mathcal{R}_{1}\)-injective. Indeed, the right units form the submonoid generated by \(a\) and \(ac\) (since by Proposition 2.7 the right units in any special inverse monoid are generated by the proper prefixes of the relators, and it follows from the presentation that \(ac=ad\) in the monoid). The maximal group image is the group with the same presentation, which is equivalent as a group presentation to \(\langle a,b,c,d\mid c=d,b=(ac)^{-1}\rangle\), in other words, a free group generated by \(a\) and \(c\) (with \(d\) mapping to \(c\) and \(b\) to \((ac)^{-1}\)). Since \(a\) and \(ac\) are easily seen to generate a free monoid inside this free group, distinct right units in the monoid must map to distinct elements of the maximal group image.
**Example 3.9**.: Similarly the one-relator inverse monoid \(\langle x,y\mid xyx^{\prime}=1\rangle\) (a homomorphic image of that in the previous example with \(a\), \(b\), \(c\) and \(d\) mapping respectively to \(x\), \(x^{\prime}\), \(y\) and \(y\)) is \(\mathcal{R}_{1}\)-injective but not \(E\)-unitary. In this case the right units are just \(1\) and the positive powers of \(x\) (because \(xy=x\) in the monoid), and the presentation as a group presentation is equivalent to \(\langle x,y\mid y=1\rangle\), so the maximal group image is an infinite cyclic group generated by \(x\) with \(y\) mapping to \(1\). Again, distinct right units in the monoid map to distinct elements of the maximal group image, and so the monoid is \(\mathcal{R}_{1}\)-injective. On the other hand, \(y\) is not idempotent in the monoid (as can be seen by constructing \(S\Gamma(y)\) and applying the criterion for idempotency given by Proposition 2.1) but maps to \(1\) in the maximal group image, so the monoid is not \(E\)-unitary. Note also that \(y\mathcal{J}1\), so this example shows that \(\mathcal{R}_{1}\)-injectivity is strictly weaker than the condition \(\sigma^{-1}(1)\cap\mathcal{J}_{1}=E(M)\cap\mathcal{J}_{1}\). Finally, notice that in this example the Schutzenberger graph \(S\Gamma(1)\) does not embed into the Cayley graph of the maximal group image as a _full_ subgraph: indeed the Cayley graph contains a loop at \(1\) labelled \(y\), while \(S\Gamma(1)\) does not; see Figure 2.

Figure 2. The Schützenberger graph \(S\Gamma(1)\) of the inverse monoid \(\langle x,y\mid xyx^{\prime}=1\rangle\) considered in Example 3.9.
The converse of Proposition 3.7 **does** hold for cyclically reduced (in particular for positive) one-relator special inverse monoids, for the trivial reason that these are **all** known to be \(E\)-unitary [7]!
While the above examples show that \(\mathcal{R}_{1}\)-injectivity is quite common, there are also many special inverse monoids which are not \(\mathcal{R}_{1}\)-injective. Indeed, the following examples show that even a one-relator special inverse monoid can fail to be \(\mathcal{R}_{1}\)-injective, even if the relator is a reduced (but not cyclically reduced, since this is known to imply \(E\)-unitarity [7]) word.
**Example 3.10**.: The inverse monoid \(\langle x,y\mid yy^{\prime}xyx^{\prime}=1\rangle\) is not \(\mathcal{R}_{1}\)-injective. Indeed, the element \(y\) is right invertible (since it is a prefix of the defining relator), is easily seen to map to the identity in the maximal group image, but is not equal to \(1\) in the monoid. The latter point can be seen in \(S\Gamma(1)\) (see Figure 3) where \(y\) labels both a loop and a non-loop edge. The fact that it labels a loop means it must map to \(1\) in the maximal group image, and the fact it labels a non-loop edge means condition (iv) of Proposition 3.2 fails.
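The fact that \(y\) maps to the identity can also be checked purely group-theoretically: interpreting the presentation as a group presentation, the relator freely reduces, giving
\[yy^{-1}xyx^{-1}=xyx^{-1}=1\quad\Longrightarrow\quad y=1\ \text{in}\ M/\sigma,\]
so the maximal group image is infinite cyclic, generated by \(x\).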
**Example 3.11**.: The inverse monoid \(\langle x,y\mid(xyx)y(x^{\prime}y^{\prime}x^{\prime})=1\rangle\) is not \(\mathcal{R}_{1}\)-injective, although the relator is reduced (but of course not cyclically reduced, which by [7] would imply that the monoid is \(E\)-unitary). Indeed, constructing \(S\Gamma(1)\) (see Figure 4) we see that \(y\) once again labels both loop and non-loop edges, so again \(y\) represents the identity in the maximal group image and condition (iv) of Proposition 3.2 fails. We note that in this example, unlike the previous one, there are infinitely many non-loop \(y\)-edges, as well as infinitely many loop \(y\)-edges.
## 4. Schutzenberger Graphs and Blocks
Throughout this section \(M\) will be an \(\mathcal{R}_{1}\)-injective special inverse monoid generated by a (not necessarily finite) set \(X\). Our aim is to show that under this assumption, the different Schutzenberger graphs of \(M\) admit a decomposition into (not necessarily disjoint) copies of \(S\Gamma(1)\). The existence of such a decomposition stems in large part from the following elementary result, which extends to the \(\mathcal{R}_{1}\)-injective case an observation made in the \(E\)-unitary case by Stephen [13, Theorem 3.8].
**Lemma 4.1**.: _Let \(M\) be an \(\mathcal{R}_{1}\)-injective inverse monoid generated by a set \(X\), and \(w\) a word. Then for every vertex \(v\) of \(S\Gamma(w)\) there is an injective morphism from \(S\Gamma(1)\) to \(S\Gamma(w)\) which takes \(1\) to \(v\)._
Proof.: Every word readable from \(1\) in \(S\Gamma(1)\) is right invertible and therefore readable from \(v\) in \(S\Gamma(w)\). Moreover, if two words \(x\) and \(y\) reach the same vertex when read from \(1\) in \(S\Gamma(1)\) then they represent the same element, and therefore also reach the same place when read from \(v\) in \(S\Gamma(w)\). Thus, there is a well-defined morphism \(f:S\Gamma(1)\to S\Gamma(w)\) given by setting \(f(u)\) to be the unique vertex at the end of a path starting at \(v\) and labelled \(p\), where \(p\) is any label of a path from \(1\) to \(u\).
For injectivity, suppose \(r,s\in\mathcal{R}_{1}\) are such that \(f(r)=f(s)\). This means that \(vr=vs\) in the monoid, so in particular \((vr)\,\sigma\,(vs)\), which, since \(\sigma\) is a group congruence (so that \(\sigma(v)\) may be cancelled in \(M/\sigma\)), implies \(r\,\sigma\,s\). But \(M\) is \(\mathcal{R}_{1}\)-injective, so this means \(r=s\).
We shall now build upon Lemma 4.1 to establish a block decomposition for the lower Schutzenberger graphs.
**Definition 4.2**.: A _preblock_ of an \(X\)-labelled directed graph \(\Gamma\) is a subgraph3 of \(\Gamma\) which is the image of an injective morphism from \(S\Gamma(1)\). A _root_ of a preblock is a vertex which is the image of \(1\in S\Gamma(1)\) under such a morphism. A _block_ of \(\Gamma\) is a preblock which is maximal under inclusion.
Footnote 3: By a subgraph we mean a subset of vertices and a subset of edges between those vertices; there is no assumption that it is a “full” or “induced” subgraph containing all edges between the given vertex set, and indeed in general a preblock will not be a full subgraph.
**Remark 4.3**.: A preblock (or block) does not typically have a _unique_ root; indeed it is easy to see that each preblock will have one root for every automorphism of \(S\Gamma(1)\). By [13, Theorem 3.5] this means the roots of each (pre)block are in (non-canonical) bijection with the units of the monoid.
**Remark 4.4**.: By inclusion of preblocks, we mean of course that the vertices **and edges** of one preblock are contained in those of the other. _A priori_ it might therefore seem possible for the vertex set of one block to be a proper subset of the vertex set of another block, or even for two distinct blocks to have exactly the same vertex set, but in fact neither of these things can happen provided the graph \(\Gamma\) is deterministic. Indeed suppose a block \(C\) contains all the vertices of another block \(B\). Choose a root vertex \(r\) for \(B\). Now because \(C\) is isomorphic to \(S\Gamma(1)\), Lemma 4.1 tells us that \(C\) contains a subgraph isomorphic to \(S\Gamma(1)\) rooted at \(r\). Since \(\Gamma\) is deterministic this means \(C\) must contain all the edges of \(B\).
**Lemma 4.5**.: _Let \(M\) be an \(\mathcal{R}_{1}\)-injective special inverse monoid generated by a set \(X\), and \(w\) a word over \(X^{\pm 1}\). Then \(S\Gamma(w)\) has finitely many blocks. Every preblock of \(S\Gamma(w)\) lies in a block. Every vertex of \(S\Gamma(w)\) lies in at least one block. All but finitely many edges of \(S\Gamma(w)\) lie in at least one block. Any edge which does not lie in a block is a cut edge, and is traversed by the path from the root labelled \(w\). Every block has a root which is the vertex corresponding to some prefix of \(w\)._
Proof.: Let \(\Lambda\) be the directed, labelled graph obtained by starting with the Munn tree of \(w\) and gluing a copy of \(S\Gamma(1)\) to every vertex. It follows from work of Stephen [13] that \(S\Gamma(w)\) may be obtained from \(\Lambda\) by folding (or _determination_, in Stephen's terminology).
Clearly the copies of \(S\Gamma(1)\) glued to the vertices of the Munn tree are a finite set of preblocks of \(\Lambda\), containing all vertices and all but finitely many edges (the missing ones being those of the Munn tree). It follows from Lemma 4.1 that the determination process applied to \(\Lambda\) never identifies two vertices within the same preblock; indeed suppose it identified distinct vertices \(v_{1}\) and \(v_{2}\) within a preblock with root \(v\). For \(i=1,2\) let \(w_{i}\) be a word labelling a path from \(v\) to \(v_{i}\) within the preblock. Then starting at the image of \(v\) in \(S\Gamma(w)\), there will be paths labelled \(w_{1}\) and \(w_{2}\) leading to the same place, which contradicts the fact that there is an embedded copy of \(S\Gamma(1)\) rooted at the image of \(v\). Hence, the image under determination of a preblock in \(\Lambda\) is always a preblock in \(S\Gamma(w)\). Let \(C\) be the (finite) set of these preblocks in \(S\Gamma(w)\), and \(B\) the (finite) subset of preblocks in \(C\) which are maximal in \(C\) under containment.
We claim that \(B\) is the (finite) set of all blocks of \(S\Gamma(w)\). Clearly, every vertex in \(S\Gamma(w)\) lies in some preblock in \(B\). Now any other preblock of \(S\Gamma(w)\) is rooted at some vertex in \(S\Gamma(w)\), and hence at a vertex of some preblock in \(B\). By Lemma 4.1 again, \(S\Gamma(1)\) contains a copy of \(S\Gamma(1)\) at every vertex, so it follows that every preblock is contained in a preblock in \(B\). Since the preblocks in \(B\) are defined to be maximal in \(C\), and therefore are not contained in each other, it follows that they are maximal among all preblocks, and therefore comprise all blocks of \(S\Gamma(w)\).
Since each block in \(B\) is the image of a preblock in \(\Lambda\) rooted at a Munn tree vertex, and every vertex of the Munn tree can be reached from the identity by reading some prefix of \(w\), it follows that each block has a root which can be reached by reading some prefix of \(w\) from the root of \(S\Gamma(w)\), and which therefore corresponds to a prefix of \(w\).
It remains to show that those edges in \(S\Gamma(w)\) which do not lie in a block are cut edges, and are traversed by a reading of \(w\) from the root. Since every preblock lies in a block, it suffices to show that those edges which do not lie in a preblock are cut edges. As a precursor to this, we claim that if an edge is a cut edge in some graph then its image after any single determination step (and hence, by induction, after finitely many determination steps) either remains a cut edge or is also the image of a non-cut edge. Indeed, suppose that \(e\) is a cut edge, and consider the possible effects of a single determination step, that is, of folding two edges together. If neither edge is \(e\) then, since the two edges must have a vertex in common, they both lie in the subgraph at the same end of \(e\), and it is clear that folding them cannot create a connection to the subgraph at the other end of \(e\), so \(e\) remains a cut edge. If one is \(e\) and the other is another cut edge, then the resulting folded edge remains the only connection between the two end-points of \(e\), and hence is still a cut edge. So the only way \(e\) can cease to be a cut edge is by identification with a non-cut edge, as required.
Now suppose for a contradiction that some edge of \(S\Gamma(w)\), \(e\) say, does not lie in a preblock and is not a cut edge. Let \(\pi\) be a path connecting the endpoints of \(e\) without traversing \(e\). In order for \(e\) not to lie in a preblock, every preimage of \(e\) in \(\Lambda\) must be a Munn tree edge. It follows in particular that all such pre-images are traversed by a reading of \(w\) from the root in \(\Lambda\), and hence that \(e\) is traversed by a reading of \(w\) from the root in \(S\Gamma(w)\). It
also follows that every preimage of \(e\) in \(\Lambda\) is a cut edge. Let \(f\) be one such preimage of \(e\) in \(\Lambda\). Now the path \(\pi\) must be created during folding after some finite number of determination steps, so there is some finite sequence of determination steps after which \(f\) ceases to be a cut edge, having only been identified with other cut edges. But this contradicts the previous paragraph.
**Remark 4.6**.: Lemma 4.5 does **not** say that the blocks are anywhere close to being disjoint! We shall see some cases in which they are disjoint, or close to disjoint in the sense that the intersections are simple to describe, but in general the intersections can be very complicated. It seems that problems concerning the "lower" regions (\(\mathcal{D}\)-classes other than \(\mathcal{D}_{1}\)) of an \(\mathcal{R}_{1}\)-injective special inverse monoid often reduce to understanding the intersections of blocks in its Schutzenberger graphs.
**Remark 4.7**.: Under the stronger assumption of \(E\)-unitarity, ideas similar to those in Lemma 4.5 were first introduced (although not made so explicit) by Stephen [13, 14]. In [9, Section 2.2.5] and [5] it is implicitly suggested (with an incorrect attribution to Stephen [13, 14], who does not actually make such a claim) that the existence of such a decomposition suffices to reduce the word problem in the monoid to the problem of deciding whether a given word represents the identity. This line of reasoning is flawed, since to understand the lower Schutzenberger graphs well enough to solve the word problem requires understanding not only the internal structure of the blocks (in other words, of \(S\Gamma(1)\)) but also the _intersections_ of the blocks, which as discussed above may be very complex. One of the main results of [5], stating that the word problem is decidable for special inverse monoids with a single _sparse_ (see [5] for the definition) relator, relies upon this claim and therefore cannot be established by the argument given. Rather the paper establishes only that in such monoids it is decidable whether a given word represents the identity. The decidability of the word problem for these monoids remains open, although we conjecture that it is in fact decidable. It may be that the methods in [5] can be further developed to directly solve the whole of the word problem. Alternatively, it may be possible to establish (perhaps using methods from [5]) that these monoids have Schutzenberger graphs quasi-isometric to trees, in which case they have solvable word problem by subsequent work of the first author, Silva and Szakacs [3].
**Example 4.8**.: Consider the special inverse monoid
\[\langle a,b,c\mid c^{\prime}abc=ab^{\prime}a^{\prime}aba^{\prime}=1\rangle\ =\ \langle a,b,c\mid ab^{\prime}a^{\prime}aba^{\prime}\ c^{\prime}abc=1\rangle.\]
The equivalence of the two presentations is because the second relator in the left-hand presentation is an idempotent in the free inverse monoid, so by standard results from the theory of inverse monoid presentations (see for example [1, Lemma 3.3]) it can be combined into the other relator.
Consider \(S\Gamma(1)\) and \(S\Gamma(ac)\) as constructed by Stephen's procedure (see Figure 5). Note that the former has a simple path starting at \(1\) labelled \(ab^{\prime}a^{\prime}\). On the other hand, consider the vertex (call it \(v\)) at the start of the \(a\)-edge in the Munn tree in \(S\Gamma(ac)\). The path labelled \(ab^{\prime}a^{\prime}\) here folds up so that it ends in the same place as its prefix labelled \(a\); in other words in this monoid
\(acc^{\prime}a^{\prime}\ ab^{\prime}a^{\prime}=acc^{\prime}a^{\prime}\ a\) even though \(ab^{\prime}a^{\prime}\neq a\). This is significant because there is no path coming into \(v\) labelled by a word representing an element of \(\mathcal{R}_{1}\), which means that vertex \(v\) is not in a block rooted at some other vertex, and it cannot be in a block rooted at itself because the presence of the above relation means there is not an embedded copy of \(S\Gamma(1)\) rooted at \(v\). Hence, \(S\Gamma(ac)\) does not have a block decomposition in the above sense.

Figure 5. An approximation during Stephen’s procedure of \(S\Gamma(ac)\) for the monoid considered in Example 4.8.
## 5. Automorphisms and Subgroups
In this section we apply the block decomposition developed in Section 4 to study automorphisms of Schutzenberger graphs, and hence group \(\mathcal{H}\)-classes of \(\mathcal{R}_{1}\)-injective special inverse monoids.
The following statement is key to the way in which an \(\mathcal{R}_{1}\)-injective special inverse monoid is governed by \(\mathcal{R}_{1}\).
**Theorem 5.1**.: _Let \(M\) be an \(\mathcal{R}_{1}\)-injective special inverse monoid. Then every subgroup of \(M\) has a finite index subgroup which embeds in the group of units._
Proof.: Clearly it suffices to show that every _maximal_ subgroup of \(M\) has a finite index subgroup which embeds in the group of units. Let \(G\) be a maximal subgroup of \(M\), and \(w\) be a word representing the identity element of \(G\). Then by [13, Theorem 3.5], \(G\) is the group of labelled digraph automorphisms of the Schutzenberger graph \(S\Gamma(w)\). From here on we consider \(G\) acting on \(S\Gamma(w)\).
Notice that, because the blocks were defined from the isomorphism type of \(S\Gamma(w)\), any automorphism of \(S\Gamma(w)\) must map blocks to blocks, in the strong sense that its restriction to (both vertices and edges of) any block is an isomorphism to another block. Thus, the action of \(G\) on \(S\Gamma(w)\) induces an action of \(G\) on the (finite, by Lemma 4.5) set of blocks of \(S\Gamma(w)\).
Now choose any block \(X\) of \(S\Gamma(w)\), and fix a root \(r\) for \(X\). Let \(K\) be the stabiliser of the point \(X\) under the action of \(G\) on the set of blocks, in other words, the subgroup of all automorphisms in \(G\) which map \(X\) to itself. Then \(K\) is a finite index subgroup of \(G\). Since \(K\) maps \(X\) to itself, the action of \(K\) on \(S\Gamma(w)\) restricts to an action by automorphisms on \(X\). Since \(S\Gamma(w)\) is a connected, labelled and deterministic graph, its non-trivial automorphisms are fixed-point free, so any element of \(K\) which restricts to the identity on \(X\) must itself be trivial; that is, the restricted action is faithful. It follows that \(K\) acts faithfully by automorphisms on \(X\), which is isomorphic to \(S\Gamma(1)\). Hence, \(K\) embeds in the automorphism group of \(S\Gamma(1)\), which is isomorphic to the group of units by another application of [13, Theorem 3.5].
One might ask whether the stronger statement is true that every subgroup actually embeds in the group of units. The following example shows that it is not:
**Example 5.2**.: Consider the special inverse monoid
\[\langle x,p,y\mid xpy=xp^{\prime}y=1\rangle.\]
Consider the Schutzenberger graphs \(S\Gamma(1)\) and \(S\Gamma(xy)\), which are easy to construct. It is easy to see that the former has no automorphisms (so the group of units is trivial), while the latter has an automorphism exchanging the vertices at the start and end of the path \(xy\) coming from the Munn tree (so there is a subgroup \(\mathbb{Z}_{2}\), which in fact is easily seen to be a maximal subgroup, in the \(\mathcal{D}\)-class of \(xy\)).
The maximal group image of this monoid (in other words, the group given by interpreting the monoid presentation as a group presentation) is easily seen to be the free product of an infinite cyclic group generated by \(x\) and an order-2 cyclic group generated by \(p\); the image of \(xy\) is the order-2 element \(xpx^{-1}\). It is clear from considering \(S\Gamma(1)\) that it embeds into the Cayley graph of this group, so that the monoid is \(\mathcal{R}_{1}\)-injective by Proposition 3.2.
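For the reader's convenience, here is the computation behind this identification, using only the two defining relations interpreted in the group:
\[xpy=1=xp^{-1}y\quad\Longrightarrow\quad p=x^{-1}y^{-1}=p^{-1},\]
so \(p^{2}=1\) and \(y=p^{-1}x^{-1}\). Eliminating \(y\) leaves
\[\langle x,p\mid p^{2}=1\rangle\cong\mathbb{Z}*\mathbb{Z}_{2},\qquad xy=xp^{-1}x^{-1}=xpx^{-1}.\]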
On the other hand, it is **not** E-unitary, since one can verify by constructing \(S\Gamma(p^{2})\) that the element \(p^{2}\) is not idempotent.
In Section 6 below, we will construct further examples where the group of units is trivial but arbitrary finite groups arise as maximal subgroups. The example above and those in the following section are \(\mathcal{R}_{1}\)-injective but not \(E\)-unitary. We have not been able to construct an \(E\)-unitary special inverse monoid where the maximal subgroups do not embed in the group of units, nor to prove that such a monoid cannot exist. Similarly, we do not know if there are stronger restrictions on the behaviour of subgroups in the \(1\)-relator case. In these cases we do not even know if a trivial group of units implies that all subgroups are trivial.
One case in which subgroups **do** have to embed in the group of units is for \(\mathcal{D}\)-classes of an \(\mathcal{R}_{1}\)-injective special inverse monoid such that the corresponding block decomposition as given by Lemma 4.5 is disjoint (see Remark 4.6 above):
**Theorem 5.3**.: _Let \(M\) be an \(\mathcal{R}_{1}\)-injective special inverse monoid, \(w\) a word and suppose the blocks of \(S\Gamma(w)\) have pairwise disjoint vertex sets. Then the maximal subgroups in the \(\mathcal{D}\)-class of \(w\) are isomorphic to the same subgroup of the group of units, and if \(w\notin\mathcal{D}_{1}\) this subgroup is finite._
Proof.: By [13, Theorem 3.5], the maximal subgroups are all isomorphic to the automorphism group of \(S\Gamma(w)\), which we will denote \(G\).
Consider the (finite, typically non-deterministic) labelled digraph \(\Omega\) with vertices the blocks of \(S\Gamma(w)\), and an edge from \(X\) to \(Y\) labelled \(x\) if and only if \(S\Gamma(w)\) has an edge from a vertex of \(X\) to a vertex of \(Y\) labelled \(x\). Since \(S\Gamma(w)\) is connected and every vertex lies in a block, \(\Omega\) is connected. Since
the blocks of \(S\Gamma(w)\) are disjoint, edges between distinct blocks cannot lie within blocks, so by Lemma 4.5 they are cut-edges. It follows that the edges of \(\Omega\) are all cut-edges: in other words, the underlying undirected graph of \(\Omega\) is a tree.
Clearly since automorphisms map blocks to blocks, the action of \(G\) by automorphisms on \(S\Gamma(w)\) induces an action by labelled digraph automorphisms on \(\Omega\). Since \(\Omega\) is a finite digraph whose underlying graph is a tree, there is a vertex of \(\Omega\) which is fixed by \(G\) (because by a result of Halin [4, Lemma 2] an automorphism of a finite undirected tree fixes either an edge or a vertex) and hence a block \(X\) of \(S\Gamma(w)\) which is fixed setwise by the action of \(G\). Thus, \(G\) can be restricted to act by directed graph automorphisms on the block \(X\), and since non-trivial automorphisms of \(S\Gamma(w)\) are fixed-point free, this action is faithful. Since \(X\) is isomorphic to \(S\Gamma(1)\) this means \(G\) acts faithfully on \(S\Gamma(1)\), so embeds in the automorphism group of \(S\Gamma(1)\) which is exactly the group of units.
Now consider the set \(K\) of edges in \(S\Gamma(w)\) which have exactly one end in \(X\). Since the blocks are disjoint, these edges cannot lie in any block, so by Lemma 4.5 they are cut edges and there are only finitely many of them. If \(K\) is empty then \(X\) is not connected to any other block, which since \(S\Gamma(w)\) is connected means \(S\Gamma(w)\cong X\cong S\Gamma(1)\), so by [13, Theorem 3.4(a)], \(w\mathcal{D}1\).
If \(K\) is non-empty then the action of \(G\) on \(S\Gamma(w)\) must clearly fix \(K\) setwise (since it fixes \(X\)). Thus, we may restrict the action of \(G\) on \(S\Gamma(w)\) to an action (by permutations) on the set \(K\) of edges. But by the fixed-point-free property again, an automorphism which fixes any edge is trivial, so the action of \(G\) on the finite set \(K\) is faithful, so \(G\) is finite.
Recall that the block decomposition given by Lemma 4.5 leaves open the possibility that finitely many edges do not lie in any block. One might ask if this can really happen; the following example shows that it can.
**Example 5.4**.: Consider the monoid
\[\langle x,p,q,y\mid xpy=xp^{\prime}y=1\rangle.\]
Note that it is the free product of the monoid in Example 5.2 with a free inverse monoid of rank \(1\), and is easily seen to be \(\mathcal{R}_{1}\)-injective with trivial group of units by a similar argument. Consider the Schutzenberger graph \(S\Gamma(qq^{\prime}xyqq^{\prime})\). This has four blocks, rooted at either end of the two edges labelled \(q\). The two edges labelled \(q\) are cut edges not contained in any block. There is clearly an automorphism swapping the two \(q\) edges, so the automorphism group of \(S\Gamma(qq^{\prime}xyqq^{\prime})\), and hence the maximal subgroup in the \(\mathcal{D}\)-class of \(qq^{\prime}xyqq^{\prime}\), is isomorphic to \(\mathbb{Z}_{2}\).
However, in the case of a \(\mathcal{D}\)-class whose Schutzenberger graph has edges not lying in any block, the block decomposition suffices to prove a very strong statement about the corresponding maximal subgroups: they are necessarily finite (even if the group of units of the monoid is infinite, and even if the monoid is not finitely presented).
**Theorem 5.5**.: _Let \(M\) be an \(\mathcal{R}_{1}\)-injective special inverse monoid generated by \(X\). If a word \(w\) is such that \(S\Gamma(w)\) has an edge not contained in any block, then the maximal subgroups in the \(\mathcal{D}\)-class of \(w\) are finite._
Proof.: By Lemma 4.5 the graph \(S\Gamma(w)\) has only finitely many edges not contained in blocks. Since the block decomposition is automorphism invariant, the automorphisms of \(S\Gamma(w)\) must permute these edges. Since the action of the automorphism group is fixed-point free it must act faithfully on this finite set, and therefore must be finite.
## 6. Subgroups Differing from the Group of Units
Our aim in this section is to construct examples of \(\mathcal{R}_{1}\)-injective special inverse monoids where the group of units is trivial but an arbitrary finite group arises as a maximal subgroup.
**Theorem 6.1**.: _For every finite group \(G\), there exists an \(\mathcal{R}_{1}\)-injective special inverse monoid with trivial group of units and a maximal subgroup isomorphic to \(G\)._
We proceed to prove this theorem with a construction and a number of lemmas. We begin with an elementary lemma about deterministic inverse graphs, which is well-known to experts.
**Lemma 6.2**.: _A morphism of connected, deterministic inverse graphs is uniquely determined by where it takes any single vertex. If two connected, deterministic inverse graphs with distinguished root vertices admit root-preserving morphisms between them in both directions, then the morphisms are isomorphisms._
Proof.: Suppose \(f:X\to Y\) is a morphism of connected, deterministic inverse graphs, and suppose the image \(f(v)\) of some vertex \(v\) is given. For each other vertex \(u\in X\), because \(X\) is connected we may choose a path from \(v\) to \(u\), say with label \(w\). But now \(f(u)\) must be at the end of a path labelled \(w\) starting at \(f(v)\); because \(Y\) is deterministic there can be only one such path, and so \(f(u)\) is determined by \(f(v)\).
Now if there are root-preserving morphisms \(f:X\to Y\) and \(g:Y\to X\) then the compositions \(f\circ g:X\to X\) and \(g\circ f:Y\to Y\) are morphisms which agree with the identity maps on \(X\) and \(Y\) on their root vertices; since the identity maps are also morphisms, by the previous paragraph \(f\circ g\) and \(g\circ f\) must be equal to the respective identity maps, so \(f\) and \(g\) are isomorphisms.
Now let \(G\) be a finite group. For simplicity we consider a presentation for \(G\) with very large sets of generators and relations.
Specifically, consider the finite special monoid presentation \(\langle A\mid R\rangle\) for \(G\) where \(A=G\setminus\{1\}\) and \(R\) consists of all 2-letter and 3-letter words equal to 1 in \(G\). For each generator \(a\in A\) introduce new letters \(x_{a}\) and \(y_{a}\) and their formal inverses, and define \(\overline{a}=x_{a}y_{a}\). For \(w\in(A\cup A^{-1})^{*}\) define \(\overline{w}=\overline{w_{1}}\dots\overline{w_{|w|}}\). Let \(X=\{x_{a}\mid a\in A\}\) and \(Y=\{y_{a}\mid a\in A\}\).
For each relator \(r=r_{1}\dots r_{|r|}\in R\), and each \(1\leq k\leq|r|\) introduce a new letter \(\delta_{r,k}\), let \(\Delta\) be the alphabet of these letters, and define a word
\[s_{r,k}=x_{r_{k}}\delta_{r,k}(\delta_{r,k-1})^{-1}y_{r_{k-1}}\in X\Delta \Delta^{-1}Y\]
where indices are interpreted modulo \(|r|\), so that \(r_{0}=r_{|r|}\) and \(\delta_{r,0}=\delta_{r,|r|}\). Now let \(M\) be the special inverse monoid generated by the set
\[X\cup Y\cup\Delta\]
subject to the set of four-letter relations
\[\{s_{r,k}\mid r\in R,1\leq k\leq|r|\}.\]
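To illustrate the shape of these relators: for a relator \(r=r_{1}r_{2}r_{3}\in R\) of length \(3\), the convention \(\delta_{r,0}=\delta_{r,3}\) gives the three defining relators
\[s_{r,1}=x_{r_{1}}\delta_{r,1}(\delta_{r,3})^{-1}y_{r_{3}},\qquad s_{r,2}=x_{r_{2}}\delta_{r,2}(\delta_{r,1})^{-1}y_{r_{1}},\qquad s_{r,3}=x_{r_{3}}\delta_{r,3}(\delta_{r,2})^{-1}y_{r_{2}}.\]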
We shall study the Schutzenberger graphs of the monoid \(M\), and other graphs with edges labelled by the generators of \(M\). In these graphs, we shall use the terms \(x\)_-edges_, \(y\)_-edges_ and \(\delta\)_-edges_ to mean edges labelled respectively by some \(x_{a}\), by some \(y_{a}\) and by some \(\delta_{r,k}\).
**Lemma 6.3**.: _The graph \(S\Gamma(1)\) has the following properties_
(i) _The root is the unique vertex with the property that all edges incident with it are either_ \(x\)_-edges going out or_ \(y\)_-edges coming in._

(ii) _There are no vertices having both an_ \(x\)_-edge coming in and a_ \(y\)_-edge going out._
_In particular \(S\Gamma(1)\) has trivial automorphism group, so \(M\) has trivial group of units._
Proof.: We consider the construction of \(S\Gamma(1)\) by Stephen's procedure.
Consider first a union of cycles labelled by the defining relations of \(M\), amalgamated at the root, without determinising. The vertices can be divided into four types:

(a) the root, which has \(x\)-edges going out and \(y\)-edges coming in;

(b) vertices with an \(x\)-edge coming in and a \(\delta\)-edge going out;

(c) vertices with two \(\delta\)-edges coming in; note that these two edges will have labels of the form \(\delta_{r,k}\) and \(\delta_{r,k-1}\) (where as usual \(k-1\) is interpreted modulo \(|r|\)) and since \(R\) contains no relations of length \(1\) these labels are distinct;

(d) vertices with a \(\delta\)-edge going out and a \(y\)-edge going out.
Determinising \(x\)-edges in this graph will identify various vertices of type (b), and determinising \(y\)-edges will identify various vertices of type (d). Let \(\Gamma^{\prime}\) be the graph resulting from this determinisation. We claim that \(\Gamma^{\prime}\) is now deterministic. Indeed, the only remaining thing which could fail is determinism of the \(\delta\)-edges. However, it follows from the definition of the relations that each possible \(\delta\)-edge label appears only twice, once with its start at a vertex of type (b), and once with its start at a vertex of type (d). Hence, two \(\delta\)-edges with the same label cannot have the same start vertex. Moreover, each of the \(\delta\)-edges ends at a vertex of type (c), and we have seen that two \(\delta\)-edges meeting at a type (c) vertex have differing labels, so two \(\delta\)-edges with the same label cannot have the same end vertex.
Now notice that vertices of types (b), (c) and (d) in \(\Gamma^{\prime}\) have no \(x\)-edges coming out, and no \(y\)-edges coming in, while the root of \(\Gamma^{\prime}\) has only \(x\)-edges coming out and \(y\)-edges coming in. It follows that if we attach a new copy of \(\Gamma^{\prime}\) at each non-root vertex of \(\Gamma^{\prime}\), the resulting graph remains deterministic.
Thus, by Stephen's procedure, \(S\Gamma(1)\) can be constructed iteratively as a tree of copies of \(\Gamma^{\prime}\). It is clear that the original root remains the only vertex which has only \(x\)-edges going out and \(y\)-edges coming in (since every other vertex is constructed as type (b), (c) or (d) and therefore has other edges incident with it). Moreover, the only way a vertex can have an \(x\)-edge coming in is if it is constructed as a type (b) vertex, while the only way it
can have a \(y\)-edge going out is if it is constructed as a type (d) vertex. Thus, no vertex has both an \(x\)-edge coming in and a \(y\)-edge going out.
Finally, it follows from (i) that automorphisms of \(S\Gamma(1)\) must fix the root, and since automorphisms of deterministic labelled graphs are fixed-point free, this means that the automorphism group is trivial.
We now define a graph \(\Omega\) which has:
* for each element \(g\in G\), a vertex \(v_{g}\);
* for each element \(g\in G\) and generator \(a\in A\), a vertex \(u_{g,a}\), an edge from \(v_{g}\) to \(u_{g,a}\) labelled \(x_{a}\) and an edge from \(u_{g,a}\) to \(v_{ga}\) labelled \(y_{a}\);
* for each element \(g\in G\) and relation \(r=r_{1}\dots r_{|r|}\in R\), a new vertex \(t_{g,r}\), and for each \(1\leq k\leq|r|\) an edge from \(u_{gr_{1}\dots r_{k-1},r_{k}}\) to \(t_{g,r}\) labelled \(\delta_{r,k}\); and
* at each vertex \(u_{g,a}\) and \(t_{g,r}\) an attached copy of \(S\Gamma(1)\).
We shall refer to the vertex \(v_{1}\) as the _root_ of \(\Omega\). We shall show, eventually, that the graph \(\Omega\) is isomorphic to a Schutzenberger graph of \(M\).
**Lemma 6.4**.: _The graph \(\Omega\) is deterministic._
Proof.: By construction,
* the edges explicitly created are readily verified to have the required property;
* the attached copies of \(S\Gamma(1)\) are by definition internally deterministic; and
* the vertices at which we attach copies of \(S\Gamma(1)\) do not have explicitly constructed \(x\)-edges going out or \(y\)-edges coming in, so by Lemma 6.3 the attachment of \(S\Gamma(1)\) at these vertices does not cause any non-determinism.
**Lemma 6.5**.: _The automorphism group of \(\Omega\) is isomorphic to \(G\)._
Proof.: It is immediate from symmetry of the definition that there is a faithful action of \(G\) where \(h\) acts by taking \(v_{g}\) to \(v_{hg}\), taking \(u_{g,a}\) to \(u_{hg,a}\), taking \(t_{g,r}\) to \(t_{hg,r}\), and extending in the obvious way to permute the attached copies of \(S\Gamma(1)\). What remains is to show that there are no more automorphisms of \(\Omega\). Suppose, then, that \(f:\Omega\to\Omega\) is an automorphism.
Fix some \(a\in A\). Notice that the vertices of the form \(u_{g,a}\) are the only vertices with an \(x_{a}\)-edge coming in and a \(y_{a}\)-edge going out. Indeed, by construction none of the vertices of the form \(v_{g}\) or \(t_{g,r}\) have this property, and by Lemma 6.3 the vertices in the attached copies of \(S\Gamma(1)\) do not have this property either. Hence, the set of vertices \(\{u_{g,a}\mid g\in G\}\) must be preserved by \(f\). Let \(h\) be such that \(f(u_{1,a})=u_{h,a}\). Now \(f\) agrees with the action of \(h\) on the vertex \(u_{1,a}\), so by Lemma 6.2 it must act the same as \(h\) on the whole graph.
**Lemma 6.6**.: _Every defining relation \(s_{r,k}\) can be read around a closed path at every vertex in \(\Omega\)._
Proof.: Every vertex except those of the form \(v_{g}\) by definition lies at the root of an attached copy of \(S\Gamma(1)\), and hence certainly has the claimed property.
For those of the form \(v_{g}\), we can let \(s=r_{1}\dots r_{k-1}\) where \(r=r_{1}\dots r_{|r|}\) and now we have a closed path
\[v_{g}=v_{(gs^{-1})r_{1}\dots r_{k-1}}\xrightarrow{x_{r_{k}}}u_{g,r_{k}}\xrightarrow{\delta_{r,k}}t_{gs^{-1},r}\xrightarrow{(\delta_{r,k-1})^{-1}}u_{gs^{-1}r_{1}\dots r_{k-2},r_{k-1}}\xrightarrow{y_{r_{k-1}}}v_{g}.\]
Now let \(W\) be the set of all words over \(A\) of length \(4\) or less, and define
\[w=\prod_{x\in W}(\overline{x})(\overline{x})^{-1}\in M\]
noting that the order of the product is unimportant because the factors are idempotent, and therefore commute. Our aim is to show that the Schutzenberger graph \(S\Gamma(w)\) is isomorphic to \(\Omega\). We shall do this by showing that there are morphisms (preserving the root) between these graphs in both directions.
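As a small illustration: if \(G=\mathbb{Z}_{2}=\{1,a\}\), then \(A=\{a\}\) and \(R=\{aa\}\) (the word \(aaa\) equals \(a\neq 1\) in \(G\), so no \(3\)-letter relators arise), the non-empty words in \(W\) are \(a,aa,aaa,aaaa\), and
\[w=\overline{a}(\overline{a})^{-1}\;\overline{aa}(\overline{aa})^{-1}\;\overline{aaa}(\overline{aaa})^{-1}\;\overline{aaaa}(\overline{aaaa})^{-1},\qquad\text{where }\overline{a}=x_{a}y_{a}.\]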
**Lemma 6.7**.: _There is a morphism of labelled directed graphs from \(S\Gamma(w)\) to \(\Omega\), mapping the root to the root._
Proof.: We know that
1. \(\Omega\) is deterministic;
2. \(w\) can be read from the root in \(\Omega\) (since every word of the form \(\overline{x}\) for \(x\in A^{*}\) can be); and
3. every defining relation can be read around a closed path at every vertex in \(\Omega\).
The result now follows from Lemma 2.8.
We now proceed to show that there is a morphism in the other direction.
For each relation \(r=abc\) let \(\Omega_{r}\) be the subgraph of \(\Omega\) consisting of \(v_{1}\), \(v_{a}\), \(v_{ab}\), \(u_{1,a}\), \(u_{a,b}\), \(u_{ab,c}\), \(t_{1,r}\) and all the edges between them. Similarly for \(r=ab\) let \(\Omega_{r}\) be the subgraph of \(\Omega\) consisting of \(v_{1}\), \(v_{a}\), \(u_{1,a}\), \(u_{a,b}\) and \(t_{1,r}\) and edges between them.
**Lemma 6.8**.: _With \(M\) and \(w\) as above, if \(r=abc\) [respectively, \(r=ab\)] is a relation in \(R\) and the word \(\overline{abc}=x_{a}y_{a}x_{b}y_{b}x_{c}y_{c}\) [respectively, \(\overline{ab}=x_{a}y_{a}x_{b}y_{b}\)] is readable at some vertex \(v\) of \(S\Gamma(w)\), then there is a morphism from \(\Omega_{r}\) to \(S\Gamma(w)\) taking the root of \(\Omega_{r}\) to \(v\)._
Proof.: Let \(v_{1}^{\prime}=v\), and let \(u_{1,a}^{\prime}\), \(v_{a}^{\prime}\), \(u_{a,b}^{\prime}\), \(v_{ab}^{\prime}\) and \(u_{ab,c}^{\prime}\) be the vertices reached in \(S\Gamma(w)\) on reading \(x_{a}\), \(x_{a}y_{a}\), \(x_{a}y_{a}x_{b}\), \(x_{a}y_{a}x_{b}y_{b}\) and \(x_{a}y_{a}x_{b}y_{b}x_{c}\) respectively from \(v\). Certainly we can read the relations \(s_{r,1}\), \(s_{r,2}\) and \(s_{r,3}\) around closed cycles at \(v\), \(v_{a}^{\prime}\) and \(v_{ab}^{\prime}\) respectively. The fact that \(S\Gamma(w)\) is deterministic means that the vertices reached after reading the first two letters of each of these cycles must be the same; call this vertex \(t_{1,r}^{\prime}\). Now it is easy to verify that the map taking each vertex \(x\) of \(\Omega_{r}\) to the vertex we have designated as \(x^{\prime}\) in \(S\Gamma(w)\) must be a morphism.
We now introduce some more notation. Let \(z_{1}\) be the root vertex of \(S\Gamma(w)\), and for each \(g\in A=G\setminus\{1\}\) let \(z_{g}\) denote the vertex in \(S\Gamma(w)\) reached by reading \(x_{g}y_{g}\) from the root.
**Lemma 6.9**.: _Any path in \(S\Gamma(w)\) starting at the root and with label of the form \(\overline{a_{1}\dots a_{n}}\) where \(a_{1}\dots a_{n}=g\) in \(G\) ends at \(z_{g}\)._
Proof.: The claim holds by definition when \(n=0\) and \(n=1\), so assume for induction that it is true for \(k\) and consider a path from the root with label \(\overline{a_{1}\dots a_{k+1}}\) which leads to a vertex \(v\). By the inductive hypothesis the prefix path labelled \(\overline{a_{1}\dots a_{k}}\) leads to \(z_{h}\) where \(h=a_{1}\dots a_{k}\) in \(G\).
Now if \(g=1\) then \(ha_{k+1}=1\) in \(G\), so the two-letter word \(r=ha_{k+1}\) is one of the defining relators of \(G\). By Lemma 6.8 there is a morphism from \(\Omega_{r}\) to \(S\Gamma(w)\), taking the root to the root, so the path from the root labelled \(x_{h}y_{h}x_{a_{k+1}}y_{a_{k+1}}\) must be closed. But this path ends at \(v\), so we must have \(v=z_{1}=z_{g}\).
If \(g\neq 1\) then it follows from the definition of \(w\) that \(\overline{ha_{k+1}g^{-1}}\) is readable from the root in \(S\Gamma(w)\). Now by definition we have \(ha_{k+1}g^{-1}=1\) in \(G\), so the three-letter word \(r=ha_{k+1}g^{-1}\) is one of the defining relators of \(G\). By Lemma 6.8 there is a morphism from \(\Omega_{r}\) to \(S\Gamma(w)\), taking the root to the root, so this path must be closed. Moreover, the word \(gg^{-1}\) is also a defining relator of \(G\) and \(\overline{gg^{-1}}\) is also readable from the root, and hence by Lemma 6.8 must be read around a closed path. Since the graph is deterministic, it follows that \(v=z_{g}\).
**Lemma 6.10**.: _The Schutzenberger graph \(S\Gamma(w)\) is isomorphic to \(\Omega\)._
Proof.: We have already seen that there is a morphism from \(S\Gamma(w)\) to \(\Omega\), taking the root to the vertex \(v_{1}\), so by Lemma 6.2 it will suffice to show that there is also a morphism from \(\Omega\) to \(S\Gamma(w)\), taking \(v_{1}\) to the root. We can define such a morphism as follows:
* Each vertex of the form \(v_{g}\) is mapped to the vertex \(z_{g}\). (In particular, \(v_{1}\) is mapped to the root vertex \(z_{1}\).)
* Each vertex of the form \(u_{g,a}\) is mapped to the vertex at the end of the unique edge labelled \(x_{a}\) leaving \(z_{g}\); by the definition of \(w\) there is an edge in \(S\Gamma(w)\) from \(z_{g}\) to this vertex labelled \(x_{a}\) and by Lemma 6.9 and the fact that \(x_{g}y_{g}x_{a}y_{a}\) is readable from the root, there is also an edge from this vertex to \(z_{ga}\) labelled \(y_{a}\).
* Each vertex of the form \(t_{g,r}\) is mapped to the image of the vertex \(t_{1,r}\) under the morphism from \(\Omega_{r}\) to \(S\Gamma(w)\) which takes the root to \(z_{g}\); the existence of a morphism from \(\Omega_{r}\) ensures that this vertex has edges leading to the correct images of vertices of the form \(u_{h,a}\).
* Since \(S\Gamma(w)\) is a Schutzenberger graph, for every vertex \(v\) in it there is a morphism from \(S\Gamma(1)\) to \(S\Gamma(w)\) taking the root to \(v\). Each attached copy of \(S\Gamma(1)\) in \(\Omega\) is attached at some vertex \(v\) on which we have already defined our map. We map the whole copy of \(S\Gamma(1)\) to \(S\Gamma(w)\) by the morphism which takes the root to the appropriate vertex.
At each stage we have verified that the new vertices on which we define the map are sent to vertices which have the required edges to the images of those vertices on which it is already defined; thus, the given map on vertices can be extended to edges to give a morphism as required.
**Lemma 6.11**.: _The monoid \(M\) is \(\mathcal{R}_{1}\)-injective._
Proof.: The maximal group image is given by the group presentation
\[K=\operatorname{Gp}\bigl{\langle}X\cup Y\cup\{\delta_{r,i}\mid r\in R,1\leq i \leq|r|\}\mid s_{r,k}\ (r\in R,1\leq k\leq|r|)\bigr{\rangle}\]
where
\[s_{r,k}=x_{r_{k}}\delta_{r,k}(\delta_{r,k-1})^{-1}y_{r_{k-1}}\in X\Delta\Delta ^{-1}Y\]
with indices interpreted modulo \(|r|\). Our aim is to identify the group \(K\) by performing certain Tietze transformations to simplify the presentation.
Fix \(r\in R\) where \(|r|=n\) and consider the set of relators \(s_{r,k}\). We can eliminate \(\delta_{r,1}\) and the relator \(s_{r,1}\) by rearranging the latter as
\[\delta_{r,1}=x_{r_{1}}^{-1}y_{r_{n}}^{-1}\delta_{r,n},\]
and substituting the right-hand-side in place of \(\delta_{r,1}\) in \(s_{r,2}\), which is the only other relation in which \(\delta_{r,1}\) appears. Then we may eliminate \(\delta_{r,2}\) and \(s_{r,2}\) using
\[\delta_{r,2}=x_{r_{2}}^{-1}y_{r_{1}}^{-1}\delta_{r,1}=x_{r_{2}}^{-1}y_{r_{1}} ^{-1}x_{r_{1}}^{-1}y_{r_{n}}^{-1}\delta_{r,n}.\]
and appropriately modifying \(s_{r,3}\). Continuing in this way once we have eliminated all of the generators \(\delta_{r,1},\ldots,\delta_{r,n-2}\) our original set of relations will be reduced just to the following two relations
\[\delta_{r,n-1} = x_{r_{n-1}}^{-1}y_{r_{n-2}}^{-1}x_{r_{n-2}}^{-1}y_{r_{n-3}}^{-1} \ldots x_{r_{2}}^{-1}y_{r_{1}}^{-1}x_{r_{1}}^{-1}y_{r_{n}}^{-1}\delta_{r,n}\] \[1 = x_{r_{n}}^{-1}y_{r_{n-1}}^{-1}\delta_{r,n-1}\delta_{r,n}^{-1}.\]
Finally we eliminate \(\delta_{r,n-1}\) using the first of these relations, and substituting into the second we obtain the single relation
\[1 = x_{r_{n}}^{-1}y_{r_{n-1}}^{-1}x_{r_{n-1}}^{-1}y_{r_{n-2}}^{-1}x_ {r_{n-2}}^{-1}y_{r_{n-3}}^{-1}\ldots x_{r_{2}}^{-1}y_{r_{1}}^{-1}x_{r_{1}}^{-1 }y_{r_{n}}^{-1}\delta_{r,n}\delta_{r,n}^{-1}\] \[= x_{r_{n}}^{-1}y_{r_{n-1}}^{-1}x_{r_{n-1}}^{-1}y_{r_{n-2}}^{-1}x_ {r_{n-2}}^{-1}y_{r_{n-3}}^{-1}\ldots x_{r_{2}}^{-1}y_{r_{1}}^{-1}x_{r_{1}}^{-1 }y_{r_{n}}^{-1}.\]
By inverting and cyclically permuting we see that this relation can be replaced by the relation \(\overline{r}=1\) where
\[\overline{r}=\overline{r_{1}\ldots r_{|r|}}=x_{r_{1}}y_{r_{1}}\ldots x_{r_{n} }y_{r_{n}}.\]
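As a sanity check in the smallest case \(n=|r|=2\): here \(s_{r,1}=x_{r_{1}}\delta_{r,1}\delta_{r,2}^{-1}y_{r_{2}}\) and \(s_{r,2}=x_{r_{2}}\delta_{r,2}\delta_{r,1}^{-1}y_{r_{1}}\), and substituting \(\delta_{r,1}=x_{r_{1}}^{-1}y_{r_{2}}^{-1}\delta_{r,2}\) into the second relator gives
\[1=x_{r_{2}}\delta_{r,2}\bigl{(}x_{r_{1}}^{-1}y_{r_{2}}^{-1}\delta_{r,2}\bigr{)}^{-1}y_{r_{1}}=x_{r_{2}}\delta_{r,2}\delta_{r,2}^{-1}y_{r_{2}}x_{r_{1}}y_{r_{1}}=x_{r_{2}}y_{r_{2}}x_{r_{1}}y_{r_{1}},\]
which is a cyclic permutation of \(\overline{r}=x_{r_{1}}y_{r_{1}}x_{r_{2}}y_{r_{2}}\), leaving \(\delta_{r,2}\) appearing in no relation, so that it survives as a free generator.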
Note that the generators \(\delta_{r,1},\ldots,\delta_{r,n-1}\) were all eliminated using these Tietze transformations; the generator \(\delta_{r,n}\) was not eliminated but no longer features in any relations, and will therefore generate a free factor.
For each \(r\in R\) define \(\lambda_{r}=\delta_{r,|r|}\) and set \(\Lambda=\{\lambda_{r}\ (r\in R)\}\). Repeating the above sets of Tietze transformations for every \(r\in R\) we obtain the following presentation for the maximal group image
\[K=\operatorname{Gp}\bigl{\langle}X\cup Y\cup\Lambda\mid\overline{r}=1\ (r\in R)\bigr{\rangle}=FG(\Lambda)*\operatorname{Gp}\bigl{\langle}X\cup Y\mid \overline{r}=1\ (r\in R)\bigr{\rangle}\]
Now for each \(a\in A\) add a new redundant generator \(z_{a}\) and relation \(z_{a}=x_{a}y_{a}\) to the presentation and set \(Z=\{z_{a}:a\in A\}\). Since \(y_{a}={x_{a}}^{-1}z_{a}\) we can then eliminate the redundant generators \(\{y_{a}:a\in A\}\) from the presentation giving
\[K \cong FG(\Lambda)*\operatorname{Gp}\bigl{\langle}X\cup Z\mid \tilde{r}=1\ (r\in R)\bigr{\rangle}\] \[\cong FG(\Lambda\cup X)*\operatorname{Gp}\bigl{\langle}A\mid R\bigr{\rangle} \cong FG(\Lambda\cup X)*G\]
where for \(r=a_{i_{1}}\ldots a_{i_{k}}\) we define \(\tilde{r}=z_{a_{i_{1}}}\ldots z_{a_{i_{k}}}\).
We now move on to proving that the monoid \(M\) is \(\mathcal{R}_{1}\)-injective. By Proposition 2.7 the submonoid \(\mathcal{R}_{1}\) is generated as a monoid by the proper
prefixes of the defining relators \(s_{r,k}\). Hence every element of \(\mathcal{R}_{1}\) can be written as a product of the elements
\[x_{r_{k}},\quad y_{r_{k}}^{-1},\quad x_{r_{k}}\delta_{r,k}\]
where \(r\in R\), \(1\leq k\leq|r|\). We compute the image of each of these generators in the maximal group image:
\[K=\operatorname{Gp}\bigl{\langle}\Lambda\cup X\cup Z\mid\tilde{r}=1\ (r\in R) \bigr{\rangle}\]
where \(\Lambda=\{\lambda_{r}=\delta_{r,|r|}\ (r\in R)\}\) and \(Z=\{z_{a}=x_{a}y_{a}\ (a\in A)\}\):
* The image of \(x_{r_{k}}\) is \(x_{r_{k}}\).
* The image of \(y_{r_{k}}^{-1}\) is \(z_{r_{k}}^{-1}x_{r_{k}}\).
* The image of \(x_{r_{|r|}}\delta_{r,|r|}\) is \(x_{r_{|r|}}\lambda_{r}\).
* For \(r\in R\) and \(1\leq k\leq|r|\) with \(k<|r|=n\) the image of \(x_{r_{k}}\delta_{r,k}\) is \[x_{r_{k}}\delta_{r,k} =x_{r_{k}}x_{r_{k}}^{-1}y_{r_{k-1}}^{-1}x_{r_{k-1}}^{-1}\dots y_ {r_{1}}^{-1}x_{r_{1}}^{-1}y_{r_{n}}^{-1}\delta_{r,n}\] \[=y_{r_{k-1}}^{-1}x_{r_{k-1}}^{-1}\dots y_{r_{1}}^{-1}x_{r_{1}}^{- 1}y_{r_{n}}^{-1}\delta_{r,n}\] \[=z_{r_{k-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{n}}^{-1}x_{r_{n}} \delta_{r,n}\] \[=z_{r_{k-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{|r|}}^{-1}x_{r_{|r|}} \lambda_{r}.\]
Recall that, due to the way in which we chose the original group presentation \(\operatorname{Gp}\bigl{\langle}A\mid R\bigr{\rangle}\) for the finite group \(G\), none of the generators \(a\in A\) is equal to the identity of \(G\).
Moreover, all the defining relators are positive words over \(A\). Finally, notice that no proper subword of a defining relator is equal to \(1\) in \(G\); indeed if it were then (since the relators are all of length \(2\) or \(3\)) this would imply that a single letter is also equal to \(1\) in \(G\), but we chose our generating set to exclude the identity element.
To prove that the monoid is \(\mathcal{R}_{1}\)-injective we need in particular that all the elements in the set
\[\{z_{r_{k-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{|r|}}^{-1}x_{r_{|r|}}\lambda_{r} \mid r\in R,1\leq k\leq|r|-1\}\]
are distinct in the group \(K\). Because \(K\) admits a decomposition as \(FG(\Lambda\cup X)*G\) two such elements can clearly only be equal in \(K\) if they correspond to the same relator \(r\), in other words, if they are \(z_{r_{i-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{|r|}}^{-1}x_{r_{|r|}}\lambda_{r}\) and \(z_{r_{j-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{|r|}}^{-1}x_{r_{|r|}}\lambda_{r}\) for the same \(r\) and different \(i\) and \(j\). Assuming without loss of generality that \(i>j\) and cancelling, we obtain \(z_{r_{i-1}}^{-1}\dots z_{r_{j}}^{-1}=1\). Now again using the free product decomposition of \(K\), it follows that \(r_{j}\dots r_{i-1}=1\) in \(G\). But this contradicts the fact established above that no proper subword of a defining relator in \(G\) is equal to \(1\) in \(G\).
Next we claim that the submonoid of
\[K=FG(\Lambda\cup X)*\operatorname{Gp}\bigl{\langle}Z\mid\tilde{r}=1\ (r\in R)\bigr{\rangle}\]
generated by the set
\[Q =\{x_{r_{k}},\ z_{r_{k}}^{-1}x_{r_{k}},\ x_{r_{|r|}}\lambda_{r} \mid r\in R,1\leq k\leq|r|\}\] \[\qquad\cup\{z_{r_{k-1}}^{-1}\dots z_{r_{1}}^{-1}z_{r_{|r|}}^{-1}x_ {r_{|r|}}\lambda_{r}\mid r\in R,1\leq k\leq|r|-1\}\]
of all the images of all the prefixes of relators in the presentation of \(M\) is a free monoid freely generated by these generators. These generators are all
distinct by the argument above. Now any product of these generators is in normal form with respect to the free product decomposition
\[K=FG(\Lambda\cup X)*\operatorname{Gp}\bigl{\langle}Z\mid\tilde{r}=1\ (r\in R) \bigr{\rangle}=FG(\Lambda\cup X)*G\]
This, together with the fact that every generator comes from the set \(GX\Lambda\), can be used to deduce that the submonoid of \(K\) generated by \(Q\) is a free monoid with free generating set \(Q\).
**Remark 6.12**.: The inverse monoids constructed in the proof of Theorem 6.1 are not \(E\)-unitary since for any relator \(r\in R\) we have (in the notation of that proof) \(\overline{r}=1\) in the maximal group image but one can see that \(\overline{r}\) does not represent an idempotent in the inverse monoid \(M\) defined in the proof.
## 7. Generators which are not right or left units
In this final section we note the following rather surprising theorem, which allows us in particular to easily construct examples of finitely presented special inverse monoids with many different non-isomorphic maximal subgroups. Indeed it suggests that this kind of behaviour is in some sense "the norm" for special inverse monoids! This contrasts sharply with the case of special (non-inverse) monoids, where by a result of Malheiro [11] all maximal subgroups lie in the \(\mathcal{D}\)-class of \(1\) and therefore are necessarily isomorphic.
**Theorem 7.1**.: _Let \(M\) be a special inverse monoid given by a presentation with a generator which represents neither a left nor a right unit. Then \(M\) contains every finite subgroup of the group of units as a maximal subgroup._
Proof.: Choose a generator \(v\) which is neither a left nor a right unit. Let \(Q\) be a finite subgroup of the group of units, say \(|Q|=k\), and let \(u_{1},\ldots,u_{k}\) be words representing the elements of \(Q\). Consider the element
\[w\ =\ \prod_{i=1}^{k}u_{i}vv^{\prime}u_{i}^{\prime},\]
of \(M\), noting that the order of the product is unimportant because the factors are idempotent and therefore commute. We shall describe the Schutzenberger graph \(S\Gamma(w)\).
By Stephen's procedure, it is easy to see that \(S\Gamma(w)\) can be obtained by starting with \(S\Gamma(1)\) (we shall call this the _central subgraph_), and for each vertex corresponding to an element of \(Q\), gluing on an edge leaving it labelled \(v\) and a new copy of \(S\Gamma(1)\) rooted at the far end of the edge, and determinising. We claim that in fact this graph is already deterministic, so that no determinising is needed. Indeed, clearly no folding can take place within the copies of \(S\Gamma(1)\) since \(S\Gamma(1)\) is already deterministic. The new \(v\)-edges cannot fold with each other because they do not share any endpoints. It remains only to show that the new \(v\)-edges cannot fold into an edge in one of the copies of \(S\Gamma(1)\). Notice that the new \(v\)-edges all connect at both ends into vertices of \(S\Gamma(1)\) corresponding to elements of the group of units. If there was an existing \(v\)-edge in \(S\Gamma(1)\) at one of these vertices for one of the new \(v\)-edges to fold into, it would therefore follow that \(v\) is either a right unit or a left unit (depending on the orientation of the edge), giving a contradiction.
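To make the folding (determinising) step concrete, the following is a minimal sketch in Python (our own illustration, not code from this paper). An inverse automaton is represented as a set of labelled edges \((p,a,q)\); inverse edges are assumed to be listed explicitly, so that co-determinism is caught by the same check, and the union-find bookkeeping is a placeholder choice.

```python
def fold(edges):
    """Repeatedly merge ("fold") vertices until the automaton is
    deterministic: whenever two edges with the same label leave the
    same vertex, their targets are identified."""
    parent = {}

    def find(v):                      # union-find representative
        while parent.get(v, v) != v:
            v = parent[v]
        return v

    changed = True
    while changed:
        changed = False
        seen = {}                     # (source, label) -> target
        for (p, a, q) in edges:
            p, q = find(p), find(q)
            if (p, a) in seen and seen[(p, a)] != q:
                parent[q] = seen[(p, a)]   # fold q onto earlier target
                changed = True
                break
            seen[(p, a)] = q
        if changed:                   # rewrite edges over representatives
            edges = {(find(p), a, find(q)) for (p, a, q) in edges}
    return edges

# Two v-labelled edges leaving vertex 0 trigger a cascade of folds:
print(fold({(0, "v", 1), (0, "v", 2), (1, "b", 3), (2, "b", 4)}))
# e.g. {(0, 'v', 1), (1, 'b', 3)}  (up to which representative survives)
```

In the proof above the new \(v\)-edges share no endpoints, which is precisely why no such fold is triggered.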
Next notice that the central subgraph is the unique subgraph of \(S\Gamma(w)\) which both (i) is isomorphic to \(S\Gamma(1)\) and (ii) is not contained in any subgraph isomorphic to \(S\Gamma(1)\) rooted at a vertex with a \(v\)-edge coming in. It follows that automorphisms of \(S\Gamma(w)\) must restrict to automorphisms of the central subgraph. Now it is easy to see that the automorphisms of \(S\Gamma(w)\) are exactly those automorphisms of the central subgraph which fix the set of vertices corresponding to \(Q\), in other words, the automorphisms corresponding to elements of \(Q\). Thus, the maximal subgroups in the \(\mathcal{D}\)-class of \(w\) are isomorphic to \(Q\) as required.
**Remark 7.2**.: The proof of Theorem 7.1 is clearly reminiscent of our reasoning with blocks in the \(\mathcal{R}_{1}\)-injective case above; indeed the similarity is our reason for including the result in this article, the main focus of which is otherwise on the \(\mathcal{R}_{1}\)-injective case. Since we are not here assuming \(\mathcal{R}_{1}\)-injectivity we do not have access to the machinery above to guarantee a block decomposition for every Schutzenberger graph, but it just so happens that the particular Schutzenberger graph \(S\Gamma(w)\) constructed in the proof does have a block decomposition.
As a consequence of Theorem 7.1 we obtain the following corollary, which is a very slight strengthening (because the free inverse monoid has rank \(1\), rather than \(2\)) of a result which was established by other means in our recent work [2, Corollary 5.11].
**Corollary 7.3**.: _There exists an \(E\)-unitary finitely presented special inverse monoid, which is the free product of a group and a free inverse monoid of rank \(1\) and which has every finite group as a maximal subgroup._
Proof.: Take a finitely presented group \(G\) having every finite group as a subgroup (for example, Higman's universal group, which contains every finitely _presented_ group [10, Theorem 7.3]), and consider the inverse monoid free product of \(G\) with a free inverse monoid of rank one. That the monoid is \(E\)-unitary follows easily from the fact that groups and free inverse monoids are both \(E\)-unitary. The result now follows from Theorem 7.1.
| ```
We continue the study of the structure of general special inverse monoids. In particular, we focus on the maximal subgroups, also known as the group $\mathcal{H}$-classes, of special inverse monoids. The authors have previously shown that these groups can be quite wild; in this paper we show that if one restricts to special inverse monoids which are $E$-unitary (or satisfy the weaker property of being $\mathcal{R}_1$-injective), then the maximal subgroups are strongly controlled by the group of units. In particular, every maximal subgroup has a finite index subgroup which embeds in the group of units. We provide a construction showing that every finite group arises as a maximal subgroup of an $\mathcal{R}_1$-injective special inverse monoid with trivial group of units. The question of exactly which combinations of a finite group $G$ and a finite index subgroup $H$ can arise as maximal subgroup and group of units remains open.
2309.07311 | Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and
Simplicity Bias in MLMs | Most interpretability research in NLP focuses on understanding the behavior
and features of a fully trained model. However, certain insights into model
behavior may only be accessible by observing the trajectory of the training
process. We present a case study of syntax acquisition in masked language
models (MLMs) that demonstrates how analyzing the evolution of interpretable
artifacts throughout training deepens our understanding of emergent behavior.
In particular, we study Syntactic Attention Structure (SAS), a naturally
emerging property of MLMs wherein specific Transformer heads tend to focus on
specific syntactic relations. We identify a brief window in pretraining when
models abruptly acquire SAS, concurrent with a steep drop in loss. This
breakthrough precipitates the subsequent acquisition of linguistic
capabilities. We then examine the causal role of SAS by manipulating SAS during
training, and demonstrate that SAS is necessary for the development of
grammatical capabilities. We further find that SAS competes with other
beneficial traits during training, and that briefly suppressing SAS improves
model quality. These findings offer an interpretation of a real-world example
of both simplicity bias and breakthrough training dynamics. | Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, Naomi Saphra | 2023-09-13T20:57:11 | http://arxiv.org/abs/2309.07311v5 | # Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
###### Abstract
Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that demonstrates how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in pretraining when models abruptly acquire SAS, concurrent with a steep drop in loss. This breakthrough precipitates the subsequent acquisition of linguistic capabilities. We then examine the causal role of SAS by manipulating SAS during training, and demonstrate that SAS is necessary for the development of grammatical capabilities. We further find that SAS competes with other beneficial traits during training, and that briefly suppressing SAS improves model quality. These findings offer an interpretation of a real-world example of both simplicity bias and breakthrough training dynamics.
## 1 Introduction
While language model training usually leads to smooth improvements in loss over time (Kaplan et al., 2020), not all knowledge emerges uniformly. Instead, language models acquire different capabilities at different points in training. Some capabilities remain fixed (Press et al., 2023), while others decline (McKenzie et al., 2022), as a function of dataset size or model capacity. Certain capabilities even exhibit abrupt improvements--this paper focuses on such discontinuous dynamics, which are often called **breakthroughs** (Srivastava et al., 2022), **emergence** (Wei et al., 2022), **breaks** (Caballero et al., 2023), or **phase transitions** (Olsson et al., 2022). The interpretability literature rarely illuminates how these capabilities emerge, in part because most analyses only examine the final trained model. Instead, we consider _developmental_ analysis as a complementary explanatory lens.
To better understand the role of interpretable artifacts in model development, we analyze and manipulate these artifacts during training. We focus on a case study of **Syntactic Attention Structure** (SAS), a model behavior thought to relate to grammatical structure. By measuring and controlling the emergence of SAS, we deepen our understanding of the relationship between the internal structural traits and extrinsic capabilities of masked language models (MLMs).
SAS occurs when a model learns specialized attention heads that focus on a word's syntactic neighbors. This behavior emerges naturally during conventional MLM pre-training (Clark et al., 2019; Voita et al., 2019; Manning et al., 2020). We observe an abrupt spike in SAS at a consistent point in training, and explore its impact on MLM capabilities by manipulating SAS during training. Our observations paint a picture of how interpretability artifacts may represent simplicity biases that compete with other learning strategies during MLM training. In summary, our main contributions are:
* Monitoring latent syntactic structure (defined in Section 2.1) throughout training, we identify (Section 4.1) a precipitous loss drop composed of multiple phase transitions (defined in Section 2.3) relating to various linguistic abilities. At the onset of this stage (which we call the **structure onset**), SAS spikes. After the spike, the model starts handling complex linguistic phenomena correctly, as signaled by a break in BLiMP score (which we call the **capabilities onset**). The structure onset is associated with increasing functional complexity in the model, whereas the rest of training sees declining complexity.
* We introduce a regularizer to examine the causal role of SAS (defined in Section 2.2) and use it to show that SAS is necessary for handling complex linguistic phenomena (Section 4.2) and that SAS competes with an alternative strategy that exhibits its own break in the loss curve, which we call the **alternative strategy onset**.
* Section 4.3 shows that briefly suppressing SAS improves model quality and accelerates convergence. Suppressing past the alternative strategy onset damages performance and blocks SAS long-term, suggesting this phase transition terminates a critical learning period.
## 2 Methods
### Syntactic Attention Structure
One proposal for interpreting attention is to treat some attention weights as syntactic connections (Manning et al., 2020; Voita et al., 2019; Clark et al., 2019). Our method is based on Clark et al. (2019), who find that some specialized attention heads focus on the target word's dependency relations.
Dependency parses describe latent syntactic structure. Each word in a sentence has a word that it modifies, which is its parent in the syntax tree. Each dependency is labeled--e.g., an adjective modifies a noun through an amod relation in the Universal Dependencies annotation system (Nivre et al., 2017). In the example that follows, when an MLM predicts the word _nests_, it is likely to rely heavily on its syntactic relations _builds_ and _ugly_. One head may attend to adjectival modifiers like _ugly_ while another attends to direct objects like _builds_.
Figure 1: **BERT first learns to focus on syntactic neighbors with specialized attention heads, and then exhibits grammatical capabilities in its MLM objective**. The former (internal) and the latter (external) model behaviors both emerge abruptly, at moments we respectively call the **structure onset** (\(\blacktriangle\)) and **capabilities onset** (\(\blacklozenge\)) (quantified as described in Section 2.3). We separately visualize three runs with different seeds, noting that these seeds differ in the stability of Unlabeled Attachment Score (UAS; see Section 2.1) after the structure onset, but uniformly show that SAS emerges almost entirely in a brief window of time. We show (a) MLM loss, with 95% confidence intervals across samples; (b) internal grammar structure, measured by UAS on the parse induced by the attention distributions; and (c) external grammar capabilities, measured by average BLiMP accuracy with 95% confidence intervals across tasks.
We call this tendency to form heads that specialize in specific syntactic relations Syntactic Attention Structure (SAS). To measure SAS, we follow Clark et al. (2019) in using a simple probe based off the surface-level attention patterns, detailed in Appendix A. The probe provides an implicit parse, with an accuracy measured by **unlabeled attachment score** (UAS).
### Controlling SAS
In addition to training models with BERT\({}_{\text{Base}}\) parameters, we also train models where SAS is promoted or suppressed. The model with SAS promoted throughout training is called BERT\({}_{\text{SAS+}}\), while the model with SAS suppressed throughout training is called BERT\({}_{\text{SAS-}}\).
In order to adjust SAS for these models, we train a BERTBase model through methods that are largely conventional (Section 3.1), with one difference. We add a **syntactic regularizer** that manipulates the structure of the attention distributions using a syntacticity score \(\gamma(x_{i},x_{j})\), equal to the maximum attention weight between syntactically connected words \(i\) and \(j\). We use this regularizer to penalize or reward higher attention weights on a token's syntactic neighbors by adding it to the MLM loss \(L_{\text{MLM}}\). We scale the regularizer by a constant coefficient \(\lambda\) which may be negative to promote SAS or positive to suppress SAS. If we denote \(D(x)\) as the set of all dependents of \(x\), then the new loss is:
\[L(x)=\underbrace{L_{\text{MLM}}(x)}_{\text{Original loss}}+\underbrace{\lambda \sum_{i=1}^{|x|}\sum_{x_{j}\in D(x_{i})}\gamma(x_{i},x_{j})}_{\text{ Syntactic regularization}} \tag{1}\]
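As a rough illustration of Eq. (1), the penalty term could be computed from attention maps as in the following sketch (ours, not the authors' code; how \(\gamma\) aggregates over layers, heads, and attention direction is our assumption, since Appendix A is not shown here):

```python
import torch

def syntactic_penalty(attentions, dep_pairs):
    """Sum of gamma(x_i, x_j) over the dependency pairs of one example.

    attentions: tensor [n_layers, n_heads, seq_len, seq_len], e.g.
        torch.stack(out.attentions)[:, 0] for a batch-of-one forward
        pass with output_attentions=True.
    dep_pairs:  iterable of (i, j) token positions such that x_j is a
        dependent of x_i.
    """
    total = attentions.new_zeros(())
    for i, j in dep_pairs:
        # gamma(x_i, x_j): maximum attention weight between the two
        # connected positions (over layers, heads, and direction;
        # this aggregation is our assumption).
        gamma = torch.maximum(attentions[..., i, j].max(),
                              attentions[..., j, i].max())
        total = total + gamma
    return total

# Eq. (1): lam < 0 promotes SAS, lam > 0 suppresses it.
# loss = mlm_loss + lam * syntactic_penalty(att, dep_pairs)
```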
### Identifying breakthroughs
This paper studies breakthroughs: sudden changes in model behavior during a brief window of training. What do we consider to be a breakthrough, given a metric \(f\) measured at some distance \(d\) (e.g., in timesteps) from initialization? We look for the break point \(d^{*}\) with the sharpest angle in the trajectory of \(f\), as determined by the slopes between \(d^{*}\) and \(d^{*}\pm\Delta\) for some distance \(\Delta\). If we have no measurements at the required distance, we infer a value for \(f\) from the available checkpoints--e.g., if \(d\) is measured in discrete timesteps, we calculate the angle of the loss at 50K steps for \(\Delta=5\)K by imputing the loss from the checkpoints at 45K and 55K steps.
\[\text{break}(f,\Delta)=\arg\max_{t}\Bigl{(}\left[f(t+\Delta)-f(t)\right]-\left[f(t)-f(t-\Delta)\right]\Bigr{)} \tag{2}\]
In other words, \(\text{break}(f,\Delta)\) is the point \(t\) that maximizes the difference between the slope from \(f(t)\) to \(f(t+\Delta)\) and the slope from \(f(t-\Delta)\) to \(f(t)\), approximating the point of maximum acceleration.
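A direct transcription of Eq. (2) (our sketch; the linear interpolation used to impute \(f\) between recorded checkpoints is our choice):

```python
import numpy as np

def breakpoint_step(steps, values, delta):
    """Find the timestep with the sharpest angle in metric f (Eq. (2)).

    steps:  sorted 1-D array of checkpoint timesteps.
    values: metric values f(steps).
    delta:  offset Delta; f(t +/- delta) is imputed by linear
            interpolation when no checkpoint sits exactly there.
    """
    steps = np.asarray(steps, dtype=float)
    values = np.asarray(values, dtype=float)
    f = lambda t: np.interp(t, steps, values)
    # restrict to t where both t - delta and t + delta are observable
    ts = steps[(steps - delta >= steps[0]) & (steps + delta <= steps[-1])]
    second_diff = (f(ts + delta) - f(ts)) - (f(ts) - f(ts - delta))
    return ts[np.argmax(second_diff)]

# e.g. breakpoint_step([0, 5e3, 45e3, 50e3, 55e3],
#                      [4.0, 3.9, 3.8, 3.0, 2.9], delta=5e3) -> 50000.0
```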
## 3 Models and Data
### Architecture and Training
We pre-train BERT\({}_{\text{Base}}\) models using largely the same training set-up and dataset as Sellam et al. (2022). We use the uncased architecture with 12 layers of 768 dimensions each and train with the AdamW optimizer (Loshchilov and Hutter, 2019) for 1M steps with a learning rate of 1e-4, 10,000 warm-up steps and training batch size of 256 on a single \(4\times\) A100 NVIDIA node. Our results only consider checkpoints that are recorded while pretraining remains numerically stable for all seeds, so we only analyze up to 300K steps.
Our training set-up departs from the original BERT set-up (Devlin et al., 2019) in that we use a fixed sequence length of 512 throughout training, which was shared by Sellam et al. (2022). We also use the same WordPiece-based tokenizer as Devlin et al. (2019) and mask tokens with 15% probability. Unless otherwise stated, all experiments are implemented with the HuggingFace transformers (v4.12.5) (Wolf et al., 2020), Huggingface datasets (v2.7.1) (Lhoest et al., 2021), and Pytorch (v1.11) (Paszke et al., 2019) libraries.
Our pre-training datasets consist of BookCorpus (Zhu et al., 2015) and English Wikipedia (Foundation, 2022). Since we do not have access to the original BERT pre-training dataset, we use a more recent Wikipedia dump from May 2020. For pre-training runs where syntactic regularization is applied, we
use spaCy (Honnibal and Montani, 2017) dependency parses on the Wikipedia portion of the dataset as our silver standard labels.
### Finetuning and probing
Fine-tuning on GLUE. Our fine-tuning set-up for each GLUE task matches that of the original paper (Wang et al., 2018), with initial learning rate 1e-4, batch size of 32, and 3 total epochs.
Evaluating on BLiMP. BLiMP (Warstadt et al., 2020) is a benchmark of minimal pairs for evaluating knowledge of various English grammatical phenomena. We evaluate performance using the MLM scoring function from Salazar et al. (2020) to compute the pseudo-log-likelihood of the sentences in each minimal pair, and counting the MLM as correct when it assigns a higher value to the acceptable sentence in the pair. Further implementation details are in Appendix D.
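A minimal sketch of this scoring (ours, using the public bert-base-uncased checkpoint as a stand-in for the paper's models):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def pseudo_log_likelihood(sentence):
    """PLL of Salazar et al. (2020): mask one token at a time and sum
    the log-probability the MLM assigns to the true token."""
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for pos in range(1, len(ids) - 1):        # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[pos] = tok.mask_token_id
        logits = mlm(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[pos]].item()
    return total

def blimp_correct(good, bad):
    # the MLM counts as correct when the acceptable sentence scores higher
    return pseudo_log_likelihood(good) > pseudo_log_likelihood(bad)
```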
Evaluating SAS dependency parsing. We measure SAS by evaluating the model's implicit best-head attention parse (Eq. (3), Clark et al., 2019) on a random sample of 1000 documents from the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1999), with silver labels provided by the Stanford Dependencies parser (Schuster and Manning, 2016). We evaluate parse quality using the **Unlabeled Attachment Score** (UAS) computed from the attention map, as described in Eq. (3).
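Schematically, the UAS of an attention-induced parse can be computed as below (a simplification of Clark et al. (2019), whose probe also scores the reversed attention direction and handles special tokens and the root; we omit those details):

```python
import numpy as np

def best_head_uas(attn, gold_heads):
    """attn: array [H, L, L] of attention maps (all heads of all
    layers, flattened along H) for one sentence of length L;
    gold_heads[i] is the position of token i's parent.  Each head
    predicts token i's parent as its argmax attention target, and we
    report the best single head's accuracy."""
    preds = attn.argmax(axis=-1)                       # [H, L]
    acc = (preds == np.asarray(gold_heads)).mean(axis=-1)
    return acc.max()
```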
## 4 Results
Often, interpretable artifacts are assumed to be essential to model performance. However, evidence for the importance of SAS exists only at the instance level on a single trained model. We know that specialized heads can predict dependencies (Clark et al., 2019) and that pruning them damages performance more than pruning other heads (Voita et al., 2019). However, these results are only weak evidence that SAS is essential for modeling grammar. Passive observation of a trained model may discover artifacts that occur as a side effect of training without any effect on model capabilities. Causal methods that intervene on particular components at test time, meanwhile, may interact with the rest of the model in complex ways, spuriously implying a component to be essential for performance when it could be removed if it were not so entangled with other features. They also only address whether a component is necessary at test time, and not whether that component is necessary during _learning_. Both test-time approaches--passive observations and causal interventions--are limited.
We begin by confirming the assumption that SAS must be essential to performance. To motivate the case for skepticism of the role of SAS, we note a lack of correlation between SAS metrics and model capabilities across random pretraining seeds (Appendix E). After first strengthening the evidence for SAS as a meaningful phenomenon by taking model development into account, we then draw connections to the literature on phase transitions, simplicity bias, and model complexity.
Figure 2: Metrics during BERTBase training averaged, with 95% confidence intervals, across three seeds. Structure (\(\blacktriangle\)) and capabilities (\(\blacklozenge\)) onsets are marked.
### The Syntax Acquisition Phase
Most work on scaling laws (Kaplan et al., 2020) presents test loss as a quantity that homogeneously responds to the scale of training, declining by a power law relative to the size of the corpus. In the MLM setting, we instead identify a precipitous drop in the loss curve of BERTBase (Fig. 1(a)), consistently spanning 20K-30K timesteps of training across various random seeds. We now show how this rapid learning stage can be interpreted as the composition of two distinct phase transitions.
_The MLM loss drop occurs alongside the acquisition of grammatical capabilities in two consecutive stages_, each distinguished by breaks as defined by Eq. (2). The first stage aligns with the formation of SAS--we call this break in implicit parse UAS the **structure onset**. As seen in Fig. 1(b), the UAS spikes at a consistent time during each run, in tandem with abrupt improvements in MLM loss (Fig. 1(a)) and finetuning metrics (Fig. 2(b)). Immediately following the spike, UAS plateaus, but the loss continues to drop precipitously before leveling off. The second part of this loss drop is associated with a break in the observed grammatical capabilities of the model, as measured by accuracy on BLiMP (Fig. 1(c)). We call the BLiMP break the **capabilities onset**. We show similar trajectories on the MultiBERTs (Sellam et al., 2022) reproductions (Appendix F).
By observing these phase transitions, we can see that the _internal_ representation of grammar, in the form of syntactic attention, precipitates the _external_ observation of grammatical behavior, in the form of correct language modeling judgements on linguistically challenging examples. This is not only a single breakthrough during training, but a sequence of breakthroughs that appear to be dependent on each other. We might compare this to the "checkmate in one" BIG-Bench task, a known breakthrough behavior in autoregressive language models (Srivastava et al., 2022). Only at a large scale can models accurately identify checkmate moves, but further exploration revealed that the model was progressing in a linear fashion at offering consistently valid chess moves before that point. The authors posited that the checkmate capability was dependent on the ability to make valid chess moves, and likewise it seems we have found that grammatical capabilities are dependent on a latent representation of syntactic structure in the form of SAS.
We find that the existence of these phase transitions holds even when using continuous metrics (Appendix H), in contrast to Schaeffer et al. (2023), who found that many abrupt improvements in capabilities are due to the choice of thresholded metrics like accuracy. We also find that the phase transitions hold even when setting the x-axis to some continuous alternative to discrete training timesteps, such as weight norm (Appendix G). Thus both x-axis and y-axis may use non-thresholded scales, and the phase transitions remain present.
#### 4.1.1 Complexity and Compression
According to the Information Bottleneck (IB) theory of deep learning (Shwartz-Ziv and Tishby, 2017), the generalization capabilities of Deep Neural Networks (DNNs) can be understood as a specialized form of representation compression. This theory posits that DNNs achieve generalization by selectively discarding noisy and task-irrelevant information from the input, while preserving key features (Shwartz-Ziv, 2022). Subsequent research has provided generalization bounds that support this theory (Shwartz-Ziv et al., 2018; Kawaguchi et al., 2023). Recently, similar principles have been conjectured to explain the capabilities of language models (Chiang, 2023; Cho, 2023; Sutskever, 2023). Current studies on vision tasks distinguish two phases: an initial _memorization_ phase followed by a protracted representation _compression_ phase (Shwartz-Ziv and Tishby, 2017; Ben-Shaul et al., 2023). During memorization, SGD explores the multidimensional space of possible solutions. After interpolating, the system undergoes a phase transition into a diffusion phase, marked by chaotic behavior and a reduced rate of convergence as the network learns to compress information.
To validate this theory in MLM training, we analyze various complexity metrics as proxies for the level of compression (see Fig. 2(a) for TwoNN intrinsic dimension (Facco et al., 2017), and Appendix K.2 for additional complexity and information metrics). Our results largely agree with the IB theory, showing a prevailing trend toward information compression throughout the MLM training process. However, during the acquisition of SAS, a distinct memorization phase emerges. This phase, which begins with the onset of structural complexity, allows the model to expand its capacity for handling new capabilities. A subsequent decline in complexity coincides with the onset of advanced capabilities, thereby confirming the dual-phase nature postulated by the IB theory.
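For reference, the TwoNN estimator admits a compact implementation (our sketch; Facco et al. (2017) fit the Pareto shape by regression on the empirical CDF after discarding the largest ratios, whereas we use the closed-form maximum-likelihood estimate):

```python
import numpy as np
from scipy.spatial import cKDTree

def twonn_id(X):
    """TwoNN intrinsic dimension of points X [n_points, n_features]:
    the ratio mu = r2/r1 of each point's two nearest-neighbour
    distances is Pareto-distributed with shape d, whose MLE is
    n / sum(log mu)."""
    dist, _ = cKDTree(X).query(X, k=3)   # columns: self, 1st, 2nd NN
    mu = dist[:, 2] / dist[:, 1]
    return len(X) / np.log(mu).sum()

# sanity check on a 2-D manifold:
# twonn_id(np.random.rand(5000, 2))  ~= 2
```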
### Controlling SAS
Having established the natural emergence of SAS, we use our syntacticity regularizer (Section 2.2) to evaluate whether SAS is truly necessary for handling complex grammatical phenomena. We confirm that this regularizer can suppress or accelerate the SAS phase (Fig. 3(b)). As seen in Fig. 3(a), _enhancing_ SAS behavior throughout training (BERT\({}_{\text{SAS+}}\)) leads to early improvements in MLM performance, but hurts later model quality.1 Conversely, _suppressing_ SAS (BERT\({}_{\text{SAS-}}\)) damages both early performance and long-term performance. Suppressing SAS during training prevents the emergence of linguistically complex capabilities (Fig. 3(c)). In other words, preventing the internal grammar structure onset will also prevent the external grammar capabilities onset that follows it.
Footnote 1: Note that in BERT\({}_{\text{SAS+}}\), we see the capabilities onset is after the structure onset, but before the SAS plateau, suggesting that SAS only needs to hit some threshold to precipitate the capabilities onset, and does not need to stabilize.
However, there exists an early apparent phase transition in the MLM loss (at around 6K steps), which suggests that an alternative strategy emerges that leads to improvements prior to the structure onset. We therefore refer to this early inflection as the **alternative strategy onset**. Our results suggest that SAS is crucial for effectively representing grammar, but the existence of the alternative strategy onset implies that SAS also competes with other useful traits in the network. We explore the alternative strategy onset represented by the phase transition under SAS suppression in Appendix L.
Importantly, the break in the loss curve occurs earlier in training when suppressing SAS. The implication is profound: that the alternative strategy is competing with SAS, and suppressing SAS permits the model to learn the alternative strategy more effectively and earlier. Inspired by this insight, we next ask whether there can be larger advantages to avoiding the natural SAS-based strategy early in training, thus claiming the benefits of the alternative strategy.
### Early-stage SAS Regularization
Because BERT\({}_{\text{SAS-}}\) briefly outperforms both BERT\({}_{\text{Base}}\) and BERT\({}_{\text{SAS+}}\), we have argued that suppressing SAS implicitly promotes a competing strategy. This notion of competition between features or strategies is well-documented in the literature on simplicity bias (Shah et al., 2020; Arpit et al., 2017; Hermann and Lampinen, 2020; Pezeshki et al., 2021). Achille et al. (2018) find that some patterns must be acquired early in training in order to be learned at all, so avoiding an overly simplistic strategy can have significant long-term consequences on performance. To test the hypothesis that learning SAS early allows SAS to out-compete other beneficial strategies, this section presents experiments that only suppress the early acquisition of SAS. For multistage regularized models, we first suppress SAS with \(\lambda=0.001\) and then set \(\lambda=0\) after a pre-specified timestep in training. These models are
Figure 3: Metrics over the course of training for baseline and SAS-regularized models (under both suppression and promotion of SAS). Structure (\(\blacktriangle\)) and capabilities (\(\blacklozenge\)) onsets are marked, except on BERT\({}_{\text{SAS-}}\), which does not clearly exhibit either onset. Each line is averaged over three random seeds. On y-axis: (a) MLM loss (b) Implicit parse accuracy (c) average BLiMP accuracy.
named after the timestep until which SAS is suppressed, e.g., BERT\({}^{(3k)}_{\text{SAS-}}\) is the model where \(\lambda\) is set to 0 at timestep 3000.
We find that suppressing SAS early on improves the effectiveness of training later. Specifically, BERT\({}^{(3k)}_{\text{SAS-}}\) outperforms BERT\({}_{\text{Base}}\) even well after both models pass their structure and capabilities onsets (Fig. 4(a); Table 1), although these advantages cease to be significant after longer training runs (Appendix O). Some multistage models even have more consistent SAS than BERT\({}_{\text{Base}}\) (Fig. 4(b)). We posit that certain associative patterns are learned more quickly while suppressing SAS, and these patterns not only support overall performance but even provide improved features for acquiring SAS.
#### 4.3.1 When can we recover the SAS phase transition?
Inspecting the learning curves of the temporarily suppressed models, we find that briefly suppressing SAS can promote performance (Appendix M) and accelerate the structure onset (Fig. 5(a)) while augmenting it (Fig. 5(b)). However, after more prolonged suppression of SAS, it becomes impossible to hit the dramatic spike in implicit parse UAS that we see in BERT\({}_{\text{Base}}\) (Section 4.3). If the SAS phase transition is prevented, MLM performance falls significantly compared to BERT\({}_{\text{Base}}\) and we see no SAS spike (Appendix M). It appears that we must choose between phase transitions; _the model
\begin{table}
\begin{tabular}{l c c c} \hline \hline & MLM Loss \(\downarrow\) & GLUE average \(\uparrow\) & BLiMP average \(\uparrow\) \\ \hline BERT\({}_{\text{Base}}\) & \(1.77\pm 0.01\) & \(\mathbf{0.74\pm 0.01}\) & \(\mathbf{0.74\pm 0.02}\) \\ BERT\({}_{\text{SAS+}}\) & \(2.39\pm 0.03\) & \(0.59\pm 0.01\) & \(\mathbf{0.74\pm 0.01}\) \\ BERT\({}_{\text{SAS-}}\) & \(2.02\pm 0.01\) & \(0.69\pm 0.02\) & \(0.67\pm 0.03\) \\ BERT\({}^{(3k)}_{\text{SAS-}}\) & \(\mathbf{1.75\pm 0.01}\) & \(\mathbf{0.74\pm 0.00}\) & \(\mathbf{0.75\pm 0.01}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation metrics, with standard error, after training for 100K steps (\(\sim 13\)B tokens), averaged across three random seeds for each regularizer setting. We selected BERT\({}^{(3k)}_{\text{SAS-}}\) as the best multistage hyperparameter setting based on MLM test loss at 100K steps. Bolded values significantly outperform non-bolded values in the same column under a 1-sided Welch’s \(t\)-test.
Figure 4: Metrics for the checkpoint at 100k steps, for various models with SAS suppressed early in training. The vertical line marks the BERT\({}_{\text{SAS-}}\) alternative strategy onset; note that _model quality is worst when the regularizer is changed during this phase transition_. The x-axis reflects the timestep when regularizer \(\lambda\) is changed from \(0.001\) to \(0\). To control for the length of training time without suppressing SAS, Appendix N presents the same findings measured at a checkpoint exactly 50K timesteps after releasing the regularizer. On y-axis: (a) MLM loss shown with standard error of the mean across batches; (b) Implicit parse accuracy (UAS); (c) GLUE average (Task breakdown in Appendix I); (d) BLiMP average (Task breakdown in Appendix J).
cannot undergo first the alternative strategy onset and then the structure onset_. In fact, we measure the worst model quality when we switch settings _during_ the alternative strategy transition (Fig. 4).
## 5 Discussion and Conclusions
Our work is a response to the limitations of probes that analyze only a single model checkpoint without regard to its training history (Saphra, 2023). We posit that _developmental_ explanations, which incorporate a model's training history, provide critical perspective and explanatory power. We have used this developmental approach to demonstrate the necessity of SAS for grammatical reasoning in MLMs, and have furthermore used SAS as a case study to shed light on ways of circumventing simplicity bias, the dynamics of model complexity, and the dangers of changing optimization strategies during a phase transition. Our work also guides further understanding of many deep learning phenomena, and may inspire a more rigorous approach to the science of deep learning as well. Beyond this discussion, an extended literature review with connections to related work on causal interpretability, simplicity bias, and phase transitions is presented in Appendix C.
### Early dynamics and simplicity bias
Sutton (2019) introduced the machine learning world to the _Bitter Lesson_: models that use informed priors based on domain understanding will always lose to generic models trained on large quantities of data. Our work suggests that we might go further: even generically learned structure can form a disadvantageously strong prior, if that structure reflects human expert models of syntax. In other words, human interpretations of natural phenomena are so simplistic that their presence early in training can serve as a negative signal. If this observation holds in natural language--a modality that has evolved specifically to be human interpretable--how much worse might simplicity bias impact performance on other domains like scientific and physical modeling?
Dependency and competition. We have found evidence of multiple possible relationships between emergent behaviors. Previous work suggests that model properties can _depend_ on one another, e.g., checkmate-in-one capabilities depend on first learning valid chess moves (Srivastava et al., 2022); or _compete_ with one another, e.g., as sparse and dense representation strategies compete on arithmetic tasks (Merrill et al., 2023). Similarly, we first find evidence of a dependency relationship, based on our evidence that SAS is a prerequisite for many linguistic capabilities as indicated by BLiMP. Then, we identify a competitive relationship, based on our observations that suppressing SAS leads to an alternative strategy that prioritizes context differently. These distinct relationships shed light on how model behaviors interact during training and may suggest training improvements that delay or promote particular behaviors. Existing work in simplicity bias (Shah et al., 2020; Pezeshki et al., 2021) suggests that a preference for simple heuristics might prevent the model from acquiring a more reliable strategy. Our results appear to be evidence of this pitfall in practice.
Pretraining. The benefits of early training without permitting SAS bear an intriguing parallel to pretraining. Just as pretraining removes the particulars of the downstream task by training on generic
Figure 5: If SAS is suppressed only briefly, it accelerates and augments the SAS onset. However, further suppression delays and attenuates the spike in UAS, until it eventually ceases to show a clear inflection. A vertical dotted line marks the \(\text{BERT}_{\text{SAS-}}\) alternative strategy onset.
language structure, early SAS suppression removes the particulars of linguistic structure itself. In doing so, we encourage the MLM to treat the entire sequence without regard for proximity to the target word, as a bag-of-words model might. Therefore, the beginning of training is even more unstructured and generic than it would be under the baseline MLM objective.
Curriculum learning. We also offer some insights into why curriculum learning is rarely effective at large scales. Simple data is likely to encourage simplistic strategies, so any curriculum that homogenizes the early distribution could promote a simplistic strategy, helping early performance but harming later performance. Predictably, curricula no longer help at large scales (Wu et al., 2021).
### Phase transitions
Instability at critical points. Abrupt changes are rarely documented directly at the level of validation loss, but we show that they may be observed--and interpreted--in realistic settings. Smooth improvements in loss may even elide abrupt breakthroughs in specific capabilities, as discussed in Appendix L.1. Our multistage results point to a surprising effect: that the worst time to change the regularization is during a phase transition. When we release the SAS suppression well _before_ the point at which the alternative transition starts during \(\text{BERT}_{\text{SAS-}}\) training (i.e., the alternative strategy onset), we find it is possible to recover the SAS transition, preventing damage to GLUE, BLiMP, and MLM loss metrics. Likewise, although releasing the regularization _after_ the alternative transition prevents the recovery of SAS, it nonetheless incurs limited damage to model quality metrics. However, releasing the regularizer _during_ the phase transition leads to a substantially worse model under every metric. These findings suggest that, far from indicating a typical region of the loss surface, the moment of breakthrough constitutes a critical point where an optimizer misstep can damage the performance of the model, possibly even at convergence. This phenomenon may be consequential for future optimization research.
### Interpretability epistemology
While SAS was already known to emerge naturally in MLMs, there were reasons to be skeptical of its necessity. One objection is that raw attention distribution information is not a guaranteed proxy for information flow (Abnar and Zuidema, 2020; Ethayarajh and Jurafsky, 2021). Another thread questions the interpretability of attention by obfuscating the attention weights without damaging model performance (Jain and Wallace, 2019; Serrano and Smith, 2019). If the fundamentally informative nature of attention is subject to extensive debate (Bibal et al., 2022), we must also be skeptical of overstating its connection to syntax. Attention syntacticity is a microcosm of wider failures in the science of deep learning, which has been criticized for a tendency to use anecdotal observations and post-hoc explanations, rather than statistically rigorous correlational or causal tests (Forde and Paganini, 2019).
Prior evidence for the importance of SAS came in two forms, both of which operate post-hoc at the instance level on specific samples: instance-level observation in fully trained networks (Clark et al., 2019) and instance-level causal experiments in fully trained networks (Voita et al., 2019). Observational studies might discover structures that emerge as a side effect of training, rather than those crucial to the operation of the model. Traits that emerge as a side effect of a process but appear crucial to performance are called _spandrels_ in evolutionary biology; possible examples include human chins (Yong, 2016) and enjoyment of music (Pinker, 1997). While instance-level causal experiments like Voita et al. (2019) may be epistemically stronger than the observational studies, the network's failure to recover from a causal intervention does not indicate that it relies on the structure provided. Instead, the network may be more brittle to large distribution shifts on the relevant features, without truly relying on those features (Tucker et al., 2021). One possible scenario is that a behavior may develop early in training and become _vestigial_ (like a human's tailbone (Mukhopadhyay et al., 2012)) but sufficiently integrated into subnetworks that generate and cancel information that the network cannot easily recover from its removal. To support the skeptical case, we find that SAS metrics were not correlated with MLM capabilities across random seeds (Fig. 6).
We provide several epistemically strong results in favor of the importance of SAS. First, we study models in development (Section 4.1), finding that the SAS phase transition directly precipitates the emergence of linguistic capabilities. This result supports that blackbox grammatical capabilities
depend on measurable internal structures. Second, we have causal interventions on development (Section 4.2), which again reveal the importance of this head specialization behavior by promoting and suppressing it. Instance-level interpretability methods, at best, offer evidence that a trait emerges and the model cannot recover from its removal; we can now say that certain capabilities depend on this trait--although the model eventually discovers alternative ways to represent some of them. | NLPにおける最も重要な解釈的研究は、完全に訓練されたモデルの動作と特徴を理解することに焦点を当てています。しかし、モデルの行動への特定の洞察は、トレーニング過程の軌跡を観察する必要があるかもしれません。私たちは、マスクされた言語モデル (MLM) の構文の習得に関するケーススタディを提供します。これは、トレーニング中に解釈可能のアートファクトの進化を分析することで、発達する行動の理解を深める方法を示しています。特に、この研究では、自然に発生する Syntactic Attention Structure (SAS) を調査しています。これは、Transformerの特定のヘッドが特定の構文的な関係に焦点を当てている、MLMs のある性質です。私たちは、モデルが abruptly SAS を獲得する、および損失が急激に減少する、Pretraining の短い窓を特定しました。この突破は、後続的に言語的能力の獲得に繋がりました。その後、SASの因果関係を |
2309.17102 | Guiding Instruction-based Image Editing via Multimodal Large Language
Models | Instruction-based image editing improves the controllability and flexibility
of image manipulation via natural commands without elaborate descriptions or
regional masks. However, human instructions are sometimes too brief for current
methods to capture and follow. Multimodal large language models (MLLMs) show
promising capabilities in cross-modal understanding and visual-aware response
generation via LMs. We investigate how MLLMs facilitate edit instructions and
present MLLM-Guided Image Editing (MGIE). MGIE learns to derive expressive
instructions and provides explicit guidance. The editing model jointly captures
this visual imagination and performs manipulation through end-to-end training.
We evaluate various aspects of Photoshop-style modification, global photo
optimization, and local editing. Extensive experimental results demonstrate
that expressive instructions are crucial to instruction-based image editing,
and our MGIE can lead to a notable improvement in automatic metrics and human
evaluation while maintaining competitive inference efficiency. | Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, Zhe Gan | 2023-09-29T10:01:50 | http://arxiv.org/abs/2309.17102v2 | # Guiding Instruction-based Image Editing via Multimodal Large Language Models
###### Abstract
Instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands without elaborate descriptions or regional masks. However, human instructions are sometimes too brief for current methods to capture and follow. Multimodal large language models (MLLMs) show promising capabilities in cross-modal understanding and visual-aware response generation via LMs. We investigate how MLLMs facilitate edit instructions and present MLLM-Guided Image Editing (MGIE). MGIE learns to derive expressive instructions and provides explicit guidance. The editing model jointly captures this visual imagination and performs manipulation through end-to-end training. We evaluate various aspects of Photoshop-style modification, global photo optimization, and local editing. Extensive experimental results demonstrate that expressive instructions are crucial to instruction-based image editing, and our MGIE can lead to a notable improvement in automatic metrics and human evaluation while maintaining competitive inference efficiency.
## 1 Introduction
Visual design tools are widely adopted in various multimedia fields nowadays. Despite considerable demand, they require prior knowledge to operate. To enhance controllability and accessibility, text-guided image editing has gained popularity in recent studies (Li et al., 2020; Patashnik et al., 2021; Crowson et al., 2022; Gal et al., 2022). With an attractive ability to model realistic images, diffusion models (Ho et al., 2020) are also adopted in image editing (Kim et al., 2022). By swapping the latent cross-modal maps, models can perform visual manipulation to reflect the alteration of the input-goal
Figure 1: We introduce MLLM-Guided Image Editing (MGIE) to improve instruction-based image editing for various editing aspects. The top is the input instruction, and the right is the jointly derived expressive instruction by MGIE.
caption (Hertz et al., 2023; Mokady et al., 2022; Kawar et al., 2023). They can further edit a specific region by a guided mask (Nichol et al., 2022; Avrahami et al., 2022). Instead of relying on elaborate descriptions or regional masks, instruction-based editing (El-Nouby et al., 2019; Li et al., 2020; Fu et al., 2020) allows human commands that directly express how and which aspect of an image to edit. This flexibility also benefits practicality as such guidance is more aligned with human intuition.
Due to the scarcity of input-goal-instruction triplets, InsPix2Pix (Brooks et al., 2023) collects the curated IPr2Pr dataset. The instructions are generated by GPT-3 (Brown et al., 2020), and the input-goal image pairs are synthesized with Prompt-to-Prompt (Hertz et al., 2023). InsPix2Pix then applies a pre-trained CLIP text encoder (Radford et al., 2021) to guide the diffusion model along with the input image. Although this yields feasible results, CLIP is trained on static descriptions, making it challenging to capture the essential visual transformation in editing. Furthermore, the instructions are often too brief, ambiguous, and insufficient to guide toward the intended goal. These deficiencies limit the effectiveness of InsPix2Pix in instruction-based image editing.
Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023) have shown significant advancement in diverse language tasks, including machine translation, text summarization, and question answering. Learning from large-scale corpora with diverse content, LLMs contain latent visual knowledge and creativity, which can assist various vision-and-language tasks (Wu et al., 2023; Feng et al., 2023; Chakrabarty et al., 2023). Upon LLMs, multimodal large language models (MLLMs) can treat images as input naturally and provide visual-aware responses to serve as multimodal assistants (Zhang et al., 2023; Liu et al., 2023; Zhu et al., 2023; Koh et al., 2023).
Inspired by MLLMs, we incorporate them to deal with the insufficient guidance of instructions and introduce MLLM-Guided Image Editing (MGIE). As demonstrated in Fig. 2, MGIE consists of an MLLM and a diffusion model. The MLLM learns to derive concise expressive instructions and offers explicit visual-related guidance. The diffusion model is jointly updated and performs image editing with the latent imagination of the intended goal via end-to-end training. In this way, MGIE benefits from the inherent visual derivation and addresses ambiguous human commands to achieve reasonable editing. For the example in Fig. 1, it is difficult to capture what "_healthy_" means without additional context. Our MGIE can precisely connect "_vegetable toppings_" with the pizza and lead to the related editing in line with human expectations.
To learn instruction-based image editing, we adopt IPr2Pr as our pre-training dataset. We consider different editing aspects in EVR (Tan et al., 2019), GIER (Shi et al., 2020), MA5k (Shi et al., 2022), and MagicBrush (Zhang et al., 2023a). MGIE performs Photoshop-style modification, global photo optimization, and local object alteration. All should be guided by human instructions. Experimental results indicate that our MGIE significantly strengthens instruction-based image editing with reasonable expressive instructions in automatic metrics and human evaluation, and visual-aware guidance is crucial to this improvement. In summary, our contributions are three-fold:
* We introduce MLLM-Guided Image Editing (MGIE), which jointly learns the MLLM and editing model with visual-aware expressive instructions to provide explicit guidance.
* We conduct comprehensive studies from various editing aspects, including Photoshop-style modification, global photo optimization, and local editing, along with qualitative comparisons.
* Extensive experiments demonstrate that visual-aware expressive instructions are crucial for image editing, and our MGIE effectively enhances editing performance.
## 2 Related Work
Instruction-based Image Editing. Text-guided image editing can significantly improve the controllability and accessibility of visual manipulation by following human commands. Previous works built upon GAN frameworks (Goodfellow et al., 2015; Reed et al., 2016) to alter images but are limited to unrealistic synthesis or specific domains (Nam et al., 2018; Li et al., 2020; El-Nouby et al., 2019; Fu et al., 2020; Fu et al., 2022). With promising large-scale training, diffusion models (Ho et al., 2020; Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022) can accomplish image transformation via controlling the cross-modal attention maps for the global caption (Meng et al., 2022; Hertz et al., 2023; Kawar et al., 2023; Gu et al., 2023). Local image editing allows fine-grained manipulation by inpainting target regions with user-provided (Nichol et al., 2022; Avrahami et al., 2022; Wang et al., 2023b) or predicted masks (Bar-Tal et al., 2022; Couairon et al., 2023) while preserving the remaining
areas. Different from them, instruction-based image editing accepts straight commands, such as "_add fireworks to the sky_", which is not restricted to elaborate descriptions or regional masks. Recent methods learn from synthetic input-goal-instruction triples (Brooks et al., 2023) and with additional human feedback (Zhang et al., 2023c) to follow editing instructions. However, the frozen CLIP text encoder is pre-trained for static descriptions but not the crucial transformation in editing. Moreover, the instructions are sometimes ambiguous and imprecise for the editing goal. In this paper, we learn with multimodal large language models to perceive images along with given prompts for expressive instructions, which provides explicit yet detailed guidance, leading to superior editing performance.
Large Language Models for Vision. Large language models (LLMs) have demonstrated impressive capabilities for text generation and generalizability in various tasks (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023). With robust text understanding, previous works adapt LLMs to input prompts and reason over downstream vision-and-language tasks (Zhang et al., 2023; Wu et al., 2023; Lu et al., 2023; Yang et al., 2023; Chakrabarty et al., 2023). They further produce pseudocode instructions or executable programs via LLMs (Huang et al., 2022; Gupta and Kembhavi, 2023; Suris et al., 2023; Feng et al., 2023; Lian et al., 2023). Through visual feature alignment with instruction tuning, multimodal large language models (MLLMs) can perceive images and provide adequate responses (Li et al., 2023; Zhang et al., 2023; Liu et al., 2023; Zhu et al., 2023). Recently, studies also adopt MLLMs for generating chat-related images (Koh et al., 2023; Sun et al., 2023). However, they can only produce images from scratch, which are distinct from inputs. Our proposed MGIE is the first to leverage MLLMs and improve image editing with derived expressive instructions.
## 3 Method
### Background: Multimodal Large Language Models (MLLMs)
Large language models (LLMs) have shown impressive capabilities for natural language generation. Multimodal large language models (MLLMs) empower LLMs to perceive images and provide reasonable responses. Initialized from a pre-trained LLM, the MLLM contains a visual encoder (_e.g._, CLIP-L (Radford et al., 2021)) to extract the visual features \(f\), and an adapter \(\mathcal{W}\) to project \(f\) into the language modality. The training of the MLLM (Liu et al., 2023) can be summarized as:
\[\mathcal{C} =\{x_{1},x_{2},...,x_{l}\}, \tag{1}\] \[f =\text{Enc}_{\text{vis}}(\mathcal{V}),\] \[x_{t} =\text{MLLM}(\{x_{1},...x_{t-1}\}\mid\mathcal{W}(f)),\]
where \(l\) is the number of word tokens in \(\mathcal{C}\). \(\mathcal{C}\) can be the image caption (Features Alignment) or the multimodal instruction-following data (Instruction Tuning). The MLLM follows standard auto-regressive training for next-token prediction and can then serve as a visual assistant for various tasks such as visual question answering and complex reasoning. Although the MLLM is capable of visual perception via the above training, its output is still limited to text.
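Concretely, Eq. (1) amounts to prefixing the projected visual features \(\mathcal{W}(f)\) to the token embeddings before the usual next-token objective. A toy sketch (ours; the wrapper name, sizes, and single-linear-layer adapter are placeholder assumptions):

```python
import torch
import torch.nn as nn

class VisualPrefixLM(nn.Module):
    """Wraps a decoder-only LM so its context starts with W(f)."""

    def __init__(self, llm, d_vis=1024, d_model=4096):
        super().__init__()
        self.llm = llm                            # any HF causal LM
        self.adapter = nn.Linear(d_vis, d_model)  # the adapter W

    def forward(self, vis_feats, token_ids):
        # vis_feats: [B, n_vis, d_vis] from the frozen visual encoder
        prefix = self.adapter(vis_feats)                     # W(f)
        tokens = self.llm.get_input_embeddings()(token_ids)  # [B, l, d]
        inputs = torch.cat([prefix, tokens], dim=1)
        return self.llm(inputs_embeds=inputs).logits
```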
Figure 2: Overview of MLLM-Guided Image Editing (**MGIE**), which leverages MLLMs to enhance instruction-based image editing. MGIE learns to derive concise expressive instructions and provides explicit visual-related guidance for the intended goal. The diffusion model jointly trains and achieves image editing with the latent imagination through the edit head in an end-to-end manner.
### MLLM-Guided Image Editing (MGIE)
As illustrated in Fig. 2, we propose MLLM-Guided Image Editing (MGIE) to edit an input image \(\mathcal{V}\) into a goal image \(\mathcal{O}\), by a given instruction \(\mathcal{X}\). To handle imprecise instructions, MGIE contains the MLLM and learns to derive explicit yet concise expressive instructions \(\mathcal{E}\). To bridge the language and visual modality, we add special [IMG] tokens after \(\mathcal{E}\) and adopt the edit head \(\mathcal{T}\) to transform them. They serve as the latent visual imagination from the MLLM and guide our diffusion model \(\mathcal{F}\) to achieve the intended editing goal. MGIE is then able to comprehend ambiguous commands with visual-related perception for reasonable image editing.
Concise Expressive Instruction. From features alignment and instruction tuning, the MLLM can offer visual-related responses with its cross-modal perception. For image editing, we use this prompt "_what will this image be like if_ [instruction]" as the language input with the image and derive a detailed explanation of the editing command. However, those explanations are often too lengthy and involve redundant descriptions, which can even mislead the intention. To obtain succinct narrations, we apply a pre-trained summarizer 1 and make the MLLM learn to generate the summarized outputs. We treat this explicit yet concise guidance as expressive instruction \(\mathcal{E}\):
Footnote 1: We adopt Flan-T5-XXL (Chung et al., 2022), which has been specifically fine-tuned for summarization, as our summarization model for the original MLLM (MLLM*).
\[\mathcal{E} =\text{Summ}(\text{MLLM*}([\text{prompt},\mathcal{X}]\mid\mathcal{ W}(f)))\] \[=\{w_{1},w_{2},...,w_{l}\},\] \[w^{\prime}_{t} =\text{MLLM}(\{w_{1},...,w_{t-1}\}\mid\mathcal{W}(f)), \tag{2}\] \[\mathcal{L}_{\text{ins}} =\sum\nolimits_{t=1}^{l}\text{CELoss}(w^{\prime}_{t},w_{t}),\]
where we apply the cross-entropy loss (CELoss) to train the MLLM via teacher forcing. \(\mathcal{E}\) can provide a more concrete idea than \(\mathcal{X}\) such as linking "_desert_" with "_sand dunes_" and "_cacti or small shrubs_", which mitigates the comprehension gap for reasonable image editing. This strategy further enhances our efficiency. During inference, the trained MGIE straightforwardly derives concise \(\mathcal{E}\) instead of rolling out lengthy narrations (22.7 _vs._ 64.5 tokens) and relying on external summarization. MGIE now can acquire a visual imagination of the editing intention but is confined to the language modality. To bridge the gap, we append \(N\) visual tokens [IMG] after \(\mathcal{E}\), where their word embeddings are trainable, and the MLLM also learns to generate them through its language modeling (LM) head. The visual tokens are treated as a visual-related instruction understanding in \(\mathcal{E}\) and establish a connection between the language and vision modalities.
Image Editing via Latent Imagination.We adopt the edit head \(\mathcal{T}\) to transform [IMG] into actual visual guidance. \(\mathcal{T}\) is a sequence-to-sequence model, which maps the sequential visual tokens from the MLLM to the semantically meaningful latent \(\mathcal{U}=\{u_{1},u_{2},...,u_{L}\}\) as the editing guidance:
\[u_{t}=\mathcal{T}(\{u_{1},...,u_{t-1}\}\mid\{e_{\texttt{[IMG]}}+h_{\texttt{[IMG]}}\}), \tag{3}\]
where \(e\) is the word embedding and \(h\) is the hidden state (from the last layer of MLLM before the LM head) of [IMG]. Specifically, the transformation over \(e\) can be treated as a general representation in the visual modality, and \(h\) is an instance-aware visual imagination for such editing intention. Our \(\mathcal{T}\) is similar to BLIP-2 (Li et al., 2023b;a) or GILL (Koh et al., 2023) for extracting visual features.
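As an illustration, the following is a minimal PyTorch sketch of such a sequence-to-sequence edit head; the hidden size, the number of latents \(L\), and the parallel learned-query decoding are our own simplifying assumptions (Eq. (3) is autoregressive), not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EditHead(nn.Module):
    """Sketch of the edit head T: maps [IMG] features to latent guidance U."""
    def __init__(self, d_model=768, n_latents=77, n_layers=4):
        super().__init__()
        # Learned queries that will become the latent guidance u_1,...,u_L.
        self.queries = nn.Parameter(torch.randn(n_latents, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, e_img, h_img):
        # e_img: word embeddings of the N [IMG] tokens; h_img: their last-layer
        # hidden states. Their sum is the conditioning memory, cf. Eq. (3).
        memory = e_img + h_img                                  # (B, N, d_model)
        q = self.queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        return self.decoder(q, memory)                          # (B, L, d_model)

head = EditHead()
u = head(torch.randn(2, 8, 768), torch.randn(2, 8, 768))
print(u.shape)  # torch.Size([2, 77, 768])
```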
To guide image editing with the visual imagination \(\mathcal{U}\), we consider a latent diffusion model \(\mathcal{F}\)(Rombach et al., 2022), which includes the variational autoencoder (VAE) and addresses denoising diffusion in the latent space. The goal of \(\mathcal{F}\) is to generate the latent goal \(o=\text{Enc}_{\text{VAE}}(\mathcal{O})\), preserving the latent input \(v=\text{Enc}_{\text{VAE}}(\mathcal{V})\) and following the editing guidance \(\{u\}\). The diffusion process adds noise to \(o\), yielding \(z_{t}\), where the noise level increases over timesteps \(t\). We then learn the UNet \(\epsilon_{\theta}\) to predict the added noise (Ho et al., 2020). As in LDM, we inject the visual imagination into \(\epsilon_{\theta}\) via the cross-attention layer \(\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{\text{dim}}})\cdot V\) with
\[Q=W^{(i)}_{Q}\cdot\varphi_{i}(z_{t}),K=W^{(i)}_{K}\cdot\{u\},V=W^{(i)}_{V} \cdot\{u\}, \tag{4}\]
where \(\varphi\) is the flattened operation, \(W^{(i)}_{Q}\), \(W^{(i)}_{K}\), and \(W^{(i)}_{V}\) are learnable attention matrices. Following InsPix2Pix, we also concatenate \(v\) with \(z_{t}\). In this way, our \(\mathcal{F}\) can condition both \(\mathcal{V}\) and \(\mathcal{U}\) to perform
image editing. We take classifier-free guidance (Ho and Salimans, 2021), and the score estimation \(s_{\theta}\) is extrapolated to keep away from the unconditional \(\varnothing\), where the editing loss \(\mathcal{L}_{\text{edit}}\) is calculated as:
\[\begin{split}s_{\theta}(z_{t},v,\{u\})&=s_{\theta}(z_{t},\varnothing,\varnothing)\\ &\quad+\alpha_{\mathcal{V}}\cdot(s_{\theta}(z_{t},v,\varnothing)-s_{\theta}(z_{t},\varnothing,\varnothing))\\ &\quad+\alpha_{\mathcal{X}}\cdot(s_{\theta}(z_{t},v,\{u\})-s_{\theta}(z_{t},v,\varnothing)),\\ \mathcal{L}_{\text{edit}}&=\mathbb{E}_{o,v,\{u\},\epsilon\sim\mathcal{N}(0,1),t}\left[||\epsilon-\epsilon_{\theta}(z_{t},t,v,\{u\})||_{2}^{2}\right],\end{split} \tag{5}\]
where \(\alpha_{\mathcal{V}}\) and \(\alpha_{\mathcal{X}}\) are the guidance-scale weights for the image and the instruction. Similar to InsPix2Pix, we randomly set \(v=\varnothing\), \(\{u\}=\varnothing\), or both to \(\varnothing\) for 5% of the data during training. After obtaining the generated latent \(o^{\prime}\) through the denoising process by \(\epsilon_{\theta}\), we decode the editing result \(\mathcal{O}^{\prime}=\text{Dec}_{\text{VAE}}(o^{\prime})\). During inference, we use \(\alpha_{\mathcal{V}}=1.5\) and \(\alpha_{\mathcal{X}}=7.5\).
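A minimal sketch of the classifier-free guidance extrapolation in Eq. (5) follows; `eps_model` is a stand-in for the UNet \(\epsilon_{\theta}\) (here a dummy that ignores its conditioning), so only the score composition itself is taken from the text.

```python
import torch

def cfg_score(eps_model, z_t, t, v, u, alpha_v=1.5, alpha_x=7.5, null=None):
    """Compose the guided score of Eq. (5) from three conditional predictions."""
    e_uncond = eps_model(z_t, t, null, null)   # s(z_t, 0, 0): fully unconditional
    e_img    = eps_model(z_t, t, v, null)      # s(z_t, v, 0): image-only
    e_full   = eps_model(z_t, t, v, u)         # s(z_t, v, {u}): image + guidance
    return (e_uncond
            + alpha_v * (e_img - e_uncond)     # push along the image direction
            + alpha_x * (e_full - e_img))      # push along the instruction direction

# Dummy UNet so the sketch runs without weights.
eps_model = lambda z, t, v, u: torch.zeros_like(z)
z = torch.randn(1, 4, 64, 64)
print(cfg_score(eps_model, z, torch.tensor([10]), v=None, u=None).shape)
```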
Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) as the photo difference2. **MagicBrush**(Zhang et al., 2023a) annotates 10.5K triples. We follow them and use L1, DINO, CVS, and the text-visual feature similarity (CTS) (Hessel et al., 2021) between goal captions and resulting images. We use the same training/validation/testing split as the original settings. Unless mentioned otherwise, all evaluations are in a zero-shot manner, where the model is only pre-trained on IPr2Pr. Apart from automatic metrics, we also conduct a human evaluation of both expressive instructions and editing results in Sec. 4.3.
Footnote 2: As there is no object alteration in MA5k, feature-based DINO and CVS cannot clearly tell the difference.
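As a pointer for reproducing the pixel-level metrics above, here is a small sketch (our own, not from the paper) computing the L1 photo difference and LPIPS with the official `lpips` package of Zhang et al. (2018); random tensors stand in for the input and edited images.

```python
import torch
import lpips  # pip install lpips (Zhang et al., 2018)

x = torch.rand(1, 3, 256, 256) * 2 - 1   # images scaled to [-1, 1], as lpips expects
y = torch.rand(1, 3, 256, 256) * 2 - 1

l1 = (x - y).abs().mean()                 # pixel-level photo difference (L1)
loss_fn = lpips.LPIPS(net='alex')         # perceptual photo difference (LPIPS)
print(l1.item(), loss_fn(x, y).item())
```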
Baselines.We treat InsPix2Pix (Brooks et al., 2023), built upon the CLIP text encoder with a diffusion model for instruction-based image editing, as our baseline. We consider a similar LLM-guided image editing (LGIE) model, where LLaMA-7B (Touvron et al., 2023) is adopted for expressive instructions \(\mathcal{E}\) from instruction-only inputs but without visual perception.
Implementation Details.The MLLM and diffusion model \(\mathcal{F}\) are initialized from LLaVA-7B (Liu et al., 2023) and StableDiffusion-v1.5 (Rombach et al., 2022). We jointly update both for the image editing task. Note that only the word embeddings and LM head in the MLLM are trainable. Following GILL (Koh et al., 2023), we use \(N\)=8 visual tokens. The edit head \(\mathcal{T}\) is a 4-layer Transformer, which transforms language features into editing guidance. We adopt AdamW (Loshchilov and Hutter, 2019) with a batch size of 128 to optimize MGIE. The learning rates of the MLLM and \(\mathcal{F}\) are 5e-4 and 1e-4, respectively. All experiments are conducted in PyTorch (Paszke et al., 2017) on 8 A100 GPUs.
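The trainable-parameter split described above can be expressed as two AdamW parameter groups; the sketch below uses hypothetical stand-in modules (`mllm_trainable`, `unet`) purely to illustrate the two learning rates.

```python
import torch
import torch.nn as nn

# Stand-ins: per the setup, only the MLLM's word embeddings and LM head train
# (lr 5e-4), while the diffusion model F trains with lr 1e-4.
mllm_trainable = nn.ModuleDict({
    "embed": nn.Embedding(32000, 768),   # word embeddings (incl. [IMG] tokens)
    "lm_head": nn.Linear(768, 32000),    # LM head that also generates [IMG]
})
unet = nn.Conv2d(4, 4, 3, padding=1)     # placeholder for the diffusion UNet

optim = torch.optim.AdamW([
    {"params": mllm_trainable.parameters(), "lr": 5e-4},
    {"params": unet.parameters(), "lr": 1e-4},
])
print(len(optim.param_groups))  # 2
```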
### Quantitative Results
Table 1 shows the zero-shot editing results, where models are trained only on IPr2Pr. For EVR and GIER, which involve Photoshop-style modifications, expressive instructions can reveal concrete goals instead of brief but ambiguous commands, which makes the editing results more similar to the intentions (_e.g._, a higher 81.3 CVS on EVR by LGIE and a higher 58.6 SSIM on GIER by MGIE). For global photo optimization on MA5k, InsPix2Pix struggles due to the scarcity of related training triples. Though trained from the same source, LGIE and MGIE can offer detailed explanations via learning with the LLM, but LGIE is still confined to its single modality. With access to images, MGIE derives explicit instructions such as _which regions should brighten_ or _what objects are more distinct_. This brings a significant performance boost (_e.g._, a higher 65.4 SSIM and a lower 0.3 photo distance). Similar results are found on MagicBrush. MGIE also achieves the best performance from the precise visual
Figure 3: **Trade-off curve for image editing. We set \(\alpha_{\mathcal{X}}\) to 7.5 and vary \(\alpha_{\mathcal{V}}\) in \([1.0,2.2]\). For both edit (X-axis) and input consistency (Y-axis), higher is better.**
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{**MA5k**} & \multicolumn{4}{c}{**MagicBrush**} \\ \cline{3-9} Arch. & Method & L1\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & L1\(\downarrow\) & DINO\(\uparrow\) & CVS\(\uparrow\) & CTS\(\uparrow\) \\ \hline \multirow{3}{*}{FZ} & InsPix2Pix & 0.172 & **57.70** & **0.376** & **0.113** & 70.73 & 84.87 & 29.25 \\ & LGIE & 0.179 & 56.49 & 0.385 & 0.136 & 66.15 & 82.85 & 28.66 \\ & MGIE & **0.166** & 56.92 & 0.381 & 0.130 & **72.81** & **85.91** & **29.86** \\ \hline \multirow{2}{*}{FT} & LGIE & 0.164 & 59.52 & 0.362 & 0.120 & 71.17 & 85.57 & 29.41 \\ & MGIE & **0.161** & **60.21** & **0.349** & **0.109** & **73.93** & **86.45** & **29.60** \\ \hline \multirow{2}{*}{E2E} & LGIE & 0.150 & 64.42 & 0.318 & 0.087 & 80.36 & 88.11 & 30.22 \\ & MGIE & **0.135** & **65.38** & **0.299** & **0.080** & **81.61** & **90.89** & **30.48** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Ablation study. We attempt FZ, FT, or E2E to utilize expressive instructions. _FZ_ directly treats expressive instructions as the inputs to frozen InsPix2Pix. FT further fine-tunes InsPix2Pix and makes it adapt to expressive instructions. Our E2E learns expressive instructions along with the MLLM and trains the diffusion model in an end-to-end manner.**
imagination and modifies the designated targets as the goals (_e.g._, a higher 81.6 DINO visual similarity and a higher 30.5 CTS global caption alignment).
To investigate instruction-based image editing for specific purposes, Table 2 fine-tunes models on each dataset. For EVR and GIER, all models obtain improvements after the adaptation to Photoshop-style editing tasks. Since fine-tuning makes expressive instructions more domain-specific as well, our MGIE improves the most via learning with domain-related guidance. This also helps our diffusion model to demonstrate concrete edited scenes from the fine-tuned MLLM, which benefits both global optimization and local modification (_e.g._, a notably lower 0.23 LPIPS on MA5k and a higher 94.3 CVS on MagicBrush). MGIE is consistently superior to LGIE in all aspects of editing since our visual-aware guidance is more aligned with the intended goal. From the above experiments, we illustrate that learning with expressive instructions can effectively enhance image editing, and visual perception plays a crucial role in deriving explicit guidance for the greatest enhancements.
Trade-off between \(\alpha_{\mathcal{X}}\) and \(\alpha_{\mathcal{V}}\).There are two goals in image editing: manipulate the target as instructed and preserve the rest of the input image. Fig. 3 plots the trade-off curves between instruction (\(\alpha_{\mathcal{X}}\)) and input consistency (\(\alpha_{\mathcal{V}}\)). We fix \(\alpha_{\mathcal{X}}\) at 7.5 and vary \(\alpha_{\mathcal{V}}\) in \([1.0,2.2]\). A higher \(\alpha_{\mathcal{V}}\) makes an editing result more similar to the input but less aligned with the instruction. The X-axis shows the CLIP directional similarity, i.e., how much the editing follows the instruction; the Y-axis is the feature similarity to the input image from the CLIP visual encoder. Through concrete expressive instructions, we surpass InsPix2Pix in all settings. Our MGIE additionally yields comprehensive enhancements by learning with explicit visual-related guidance. This supports robust improvement, whether requiring higher input correlation or edit relevance.
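The paper does not spell out the metric's formula, so the following is a sketch of the standard CLIP directional similarity (an assumption on our side), computed on pre-extracted CLIP features of the input/output images and captions.

```python
import torch
import torch.nn.functional as F

def directional_similarity(img_in, img_out, txt_in, txt_out):
    """Cosine between the visual edit direction and the textual edit direction."""
    d_img = F.normalize(img_out - img_in, dim=-1)   # change in CLIP image space
    d_txt = F.normalize(txt_out - txt_in, dim=-1)   # change in CLIP text space
    return (d_img * d_txt).sum(-1)

feat = lambda: torch.randn(1, 512)                  # stand-in CLIP features
print(directional_similarity(feat(), feat(), feat(), feat()))
```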
### Ablation Study
MLLM-Guided Image Editing exhibits encouraging improvement in both zero-shot and fine-tuning scenarios. Now, we investigate different architectures for using expressive instructions. Table 3 considers **FZ**, **FT**, and our **E2E**. FZ directly uses the derived expressive instructions3 as the input prompts to the frozen InsPix2Pix. Despite the additional guidance, this scenario still differs from the trained editing instructions, which makes it difficult to deal with. LGIE even hurts performance, as it may mislead due to its lack of visual perception. FT fine-tunes InsPix2Pix and adapts it to expressive instructions. These results support that image editing can benefit from explicit guidance derived from the LLM/MLLM. E2E updates the editing diffusion model in conjunction with the LM, which learns to extract applicable guidance and discard irrelevant narration simultaneously through the end-to-end hidden states. In addition, our E2E can also avoid the potential error that may be propagated from the expressive instructions. Hence, we observe the most enhancements in both global optimization (MA5k) and local editing (MagicBrush). Among FZ, FT, and E2E, MGIE consistently surpasses LGIE. This indicates that expressive instructions with crucial visual perception are always advantageous across all ablation settings.
Footnote 3: During the ablation study, we employ concise summarized expressive instructions for a fair comparison.
Why is MLLM Guidance Helpful?Fig. 4 presents the CLIP-Score between the input or ground-truth goal images and the expressive instructions. A higher CLIP-S with respect to the input images indicates that instructions are relevant to the editing source. Better alignment with the goal images provides explicit and correlated
edit guidance. Without access to visual perception, expressive instructions from LGIE are limited to general language imagination, which is not tailored to the source image. Their CLIP-S are even lower than those of the original instructions. By contrast, MGIE is more aligned with inputs/goals, which explains why our expressive instructions are helpful. With a clear narration of the intended result, our MGIE can achieve the greatest improvements in image editing.
Human Evaluation.Apart from automatic metrics, we conduct a human evaluation to study the generated expressive instructions and image editing results. We randomly sample 25 examples per dataset (100 in total) and ask human annotators to rank the baselines and MGIE. To avoid potential ranking bias, we hire 3 annotators for each example. Fig. 5 plots the quality of the generated expressive instructions. Precise guidance is informative and aligns with the intended goal (More Practical). At the same time, it should avoid incorrect or unrelated explanations (Less Hallucination). First, over 53% support that MGIE provides more practical expressive instructions, which facilitates the
Figure 7: **Qualitative comparison** between InsPix2Pix, LGIE, and our MGIE. For the 1st example, MGIE can showcase the clear lightning in the sky and its reflection on the water. For the 2nd one, although LGIE accurately targets the Christmas tree, only MGIE removes it in the background. For photo optimization (the 3rd example), InsPix2Pix fails to adjust the brightness, and LGIE makes the whole photo white and obviously distinct. In contrast, MGIE follows the instruction to brighten as well as sharpen it. Moreover, in the 4th one, MGIE puts the glaze only on the donuts, but baselines even draw the entire image in strawberry pink.
image editing task with explicit guidance. Meanwhile, 57% of labelers indicate that our MGIE can prevent irrelevant descriptions from language-derived hallucinations in LGIE, since it perceives the image to have a precise goal for editing. Fig. 6 compares the image editing results of InsPix2Pix, LGIE, and our MGIE in terms of instruction following, ground-truth relevance, and overall quality. The ranking score ranges from 1 to 3; higher is better. With derived expressive instructions from the LLM or MLLM, LGIE and MGIE both outperform the baseline and perform image editing that is correlated with the instruction as well as similar to the ground-truth goal. Additionally, since our expressive instructions can provide concrete and visual-aware guidance, MGIE has the best human preference in all aspects, including overall editing quality. These performance trends also align with the automatic evaluations, which supports our usage of the metrics.
Inference Efficiency.Despite relying on an MLLM to facilitate image editing, MGIE only rolls out concise expressive instructions (fewer than 32 tokens) and maintains efficiency comparable to InsPix2Pix. Table 4 presents the inference time cost on an NVIDIA A100 GPU. For a single input, MGIE accomplishes the editing task in 10 seconds. With greater data parallelization, it takes a similar amount of time (_e.g._, 37 seconds with batch size 8). The entire process fits on one GPU (40GB). In summary, our MGIE surpasses the baseline in quality while maintaining competitive efficiency, leading to effective and practical image editing.
Qualitative Comparisons.Fig. 7 illustrates the visualized comparison on all used datasets. Fig. 8 further compares the expressive instructions by LGIE or MGIE. Our superior performance benefits from the explicit guidance of visual-related expressive instructions. Please visit our project website4 for more qualitative results.
Footnote 4: Project website: [https://mllm-ie.github.io](https://mllm-ie.github.io)
## 5 Conclusion
We propose MLLM-Guided Image Editing (MGIE) to enhance instruction-based image editing via learning to produce expressive instructions. Instead of brief but ambiguous guidance, MGIE derives explicit visual-aware intention and leads to reasonable image editing. We conduct extensive studies from various editing aspects and demonstrate that our MGIE effectively improves performance while maintaining competitive efficiency. We also believe the MLLM-guided framework can contribute to future vision-and-language research.
\begin{table}
\begin{tabular}{l c c} \hline \hline BS & InsPix2Pix & MGIE \\ \hline
1 & 6.8 & 9.2 \\
4 & 16.5 & 20.6 \\
8 & 31.5 & 36.9 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Time cost (in seconds) for different batch sizes (BS).
Figure 8: **Qualitative comparison of expressive instructions by LGIE and our MGIE. Due to the limitation of the single modality, LGIE can only have language-based insight but may derive irrelevant or even wrong explanations for image editing (_e.g._, “_two people still in the foreground_” for GIER). With access to images, MGIE provides explicit visual imagination after the editing such as “_baby on the beach with a shark_” or “_bring out details of leaves and trunk_”. More surprisingly, we can link “_lightsaber or spaceship_” from Star Wars and describe “_chewing on the stick_” for the dog, which is aligned with the intended goal.** | Instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands, without requiring detailed descriptions or region masks. However, human instructions are sometimes too brief for current methods to capture and carry out. Multimodal large language models (MLLMs) show promising capabilities in cross-modal understanding and in generating visually-aware responses via LMs. We investigate how MLLMs can facilitate editing instructions and propose MLLM-Guided Image Editing (MGIE). MGIE learns to derive expressive instructions and provides explicit guidance. The editing model jointly captures this visual imagination and performs the manipulation through end-to-end training. We evaluate various aspects of Photoshop-style modification, global photo optimization, and local editing. Extensive experimental results demonstrate that expressive instructions are crucial to instruction-based image editing. |
2302.14779 | Twisted Drinfeld Centers and Framed String-Nets | We discuss a string-net construction on 2-framed surfaces, taking as algebraic input a finite, rigid tensor category, which is neither assumed to be pivotal nor semi-simple. It is shown that circle categories of our framed string-net construction essentially compute Drinfeld centers twisted by powers of the double dual functor. | Hannes Knötzele, Christoph Schweigert, Matthias Traube | 2023-02-28T17:24:32 | http://arxiv.org/abs/2302.14779v3 | # Twisted Drinfeld Centers and Framed String-Nets
###### Abstract.
We discuss a string-net construction on 2-framed surfaces, taking as algebraic input a finite, rigid tensor category, which is neither assumed to be pivotal nor semi-simple. It is shown that circle categories of our framed string-net construction essentially compute Drinfeld centers twisted by powers of the double dual functor.
###### Contents
* 1 Introduction
* 2 Recollections on Finite Tensor Categories
* 2.1 Rigid Monoidal Categories
* 2.2 (Co-)End in Finite Tensor Categories
* 3 Twisted Drinfeld Centers and Monads
* 3.1 Monadicity of Twisted Drinfeld Centers
* 3.2 Kleisli Category and Representable Functors
* 4 Progressive Graphical Calculus for Finite Tensor Categories
* 5 Framed String-Net Construction
* 5.1 Locally Progressive Graphs
* 5.2 Framed String-Net Spaces
* 6 Circle Categories and Twisted Drinfeld-Centers
* 6.1 2-Framings of the Circle and Framed Cylinders
* 6.2 Circle Categories
* 6.3 Circle Category as a Kleisli Category
## 1. **Introduction**
Over the last few decades, topological field theories have proved to be a very fruitful research area relating concepts from topology, categorical algebra and mathematical physics. A topological field theory (TFT) in \(n\) dimensions with values in a symmetric monoidal category \(\mathcal{C}\) is a symmetric monoidal functor \(\mathcal{F}:\mathsf{Cob}^{n}\to\mathcal{C}\), where \(\mathsf{Cob}^{n}\) is a symmetric monoidal category of cobordisms, with closed \((n-1)\)-dimensional manifolds as objects and (equivalence classes of) \(n\)-dimensional bordisms as morphisms.
In view of the results in [10], we expect that these circle categories are related to Drinfeld centers twisted by powers of the double dual functor. In fact, twisted Drinfeld centers \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) can be defined for any pair of strong-monoidal functors \(F,G:\mathbb{C}\to\mathbb{C}\): the objects of \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) are pairs \((c,\gamma_{\bullet,c})\) consisting of an object \(c\in\mathbb{C}\) together with a half-braiding \(\gamma_{c,x}:F(c)\otimes x\xrightarrow{\simeq}x\otimes G(c)\).
To identify the circle category for the cylinder \(\mathsf{C}_{n}\) with a twisted Drinfeld center, we use that the twisted Drinfeld center \({}_{F}\mathsf{Z}_{G}(\mathbb{C})\) is equivalent to the category of modules for the twisted central monad \({}_{F}T_{G}\) on \(\mathbb{C}\). We show in Theorem 6.3 that the string-net construction gives us the Kleisli category \(\mathbb{C}_{T_{n}}\) of a specific monad \(T_{n}\) where the twisting is by a power of the bidual functor (which is monoidal):
\[\mathsf{Cyl}(\mathsf{C}_{n},\mathbb{C})\simeq\mathbb{C}_{T_{n}}. \tag{1.1}\]
In Theorem 6.4 we show that the twisted Drinfeld center itself can be recovered as a linear category, by taking those presheaves on the Kleisli category whose pullback to a presheaf on \(\mathbb{C}\) is representable:
\[\mathrm{PSh}_{\mathbb{C}}(\mathsf{Cyl}_{n})\simeq\mathsf{Z}_{n}(\mathbb{C}) \tag{1.2}\]
where \(\mathsf{Z}_{n}(\mathbb{C})\) is the Drinfeld center twisted by the appropriate power of the double dual functor depending on \(n\), cf. equation (3.4). This allows us to recover twisted Drinfeld centers from framed string-nets. The comparison with [10, Corollary 3.2.3] shows complete coincidence. This provides a way to obtain twisted Drinfeld centers in the spirit of planar algebras [11]; they are closely related to tube algebras which can be formulated as the annular category [11] of a planar algebra.
This paper is organized as follows. In two preliminary sections, we recall in section 2 some facts and notation about finite tensor categories and in section 3 about twisted Drinfeld centers and monads. In this section, we show in particular in Proposition 3.6 how to obtain the Eilenberg-Moore category of a monad in terms of presheaves on the Kleisli category whose pullback is representable. While this statement is known in the literature, in particular in a general context, we include the proof for the benefit of the reader.
In section 4 we recall the graphical calculus of progressive graphs for monoidal categories that has been introduced in [11]. In section 5, we first show in subsection 5.1 how to globalize the graphical calculus from section 4 to 2-framed surfaces. This allows us to define in subsection 5.2 string-net spaces on 2-framed surfaces, see in particular Definition 5.9.
Section 6 is devoted to the study of circle categories: in subsection 6.1 we very briefly discuss framings of cylinders, before we define framed circle categories in section 6.2 and show in Theorem 6.3 that the circle categories are equivalent to Kleisli categories. Finally, Theorem 6.4 in section 6.3 contains the main result (1.2) and the extension to arbitrary framings in Remark 6.5.
**Acknowledgment:** The authors thank Gustavo Jasso, Ying Hong Tham and Yang Yang for useful discussions. CS and MT are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SCHW1162/6-1; CS is also supported by the DFG under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. HK acknowledges support by the DFG under 460925688 (in the Emmy-Noether group of Sven Möller).
## 2. **Recollections on Finite Tensor Categories**
In this section, we recall some facts about finite tensor categories and at the same time fix notation. Proofs and more detailed information can be found in e.g. [1, 13, 11].
Throughout this paper, \(\mathbb{K}\) will be an algebraically closed field of characteristic zero. All monoidal categories will be assumed to be strict.
### Rigid Monoidal Categories
An abelian monoidal category \((\mathcal{C},\otimes,1)\) is \(\mathbb{K}\)-_linear_ if it is enriched in \(\mathsf{Vect}_{\mathbb{K}}\) and if \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\) is a bilinear functor. A _linear functor_ between \(\mathbb{K}\)-linear categories is an additive functor, i.e. linear on Hom-spaces. For \(\mathbb{K}\)-linear categories \(\mathcal{A}\), \(\mathcal{B}\), we denote the category of linear functors from \(\mathcal{A}\) to \(\mathcal{B}\) by \(\mathsf{Fun}_{\mathbb{K}}(\mathcal{A},\mathcal{B})\). For a category \(\mathcal{C}\), we write \(\mathcal{C}^{op}\) for the opposite category, i.e. \(\mathcal{C}^{op}\) has the same objects as \(\mathcal{C}\) and \(\operatorname{Hom}_{\mathcal{C}^{op}}(x,y)=\operatorname{Hom}_{\mathcal{C}}(y,x)\). For a monoidal category \((\mathcal{C},\otimes,1)\), its _opposite monoidal category_ \(\mathcal{C}^{rev}:=(\mathcal{C}^{op},\otimes^{op},1)\) is the opposite category \(\mathcal{C}^{op}\) endowed with the monoidal structure \(x\otimes^{op}y:=y\otimes x\) for \(x,y\in\mathcal{C}^{op}\).
A monoidal category \(\mathcal{C}\) has _left duals_ if for every object \(x\in\mathcal{C}\), there exists an object \({}^{\vee}x\in\mathcal{C}\), called the _left dual object_ of \(x\), together with a _left coevaluation_ \(\operatorname{coev}_{x}:\mathbb{1}\to{}^{\vee}x\otimes x\) and a _left evaluation_ \(\operatorname{ev}_{x}:x\otimes{}^{\vee}x\to\mathbb{1}\) satisfying the usual two zig-zag relations. Similarly, \(\mathcal{C}\) has _right duals_ if for \(x\in\mathcal{C}\), there exists an object \(x^{\vee}\in\mathcal{C}\), called the _right dual object_, together with a _right coevaluation_ morphism \(\widetilde{\operatorname{coev}}_{x}:\mathbb{1}\to x\otimes x^{\vee}\) and a _right evaluation_ morphism \(\widetilde{\operatorname{ev}}_{x}:x^{\vee}\otimes x\to\mathbb{1}\) satisfying again the appropriate two zig-zag relations. Equivalently, we could have defined a right dual object for \(x\in\mathcal{C}\) to be a left dual object for \(x\) in \(\mathcal{C}^{rev}\). A monoidal category is _rigid_ if it has both left and right duals.
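For later reference, we spell out the two zig-zag relations for a left dual (a standard identity, displayed here for convenience):
\[\left[x\xrightarrow{\operatorname{id}_{x}\otimes\operatorname{coev}_{x}}x\otimes{}^{\vee}x\otimes x\xrightarrow{\operatorname{ev}_{x}\otimes\operatorname{id}_{x}}x\right]=\operatorname{id}_{x}\,,\qquad\left[{}^{\vee}x\xrightarrow{\operatorname{coev}_{x}\otimes\operatorname{id}_{{}^{\vee}x}}{}^{\vee}x\otimes x\otimes{}^{\vee}x\xrightarrow{\operatorname{id}_{{}^{\vee}x}\otimes\operatorname{ev}_{x}}{}^{\vee}x\right]=\operatorname{id}_{{}^{\vee}x}\,;\]
the relations for right duals are obtained analogously with \(\widetilde{\operatorname{coev}}_{x}\) and \(\widetilde{\operatorname{ev}}_{x}\).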
Left and right duality can be conveniently expressed in terms of strong monoidal functors \(\mathcal{C}^{rev}\to\mathcal{C}\). To be more precise, the _left dual functor_ is defined as
\[\begin{split}{}^{\vee}(\bullet):\mathcal{C}^{rev}&\to\mathcal{C}\\ x&\mapsto{}^{\vee}x\\ \operatorname{Hom}_{\mathcal{C}^{rev}}(x,y)\ni f&\mapsto{}^{\vee}f\in\operatorname{Hom}_{\mathcal{C}}({}^{\vee}x,{}^{\vee}y)\end{split} \tag{2.1}\]
with
\[{}^{\vee}f\coloneqq\left[{}^{\vee}x\xrightarrow{\operatorname{coev}_{y}\otimes\operatorname{id}_{{}^{\vee}x}}{}^{\vee}y\otimes y\otimes{}^{\vee}x\xrightarrow{\operatorname{id}_{{}^{\vee}y}\otimes f\otimes\operatorname{id}_{{}^{\vee}x}}{}^{\vee}y\otimes x\otimes{}^{\vee}x\xrightarrow{\operatorname{id}_{{}^{\vee}y}\otimes\operatorname{ev}_{x}}{}^{\vee}y\right]\quad. \tag{2.2}\]
Analogously, there is a _right duality functor_
\[\begin{split}(\bullet)^{\vee}:\mathcal{C}&\to\mathcal{C}^{rev}\\ x&\mapsto x^{\vee}\\ \operatorname{Hom}_{\mathcal{C}}(x,y)\ni f&\mapsto f^{\vee}\in\operatorname{Hom}_{\mathcal{C}^{rev}}(x^{\vee},y^{\vee}),\end{split} \tag{2.3}\]
where
\[f^{\vee}\coloneqq\left[y^{\vee}\xrightarrow{\operatorname{id}_{y^{\vee}}\otimes\widetilde{\operatorname{coev}}_{x}}y^{\vee}\otimes x\otimes x^{\vee}\xrightarrow{\operatorname{id}_{y^{\vee}}\otimes f\otimes\operatorname{id}_{x^{\vee}}}y^{\vee}\otimes y\otimes x^{\vee}\xrightarrow{\widetilde{\operatorname{ev}}_{y}\otimes\operatorname{id}_{x^{\vee}}}x^{\vee}\right]\quad. \tag{2.4}\]
It is not hard to show that left and right duality functors are indeed strong monoidal functors. The following coherence result allows us to assume that left and right duality functors are strict and the two functors are inverse functors:
**Lemma 2.1**.: _[_15_, Lemma 5.4]_ _For any rigid monoidal category \(\mathcal{C}\), there exists a rigid monoidal category \(\mathcal{D}\) such that_
1. \(\mathcal{C}\) _and_ \(\mathcal{D}\) _are equivalent as monoidal categories._
2. \(\mathcal{D}\) _is a strict monoidal category._
3. \({}^{\vee}(\bullet):\mathcal{D}^{rev}\to\mathcal{D}\) _is a strict monoidal functor._
4. \({}^{\vee}(\bullet)\) _and_ \((\bullet)^{\vee}\) _are inverse functors._
_Remark 2.2_.: We could have defined duality functors also with reversed directions, i.e. the left duality functor as a functor \({}^{\vee}(\bullet):\mathcal{C}\to\mathcal{C}^{rev}\) and the right duality functor \((\bullet)^{\vee}:\mathcal{C}^{rev}\to\mathcal{C}\). From the previous Lemma, we get \({}^{\vee}(x^{\vee})\simeq x\) and \(({}^{\vee}x)^{\vee}\simeq x\). The double dual functors \((\bullet)^{\vee\vee}\) and \({}^{\vee\vee}(\bullet)\) are monoidal functors; in general they are _not_ naturally isomorphic to the identity functor as monoidal functors. A pivotal structure amounts to the choice of a monoidal isomorphism \(\operatorname{id}_{\mathcal{C}}\Rightarrow(\bullet)^{\vee\vee}\); in this paper, we do not require the existence of a pivotal structure.
**Definition 2.3**.:
1. A \(\mathbb{K}\)-linear category is _finite_, if it is equivalent to the category \(A-\mathsf{Mod}\) of finite-dimensional modules over a finite-dimensional \(\mathbb{K}\)-algebra \(A\).
2. A _finite tensor category_ is a finite rigid monoidal category.
_Remark 2.4_.:
1. For an equivalent intrinsic characterization of finite linear categories, we refer to [1, section 1.8]. In particular, the morphism spaces of a finite category \(\mathcal{C}\) are finite-dimensional \(\mathbb{K}\)-vector spaces and \(\mathcal{C}\) has a finite set of isomorphism classes of simple objects.
2. A finite tensor category \(\mathcal{C}\) is, in general, neither semi-simple nor pivotal.
A linear functor \(F:\mathcal{C}\to\mathcal{D}\) between \(\mathbb{K}\)-linear categories is not necessarily exact. In case \(\mathcal{C}\) and \(\mathcal{D}\) are finite tensor categories, it turns out that being left (right) exact is equivalent to admitting a left (right) adjoint.
**Theorem 2.5**.: _[_11_, Proposition 1.7]_ _A functor \(F:\mathcal{C}\to\mathcal{D}\) between finite linear categories is left (right) exact if and only if it admits a left (right) adjoint._
We note several consequences: by Lemma 2.1, the duality functors are inverses and thus adjoints. Hence both functors are exact. Due to the existence of left and right duals, the tensor product of a finite tensor category is an exact functor in both arguments. Finally, given two finite linear categories \(\mathcal{D},\mathcal{C}\), we denote the category of left exact functors from \(\mathcal{D}\) to \(\mathcal{C}\) by \(\mathsf{Lex}(\mathcal{D},\mathcal{C})\).
### (Co-)End in Finite Tensor Categories
Coends, monads and their module categories will be crucial for relating circle categories obtained from framed string-nets to twisted Drinfeld centers. In this subsection, we recall necessary definitions and results. Most of the results can be found in [13, Chapter VI and IX.6]. Throughout this section \(\mathcal{C}\) will be a finite tensor category. Some of the results hold in greater generality; we refer to [13, Chapter IX.6 and IX.7].
Let \(\mathcal{A}\) be an abelian \(\mathbb{K}\)-linear category, \(H:\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) a bilinear bifunctor and \(a\in\mathcal{A}\) an object of \(\mathcal{A}\). A _dinatural transformation from \(H\) to \(a\)_ consists of a family of maps \(\{\psi_{c}:H(c,c)\to a\}_{c\in\mathcal{C}}\), such that \(\psi_{d}\circ H(f,\mathrm{id}_{d})=\psi_{c}\circ H(\mathrm{id}_{c},f)\) for all \(f\in\operatorname{Hom}_{\mathcal{C}}(c,d)\).
**Definition 2.6**.: The _coend of \(H\)_ is an object \(\int^{c\in\mathcal{C}}H(c,c)\), together with a universal dinatural transformation \(\left\{\iota_{c}:H(c,c)\to\int^{c\in\mathcal{C}}H(c,c)\right\}\). This means that for any dinatural
transformation \(\{\psi_{c}:H(c,c)\to a\}\), there exists a unique morphism \(\tau\in\operatorname{Hom}_{\mathcal{A}}(\int^{c\in\mathcal{C}}H(c,c),a)\) such that \(\psi_{c}=\tau\circ\iota_{c}\) for all \(c\in\mathcal{C}\).
**Lemma 2.7**.: _[_1_, Corollary 5.1.8]_ _If \(H:\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) is a bilinear functor exact in both arguments, the coend \(\int^{c\in\mathcal{C}}H(c,c)\) exists._
**Definition 2.8**.: _[_1_]_ _Let \(\mathcal{D}\), \(\mathcal{C}\) be finite tensor categories and \(\mathcal{A}\) a \(\mathbb{K}\)-linear category. Assume that the functor \(H:\mathcal{D}\times\mathcal{C}\times\mathcal{C}^{op}\to\mathcal{A}\) is left exact in each argument. The left exact coend of \(H\) is an object \(\oint^{c\in\mathcal{C}}H(\bullet;c,c)\) in the category \(\mathsf{Lex}(\mathcal{D},\mathcal{A})\) of left exact functors, together with a universal dinatural transformation \(\{\iota_{c}:H(\bullet;c,c)\to\oint^{c\in\mathcal{C}}H(\bullet;c,c)\}\) consisting of morphisms in \(\mathsf{Lex}(\mathcal{D},\mathcal{A})\)._
## 3. **Twisted Drinfeld Centers and Monads**
In this section, we introduce twisted Drinfeld centers of monoidal categories and review their description as Eilenberg-Moore categories over monads. String-net constructions do not directly yield Eilenberg-Moore categories; hence we develop an explicit construction of the Eilenberg-Moore category of a monad from its Kleisli category.
### Monadicity of Twisted Drinfeld Centers
As before, \(\mathfrak{C}\) is in this section a finite tensor category.
The _Drinfeld center_ \(\mathsf{Z}(\mathcal{C})\) of a monoidal category \(\mathcal{C}\) is a categorification of the notion of the center of an algebra. It has as objects pairs \((x,\gamma_{\bullet,x})\), with \(x\in\mathcal{C}\) and a natural isomorphism \(\gamma_{\bullet,x}:\bullet\otimes x\xrightarrow{\simeq}x\otimes\bullet\), called the _half-braiding_, such that the identity
\[\gamma_{c\otimes d,x}=(\gamma_{c,x}\otimes\operatorname{id}_{d})\circ( \operatorname{id}_{c}\otimes\gamma_{d,x})\]
holds for all \(c,d\in\mathfrak{C}\). The following generalization is well-known:
**Definition 3.1**.: Let \(F,G:\mathfrak{C}\to\mathfrak{C}\) strict \(\mathbb{K}\)-linear monoidal endofunctors. The _twisted Drinfeld center_\({}_{F}\mathsf{Z}_{G}(\mathfrak{C})\) is the following category:
* _Objects_ are pairs \((x,\gamma_{\bullet,x})\), where (3.1) \[\gamma_{\bullet,x}:F(\bullet)\otimes x\xrightarrow{\simeq}x\otimes G(\bullet)\] is a natural isomorphism satisfying (3.2) \[\gamma_{c\otimes d,x}=(\gamma_{c,x}\otimes\operatorname{id}_{G(d)})\circ( \operatorname{id}_{F(c)}\otimes\gamma_{d,x})\] for all \(c,d\in\mathcal{C}\).
* A _morphism_\(f:(x,\gamma_{\bullet,x})\to(y,\gamma_{\bullet,y})\) is a morphism \(f\in\operatorname{Hom}_{\mathcal{C}}(x,y)\) such that (3.3) \[\left[F(c)\otimes x\xrightarrow{\gamma_{c,x}}x\otimes G(c)\xrightarrow{f \otimes\operatorname{id}}y\otimes G(c)\right]=\left[F(c)\otimes x \xrightarrow{\operatorname{id}\otimes f}F(c)\otimes y\xrightarrow{\gamma_{c,y }}y\otimes G(c)\right]\.\]
The monoidal functors we will be interested in are powers of the double duals. Specifically, we consider the following cases
\[\mathsf{Z}_{n}(\mathcal{C})\coloneqq\begin{cases}{}_{({}^{\vee\vee}(\bullet))^{n-1}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C}),&n\in\mathbb{Z}_{>0},\\ {}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C}),&n=0,\\ {}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{({}^{\vee\vee}(\bullet))^{-n}}(\mathcal{C}),&n\in\mathbb{Z}_{<0},\end{cases} \tag{3.4}\]
which include for \(n=1\) the usual Drinfeld center \(\mathsf{Z}(\mathcal{C})\). The category \({}_{(\bullet)^{\vee\vee}}\mathsf{Z}_{\operatorname{id}_{\mathcal{C}}}(\mathcal{C})\) obtained for \(n=0\) is known as the _trace_ of \(\mathcal{C}\), see e.g. [10, Definition 3.1.4].
These categories can be described in terms of monads on \(\mathcal{C}\).
A _monad_ on a category \(\mathcal{C}\) is a triple \((T,\mu,\eta)\) consisting of an endofunctor \(T:\mathcal{C}\to\mathcal{C}\) and natural transformations \(\mu:T^{2}\Rightarrow T\), \(\eta:\operatorname{id}_{\mathcal{C}}\Rightarrow T\) such that the associativity and unit diagrams commute, i.e.
\[\mu_{c}\circ T(\mu_{c})=\mu_{c}\circ\mu_{Tc}\qquad\text{and}\qquad\mu_{c}\circ T(\eta_{c})=\operatorname{id}_{Tc}=\mu_{c}\circ\eta_{Tc}\]
for all \(c\in\mathcal{C}\). A _module_ for the monad \((T,\mu,\eta)\) is a pair \((d,\rho)\), consisting of an object \(d\in\mathcal{C}\) and a morphism \(\rho:Td\to d\) such that
\[\rho\circ T(\rho)=\rho\circ\mu_{d}\qquad\text{and}\qquad\rho\circ\eta_{d}=\operatorname{id}_{d}\,.\]
A _morphism between two \(T\)-modules_ \((d_{1},\rho)\), \((d_{2},\lambda)\) is a morphism \(f\in\operatorname{Hom}_{\mathcal{C}}(d_{1},d_{2})\) such that
\[f\circ\rho=\lambda\circ T(f)\,.\]
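A standard example (added here for illustration): any algebra object \((A,m,u)\) in \(\mathcal{C}\) defines a monad via
\[T(c)=A\otimes c\,,\qquad\mu_{c}=m\otimes\operatorname{id}_{c}:A\otimes A\otimes c\to A\otimes c\,,\qquad\eta_{c}=u\otimes\operatorname{id}_{c}:c\to A\otimes c\,,\]
whose modules are precisely left \(A\)-modules internal to \(\mathcal{C}\); the twisted central monads below are coend-variants of this construction.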
We denote the category of \(T\)-modules or Eilenberg-Moore category by \(T-\mathsf{Mod}\) or \(\mathcal{C}^{T}\). It comes with a forgetful functor \(U^{T}\) to \(\mathcal{C}\).
Given two exact \(\mathbb{K}\)-linear strict monoidal endofunctors \(F,G\) of a finite tensor category \(\mathcal{C}\), the functor
\[Q:\mathcal{C}\times\mathcal{C}^{op}\to\mathsf{Fun}(\mathcal{C},\mathcal{C}),\qquad(c,d)\mapsto F(c)\otimes\bullet\otimes G({}^{\vee}d) \tag{3.5}\]
is exact in both arguments, so that the coend
\[{}_{F}T_{G}\coloneqq\int^{c\in\mathcal{C}}F(c)\otimes\bullet\otimes G({}^{\vee}c)\]
exists. It carries a canonical structure of a monad on \(\mathcal{C}\), the _twisted central monad_, whose Eilenberg-Moore category is equivalent to the twisted Drinfeld center \({}_{F}\mathsf{Z}_{G}(\mathcal{C})\).
In section 6.3, we need the following result.
**Lemma 3.5**.: _Let \(\mathcal{C}\) be a finite tensor category and \(F,G\in\mathsf{Fun}(\mathcal{C},\mathcal{C})\) exact strict monoidal endofunctors. Let_
\[\begin{split}Q:\mathcal{C}\times\mathcal{C}^{op}&\to\mathsf{Lex}(\mathcal{C}\times\mathcal{C}^{op},\mathsf{Vect}_{\mathbb{K}})\\ (c,d)&\mapsto\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,F(c)\otimes(\bullet)\otimes G({}^{\vee}d)).\end{split} \tag{3.10}\]
_Then the left exact coend \(\oint^{c\in\mathcal{C}}Q(c,c)\) exists and there is an isomorphism_
\[\oint^{c\in\mathcal{C}}Q(c,c)(\bullet,\bullet)\simeq\operatorname{Hom}_{ \mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet)). \tag{3.11}\]
Proof.: Since \({}_{F}T_{G}\) is an exact functor, \(\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet)):\mathcal{C }\times\mathcal{C}^{op}\to\mathsf{Vect}_{\mathbb{K}}\) is left exact. Therefore it suffices to show that \(\operatorname{Hom}_{\mathcal{C}}(\,(\bullet)\,,_{F}T_{G}(\bullet))\) has the universal property of the left exact coend. This can be proven along the lines of [13, Proposition 9]. Adapting the proof given there to the current situation is not hard and is left as an exercise to the reader.
### Kleisli Category and Representable Functors
The string-net construction will not directly give the twisted center \(\mathsf{Z}_{n}(\mathcal{C})\). Hence we recall that given any monad \((T,\mu,\eta)\), there are several adjunctions giving rise to the same monad. In this subsection, we review this theory for a general monad \(T\) which is not necessarily a twisted central monad; for a textbook account, we refer to [10, Chapter 5].
* As discussed in subsection 3.1, the category of \(T\)-modules \(\mathcal{C}^{T}\) has as objects pairs \((c,\rho)\) with \(c\in\mathcal{C}\) and \(\rho:Tc\to c\) a morphism in \(\mathcal{C}\). The forgetful functor \(U^{T}:\mathcal{C}^{T}\to\mathcal{C}\) assigns to a \(T\)-module \((c,\rho)\) the underlying object \(c\in\mathcal{C}\). Its left adjoint \(I^{T}:\mathcal{C}\to\mathcal{C}^{T}\) assigns to \(c\in\mathcal{C}\) the free module \(Tc\) with action \(\mu_{c}:T^{2}(c)\to Tc\). The monad \(U^{T}\circ I^{T}\) induced on \(\mathcal{C}\) by the adjunction \(I^{T}\dashv U^{T}\) is again \(T\).
* The _Kleisli category_ \(\mathcal{C}_{T}\) has as objects the objects of \(\mathcal{C}\); whenever an object \(c\in\mathcal{C}\) is seen as an object of the Kleisli category \(\mathcal{C}_{T}\), it will be denoted by \(\overline{c}\). The Hom-spaces of the Kleisli category are \(\operatorname{Hom}_{\mathcal{C}_{T}}(\overline{c},\overline{d})\coloneqq\operatorname{Hom}_{\mathcal{C}}(c,Td)\), for all \(c,d\in\mathcal{C}\). A morphism in \(\mathcal{C}_{T}\) from \(\overline{c}\) to \(\overline{d}\) will be denoted by \(\overline{c}\rightsquigarrow\overline{d}\). The composition of morphisms in the Kleisli category \(\mathcal{C}_{T}\) is (3.12) \[g\circ_{\mathcal{C}_{T}}f:=\mu_{c_{3}}\circ_{\mathcal{C}}T(g)\circ_{\mathcal{C}}f\] for \(g:\overline{c}_{2}\rightsquigarrow\overline{c}_{3}\) and \(f:\overline{c}_{1}\rightsquigarrow\overline{c}_{2}\). The identity morphism \(\overline{c}\rightsquigarrow\overline{c}\) in \(\mathcal{C}_{T}\) is, as a morphism in \(\mathcal{C}\), the component \(\eta_{c}:c\to Tc\) of the unit of \(T\) (a short verification of this unit property is displayed after this list). Define a functor \(I_{T}:\mathcal{C}\to\mathcal{C}_{T}\) which is the identity on objects and sends a morphism \(c_{1}\overset{f}{\to}c_{2}\) in \(\mathcal{C}\) to the morphism \(\overline{c}_{1}\rightsquigarrow\overline{c}_{2}\) given by the morphism \[I_{T}(f):\quad c_{1}\overset{f}{\to}c_{2}\overset{\eta_{c_{2}}}{\to}Tc_{2}\] in \(\mathcal{C}\). Define also a functor \(U_{T}:\mathcal{C}_{T}\to\mathcal{C}\) sending \(\overline{c}\in\mathcal{C}_{T}\) to \(Tc\in\mathcal{C}\) and a morphism \(\overline{h}:\ \overline{c}\rightsquigarrow\overline{d}\) represented by the morphism \(h:c\to Td\) in \(\mathcal{C}\) to \[U_{T}(\overline{h}):\quad Tc\overset{Th}{\to}T^{2}(d)\overset{\mu_{d}}{\to}Td\.\] By [10, Lemma 5.2.11], this gives a pair of adjoint functors, \(I_{T}\dashv U_{T}\), and the adjunction realizes again the monad \(T\) on \(\mathcal{C}\), i.e. \(U_{T}\circ I_{T}=T\).
* It is also known [11, Proposition 5.2.12] that the Kleisli category \(\mathcal{C}_{T}\) is initial and that the Eilenberg-Moore category \(\mathcal{C}^{T}\) is final in the category of adjunctions realizing the monad \(T\) on \(\mathcal{C}\). Put differently, for any adjunction \(I\dashv U\) with \(U:\mathcal{D}\to\mathcal{C}\) and \(I:\mathcal{C}\to\mathcal{D}\) such that \(U\circ I=T\), there are unique comparison functors \(K_{\mathcal{D}}:\mathcal{C}_{T}\to\mathcal{D}\) and \(K^{\mathcal{D}}:\mathcal{D}\to\mathcal{C}^{T}\) satisfying \(K_{\mathcal{D}}\circ I_{T}=I\), \(U\circ K_{\mathcal{D}}=U_{T}\), \(K^{\mathcal{D}}\circ I=I^{T}\) and \(U^{T}\circ K^{\mathcal{D}}=U\).
* An adjunction \(I\dashv U\) that induces the monad \(T=U\circ I\) on \(\mathcal{C}\) is called _monadic_ if the comparison functor \(K^{\mathcal{D}}\) to the Eilenberg-Moore category \(\mathcal{C}^{T}\) is an equivalence of categories.
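As announced above, we verify (a standard computation, included here for convenience) that the morphisms \(\eta_{c}\) are indeed units for the Kleisli composition (3.12): for \(f:\overline{c}\rightsquigarrow\overline{d}\), represented by \(f:c\to Td\) in \(\mathcal{C}\),
\[\operatorname{id}_{\overline{d}}\circ_{\mathcal{C}_{T}}f=\mu_{d}\circ T(\eta_{d})\circ f=f\qquad\text{and}\qquad f\circ_{\mathcal{C}_{T}}\operatorname{id}_{\overline{c}}=\mu_{d}\circ T(f)\circ\eta_{c}=\mu_{d}\circ\eta_{Td}\circ f=f\,,\]
using the unit axioms of the monad and, in the second computation, the naturality of \(\eta\).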
From the string-net construction, we will recover in Theorem 6.3 the Kleisli categories of the twisted central monads as circle categories. If \(\mathcal{C}\) is semi-simple, the twisted Drinfeld center can then be recovered as a Karoubification [10] or via presheaves [11]. For non-semi-simple categories, this does not suffice. It is instructive to understand how to explicitly recover the Eilenberg-Moore category from the Kleisli category.
Recall that all categories are linear and all functors are linear functors. Denote by \(\mathcal{D}:=\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\) the category of functors \(F:\mathcal{C}_{T}^{op}\to\mathsf{Vect}\) such that the pullback by \(I_{T}\)
\[F\circ I_{T}^{op}:\ \mathcal{C}^{op}\xrightarrow{I_{T}^{op}}\mathcal{C}_{T}^{op}\xrightarrow{F}\mathsf{Vect}\]
is representable by some object \(c_{F}\in\mathcal{C}\). We then say that \(F\in\mathcal{D}\) is an \(I_{T}\)-representable presheaf on the Kleisli category \(\mathcal{C}_{T}\). In this way, we obtain a functor \(U:\mathcal{D}\to\mathcal{C}\) sending the presheaf \(F\) to the \(I_{T}\)-representing object \(c_{F}\in\mathcal{C}\).
We construct its left adjoint: For \(c\in\mathcal{C}\), consider the functor \(\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c):\mathcal{C}_{T}^{op}\to\mathsf{Vect}\). The pullback of this functor along \(I_{T}\) is representable, as follows from the isomorphisms
\[\operatorname{Hom}_{\mathcal{C}_{T}}(I_{T}-,I_{T}c)\cong\operatorname{Hom}_{ \mathcal{C}}(-,U_{T}I_{T}c)\cong\operatorname{Hom}_{\mathcal{C}}(-,Tc)\.\]
Note that the \(I_{T}\)-representing object of \(\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c)\) is \(Tc\in\mathcal{C}\). We thus obtain a functor
\[\begin{array}{rcl}I:\ \mathcal{C}&\to&\mathcal{D}\\ c&\mapsto&\operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c)\end{array}\]
We have already seen that \(U\circ I=T\). It remains to see that the functors \(I\) and \(U\) are adjoint,
\[\operatorname{Hom}_{\mathcal{D}}(Ic,F)\cong\operatorname{Hom}_{\mathcal{C}}(c,U(F))\,\]
where \(F\in\mathcal{D}\) is assumed to be \(I_{T}\)-representable by \(c_{F}\in\mathcal{C}\). Hence the right hand side is naturally isomorphic to \(\operatorname{Hom}_{\mathcal{C}}(c,c_{F})\). For the left hand side, we compute
\[\begin{array}{rcl}\operatorname{Hom}_{\mathcal{D}}(Ic,F)&=&\mathsf{Nat}( \operatorname{Hom}_{\mathcal{C}_{T}}(-,I_{T}c),F)\cong F(I_{T}c)\\ &=&\operatorname{Hom}_{\mathcal{C}}(c,c_{F})\end{array}\]
where in the first line we used the Yoneda lemma and in the second line that \(F\circ I_{T}\) is represented by \(c_{F}\in\mathcal{C}\).
We are now ready for the main result of this subsection:
**Proposition 3.6**.: _The adjunction \(I\dashv U\) with \(U:\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\to\mathcal{C}\) and \(I:\mathcal{C}\to\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\) is monadic. As a consequence, the comparison functor \(K:\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\to\mathcal{C}^{T}\) is an equivalence of categories and the Eilenberg-Moore category can be identified with the category of \(I_{T}\)-representable presheaves on the Kleisli category \(\mathcal{C}_{T}\)._
In [10] Proposition 3.6 is proven in a more general setting, using bicategorical methods. The statement of Proposition 3.6 appears as a comment in [14, Exercise 5.2.vii]. For the convenience of the reader, we give an explicit proof, using the monadicity theorem [14, Theorem 5.5.1].
Proof.: Recall the shorthand \(\mathcal{D}:=\operatorname{Psh}_{I_{T}}(\mathcal{C}_{T})\). We have to show that \(U:\mathcal{D}\to\mathcal{C}\) creates coequalizers of \(U\)-split pairs. Thus, consider for two \(I_{T}\)-representable functors \(F_{1},F_{2}\in\mathcal{D}\) a parallel pair \(\nu_{1},\nu_{2}:F_{1}\rightrightarrows F_{2}\) of natural transformations and assume that for \(c_{i}:=U(F_{i})\in\mathcal{C}\) and \(n_{i}:=U(\nu_{i})\) for \(i=1,2\) there is a split coequalizer in \(\mathcal{C}\) for the parallel pair \(n_{1},n_{2}\):
\[c_{1}\overset{n_{1}}{\underset{n_{2}}{\rightrightarrows}}c_{2}\overset{h}{\longrightarrow}c_{3}\,, \tag{3.13}\]
i.e. there exist morphisms \(s:c_{3}\to c_{2}\) and \(t:c_{2}\to c_{1}\) with \(h\circ n_{1}=h\circ n_{2}\), \(h\circ s=\operatorname{id}_{c_{3}}\), \(n_{1}\circ t=\operatorname{id}_{c_{2}}\) and \(n_{2}\circ t=s\circ h\).
We have to find a coequalizer \(\operatorname{coeq}(\nu_{1},\nu_{2}):F_{2}\to F_{3}\) in \(\mathcal{D}\) such that \(U(F_{3})=c_{3}\) and such that the coequalizer is mapped by \(U\) to \(h\). The functors are linear and natural transformations form vector spaces; hence we can consider the natural transformation \(\nu:=\nu_{1}-\nu_{2}:F_{1}\to F_{2}\) and determine its cokernel in \(\mathcal{D}\). We also introduce the notation \(n:=n_{1}-n_{2}:c_{1}\to c_{2}\).
We start by defining a functor \(F_{3}:\mathcal{C}_{T}^{op}\to\mathsf{Vect}\) on an object \(\overline{\gamma}\in\mathcal{C}_{T}^{op}\) as the cokernel of the components of \(\nu\) in the category of vector spaces, so that we have for each \(\overline{\gamma}\in\mathcal{C}_{T}^{op}\) an exact sequence
\[F_{1}(\overline{\gamma})\xrightarrow{\nu_{\overline{\gamma}}}F_{2}(\overline{\gamma})\xrightarrow{q_{\overline{\gamma}}}F_{3}(\overline{\gamma})\longrightarrow 0\]
in vector spaces. To define the functor \(F_{3}\) on a morphism \(\overline{\gamma}_{1}\stackrel{{f}}{{\to}}\overline{\gamma}_{2}\) in \(\mathcal{C}_{T}^{op}\), consider the diagram
\[\begin{array}{ccccccc}F_{1}(\overline{\gamma}_{1})&\xrightarrow{\nu_{\overline{\gamma}_{1}}}&F_{2}(\overline{\gamma}_{1})&\xrightarrow{q_{\overline{\gamma}_{1}}}&F_{3}(\overline{\gamma}_{1})&\longrightarrow&0\\ \downarrow{\scriptstyle F_{1}(f)}&&\downarrow{\scriptstyle F_{2}(f)}&&\downarrow&&\\ F_{1}(\overline{\gamma}_{2})&\xrightarrow{\nu_{\overline{\gamma}_{2}}}&F_{2}(\overline{\gamma}_{2})&\xrightarrow{q_{\overline{\gamma}_{2}}}&F_{3}(\overline{\gamma}_{2})&\longrightarrow&0\end{array}\]
which has, by definition, exact rows. The left square commutes because of the naturality of \(\nu\). A standard diagram chase shows that there exists a unique linear map for the right-hand vertical arrow, which we denote by \(F_{3}(f)\). This completes \(F_{3}\) to a functor
\(\mathcal{C}_{T}^{op}\to\mathsf{Vect}\) and shows that the components \((q_{\overline{\gamma}})_{\overline{\gamma}\in\mathcal{C}_{T}}\) assemble into a natural transformation \(q:F_{2}\to F_{3}\).
We have to show that the functor \(F_{3}\) is \(I_{T}\)-representable and indeed represented by the object \(c_{3}\) appearing in the split coequalizer (3.13). To this end, consider the two pullbacks
\[\tilde{F}_{i}:=F_{i}\circ I_{T}^{op}:\;\;\mathcal{C}^{op}\xrightarrow{I_{T}^{op}}\mathcal{C}_{T}^{op}\xrightarrow{F_{i}}\mathsf{Vect}\]
which come with isomorphisms
\[\phi_{i}:\quad\tilde{F}_{i}\xrightarrow{\;\simeq\;}\operatorname{Hom}_{\mathcal{C}}(-,c_{i})\]
of functors for \(i=1,2\). For each \(\gamma\in\mathfrak{C}\), we get a commuting diagram
\[\begin{array}{ccccccc}\tilde{F}_{1}(\gamma)&\xrightarrow{\nu_{I_{T}\gamma}}&\tilde{F}_{2}(\gamma)&\xrightarrow{q_{I_{T}\gamma}}&\tilde{F}_{3}(\gamma)&\longrightarrow&0\\ \downarrow{\scriptstyle(\phi_{1})_{\gamma}}&&\downarrow{\scriptstyle(\phi_{2})_{\gamma}}&&\downarrow&&\\ \operatorname{Hom}_{\mathcal{C}}(\gamma,c_{1})&\xrightarrow{n_{*}}&\operatorname{Hom}_{\mathcal{C}}(\gamma,c_{2})&\xrightarrow{h_{*}}&\operatorname{Hom}_{\mathcal{C}}(\gamma,c_{3})&\longrightarrow&0\end{array} \tag{3.14}\]
The upper row is exact by construction. The lower row is exact, since \(c_{3}\) was part of a split coequalizer in \(\mathcal{C}\) and split coequalizers are preserved by all functors. Again, a diagram chase implies the existence of a morphism \((\phi_{3})_{\gamma}:\tilde{F}_{3}(\gamma)\to\operatorname{Hom}_{\mathcal{C}}(\gamma,c_{3})\) for the remaining vertical arrow which, by the nine lemma, is an isomorphism.
To show the naturality of the morphisms \((\phi_{3})_{\gamma}\), we take a morphism \(\gamma_{1}\xrightarrow{f}\gamma_{2}\) in \(\mathcal{C}^{op}\) and consider a diagram which consists of two adjacent cubes and four more arrows.
To keep the diagram tidy, we do not provide all labels of the arrows and explain them here: diagonal arrows are labelled by applying the appropriate functor to \(f:\gamma_{1}\to\gamma_{2}\). Vertical arrows are isomorphisms labelled by \(\phi_{i}\). The front and rear squares of the two cubes are just instances of the commuting diagram (3.14) and thus commute. The squares on the top commute because \(\nu\) and \(q\) are natural; similarly, the squares on the bottom commute because \(n_{*}\) and \(h_{*}\) are natural. The left and middle diagonal walls commute because \(\phi_{1}\) and \(\phi_{2}\) are natural. A diagram chase now yields that the rightmost wall commutes as well, which is the naturality of \(\phi_{3}\).
## 4. **Progressive Graphical Calculus for Finite Tensor Categories**
It is standard to introduce a graphical calculus for computations in (strict) finite tensor categories. Following [1], morphisms in a (strict) finite tensor category \(\mathfrak{C}\) can be represented by so-called _progressive graphs_ on a standard rectangle in the \(x-y\)-plane.
A _graph_ is a \(1\)-dimensional, finite CW-complex \(\Gamma\) with a finite, closed subset \(\Gamma_{0}\subset\Gamma\), such that \(\Gamma-\Gamma_{0}\) is a \(1\)-dimensional smooth manifold without boundary. Elements of \(\Gamma_{0}\) are called _nodes_ of the graph. A node \(b\) is a _boundary node_, if for any connected open neighborhood \(b\in U\subset\Gamma\), \(U-\{b\}\) is still connected. The collection of boundary nodes is called the _boundary of \(\Gamma\)_ and is denoted by \(\partial\Gamma\). An _edge_ is a connected component \(e\subset\Gamma-\Gamma_{0}\) homeomorphic to the interval \((0,1)\). By adjoining its endpoints to \(e\), we get a closed edge \(\hat{e}\). An _oriented edge_ is an edge with an orientation. For an oriented edge \(\hat{e}\) we admit only homeomorphisms \(\hat{e}\simeq[0,1]\) preserving orientations. The endpoints of \(\hat{e}\) are then linearly ordered: the preimage of \(0\) in \(\hat{e}\), denoted by \(\hat{e}(0)\), is the source and the preimage \(\hat{e}(1)\) of \(1\) is the target. A graph where every edge is endowed with an orientation is called an oriented graph. For an oriented graph, an edge \(e\), adjacent to a node \(v\), is _incoming at \(v\)_, if \(v\) is the target of \(e\), and _outgoing_, if \(v\) is the source of \(e\). This gives two, not necessarily disjoint, subsets in\((v)\) and out\((v)\) of incoming and outgoing edges at \(v\). An oriented graph \(\Gamma\) is _polarized_, if for any \(v\in\Gamma_{0}\), in\((v)\) and out\((v)\) are linearly ordered sets.
**Definition 4.1**.: Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be a polarized graph and \((\mathcal{C},\otimes,1)\) a monoidal category. A _\(\mathcal{C}\)-coloring_ of \(\Gamma\) comprises two functions
\[\varphi_{0}:\Gamma-\Gamma_{0}\to\operatorname{ob}(\mathcal{C}),\qquad\varphi_{ 1}:\Gamma_{0}-\partial\Gamma\to\operatorname{mor}(\mathcal{C}) \tag{4.1}\]
associating to any oriented edge of \(\Gamma\) an object of \(\mathcal{C}\) and to any inner node \(v\in\Gamma_{0}-\partial\Gamma\) a morphism in \(\mathcal{C}\), with
\[\varphi_{1}(v):\varphi_{0}(e_{1})\otimes\cdots\otimes\varphi_{0}(e_{n})\to \varphi_{0}(f_{1})\otimes\cdots\otimes\varphi_{0}(f_{m}), \tag{4.2}\]
where \(e_{1}<\cdots<e_{n}\) and \(f_{1}<\cdots<f_{m}\) are the ordered elements of in\((v)\) and out\((v)\), respectively.
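For concreteness, condition (4.2) can be checked mechanically once objects are modeled by finite-dimensional vector spaces and morphisms by matrices. The following minimal Python sketch (our own illustration; the object names and dimensions are arbitrary choices, not data from the text) verifies that a candidate color \(\varphi_{1}(v)\) has the required (co)domain:

```python
import numpy as np

# Objects of C modeled by their dimensions, morphisms by matrices.
objects = {"X": 2, "Y": 3, "Z": 2}

def is_valid_color(in_edges, out_edges, phi1_v):
    """Condition (4.2): phi1(v) must map the ordered tensor product of
    the incoming edge colors to that of the outgoing edge colors."""
    dom_dim = int(np.prod([objects[e] for e in in_edges]))
    cod_dim = int(np.prod([objects[e] for e in out_edges]))
    return phi1_v.shape == (cod_dim, dom_dim)

# A node with in(v) = (X, Y) and out(v) = (Z): phi1(v) must be a 2 x 6 matrix.
print(is_valid_color(["X", "Y"], ["Z"], np.random.rand(2, 6)))  # True
```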
**Definition 4.2**.: A _planar_ graph is a graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) together with a smooth embedding \(\iota:\Gamma\to\mathbb{R}^{2}\).
For a planar graph, we will not distinguish in our notation between the abstract graph \(\Gamma\) and its embedding \(\iota(\Gamma)\). Note that a graph has infinitely many realizations as a planar graph, by choosing different embeddings.
**Definition 4.3**.: Let \(a,b\in\mathbb{R}\) with \(a<b\). A _progressive graph_ in \(\mathbb{R}\times[a,b]\) is a planar graph \(\Gamma\subset\mathbb{R}\times[a,b]\) such that
1. All boundary nodes are either on \(\mathbb{R}\times\{a\}\) or \(\mathbb{R}\times\{b\}\), i.e. (4.3) \[\partial\Gamma=\Gamma\cap(\mathbb{R}\times\{a,b\})\quad.\]
2. The restriction of the projection to the second component (4.4) \[\operatorname{pr}_{2}:\mathbb{R}\times[a,b]\to[a,b]\] to any connected component of \(\Gamma-\Gamma_{0}\) is an injective map.
_Remark 4.4_.: Using the injective projection to the second component, every progressive graph is oriented. In addition, it is also polarized. For any \(v\in\Gamma_{0}\), we can pick \(u\in[a,\operatorname{pr}_{2}(v))\), such that any element of in\((v)\) intersects \(\mathbb{R}\times\{u\}\). Since the graph is progressive, the intersection points are unique. The intersection points of in\((v)\) with \(\mathbb{R}\times\{u\}\) are linearly ordered by the orientation of \(\mathbb{R}\) and induce a linear order on in\((v)\). Similarly, one defines a linear order on out\((v)\) using the intersection with \(\mathbb{R}\times\{w\}\), for \(w\in(\operatorname{pr}_{2}(v),b]\).
_Remark 4.5_.: A progressive graph cannot have cups, caps or circles, since the restriction of \(\operatorname{pr}_{2}\) to these would be non-injective. This mirrors the fact that in a general non-pivotal category left and right duals for an object are not isomorphic and that there are no categorical traces. Thus we should not represent (co-)evaluation morphisms simply by oriented cups and caps, but use explicitly labelled coupons. In addition, in the absence of a categorical trace, we cannot make sense of a circle-shaped diagram.
Since a progressive graph \(\Gamma\) is always polarized, we have a notion of a \(\mathcal{C}\)-coloring for it, where \(\mathcal{C}\) is a monoidal category. Given a \(\mathcal{C}\)-coloring \(\varphi\coloneqq(\varphi_{0},\varphi_{1})\) of \(\Gamma\), we associate to every boundary node \(v\in\partial\Gamma\) the object in \(\mathcal{C}\) of its adjacent edge. The _domain_ \(\operatorname{dom}(\Gamma,\varphi)\) of \(\Gamma\) is the linearly ordered set of objects assigned to the boundary nodes in \(\mathbb{R}\times\{a\}\). Its _codomain_ \(\operatorname{codom}(\Gamma,\varphi)\) is the linearly ordered set of objects assigned to the boundary nodes in \(\mathbb{R}\times\{b\}\).
To the pair \((\Gamma,\varphi)\) of a progressive graph \(\Gamma\) with \(\mathcal{C}\)-coloring \(\varphi\) and \(\operatorname{dom}(\Gamma,\varphi)=(X_{1},\cdots,X_{n})\) and \(\operatorname{codom}(\Gamma,\varphi)=(Y_{1},\cdots,Y_{m})\), we can associate a morphism in \(\mathcal{C}\)
\[f_{\Gamma}:X_{1}\otimes\cdots\otimes X_{n}\to Y_{1}\otimes\cdots\otimes Y_{m}. \tag{4.5}\]
The full technical details of this construction can be found in [1]. We will discuss it for an example; the general procedure will then be clear.
Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be the following \(\mathcal{C}\)-colored progressive graph:
The graph has ten edges, which are colored by the objects \((X_{1},X_{2},X_{3},X_{4},Z_{1},Z_{2},Z_{3},Y_{1},Y_{2},Y_{3})\), and \(13\) nodes, \(5\) of which are inner nodes colored by morphisms \((f_{1},f_{2},f_{3},f_{4},f_{5})\). It has domain \(\operatorname{dom}(\Gamma)=(X_{1},\cdots,X_{4})\) and codomain \(\operatorname{codom}(\Gamma)=(Y_{1},Y_{2},Y_{3})\). In addition to the graph, we show eight auxiliary dashed lines:
1. Two horizontal ones at \(\mathbb{R}\times\{t_{1}\}\) and \(\mathbb{R}\times\{t_{2}\}\). These are called _regular level lines_ and their levels \(0<t_{1}<t_{2}<1\) are chosen such that \(\mathbb{R}\times\{t_{i}\}\) does not intersect the inner nodes \(\Gamma_{0}-\partial\Gamma\). Cutting \(\Gamma\) at \(\mathbb{R}\times\{t_{1}\}\) and \(\mathbb{R}\times\{t_{2}\}\), we get three consecutive progressive graphs \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\), where \(\Gamma_{1}\) is the progressive graph in \(\mathbb{R}\times[0,t_{1}]\), \(\Gamma_{2}\) is the one in \(\mathbb{R}\times[t_{1},t_{2}]\) and \(\Gamma_{3}\) is the top one in \(\mathbb{R}\times[t_{2},1]\).
2. Six vertical lines, three in \(\Gamma_{1}\), two in \(\Gamma_{2}\) and one in \(\Gamma_{3}\). Each collection of vertical lines gives a _tensor decomposition_ of \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\), respectively. E.g., the three vertical lines in \(\Gamma_{1}\) split it into a disjoint union of four graphs \(\Gamma_{1}^{i}\), \(i=1,\cdots,4\), which are linearly ordered from left to right. Each \(\Gamma_{1}^{i}\) either contains exactly one inner node or does not contain an inner node.
The \(\mathcal{C}\)-coloring of \(\Gamma\) associates to \(\Gamma_{1}^{i}\) a morphism in \(\mathcal{C}\). For the graphs \(\Gamma_{1}^{i}\) these are
\[f_{\Gamma_{1}^{1}}=\operatorname{id}_{X_{1}},\quad f_{\Gamma_{1}^{2}}=\operatorname{id}_{X_{2}},\quad f_{\Gamma_{1}^{3}}=f_{4},\quad f_{\Gamma_{1}^{4}}=\operatorname{id}_{X_{4}}, \tag{4.6}\]
with \(f_{4}\in\operatorname{Hom}_{\mathcal{C}}(X_{3},Z_{2}\otimes Z_{3})\) as in figure 1. The progressive graph \(\Gamma_{1}\) thus evaluates to the morphism
\[f_{\Gamma_{1}}\coloneqq f_{\Gamma_{1}^{1}}\otimes f_{\Gamma_{1}^{2}}\otimes f _{\Gamma_{1}^{3}}\otimes f_{\Gamma_{1}^{4}}:X_{1}\otimes X_{2}\otimes X_{3} \otimes X_{4}\to X_{1}\otimes X_{2}\otimes Z_{2}\otimes Z_{3}\otimes X_{4}, \tag{4.7}\]
i.e. \(f_{\Gamma_{1}}=\operatorname{id}_{X_{1}}\otimes\operatorname{id}_{X_{2}}\otimes f _{4}\otimes\operatorname{id}_{X_{4}}\). The morphisms \(f_{\Gamma_{2}}\) and \(f_{\Gamma_{3}}\) are defined analogously. The morphism associated to the whole progressive graph is given by
\[f_{\Gamma}\coloneqq f_{\Gamma_{3}}\circ f_{\Gamma_{2}}\circ f_{\Gamma_{1}} \tag{4.8}\]
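The evaluation just described is entirely algorithmic: within each regular level one takes the Kronecker product of identities and node morphisms, cf. (4.6)-(4.7), and then composes the levels, cf. (4.8). A minimal Python sketch (with vector spaces for objects and matrices for morphisms; the dimensions and the wiring of the levels \(\Gamma_{2}\), \(\Gamma_{3}\) are our own simplified stand-ins for the figure, not the actual graph) is:

```python
import numpy as np
from functools import reduce

def kron_all(mats):
    """Tensor product of one regular level, cf. Eq. (4.7)."""
    return reduce(np.kron, mats)

# Dimensions of the objects coloring the edges (illustrative choices).
dX1, dX2, dX3, dX4 = 2, 2, 3, 2
dZ2, dZ3 = 2, 2
dY = 3  # common dimension for Y1 = Y2 = Y3, for simplicity

# Level 1: f_Gamma1 = id_X1 (x) id_X2 (x) f4 (x) id_X4, f4: X3 -> Z2 (x) Z3.
f4 = np.random.rand(dZ2 * dZ3, dX3)
f_G1 = kron_all([np.eye(dX1), np.eye(dX2), f4, np.eye(dX4)])

# Levels 2 and 3: hypothetical node morphisms with matching (co)domains.
g = np.random.rand(dY * dY, dX1 * dX2 * dZ2)  # stand-in for the level-2 node
h = np.random.rand(dY, dZ3 * dX4)             # stand-in for the level-3 node
f_G2 = kron_all([g, np.eye(dZ3), np.eye(dX4)])
f_G3 = kron_all([np.eye(dY), np.eye(dY), h])

# Eq. (4.8): compose the levels from bottom to top.
f_G = f_G3 @ f_G2 @ f_G1
print(f_G.shape)  # (dY**3, dX1*dX2*dX3*dX4) = (27, 24)
```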
_Remark 4.6_.: We highlight the two very different roles of the \(x\)-direction and the \(y\)-directions in the plane: The horizontal \(x\)-coordinate corresponds to the monoidal product in \(\mathcal{C}\), whereas the vertical \(y\)-direction corresponds to the composition of morphisms. In other words, the implicitly chosen standard 2-framing on the strip \(\mathbb{R}\times[0,1]\) is essential for evaluating a progressive graph \(\Gamma\) to a morphism in \(\mathcal{C}\).
By one of the main results in [1], the morphism \(f_{\Gamma}:\operatorname{dom}(\Gamma,\varphi)\to\operatorname{codom}(\Gamma,\varphi)\) constructed for a \(\mathcal{C}\)-colored progressive graph \(\Gamma\) neither depends on the choice of the regular level lines, nor on the tensor decomposition. Consider two \(\mathcal{C}\)-colored progressive graphs \((\Gamma_{1},\varphi_{1})\), \((\Gamma_{2},\varphi_{2})\) in \(\mathbb{R}\times[0,1]\). We say that _\(\Gamma_{1}\) and \(\Gamma_{2}\) are progressively isotopic_, if there exists an isotopy \(H:[0,1]\times(\mathbb{R}\times[0,1])\to\mathbb{R}\times[0,1]\) from \(\Gamma_{1}\) to \(\Gamma_{2}\), such that \(H(s,\bullet)|_{\Gamma_{1}}\) is a progressive graph for all \(s\in[0,1]\). The isotopy \(H\) is called a _progressive isotopy_. Invariance of the associated morphism for a \(\mathcal{C}\)-colored progressive graph under the auxiliary decomposition in regular levels and tensor decompositions is then linked to the invariance under progressive isotopies, i.e. if \((\Gamma_{1},\varphi_{1})\) and \((\Gamma_{2},\varphi_{2})\) are progressively isotopic, then \(f_{\Gamma_{1}}=f_{\Gamma_{2}}\).
Conversely, every morphism in \(\mathcal{C}\) can be represented by a \(\mathcal{C}\)-colored graph:
\[f:X_{1}\otimes\cdots\otimes X_{n}\to Y_{1}\otimes\cdots\otimes Y_{m}\ \mapsto\]
Obviously, a morphism can have different realizations as a progressive graph. The graph \(\Gamma\) from figure 1 describing the morphism \(f_{\Gamma}\) is topologically very different from the graph with a single inner node colored by \(f_{\Gamma}\) in equation (4.8). As in the oriented case, identifying different graphical realizations of the same morphism will be at the heart of the framed string-net construction.
## 5. **Framed String-Net Construction**
In this section, we define string-nets on 2-framed surfaces. The algebraic input for our string-net construction is a finite tensor category; as output, it produces a vector space for any 2-framed surface. The main point of the construction is to globalize the discussion of progressive graphs from the standard framed plane in section 4 to an arbitrary framed surface.
### Locally Progressive Graphs
**Definition 5.1**.: Let \(\Sigma\) be a smooth surface. \(\Sigma\) is _\(2\)-framed_ if there exist two nowhere vanishing vector fields \(X_{1},X_{2}\in\Gamma(T\Sigma)\), such that \(((X_{1})_{p},(X_{2})_{p})\in T_{p}\Sigma\) is an ordered basis for every \(p\in\Sigma\). The pair \((X_{1},X_{2})\) is a _global ordered frame_ for the tangent bundle \(T\Sigma\) of \(\Sigma\).
To any vector field \(X\) on \(\Sigma\), we can associate its _maximal flow_\(\theta:D\to\Sigma\). The domain is a subset \(D\subset\mathbb{R}\times\Sigma\) where \(D^{(p)}\coloneqq\{t\in\mathbb{R}\,|\,(t,p)\in D\}\) is an open interval. \(D\) is called a _flow domain_. The flow \(\theta\) satisfies \(\theta(0,p)=p\) and \(\theta(t_{1},\theta(t_{2},p))=\theta(t_{1}+t_{2},p)\) for all \(p\in\Sigma\). The flow is _maximal for \(X\)_ in the sense that for all \(p\in\Sigma\), the curve
\[\theta(\bullet,p):D^{(p)}\to\Sigma \tag{5.1}\]
is the unique maximal integral curve of \(X\), i.e. \(\frac{\mathrm{d}}{\mathrm{d}t}\theta(t,p)=X_{\theta(t,p)}\) with initial value \(\theta(0,p)=p\). For \((X_{1},X_{2})\) a global frame on \(\Sigma\), we denote by \(\theta_{1}:D_{1}\to\Sigma\) and \(\theta_{2}:D_{2}\to\Sigma\) the corresponding maximal flows. The maximal integral curves for \((X_{1},X_{2})\) through a point \(p\in\Sigma\) are denoted by \(\theta_{1}^{(p)}:D_{1}^{(p)}\to\Sigma\) and \(\theta_{2}^{(p)}:D_{2}^{(p)}\to\Sigma\). Since \(X_{1},X_{2}\) are nowhere vanishing, the curves \(\theta_{1}^{(p)}\), \(\theta_{2}^{(p)}\) are smooth immersions for all \(p\in\Sigma\). Further details on maximal flows and framed manifolds can be found, e.g., in [13, Chapter 9].
Recall that a planar graph was defined as an abstract graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) with a smooth map \(\iota:\Gamma\to\mathbb{R}^{2}\), such that \(\iota|_{\Gamma-\Gamma_{0}}\) is a smooth embedding. Similarly, for \((\Sigma,\partial\Sigma)\) a smooth surface \(\Sigma\) with boundary \(\partial\Sigma\), an _embedded graph_ is an abstract graph \((\Gamma,\Gamma_{0},\partial\Gamma)\) together with a smooth map \(\iota_{\Sigma}:\Gamma\to\Sigma\), such that \(\iota_{\Sigma}|_{\Gamma-\Gamma_{0}}\) is an embedding and \(\iota_{\Sigma}(\partial\Gamma)=\iota_{\Sigma}(\Gamma)\cap\partial\Sigma\). For an embedded graph \((\Gamma,\iota_{\Sigma})\), we usually suppress the embedding \(\iota_{\Sigma}\) from the notation.
We want to formulate the equivalent of a progressive graph for an arbitrary \(2\)-framed surface. In order to do so, we have to generalize the condition of injectivity of the projection to the second component that features in the definition of a progressive graph. The idea is to formulate a local condition on graphs at every point on the surface. Using the global frame of a \(2\)-framed surface \(\Sigma\), there is a neighborhood around every \(p\in\Sigma\), which looks like the strip \(\mathbb{R}\times[0,1]\) and the two vector fields give the two distinguished directions on the strip. The flow lines of \(X_{2}\) are then a natural analog of the vertical \(y\)-direction in the plane and we can perform a projection to \(X_{2}\)-flow lines by moving points along the flow of \(X_{1}\) (see figure 2). Given an embedded graph \(\Gamma\subset\Sigma\), we require that locally around every point, this projection, restricted to \(\Gamma\), is injective. This allows us to define a local evaluation map of an embedded \(\mathcal{C}\)-colored graph, which is the framed analog of the evaluation of graphs inside of disks in the oriented case.
A variant of the flow-out theorem [13, Theorem 9.20] shows that for a \(2\)-framed surface \(\Sigma\) with global frame \((X_{1},X_{2})\) and corresponding flow domains \(D_{1}\), \(D_{2}\), for every point \(p\in\Sigma\), there exist open intervals \(I_{1}^{(p)}\subset D_{1}^{(p)}\), \(I_{2}^{(p)}\subset D_{2}^{(p)}\) containing \(0\), such that
\[\begin{split}\phi^{(p)}:\overline{I}_{1}^{(p)}\times I_{2}^{(p)} &\hookrightarrow\Sigma\\ (s,t)&\mapsto\theta_{1}(s,\theta_{2}(t,p))\end{split} \tag{5.2}\]
is a smooth embedding. Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be an embedded graph in \(\Sigma\). An element \(t\in I_{2}^{(p)}\) is _regular_ with respect to \(\Gamma\), if \(\phi^{(p)}(I_{1}^{(p)}\times\{t\})\cap(\Gamma_{0}-\partial\Gamma)=\emptyset\), i.e. the flow line of \(X_{1}\) at
\(t\) inside \(\phi^{(p)}(\overline{I}_{1}^{(p)}\times I_{2}^{(p)})\) does not contain any inner nodes of \(\Gamma\). If \(t_{1}<0<t_{2}\) are regular levels, the image \(\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])\) is called a _standard rectangle_ for \(\Gamma\) at \(p\). The restriction of \(\Gamma\) to a standard rectangle at \(p\) is denoted by \((\Gamma^{(p)}[t_{1},t_{2}],\Gamma_{0}^{(p)}[t_{1},t_{2}],\partial\Gamma^{(p)} [t_{1},t_{2}])\).
**Definition 5.2**.: Let \((\Sigma,(X_{1},X_{2}))\) be a 2-framed surface and \((\Gamma,\Gamma_{0},\partial\Gamma)\) an embedded graph in \(\Sigma\). Then \(\Gamma\) is a _locally progressive graph_, if for every \(p\in\Sigma\), there exists a standard rectangle \(\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])\) for \(\Gamma\) at \(p\), such that the restriction of
\[\begin{split}\mathrm{pr}_{2}^{(p)}\coloneqq\mathrm{pr}_{2}\circ \left(\phi^{(p)}\right)^{-1}:\phi^{(p)}(I_{1}^{(p)}\times[t_{1},t_{2}])& \rightarrow[t_{1},t_{2}]\\ \phi^{(p)}(s,t)&\mapsto t\end{split} \tag{5.3}\]
to \(\Gamma^{(p)}[t_{1},t_{2}]-\Gamma_{0}^{(p)}[t_{1},t_{2}]\) is injective.
In order to understand these definitions, it is best to consider figure 2. The figure shows a small patch of a 2-framed surface \((\Sigma,(X_{1},X_{2}))\). The red horizontal lines are flow lines of \(X_{1}\) and the blue vertical line is a flow line of \(X_{2}\). In black, we show an embedded graph. Each of the dashed horizontal lines intersects an edge of the embedded graph at a unique point. Transporting this intersection point along the horizontal line until we hit the vertical blue line, defines the projection map \(\mathrm{pr}_{2}^{(p)}\) evaluated at the intersection point. For the graph shown in figure 2 the projection is obviously injective and thus, this is a locally progressive graph for the underlying 2-framed surface.
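Numerically, the chart \(\phi^{(p)}\) of (5.2) and the projection \(\mathrm{pr}_{2}^{(p)}\) can be realized by integrating the two frame fields. A minimal Python sketch (with a hypothetical frame on a planar patch standing in for \(\Sigma\); the vector fields are arbitrary illustrative choices) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hypothetical global frame (X1, X2) on a planar patch of Sigma.
def X1(p):  # "horizontal" frame field
    x, y = p
    return np.array([1.0, 0.3 * np.sin(x)])

def X2(p):  # "vertical" frame field
    x, y = p
    return np.array([0.1 * np.cos(y), 1.0])

def flow(X, s, p):
    """Point theta(s, p) of the maximal flow of X through p."""
    sol = solve_ivp(lambda t, q: X(q), [0.0, s], p)
    return sol.y[:, -1]

def phi(p, s, t):
    """The chart of Eq. (5.2): flow along X2 for time t, then X1 for time s."""
    return flow(X1, s, flow(X2, t, p))

p = np.array([0.0, 0.0])
print(phi(p, 0.5, -0.3))  # image of (s, t) = (0.5, -0.3) in the chart
```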
**Definition 5.3**.: Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be an embedded graph inside a framed surface \(\Sigma\) and \(\phi^{(p)}:\overline{I}_{1}^{(p)}\times I_{2}^{(p)}\hookrightarrow\Sigma\) a standard rectangle at \(p\). Given two regular levels \(t_{1}<0<t_{2}\) and \([s_{1},s_{2}]\subset\overline{I}_{1}^{(p)}\), the image \(\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) is an _evaluation rectangle_ for \(\Gamma\) at \(p\), if
\[\Gamma\cap\phi^{(p)}(\{s_{1},s_{2}\}\times I_{2}^{(p)})=\emptyset \tag{5.4}\]
Figure 2. In the colored version, red horizontal lines correspond to flow lines of \(X_{1}\) and the blue vertical line is a flow line of \(X_{2}\). Together they yield a standard rectangle (and even an evaluation rectangle) for the locally progressive graph shown in black.
and
\[\Gamma_{0}\cap\phi^{(p)}\left([s_{1},s_{2}]\times\{t_{1},t_{2}\}\right)=\emptyset \quad. \tag{5.5}\]
Let now \(\mathcal{C}\) be again a finite tensor category, which is not assumed to be pivotal. An evaluation rectangle at \(p\in\Sigma\) for a \(\mathcal{C}\)-colored graph \(\Gamma\) will be denoted by \(R^{(p)}_{\Gamma}\).
Given an evaluation rectangle \(R^{(p)}_{\Gamma}=\phi^{(p)}([s_{1},s_{2}]\times[t_{1},t_{2}])\) for a locally progressive \(\mathcal{C}\)-colored graph \(\Gamma\) in \(\Sigma\), by (5.5), only the lower and upper horizontal flow lines \(\phi^{(p)}([s_{1},s_{2}]\times\{t_{1}\})\), \(\phi^{(p)}([s_{1},s_{2}]\times\{t_{2}\})\) intersect edges of the graph \(\Gamma\). We associate to each intersection point the corresponding \(\mathcal{C}\)-color of the edge of \(\Gamma\). Taking the tensor product of these elements according to the linear order on \([s_{1},s_{2}]\) gives the _(co-)domain of \(\Gamma\) with respect to \(R^{(p)}_{\Gamma}\)_, which will be denoted by \(\mathrm{dom}_{R}(\Gamma)\) and \(\mathrm{codom}_{R}(\Gamma)\), respectively. Note that in analogy to the (co-)domain of a progressive graph, we have \(\mathrm{dom}_{R}(\Gamma)\), \(\mathrm{codom}_{R}(\Gamma)\in\mathrm{ob}(\mathcal{C})\).
_Remark 5.4_.: From the definition of a locally progressive graph, it directly follows that the preimage of \(\Gamma\) is a progressive graph in the rectangle \([s_{1},s_{2}]\times[t_{1},t_{2}]\) for every evaluation rectangle \(\phi^{(p)}([s_{1},s_{2}]\times[t_{1},t_{2}])\). The \(\mathcal{C}\)-colored progressive graph has (co-)domain (co-)dom\({}_{R}(\Gamma)\) and yields a morphism \(f^{\Gamma}_{R}\in\mathrm{Hom}_{\mathcal{C}}(\mathrm{dom}_{R}(\Gamma),\mathrm{codom}_{R}(\Gamma))\). This defines an evaluation map \(\nu_{R}(\Gamma)\coloneqq f^{\Gamma}_{R}\).
_Remark 5.5_.: When defining the evaluation of a \(\mathcal{C}\)-colored progressive graph, we stressed the very different roles the \(x-\) and \(y\)-directions had in the plane. The first corresponds to taking tensor products in \(\mathcal{C}\), whereas the latter encodes composition of morphisms. The vector fields of a global frame have similar roles for \(\mathcal{C}\)-colored embedded graphs. As stated in Remark 5.4, the \(y\)-flow lines define domain and codomain for the morphism corresponding to a locally progressive graph, whereas going along \(x\)-flow lines corresponds to taking tensor products.
### Framed String-Net Spaces
Let \(\mathcal{C}\) be a finite tensor category and \(\Sigma\) a 2-framed surface. We now define a string-net space in terms of \(\mathcal{C}\)-graphs on \(\Sigma\), which we are going to call framed string-net space.
**Definition 5.6**.: Let \(B\coloneqq\{p_{1},\cdots,p_{n}\}\subset\partial\Sigma\) be a finite and possibly empty subset of the boundary of the surface \(\Sigma\) and \(\nu_{B}:B\to\mathrm{ob}(\mathcal{C})\) a map. The pair \((B,\nu_{B})\) is called a _boundary value_.
Let \((\Gamma,\Gamma_{0},\partial\Gamma)\) be a \(\mathcal{C}\)-colored embedded graph in \(\Sigma\). Boundary nodes of \(\Gamma\) are mapped to the boundary \(\partial\Sigma\) of the surface. This gives a finite subset \(B_{\Gamma}\) of the boundary. Defining a map \(\nu_{\Gamma}:B_{\Gamma}\to\mathrm{ob}(\mathcal{C})\) by mapping each boundary node to the \(\mathcal{C}\)-color of its adjacent edge, we obtain a boundary value \((B_{\Gamma},\nu_{\Gamma})\) for a \(\mathcal{C}\)-colored embedded graph. We call this the _boundary value_ of the graph \(\Gamma\).
**Definition 5.7**.: The set of all \(\mathcal{C}\)-colored locally progressive graphs on a 2-framed surface \(\Sigma\) with boundary value \((B,\nu_{B})\) is denoted by
\[\mathrm{Graph}(\Sigma,(B,\nu_{B}))\quad. \tag{5.6}\]
The vector space
\[\mathrm{VGraph}_{\mathbb{K}}(\Sigma,(B,\nu_{B}))\coloneqq\mathrm{span}_{ \mathbb{K}}\mathrm{Graph}(\Sigma,(B,\nu_{B})) \tag{5.7}\]
freely generated by this set is called _framed pre-string-net space_.
From now on all string-nets on 2-framed surfaces will be locally-progressive. Similar to the construction of string-net spaces on oriented surfaces, we want to identify elements of \(\operatorname{VGraph}(\Sigma,(B,\nu_{B}))\) if they locally evaluate to the same morphism in \(\mathcal{C}\). However, the additional datum of a 2-framing on \(\Sigma\) allows us to use evaluation rectangles of graphs instead of disks so that as an algebraic input we do not need a pivotal structure on \(\mathcal{C}\). By Remark 5.4 the preimage of a locally progressive graph inside every evaluation rectangle is a progressive graph. Thus, we can use the evaluation map for \(\mathcal{C}\)-colored progressive graphs we explained in section 4 to associate to every \(\mathcal{C}\)-colored locally progressive graph and evaluation rectangle \(\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) at any point \(p\in\Sigma\) a morphism in \(\mathcal{C}\).
**Definition 5.8**.: Let \((B,\nu_{B})\) be a boundary value and \(\Gamma_{1},\cdots,\Gamma_{n}\in\operatorname{Graph}(\Sigma,(B,\nu_{B}))\). For \(\lambda_{1},\cdots,\lambda_{n}\in\mathbb{K}\), the element \(\Gamma\coloneqq\sum_{i=1}^{n}\lambda_{i}\Gamma_{i}\in\operatorname{VGraph}_{ \mathbb{K}}(\Sigma,(B,\nu_{B}))\) is a _null graph_, if there exists a common evaluation rectangle \(R^{(p)}\coloneqq\phi^{(p)}\left([s_{1},s_{2}]\times[t_{1},t_{2}]\right)\) for all \(\Gamma_{i}\), such that
1. (5.8) \[\Gamma_{i}\cap\phi^{(p)}([s_{1},s_{2}]\times\{t_{1},t_{2}\})=\Gamma_{j}\cap \phi^{(p)}([s_{1},s_{2}]\times\{t_{1},t_{2}\})\] for all \(i,j=1,\cdots,n\).
2. \(\operatorname{dom}_{R}(\Gamma)\coloneqq\operatorname{dom}_{R}(\Gamma_{i})=\operatorname{dom}_{R}(\Gamma_{j})\) and \(\operatorname{codom}_{R}(\Gamma)\coloneqq\operatorname{codom}_{R}(\Gamma_{i})=\operatorname{codom}_{R}(\Gamma_{j})\) for all \(i,j=1,\cdots,n\).
3. \(\Gamma_{i}|_{\Sigma-R^{(p)}}=\Gamma_{j}|_{\Sigma-R^{(p)}}\) for all \(i,j=1,\cdots,n\).
4. (5.9) \[\sum_{i=1}^{n}\lambda_{i}\nu_{R}(\Gamma_{i})=0\in\operatorname{Hom}_{\mathcal{C}}(\operatorname{dom}_{R}(\Gamma),\operatorname{codom}_{R}(\Gamma))\]
The sub-vector space spanned by all null graphs is denoted by \(\operatorname{NGraph}(\Sigma,(B,\nu_{B}))\).
**Definition 5.9**.: Let \(\Sigma\) be a framed surface, \(\mathcal{C}\) a finite tensor category and \((B,\nu_{B})\) be a boundary value in \(\mathcal{C}\). The _framed string-net space_ with boundary value \((B,\nu_{B})\) is defined as the vector space quotient
\[\operatorname{SN}^{\mathrm{fr}}(\Sigma,(B,\nu_{B}))\coloneqq\frac{\operatorname{VGraph}(\Sigma,(B,\nu_{B}))}{\operatorname{NGraph}(\Sigma,(B,\nu_{B}))} \tag{5.10}\]
_Remark 5.10_.: Taking the quotient by null graphs also takes appropriate isotopies between locally progressive graphs into account. Recall that we defined locally progressive graphs as embedded graphs with a fixed embedding. Thus, a priori abstract \(\mathcal{C}\)-colored graphs with different embeddings yield different elements in \(\operatorname{VGraph}(\Sigma)\). By taking the above quotient, we can identify embedded graphs which differ by those isotopies such that graphs along the isotopy are all locally progressive graphs.
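In finite situations, Definition 5.9 reduces to linear algebra: once finitely many graphs spanning the relevant part of \(\operatorname{VGraph}\) and finitely many null-graph relations are fixed, the string-net space is a quotient of a free vector space by the span of the relation vectors. A schematic Python sketch (with made-up coefficients, purely to illustrate the quotient) is:

```python
import numpy as np

# Suppose 5 colored graphs span VGraph(Sigma, (B, nu_B)) ...
n_graphs = 5

# ... and two null-graph relations, e.g. G1 - 2*G2 = 0 and G3 + G4 - G5 = 0,
# written as coefficient vectors in the basis (G1, ..., G5).
relations = np.array([
    [1.0, -2.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0, -1.0],
])

# Definition 5.9: SN^fr = VGraph / NGraph, so the dimension drops by the
# rank of the relation matrix.
dim_SN = n_graphs - np.linalg.matrix_rank(relations)
print(dim_SN)  # 3
```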
## 6. **Circle Categories and Twisted Drinfeld-Centers**
In this final section, we put our construction of string-nets for framed surfaces to the test and compute the relevant circle categories. We show that they are related to Drinfeld centers twisted by appropriate powers of the double dual.
### \(2\)-Framings of the Circle and Framed Cylinders
A _\(2\)-framing_ of a circle \(S^{1}\) is an isomorphism \(\lambda:TS^{1}\oplus\underline{\mathbb{R}}\xrightarrow{\simeq}\underline{\mathbb{R}^{2}}\) of vector bundles, where \(\underline{\mathbb{R}}\to S^{1}\) and \(\underline{\mathbb{R}^{2}}\to S^{1}\) are the trivial vector bundles with fibers \(\mathbb{R}\) and \(\mathbb{R}^{2}\), respectively. There is a bijection [10, section 1.1]
\[\left\{\text{Homotopy classes of $2$-framings of $S^{1}$}\right\}\simeq\mathbb{Z}\quad. \tag{6.1}\]
The different \(2\)-framings for \(n\in\mathbb{Z}\) can be depicted as follows. We identify \(S^{1}\) as the quotient \(S^{1}\simeq[0,1]/0\sim 1\) and draw a circle as an interval, while keeping in mind that we identify the endpoints. The integer \(n\) then counts the number of full rotations in counterclockwise direction a frame of \(\mathbb{R}^{2}\) undergoes while going around the circle. We denote the circle with \(2\)-framing corresponding to \(n\in\mathbb{Z}\) by \(S^{1}_{n}\). We can trivially continue the \(2\)-framing of \(S^{1}_{n}\) along the radial direction of a cylinder over \(S^{1}\). This gives a \(2\)-framed cylinder \(\mathsf{C}\), which can be seen as a \(2\)-framed cobordism \(\mathsf{C}:S^{1}_{n}\to S^{1}_{n}\). Possibly after a global rotation of the two vector fields, we can arrange that there is at least one point on \(S^{1}\) such that the flow line for the second vector field is radial. We fix such a point as an auxiliary datum and call the corresponding flow line the _distinguished radial line_.
We denote the cylinder with this particular \(2\)-framing corresponding to \(n\in\mathbb{Z}\) by \(\mathsf{C}_{n}\). The flow lines for \(\mathsf{C}_{-1}\), \(\mathsf{C}_{0}\) and \(\mathsf{C}_{1}\) are shown in figure 3.
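The integer classifying a \(2\)-framing in (6.1) is a winding number, which is straightforward to extract numerically: sample the angle of the first frame vector along \(S^{1}\) and count full turns. A minimal Python sketch (for a hypothetical frame with \(n=2\)):

```python
import numpy as np

# Sample a 2-framing of S^1 = [0, 1]/(0 ~ 1): the first frame vector
# rotates by n full counterclockwise turns going around, here n = 2.
n = 2
t = np.linspace(0.0, 1.0, 500)
frame1 = np.stack([np.cos(2 * np.pi * n * t), np.sin(2 * np.pi * n * t)])

# Recover n from the sampled frame: unwrap the angle and count turns.
angles = np.unwrap(np.arctan2(frame1[1], frame1[0]))
winding = round((angles[-1] - angles[0]) / (2 * np.pi))
print(winding)  # 2
```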
### Circle Categories
Given a finite tensor category \(\mathcal{C}\) and a \(2\)-framed cylinder \(\mathsf{C}_{n}\) over a one-manifold, we construct a \(\mathsf{Vect}_{\mathbb{K}}\)-enriched category as follows.
**Definition 6.1**.: The _circle category_\(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) is defined as follows:
* the _objects_ of \(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) are the objects of \(\mathcal{C}\);
* the vector space of _morphisms_ between two objects \(X,Y\in\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) is the framed string-net space (6.2) \[\operatorname{Hom}_{\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})}(X,Y)\coloneqq\operatorname{SN}^{\mathrm{fr}}(\mathsf{C}_{n},B_{X,Y})\] where we take the boundary value \(B_{X,Y}\coloneqq\left(\left\{p_{1},p_{2}\right\},\left(X,Y\right)\right)\) with the chosen point \(p_{1}\) on \(S^{1}\times\left\{0\right\}\) and its counterpart \(p_{2}\) on \(S^{1}\times\left\{1\right\}\) in \(\mathsf{C}_{n}\).
The composition of morphisms is given by stacking cylinders and concatenating the corresponding string-nets.
We first define a functor \(I:\ \mathcal{C}\to\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C})\) which is the identity on objects. It maps a morphism \(f:c_{1}\to c_{2}\) in \(\mathcal{C}\) to the string-net which has two edges, both on the distinguished radial line, with a single node on this line, labeled by \(f\).
In the following, we consider as an example the _blackboard framed cylinder_ which is the framed surface \(\mathsf{C}_{1}\) in figure 3.
### Circle Category as a Kleisli Category
To describe the morphism spaces of the circle category purely in terms of algebraic data, we need to know that string-net constructions obey factorization. This has been discussed repeatedly in the literature, starting from [25, Section 4.4]. Other references include [10, p. 40] and [11, Section 7]. The idea is that gluing relates the left exact functors associated to a surface to a coend. The cylinder can be obtained by gluing a rectangle at two opposite boundaries; taking the insertions at the remaining boundaries into account and using the fact that for the rectangle string-net spaces give morphisms in \(\mathcal{C}\), the idea to implement factorization by a coend yields
\[\operatorname{Hom}_{\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})}(\bullet,\bullet)\cong\oint^{c\in\mathcal{C}}\operatorname{Hom}_{\mathcal{C}}(\left(\bullet\right),c\otimes\left(\bullet\right)\otimes{}^{\vee}c)\quad. \tag{6.3}\]
**Lemma 6.2**.: _Let \(x\), \(y\in\mathcal{C}\) be two objects of a finite tensor category \(\mathcal{C}\). Then there is an isomorphism of vector spaces_
\[\operatorname{Hom}_{\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})}(x,y)\simeq\operatorname{Hom}_{\mathcal{C}}(x,Ty) \tag{6.4}\]
_where \(T\coloneqq{}_{\operatorname{id}}T{}_{\operatorname{id}}\) is the usual central monad of \(\mathcal{C}\)._
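For orientation (our own restatement, consistent with (6.5) below; conventions may differ between references), the central monad acts on objects as the coend
\[Ty\;=\;\oint^{c\in\mathcal{C}}c\otimes y\otimes{}^{\vee}c\quad.\]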
Proof.: Recall from Lemma 3.5 that
\[\operatorname{Hom}_{\mathcal{C}}(x,Ty)=\oint^{c\in\mathcal{C}}\operatorname{Hom}_{\mathcal{C}}(\left(\bullet\right),c\otimes\left(\bullet\right)\otimes{}^{\vee}c)(x,y) \tag{6.5}\]
Combining this with the factorization (6.3) proves the claim.
**Theorem 6.3**.: _There is an equivalence of \(\mathsf{Vect}\)-enriched categories_
\[\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\cong\mathcal{C}_{T}\,. \tag{6.6}\]
Proof.: Note that the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) and the Kleisli category \(\mathcal{C}_{T}\) have the same objects as \(\mathcal{C}\). Thus we can define a functor
\[\kappa:\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\to\mathcal{C}_{T} \tag{6.7}\]
which is the identity on objects and acts on morphism spaces via the isomorphism induced by Lemma 6.2. For \(\kappa\) to be a functor, we need to check that it respects identity morphisms and composition of morphisms. For \(\overline{x}\), \(\overline{y}\in\mathcal{C}_{T}\) it holds that \(\operatorname{Hom}_{\mathcal{C}_{T}}(\overline{x},\overline{y})=\operatorname{Hom}_{\mathcal{C}}(x,Ty)\). Let \(\{\iota_{c}:c\otimes(\bullet)\otimes{}^{\vee}c\Rightarrow T\}_{c\in\mathcal{C}}\) denote the universal dinatural family of the coend defining the central monad \(T\). Writing the composition of two morphisms in the circle category in this
form, we get
\[(g,h)\;\mapsto\;(\iota_{c\otimes d})_{z}\circ(\operatorname{id}\otimes g\otimes\operatorname{id})\circ h \tag{6.10}\]
There is a commutative diagram
The lower path is the composition \((\alpha_{d}\circ g)\circ_{\otimes_{T}}(\alpha_{c}\circ h)\) in \(\mathcal{C}_{T}\). By Lemma 6.2, \(\kappa\) is fully faithful, and since it is the identity on objects, it is essentially surjective; hence it is an equivalence.
Recall the functor \(I:\ \mathcal{C}\to\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) introduced at the end of section 6.2. Under the equivalence between the circle category and the Kleisli category, it is mapped to the induction functor \(I_{T}:\ \mathcal{C}\to\mathcal{C}_{T}\). Combining Theorem 6.3, Proposition 3.6 and Proposition 3.2, we obtain
**Theorem 6.4**.: _Let \(\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C}))\) be the category of \(I\)-representable presheaves on the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\). There is an equivalence of \(\mathbb{K}\)-linear categories_
\[\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C}))\cong\mathsf{Z}(\mathcal{C})\,. \tag{6.11}\]
_Remark 6.5_.:
1. Since \(\mathcal{C}\) is not required to be fusion, the Karoubification of the circle category \(\mathsf{Cyl}(\mathsf{C}_{1},\mathcal{C})\) does not, in general, yield the full center \(\mathsf{Z}(\mathcal{C})\). Recall that a projective module for a monad is a retract of a free module (cf. [17, Section 7.3.2]). The Karoubification of the Kleisli category only yields the subcategory of \(\mathsf{Z}(\mathcal{C})\) which has as objects the objects that under the equivalence \(T-\mathsf{Mod}\simeq\mathsf{Z}(\mathcal{C})\) correspond to projective \(T\)-modules. This was our motivation to discuss a different completion of the Kleisli category as \(I\)-representable presheaves on the Kleisli category in section 3.2.
2. For the general \(2\)-framed cylinder \(\mathsf{C}_{n}\), the \(2\)-framing forces us to add sufficiently many evaluations and coevaluations so that we get an equivalence (6.12) \[\operatorname{Psh}_{I}(\mathsf{Cyl}(\mathsf{C}_{n},\mathcal{C}))\simeq\mathsf{ Z}_{n}(\mathcal{C})\] The proof of this is in complete analogy to the case of \(\mathsf{C}_{1}\).
Our computation of circle categories for string-nets on framed cylinders \(\mathsf{C}_{n}\) is in complete accordance with the results of [13, Corollary 3.2.3, table 3]. | 2- framed表面におけるストリングネットの構築について議論します。入力として有限で、剛体のテンソルカテゴリーである、非pivotalかsemi-simpleであると想定されるわけではありません。本構文は、枠付きストリングネットの構築において、円形カテゴリーが、双対反復関数の冪にTwistされたDrinfeld中心を計算することに示されました。
Please let me know if you need any further clarification or have any other sentences you want me to translate. |
2309.11573 | NLL/NLO$^-$ studies on Higgs-plus-jet production with POWHEG+JETHAD | We consider the semi-inclusive emission of a Higgs boson in association with
a light-flavored jet separated by a large rapidity interval at the LHC. The
accessed kinematic regimes fall into the so-called semi-hard sector, whose
theoretical description lies at the intersection corner between the collinear
factorization and the high-energy resummation. We present a prototype version
of a matching procedure aimed at combining next-to-leading fixed-order (NLO)
calculations from POWHEG with the resummation of next-to-leading energy
logarithms (NLL) as obtained from JETHAD. | Francesco Giovanni Celiberto, Luigi Delle Rose, Michael Fucilla, Gabriele Gatto, Alessandro Papa | 2023-09-20T18:18:21 | http://arxiv.org/abs/2309.11573v1 | # NLL/NLO\({}^{-}\) studies on Higgs-plus-jet production with POWHEG+JETHAD
###### Abstract:
We consider the semi-inclusive emission of a Higgs boson in association with a light-flavored jet separated by a large rapidity interval at the LHC. The accessed kinematic regimes fall into the so-called semi-hard sector, whose theoretical description lies at the intersection corner between the collinear factorization and the high-energy resummation. We present a prototype version of a matching procedure aimed at combining next-to-leading fixed-order (NLO) calculations from POWHEG with the resummation of next-to-leading energy logarithms (NLL) as obtained from JETHAD.
## 1 Introductory remarks
With the discovery of the Higgs boson at the LHC a new era of precision tests of the Standard Model, as well as of intensive searches for clues of New Physics, began. In this respect, an accurate description of the gluon-gluon fusion channel in perturbative Quantum Chromodynamics (QCD) is of top priority [1, 2]. Higher-order calculations are necessary ingredients for precise studies of Higgs production _via_ the well-grounded _collinear factorization_. Here, cross sections are elegantly cast as one-dimensional convolutions between collinear parton distribution functions (PDFs) and on-shell perturbative coefficient functions. At the same time, the theoretical description of Higgs-sensitive final states in the kinematic sectors accessible at the LHC and at future hadron and lepton colliders calls for the inclusion, to all orders, of logarithms which are systematically missed by a purely collinear vision. These logarithms can be large enough to spoil the convergence of the perturbative series, thus requiring the development of all-order _resummation_ techniques.
In this study we consider the _semi-hard_ QCD sector [3, 4, 5], where the rigorous scale hierarchy, \(\sqrt{s}\gg\{Q\}\gg\Lambda_{\rm QCD}\) (\(\sqrt{s}\) is the center-of-mass energy, \(\{Q\}\) is a set of process-dependent hard scales, \(\Lambda_{\rm QCD}\) is the QCD hadronization scale), leads to the growth of large energy logarithms. The Balitsky-Fadin-Kuraev-Lipatov (BFKL) resummation [6, 7] offers us a systematic way to resum these logarithms to all orders within the leading-logarithmic (LL) and the next-to-leading logarithmic (NLL) level (for recent advancements beyond NLL, see Refs. [8, 9, 10, 11]). Remarkably, the BFKL formalism and its nonlinear extension to the saturation regime give us direct access to the gluon distribution in the nucleon at low-\(x\) [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Suitable reactions to test BFKL and, more generally, high-energy dynamics in hadron collisions feature the semi-inclusive emission of two objects possessing high transverse masses and being strongly separated in rapidity. On the one hand, transverse masses well above \(\Lambda_{\rm QCD}\) make us fall into the semi-hard regime. On the other hand, a large final-state rapidity interval, \(\Delta Y\), heightens the contribution of undetected gluons strongly ordered in rapidity, which are responsible for large logarithmic corrections.
A solid description of these two-particle hadroproduction channels calls for the employment of a _multilateral_ formalism, where both the collinear and the high-energy dynamics come into play. To this extent, a _hybrid high-energy and collinear factorization_ (HyF) was developed [24, 25, 26].* HyF partonic cross sections take the form of a convolution between two impact factors (or emission functions), which are process-dependent, and the NLL BFKL Green's function (analogous to the Sudakov factor of soft-gluon resummations), which is process-universal. Impact factors are in turn written as collinear convolutions between standard collinear PDFs and singly off-shell coefficient functions. The state-of-the-art accuracy of HyF is NLL/NLO. This means that, for a given process, the relevant coefficient functions need to be calculated at fixed NLO accuracy. Otherwise, one must rely upon a partial next-to-leading treatment, labeled as NLL/NLO\({}^{*}\) when only the Green's function is taken at NLL and both the coefficient functions are at LO, or NLL/NLO\({}^{-}\) when the Green's function is at NLL, one coefficient function is at NLO, and the other one is at LO.
Footnote *: For similar approaches, close in spirit to ours, see Refs. [27, 28, 29, 30, 31].
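Schematically (in our own simplified notation, suppressing scales, prefactors and the azimuthal decomposition), a HyF cross section differential in \(\Delta Y\) and in the azimuthal angle \(\varphi\) has the structure
\[\frac{\mathrm{d}\sigma}{\mathrm{d}\Delta Y\,\mathrm{d}\varphi}\;\sim\;\int\mathrm{d}^{2}\vec{q}_{1}\int\mathrm{d}^{2}\vec{q}_{2}\;\Phi_{1}(\vec{q}_{1}\,)\,G_{\rm NLL}(\vec{q}_{1},\vec{q}_{2};\Delta Y)\,\Phi_{2}(\vec{q}_{2}\,)\;,\]
with \(\Phi_{1,2}\) the two impact factors and \(G_{\rm NLL}\) the BFKL Green's function.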
Promising semi-inclusive channels to probe the semi-hard QCD sector are: emissions of two Mueller-Navelet jets [32, 33, 34, 35, 36, 37, 38, 39], multi-jet diffractive systems [40, 41, 42, 43, 44], Drell-Yan pairs [45, 46, 47, 48], light [49, 50, 51, 52, 53, 54, 55, 56] as well as singly heavy flavored [57, 58, 59, 60, 61, 62, 63, 64, 65, 66] hadrons, quarkonium states [67, 68, 69, 70, 71], and exotic matter candidates [72]. In this article we consider the semi-inclusive Higgs-plus-jet
process, which was studied in perturbative QCD within next-to-NLO accuracy [73, 74, 75] and _via_ the transverse-momentum resummation at the next-to-NLL level [76]. As \(\Delta Y\) grows, the impact of energy logarithms becomes larger and larger. Thus, the high-energy resummation, as encoded in the HyF formalism, comes out as a valuable tool for a proper and consistent description of Higgs-plus-jet differential rates [24, 77, 78].
We present the POWHEG+JETHAD method, a prototype version of a novel _matching_ procedure aimed at combining, in the context of Higgs-plus-jet rapidity and transverse-momentum distributions, next-to-leading fixed-order results with the resummation of next-to-leading energy logarithms. Results presented in the next section are for Higgs-plus-jet rapidity and transverse-momentum spectra, with the matching pushed to NLL/NLO\({}^{-}\) accuracy. They supersede the NLL/NLO\({}^{*}\) predictions of Ref. [79], but they are still preliminary, with a full NLL/NLO treatment being in preparation.
## 2 Higgs-plus-jet production: Matching NLL to NLO
An insightful piece of information coming from quite recent, HyF-related studies on the Higgs transverse-momentum (\(p_{H}\)) spectrum in semi-inclusive Higgs-plus-jet emissions at the LHC is the solid stability which this distribution exhibits under higher-order corrections and energy-scale variations. At the same time, however, large deviations of HyF predictions from the fixed-order background have been observed, their weight reaching roughly two orders of magnitude when \(p_{H}\gtrsim 120\) GeV [24]. A similar trend has been shown by \(\Delta Y\)-distributions at LHC as well as nominal FCC energies [77].
This motivated us to develop a pioneering _matching_ procedure between NLO fixed-order results and NLL-resummed calculations, which permits the exact removal, within the NLL/NLO\({}^{-}\) accuracy, of the corresponding _double counting_. Indeed, given that the full NLO contribution to the forward Higgs emission function was calculated only recently [80, 81, 82, 83] and it has not yet been implemented in our reference technology, the JETHAD code [70, 84, 85], we rely upon a NLL/NLO\({}^{-}\) treatment. A sketch of our matching procedure reads
Figure 1: Higgs-plus-jet rapidity (left) and transverse-momentum (right) rates at \(\sqrt{s}=14\) TeV. Uncertainty bands reflect the variation of \(\mu_{R}\) and \(\mu_{F}\) scales in the \(1<C_{\mu}<2\) range. Text boxes exhibit kinematic cuts.
\[\mathrm{d}\sigma^{\rm NLL/NLO^{-}}(\Delta Y,\varphi,s)\,=\,\mathrm{d}\sigma^{\rm NLO}(\Delta Y,\varphi,s)\,+\,\mathrm{d}\sigma^{\rm NLL^{-}}(\Delta Y,\varphi,s)\,-\,\mathrm{d}\sigma^{\rm NLL^{-}}(\Delta Y,\varphi,s)\Big{|}_{\rm NLO}\;,\]
where the last term is the NLL\({}^{-}\)-resummed cross section expanded and truncated at NLO accuracy, whose subtraction exactly removes the double counting between the fixed-order and the resummed contributions.
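In practice, the combination is performed bin by bin on the differential distributions. A toy Python sketch of the additive combination (with made-up numbers standing in for POWHEG and JETHAD outputs; an illustration, not our actual pipeline) is:

```python
import numpy as np

# Toy binned Delta Y distributions (made-up values, not physics output).
dY_bins           = np.linspace(2.0, 6.0, 9)  # 8 bins
d_sigma_NLO       = np.array([4.1, 3.2, 2.5, 1.9, 1.4, 1.0, 0.7, 0.5])
d_sigma_NLL       = np.array([4.6, 3.8, 3.1, 2.6, 2.1, 1.8, 1.5, 1.3])
d_sigma_NLL_trunc = np.array([4.3, 3.5, 2.8, 2.2, 1.7, 1.3, 1.0, 0.8])
# ^ NLL^- result truncated at NLO: the double-counted piece.

# Additive matching: fixed order + resummation - double counting.
d_sigma_matched = d_sigma_NLO + d_sigma_NLL - d_sigma_NLL_trunc
print(d_sigma_matched)
```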
We now gauge the impact of small-\(x\) (left) or large-\(x\) (right) resummation improvements on collinear PDFs at 14 TeV LHC and 100 TeV FCC energies. Left panel is for \(p_{H}\) distributions obtained by making use of small-\(x\) resummed PDFs from the NNPDF3.1sx family [99], whereas right panel shows transverse-momentum rates obtained by means of large-\(x\), threshold resummed PDFs from the NNPDF3.0lx one [100]. Ancillary panels below primary plots clearly indicate that the overall effect is relatively small, globally staying below 2%. For both resummations the effects are more pronounced and negative in the peak region, \(30\lesssim p_{H}/\mathrm{GeV}\lesssim 60\), but only in the FCC case (turquoise), while they change sign in the large-\(p_{H}\) tail, being negative at LHC energies and then becoming positive at FCC ones. We stress that our study on the large-\(x\) improvement should be intended as a proxy for the effect of the threshold resummation [101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114] coming from PDFs only. To quantify the full impact of the threshold resummation on our high-energy observables, known to be sizable [52, 55, 84], one must develop a systematic method to resum large-\(x\) logarithms in our off-shell coefficient functions.
## 3 Conclusions and Outlook
We developed a prototype version of a matching procedure, relying on the POWHEG [86, 87, 88, 89, 90] and JETHAD [70, 84, 85] codes. Its purpose is to combine NLO fixed-order calculations with the high-energy resummation at NLL. Future works will extend this study to: \(a)\) gauge the size of full NLO contributions [80, 83], \(b)\) assess the weight of heavy-quark finite-mass corrections [115, 116], \(c)\) compare our predictions with PS [91, 92, 93, 94, 95, 96] and HEJ [117, 118] inspired ones.
| |
2309.05506 | Singularity theory of Weyl-point creation and annihilation | Weyl points (WP) are robust spectral degeneracies, which can not be split by
small perturbations, as they are protected by their non-zero topological
charge. For larger perturbations, WPs can disappear via pairwise annihilation,
where two oppositely charged WPs merge, and the resulting neutral degeneracy
disappears. The neutral degeneracy is unstable, meaning that it requires the
fine-tuning of the perturbation. Fine-tuning of more than one parameter can
lead to more exotic WP mergers. In this work, we reveal and analyze a
fundamental connection of the WP mergers and singularity theory: phase boundary
points of Weyl phase diagrams, i.e., control parameter values where Weyl point
mergers happen, can be classified according to singularity classes of maps
between manifolds of equal dimension. We demonstrate this connection on a
Weyl--Josephson circuit where the merger of 4 WPs draws a swallowtail
singularity, and in a random BdG Hamiltonian which reveals a rich pattern of
fold lines and cusp points. Our results predict universal geometrical features
of Weyl phase diagrams, and generalize naturally to creation and annihilation
of Weyl points in electronic (phononic, magnonic, photonic, etc) band-structure
models, where Weyl phase transitions can be triggered by control parameters
such as mechanical strain. | György Frank, Gergő Pintér, András Pályi | 2023-09-11T14:49:30 | http://arxiv.org/abs/2309.05506v1 | # Singularity theory of Weyl-point creation and annihilation
###### Abstract
Weyl points (WP) are robust spectral degeneracies, which can not be split by small perturbations, as they are protected by their non-zero topological charge. For larger perturbations, WPs can disappear via pairwise annihilation, where two oppositely charged WPs merge, and the resulting neutral degeneracy disappears. The neutral degeneracy is unstable, meaning that it requires the fine-tuning of the perturbation. Fine-tuning of more than one parameter can lead to more exotic WP mergers. In this work, we reveal and analyze a fundamental connection of the WP mergers and singularity theory: phase boundary points of Weyl phase diagrams, i.e., control parameter values where Weyl point mergers happen, can be classified according to singularity classes of maps between manifolds of equal dimension. We demonstrate this connection on a Weyl-Josephson circuit where the merger of 4 WPs draws a swallowtail singularity, and in a random BdG Hamiltonian which reveals a rich pattern of fold lines and cusp points. Our results predict universal geometrical features of Weyl phase diagrams, and generalize naturally to creation and annihilation of Weyl points in electronic (phononic, magnonic, photonic, etc) band-structure models, where Weyl phase transitions can be triggered by control parameters such as mechanical strain.
## Contents
* I Introduction
* II Singularity theory predicts generic and stable features of Weyl phase diagrams
* II.1 Math example for \(m=1\): fold
* II.2 Math example for \(m=2\): cusp
* II.3 Math example, \(m=3\): swallowtail
* II.4 Weyl phase diagrams
* III Swallowtail singularity in a Weyl-Josephson circuit
* III.1 Hamiltonian
* III.2 Symmetries
* III.3 Weyl points
* IV Cusp and fold singularities in superconducting systems of class D
* V Discussion
* V.1 When does the set of Weyl points form a manifold?
* V.2 Not all singularities appear on Weyl phase boundaries
* VI Conclusions
## I Introduction
Singularity theory [1] provides a classification of singularities of mappings between manifolds. An instructive and easy-to-visualise example, where the dimension of both manifolds is \(m=2\), is shown in Fig. 1. The source manifold is the curved surface embedded in the 3D space, the target manifold is a plane, and the mapping \(\pi\) is the projection of the curved surface to the plane. The singular points of this mapping are the red points of the curved surface, i.e., those points that are mapped to the red points of the flat surface.
According to Whitney's theorem [2], there are two classes of singular points in this setting: the _fold_ class, exemplified by the pre-images of the values forming the two red curves, and the _cusp_ (or _pleat_) class, exemplified by the pre-image of the meeting point of the two red curves. In fact, Whitney's theorem asserts that for a _generic_ mapping between two two-dimensional (2D) manifolds, the singular points belong to one of these two classes. Further work from Mather [3] has generalised this classification for mappings between higher-dimensional manifolds. The classes of singular points (which, in technical terms, are left-right equivalence classes of map germs) are often referred to as _singularities_.
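To make the classification tangible, consider the standard normal form \(f(x,y)=(x^{3}-xy,\,y)\) (our own textbook example, not the map of Fig. 1): its singular set is the parabola \(y=3x^{2}\), consisting of fold points, except for the cusp point at the origin; the image of the singular set is the semicubical curve \((-2x^{3},3x^{2})\). A short Python sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# Whitney's normal form f(x, y) = (x^3 - x*y, y) from R^2 to R^2.
def f(x, y):
    return x**3 - x * y, y

# Singular set: det(Jacobian) = 3x^2 - y = 0, a smooth parabola in the
# source plane; its image in the target plane has a cusp at (0, 0).
x = np.linspace(-1.0, 1.0, 400)
y_crit = 3 * x**2
u, v = f(x, y_crit)            # image of the fold curve: (-2x^3, 3x^2)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(x, y_crit); ax1.set_title("singular set (source)")
ax2.plot(u, v); ax2.set_title("image: fold curves meeting at a cusp")
plt.show()
```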
Singularity theory (sometimes referred to as catastrophe theory [4]) is strongly interlinked with physics [5; 6], e.g., via applications in optics [7; 8], seismology [9], molecular physics [10; 11], band-structure theory [12; 13; 14; 15], Hermitian and non-Hermitian quantum mechanics [16; 17; 18], and dynamical systems [19].
In particular, a recent work [17; 18] discovered and analysed, both in theory and in experiment, a new link between singularity theory and physics. That work has revealed that the swallowtail singularity, characteristic of mappings between manifolds of dimension \(m=3\), can appear as the phase diagram in the three-dimensional parameter space of the studied physical system, which is described by a parameter-dependent \(3\times 3\) non-Hermitian Hamiltonian matrix with a particular symmetry.
In this work, we show that the singularities classified by Whitney and Mather naturally appear in physical systems that are described by parameter-dependent Hermitian matrices - a ubiquitous situation in quantum mechanics. We focus on the case when the number \(n_{\rm p}=3+m\) of parameters is greater than \(3\), and the parameters can be grouped into two groups: a group with \(3\) parameters, which we call the 'configurational parameters', and another group with \(m\) parameters, which we call the 'control parameters'. This setting is relevant for many physical systems, e.g., (i) electronic (phononic, magnonic, photonic) band structure theory of 3D crystals, where the configurational space is formed by the three components of the crystal momentum, and the control space is formed by any further parameters, e.g., mechanical strain of the crystal [20; 21; 22; 23; 24; 25]; (ii) spin systems in a homogeneous magnetic field, where the three magnetic-field components form the configurational space, and further parameters are the control parameters [26; 27; 28]; (iii) multi-terminal Josephson junctions, controlled by more than three parameters such as magnetic fluxes and electric voltages, etc. [29; 30; 31].
The central object of our study is the _Weyl phase diagram_: the phase diagram in the control space that shows the number of Weyl points in the configurational space. By connecting results of singularity theory with parameter-dependent Hermitian matrices, we find that under certain conditions, Weyl phase diagrams exhibit universal geometrical features, which correspond to the known singularities of generic mappings between manifolds of equal dimension.
We exemplify this general observation using two example physical setups. First, we show that the swallowtail singularity, characteristic of mappings between manifolds of dimension \(m=3\), appears in the Weyl phase diagram of multi-terminal Weyl Josephson junctions. Second, we illustrate the universality of our observation by zero-energy Weyl phase diagrams of class D matrices with random parametrization. This latter model describes the excitation spectrum of hybrid normal-superconducting systems in the presence of spin-orbit interaction and in the absence of time-reversal symmetry, and the corresponding zero-energy Weyl points appear in a 1D configurational space. The numerically obtained zero-energy Weyl phase diagrams exhibit fold lines and cusp points, as expected from our observation that this setting is related to singularities of mappings between 2D manifolds.
The rest of this paper is structured as follows. In Section II, we summarize those key concepts and results from singularity theory which we will use to analyse the geometrical features of Weyl phase diagrams. In Section III, we showcase the appearance of the swallowtail singularity, characteristic of maps between 3D manifolds, in the Weyl phase diagram of a Weyl Josephson junction. In Section IV, we visually illustrate the appearance of fold and cusp singularities in the zero-energy Weyl phase diagram of parameter-dependent class D Hamiltonians that describe hybrid normal-superconducting systems. Finally, in Sections V and VI, we extend the discussion of our results and provide conclusions.
## II Singularity theory predicts generic and stable features of Weyl phase diagrams
In this section, we first introduce the key concepts and relations from singularity theory that are relevant for the analysis of Weyl phase diagrams. We do this via simple and instructive examples of mappings between manifolds of equal dimension, for dimensions \(m=1\), \(m=2\), and \(m=3\). Then, we outline the connection between these mathematical concepts and results, and Weyl points and Weyl phase diagrams.
### Math example for \(m=1\): fold
_The source manifold._ Consider the 1D manifold \(M^{1}=\{(3x-x^{3},x)\,|\,x\in\mathbb{R}\}\subset\mathbb{R}^{2}\), i.e., the graph of a cubic polynomial.
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t,x)\) of \(M^{1}\) to the first coordinate \(t\) of the point. That is, \(\pi\) is a \(M^{1}\rightarrow\mathbb{R}\) map, i.e., a map between two 1D manifolds.
_The counting function of pre-images._ To each point \(t\) of the codomain of the projection map \(\pi\), we can associate the number of pre-images \(\#\pi^{-1}(t)\) of that point; we will use \(N:\mathbb{R}\to\mathbb{Z}_{0}^{+}\) to denote this function, and call it the 'counting function of pre-images'.
Figure 1: Fold and cusp singularities of the projection of a curved 2D manifold to a flat 2D manifold. The Weyl points in the \(n_{\rm p}\)-dimensional total parameter space of a physical system described by Hermitian matrices usually form an \(m=n_{\rm p}-3\)-dimensional manifold. A minimal model of this Weyl-point manifold is illustrated here with a surface of dimension \(m=2\), parametrized by \((x,t_{1},-x^{3}-t_{1}x)\) in the three-dimensional space of \((x,t_{1},t_{2})\). Separating the total parameter space into a 1D configurational (\(x\)) and 2D control (\(t_{1}\),\(t_{2}\)) space corresponds to a projection \(\pi\). The number of Weyl points in the configurational space corresponding to a control parameter set \((t_{1},t_{2})\) is the number of pre-images \(\#\pi^{-1}(t_{1},t_{2})\) of the projection. The characteristic Weyl-point merger processes correspond to the singularities (see text) of the projection.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\). There are three partitions that are regions with non-zero length; these are \(]-\infty,-2[\), \(]-2,2[\), and \(]2,\infty[\), and \(N\) takes the values \(1\), \(3\), and \(1\), respectively, in these regions. Furthermore, there are two isolated points, \(-2\) and \(2\), that separate the above regions. The counting function \(N\) takes the value \(2\) in these points.
The isolated points separating the extended regions are locations of pairwise 'creation' or 'annihilation' processes of pre-images. Let us follow the points of a curve in the target manifold \(\mathbb{R}\) from \(t<2\) to \(t>2\): as \(t\) increases in the range \(t<2\), there are \(3\) pre-images in the source manifold that move, two of them merge to a single point when \(t=2\), and those two pre-images disappear ('pair-wise annihilation') for \(t>2\) where the pre-image count is \(1\).
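As an illustration, the counting function \(N\) of this example is easy to evaluate numerically. The following minimal Python sketch (our addition; the numerical tolerance is an ad hoc choice) counts the distinct real solutions of \(3x-x^{3}=t\):

```python
import numpy as np

def preimage_count(t, tol=1e-6):
    """Count distinct real solutions x of 3x - x^3 = t, i.e. the pre-images #pi^{-1}(t)."""
    roots = np.roots([-1.0, 0.0, 3.0, -t])        # coefficients of -x^3 + 3x - t
    real = np.sort(roots[np.abs(roots.imag) < tol].real)
    if real.size == 0:
        return 0
    return 1 + int(np.sum(np.diff(real) > tol))   # merge numerically coincident roots

print([preimage_count(t) for t in (-3.0, 0.0, 3.0)])   # the three phases: [1, 3, 1]
```

At the fold values \(t=\pm 2\) the double root makes the count numerically delicate, which reflects precisely the fine-tuned nature of the phase boundary.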
Following physics terminology, we call the extended regions 'pre-image phases' or 'phases' for short, and the isolated points separating them we term 'pre-image phase boundaries', or 'phase boundaries' for short.
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of the domain can be classified as regular or singular. Regular (singular) points are those where the derivative of the map is non-zero (zero). This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram. In fact, the images of the singular points of the domain (i.e., the singular values of the map) appear in the pre-image phase diagram as phase boundaries.
_Extension from the example to generic maps._ The above picture, although described for the case of a single example, extends naturally to generic maps between 1D manifolds. Furthermore, for generic maps between 1D manifolds, the local behavior of the map in any two regular (singular) points is equivalent, in the following sense: In a regular (singular) point, in appropriately chosen coordinates, the map can be written as \(f(x)=x\) (\(f(x)=x^{2}\)). These singular points are also called 'fold points' (see Table 1). Furthermore, for generic maps, the structure of singular points is robust against small deformations of the map, which implies that the pre-image phase diagram is also robust against such small deformations.
### Math example for \(m=2\): cusp
_The source manifold._ Consider now the 2D manifold \(M^{2}=\{(t_{1},-x^{3}-t_{1}x,x)\,|\,(x,t_{1})\in\mathbb{R}^{2}\}\subset\mathbb{R}^{3}\), as shown in Fig. 1.
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t_{1},t_{2},x)\) of \(M^{2}\) to the first two coordinates \((t_{1},t_{2})\). That is, \(\pi\) is a \(M^{2}\to\mathbb{R}^{2}\) map, i.e., a map between two 2D manifolds. The projection map \(\pi\) is also illustrated in Fig. 1.
_Counting function of pre-images._ To each point \((t_{1},t_{2})\) of the codomain of the projection map \(\pi\), we can associate the number of pre-images \(\#\pi^{-1}(t_{1},t_{2})\) of that point; we will use \(N:\mathbb{R}^{2}\to\mathbb{Z}_{0}^{+}\) to denote this function. We call \(N\) the 'counting function of pre-images'.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\), as illustrated in Fig. 1 by the patterns on the \((t_{1},t_{2})\) plane. The light gray and dark gray partitions are extended regions with non-zero area, corresponding to pre-image counts of 1 and 3, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline dim Ct & min dim Cf & Name & Canonical form \\ \hline \hline
1 & 1 & fold point & \((x^{2})\) \\ \hline
2 & 1 & fold line & \((x^{2},y)\) \\
2 & 1 & cusp point & \((x^{3}+xy,y)\) \\ \hline
3 & 1 & fold surface & \((x^{2},y,z)\) \\
3 & 1 & cusp line & \((x^{3}+xy,y,z)\) \\
3 & 1 & swallowtail point & \((x^{4}+x^{2}y+xz,y,z)\) \\ \hline
4 & 1 & fold hypersurface & \((x^{2},y,z,w)\) \\
4 & 1 & cusp surface & \((x^{3}+xy,y,z,w)\) \\
4 & 1 & swallowtail line & \((x^{4}+x^{2}y+xz,y,z,w)\) \\
4 & 1 & butterfly point & \((x^{5}+x^{3}y+x^{2}z+xw,y,z,w)\) \\
4 & 2 & elliptic umbilic point & \((x^{2}-y^{2}+xz+yw,xy,z,w)\) \\
4 & 2 & hyperbolic umbilic point & \((x^{2}+y^{2}+xz+yw,xy,z,w)\) \\ \hline \end{tabular}
\end{table}
Table 1: Singularities of mappings between manifolds of equal dimension \(m\leq 4\). ‘Name’ and ‘Canonical form’ originate from singularity theory. A given singularity can appear on a system’s Weyl phase diagram if the system has dim Ct control parameters and at least \(\min\dim\text{Cf}\) configurational parameters. For example, the fold point singularity can appear in the Weyl phase diagram if the system is described by a parameter-dependent Hermitian matrix (implying a 3D configurational space) with a single control parameter. In contrast, the elliptic and hyperbolic umbilic points cannot appear on the zero-energy Weyl phase diagram of class-D matrices (1D configurational space) with 4 control parameters, since the configurational space dimension is less than \(\min\dim\text{Cf}=2\).
The red curves separating the grey regions correspond to a pre-image count of 2, except at the cusp point where the two curves meet, which corresponds to a pre-image count of 1.
The curve-type boundaries correspond to pairwise 'creation' or 'annihilation' processes of pre-images. In fact, the left (right) curve boundary corresponds to the creation or annihilation of the upper (lower) two pre-images. The cusp point is a location of a three-point process [32; 33], where the number of pre-images changes from 1 to 3, such that the two newborn pre-images are created at the position of the original single pre-image. Analogously to the \(m=1\) case, we call the extended regions with non-zero area 'pre-image phases' or 'phases' for short, and the boundaries separating these regions we call 'phase boundaries'.
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of its domain can be classified as regular or singular. Regular (singular) points are those where the Jacobian of the map has a non-vanishing (vanishing) determinant. Singular points can be further classified, as fold points or cusp points. This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram shown in the \((t_{1},t_{2})\) plane of Fig. 1: The images of the fold points of \(\pi\) form the curved phase boundary lines, whereas the image of the single cusp point of \(\pi\) is the meeting point of the curved phase boundary lines.
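To make this concrete, the fold and cusp points of this example can be located by a direct computation, using only the parametrization introduced above. Parametrising \(M^{2}\) by \((x,t_{1})\), the projection reads \((x,t_{1})\mapsto(t_{1},-x^{3}-t_{1}x)\), with Jacobian determinant

\[\det\begin{pmatrix}0&1\\ -3x^{2}-t_{1}&-x\end{pmatrix}=3x^{2}+t_{1},\]

so the singular set is \(\{t_{1}=-3x^{2}\}\). Its image, the phase boundary, is the curve \((t_{1},t_{2})=(-3x^{2},\,2x^{3})\), i.e., the discriminant locus

\[4t_{1}^{3}+27t_{2}^{2}=4(-3x^{2})^{3}+27(2x^{3})^{2}=0\]

of the cubic \(x^{3}+t_{1}x+t_{2}\): two fold branches meeting at the cusp point \((t_{1},t_{2})=(0,0)\), matching the red curves of Fig. 1.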
_Extension from the example to generic maps._ The above picture extends naturally to generic maps between 2D manifolds. According to Whitney's theorem, singular points of such generic mappings are either cusp points, or fold points forming lines ('fold lines') (see Table 1). Furthermore, for generic maps between 2D manifolds, the local behavior of the map in all regular [fold] [[cusp]] points is equivalent, in the sense that in appropriately chosen coordinates, the map can be written in the canonical form \(f(x,y)=(x,y)\) [\(f(x,y)=(x^{2},y)\)] [[\(f(x,y)=(x^{3}+xy,y)\)]]. Furthermore, for generic maps, the structure of singular points is robust against small deformations of the map, which implies that the pre-image phase diagram is also robust against such deformations.
### Math example for \(m=3\): swallowtail
_The source manifold._ Consider now the 3D manifold \(M^{3}=\{(t_{1},t_{2},-x^{4}-t_{1}x^{2}-t_{2}x,x)\,|\,(t_{1},t_{2},x)\in\mathbb{R}^{3}\}\subset\mathbb{R}^{4}\).
_The projection map._ We define the projection map \(\pi\) such that it maps each point \((t_{1},t_{2},t_{3},x)\) of \(M^{3}\) to the first three coordinates \((t_{1},t_{2},t_{3})\). That is, \(\pi\) is a \(M^{3}\to\mathbb{R}^{3}\) map, i.e., a map between two 3D manifolds.
_Counting function of pre-images._ To each point \((t_{1},t_{2},t_{3})\) of the codomain of the projection map \(\pi\), we can associate the number of pre-images \(\#\pi^{-1}(t_{1},t_{2},t_{3})\) of that point; we will use \(N:\mathbb{R}^{3}\to\mathbb{Z}_{0}^{+}\) to denote this function, and call it the 'counting function of pre-images'.
_Pre-image phase diagram._ The function \(N\) partitions the codomain of \(\pi\). This partitioning is shown in Fig. 2b. As illustrated there, there are extended partitions of non-zero volume, there are surfaces (fold surfaces) that separate the regions, there are two curves (cusp lines) that separate the surfaces, there is an intersection curve of the fold surfaces, and there is a single point (swallowtail point), where the curves meet. The counting function takes the values 0, 2, and 4, in the bottom, top, and middle regions of the figure. Along the fold surfaces, the pre-image count is 3. Along the intersection curve of the two fold surfaces, it is 2. Along the cusp lines, it is also 2. In the swallowtail point, it is 1.
The fold surface phase boundaries correspond to pairwise creation or annihilation of pre-images. The cusp lines correspond to 'three-point processes' [32; 33], where three pre-images merge into a single one. The intersection curve of the fold surfaces corresponds to 'simultaneous two-point processes', where the four pre-images merge and annihilate in two pairs, simultaneously. The swallowtail point corresponds to a 'four-point process', where the four pre-images merge in a single location and annihilate. We call the extended regions with non-zero volume 'phases', and the boundaries separating these regions 'phase boundaries'.
_Phase boundaries are formed by the singular values of the projection map._ For the projection map \(\pi\), the points of its domain can be classified as regular or singular. Regular (singular) points are those where the Jacobian of the map has a non-vanishing (vanishing) determinant. Singular points can be further classified, as fold points, cusp points, or swallowtail points. This classification of the points of the domain of \(\pi\) is strongly related to the pre-image phase diagram shown in Fig. 2b: The image of the surface formed by the fold points of \(\pi\) form the fold surfaces in Fig. 2b, the images of the curves of the cusp points of \(\pi\) form the cusp lines in Fig. 2b, and the image of the single swallowtail point of \(\pi\) is the swallowtail point in Fig. 2b.
_Extension from the example to generic maps._ The above picture extends naturally to generic maps between 3D manifolds. Singular points of such maps are either swallowtail points, or cusp points forming lines ('cusp lines'), or fold points forming surfaces ('fold surfaces'), see Table 1. Furthermore, for generic maps between 3D manifolds, the local behavior of the map in any two regular [fold] [[cusp]] [[[swallowtail]]] points is equivalent, in the sense that in appropriately chosen coordinates the map can be written in the corresponding canonical form of Table 1.
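The swallowtail geometry can be reproduced numerically from the depressed quartic \(x^{4}+t_{1}x^{2}+t_{2}x+t_{3}=0\) (cf. Fig. 2b). The sketch below is our own illustration; the grid sizes and the value of \(t_{1}\) are arbitrary choices:

```python
import numpy as np

def quartic_root_count(t1, t2, t3, tol=1e-6):
    """Distinct real roots of x^4 + t1 x^2 + t2 x + t3, i.e. #pi^{-1}(t1, t2, t3) for M^3."""
    roots = np.roots([1.0, 0.0, t1, t2, t3])
    real = np.sort(roots[np.abs(roots.imag) < tol].real)
    return 0 if real.size == 0 else 1 + int(np.sum(np.diff(real) > tol))

t1 = -2.0                              # a 2D cut at fixed t1 < 0 contains the swallowtail triangle
t2s = np.linspace(-2.0, 2.0, 201)
t3s = np.linspace(-1.5, 1.5, 201)
N = np.array([[quartic_root_count(t1, t2, t3) for t2 in t2s] for t3 in t3s])
# N takes the values 0, 2 and 4; plotting it (e.g. with plt.imshow) shows the fold lines,
# the two cusp points, and the fold-fold intersection of the t1 < 0 cuts of the swallowtail.
```

Scanning \(t_{1}\) from negative to positive values makes the triangular 4-root region shrink and disappear at the swallowtail point at the origin.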
### Weyl phase diagrams
In this work, we focus on physical systems that are described by parameter-dependent Hamiltonians, i.e., Hermitian matrices. In particular, we assume that the number of parameters \(n_{\rm p}\) is at least 4, and the parameters are naturally grouped into two groups, of size 3 (configurational parameters) and \(m=n_{\rm p}-3\) (control parameters). We denote the configurational space as \({\rm Cf}^{3}\) and the control space as \({\rm Ct}^{m}\).
For a fixed set of the control parameters, the energy eigenvalues as functions of the configurational parameters ('energy bands') might exhibit generic twofold degeneracies (Weyl points) or more exotic degeneracy patterns [34; 35]. Let us focus our attention on degeneracies between two specific bands; without loss of generality, we choose the bands of the ground state and the first excited state. As the control parameters are varied continuously, the degeneracy points 'evolve': generically, the Weyl points of the two lowest-energy bands move in the configurational space, and for special values of the control parameters, Weyl points can merge and annihilate, or Weyl points can be created. Control parameter values where Weyl points are created or annihilated are regarded as 'phase boundaries', separating different regions ('phases') in the control space characterized by different numbers of Weyl points. We call this partitioning of the control parameter space a 'Weyl phase diagram'.
Next, we argue that the Weyl phase diagram is actually a special case of a pre-image phase diagram, introduced in the previous subsections for \(m=1,2,3\). What is the corresponding source manifold, projection map, and target manifold? The source manifold is the 'surface' \({\rm W}^{m}\subset{\rm Cf}^{3}\times{\rm Ct}^{m}\) drawn by the Weyl points in the product of the configuration space and the control space. Recall that Weyl points are isolated points (i.e., zero-dimensional objects) in the configurational space, and the product of the configuration space and the control space is \((3+m)\)-dimensional, hence the Weyl points draw an \(m\)-dimensional manifold \({\rm W}^{m}\) in the product space.
The projection map \(\pi:{\rm W}^{m}\rightarrow{\rm Ct}^{m},(k,t)\mapsto t\) is defined as the projection from the \(m\)-manifold of Weyl points to the control space. The counting function of pre-images \(N\), defined in the previous subsections, can also be defined for this projection map \(\pi\), and the corresponding pre-image phase diagram provides the Weyl phase diagram.
To conclude, we found that under generic conditions, a Weyl phase diagram of dimension \(m\) is a pre-image phase diagram of a specific projection map, and hence its geometric features are universal: the phase diagram consists of extended regions (phases) where the number of Weyl points is constant, these phases are separated by phase boundaries formed by the singular values of the projection map, and these phase-boundary points carry universal geometrical characteristics determined by their singularity class. In particular, for \(m=2\), the phase boundary consists of fold lines that may meet at cusp points, and for \(m=3\), the phase boundary consists of fold surfaces, that may meet in cusp lines, that may meet in swallowtail points. We note that the list of singularities is enriched further as \(m\) increases above 3, as exemplified in the lowest block of Table 1.
## III Swallowtail singularity in a Weyl-Josephson circuit
To demonstrate the Weyl-point singularities in a concrete physical system, we consider the Weyl Josephson circuit, originally proposed in Fig. 1 of [30]. The circuit consists of superconductor islands connected by Josephson junctions which form loops (Fig. 2a). In this setup, Weyl points are defined in a 3D configurational space (fluxes), the 3D control space consists of gate-voltage parameters, and the singularities (fold surfaces, cusp lines, and the swallowtail point) appear in the 3D Weyl phase diagram defined in the control (gate-voltage) parameter space.
### Hamiltonian
The Hamiltonian of the circuit reads
\[\hat{H}(\mathbf{\varphi},\mathbf{n}_{\rm g}) = E_{\rm C}\left(\hat{\mathbf{n}}-\mathbf{n}_{\rm g}\right)\cdot c^{-1}\left(\hat{\mathbf{n}}-\mathbf{n}_{\rm g}\right) - \sum_{\begin{subarray}{c}\alpha,\beta=0\\ \alpha<\beta\end{subarray}}^{3}E_{\rm J,\alpha\beta}\cos\left[\hat{\varphi}_{\alpha}-\hat{\varphi}_{\beta}+\gamma_{\alpha\beta}(\varphi_{x},\varphi_{y},\varphi_{z})\right]. \tag{1}\]
The first term in the Hamiltonian is the charging energy term where the charging energy scale \(E_{\rm C}=(2e)^{2}/(2C_{0})\approx 77.5\) GHz is set by the capacitance scale \(C_{0}=1\) fF, and \(c=C/C_{0}\) is the dimensionless capacitance matrix defined from the capacitance matrix [36]\(C\) of the circuit. The elements of the vector \(\hat{\mathbf{n}}=(\hat{n}_{1},\hat{n}_{2},\hat{n}_{3})\) are the number operators \(\hat{n}_{\alpha}\) counting the Cooper pairs on the islands \(\alpha\in\{1,2,3\}\). The gate voltage \(V_{\rm g,\alpha}\) coupled to the \(\alpha\)th island through the capacitance \(C_{\rm g,\alpha}\) shifts the number operator in the Hamiltonian by the effective offset charge \(n_{\rm g,\alpha}=C_{\rm g,\alpha}V_{\rm g,\alpha}/(2e)\).
The second term in the Hamiltonian is the tunneling term with the Josephson energies \(E_{\rm J,\alpha\beta}\) of the junctions between islands \(\alpha\) and \(\beta\), with the phase operators \(\hat{\varphi}_{i}\) canonically conjugate to the number operators \(\hat{n}_{i}\). The control angles \(\gamma_{\alpha\beta}\) are given by \(\gamma_{0\beta}=0\), \(\gamma_{12}=\varphi_{x}\), \(\gamma_{13}=-\varphi_{z}\), and \(\gamma_{23}=\varphi_{y}\), with the magnetic fluxes \(\varphi_{i}=\pi\Phi_{i}/\Phi_{0}\) of the loops. The Josephson energies and capacitances are given in Table 2.
The Hamiltonian is truncated to the 8-dimensional subspace spanned by the number operator eigenstates \(|n_{1},n_{2},n_{3}\rangle\) with \(n_{i}\in\{0,1\}\). Degeneracies between the ground state and the first excited state are investigated. The magnetic fluxes and offset charges give an \(n_{\rm p}=6\)-dimensional total parameter space divided into \(3+3\), where we choose the magnetic fluxes to be the configurational parameters hosting the Weyl points.
### Symmetries
The Hamiltonian has an effective time-reversal and inversion symmetry
\[H(-\mathbf{\varphi},\mathbf{n}_{\rm g}) = H^{*}(\mathbf{\varphi},\mathbf{n}_{\rm g}), \tag{2}\] \[H(-\mathbf{\varphi},1-\mathbf{n}_{\rm g}) = PH(\mathbf{\varphi},\mathbf{n}_{\rm g})P^{-1}, \tag{3}\]
with \(P\left|n_{1},n_{2},n_{3}\right\rangle=|1-n_{1},1-n_{2},1-n_{3}\rangle\). The consequence of Eq. (2) is that \(H(\mathbf{\varphi},\mathbf{n}_{\rm g})\) and \(H(-\mathbf{\varphi},\mathbf{n}_{\rm g})\) have the same spectrum, meaning that a Weyl point located at \(\mathbf{\varphi}_{\rm WP}\) has a time-reversal partner with the same chirality at \(-\mathbf{\varphi}_{\rm WP}\) for any \(\mathbf{n}_{\rm g}\). The two symmetries together imply that \(H(\mathbf{\varphi},\mathbf{1}/\mathbf{2})=PH^{*}(\mathbf{\varphi},\mathbf{1}/\mathbf{2})P^{-1}\) with \(\mathbf{1}/\mathbf{2}:=(1/2,1/2,1/2)\) for any \(\mathbf{\varphi}\), meaning that it is possible to perform a constant (\(\mathbf{\varphi}\)-independent) basis transformation such that \(UH(\mathbf{\varphi},\mathbf{1}/\mathbf{2})U^{-1}\) is a real-valued matrix. This lowers the codimension of the band crossings to 2 in the special control point \(\mathbf{n}_{\rm g}=\mathbf{1}/\mathbf{2}\), meaning that the general degeneracy pattern in the 3-dimensional configurational space is a 1-dimensional nodal loop [30; 31].
Due to the periodicity of the configurational (flux) parameter space, the total topological charge, i.e., the sum of topological charges of all the Weyl points is zero [37]. Therefore, the number of Weyl points must be even. Due to the additional conditions that (1) Weyl points come in time-reversal pairs, and (2) the two Weyl points of a time-reversed pair carry the same topological charge, the number of Weyl points must be a multiple of 4.
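As a concrete illustration of this construction, the following minimal Python sketch builds the truncated Hamiltonian and verifies Eq. (2) numerically. The gauge choice \(\hat{\varphi}_{0}=0\), the sign convention for the Cooper-pair transfer operator, and the standard construction of the circuit capacitance matrix from Table 2 are our assumptions and may differ from the conventions of [30; 36]:

```python
import numpy as np
from functools import reduce

# Parameters of Table 2 (GHz and fF); gate capacitances C_g = 0.1 fF, E_C ~ 77.5 GHz
EJ = {(0, 1): 2, (0, 2): 4, (0, 3): 6, (1, 2): 3, (1, 3): 3, (2, 3): 6}
CJ = {(0, 1): 2, (0, 2): 1, (0, 3): 2, (1, 2): 3, (1, 3): 4, (2, 3): 3}
CG, C0, EC = 0.1, 1.0, 77.5

# Standard circuit capacitance matrix for islands 1..3 (island 0 is ground); cf. [36]
C = CG * np.eye(3)
for (a, b), c in CJ.items():
    for i in (a, b):
        if i > 0:
            C[i - 1, i - 1] += c
    if a > 0 and b > 0:
        C[a - 1, b - 1] -= c
        C[b - 1, a - 1] -= c
cinv = np.linalg.inv(C / C0)                 # dimensionless c^{-1} of Eq. (1)

I2 = np.eye(2)
LOW = np.array([[0.0, 1.0], [0.0, 0.0]])     # removes one Cooper pair: |1> -> |0>
NUM = np.diag([0.0, 1.0])                    # island number operator in the {0,1} basis

def island_op(op, i):
    """Embed a single-island operator acting on island i in the 8-dim charge basis."""
    return reduce(np.kron, [op if j == i else I2 for j in (1, 2, 3)])

def hamiltonian(phi, ng):
    """Truncated Hamiltonian of Eq. (1); phi = (phi_x, phi_y, phi_z), ng = (ng1, ng2, ng3)."""
    gamma = {(0, 1): 0.0, (0, 2): 0.0, (0, 3): 0.0,
             (1, 2): phi[0], (1, 3): -phi[2], (2, 3): phi[1]}
    n_ops = [island_op(NUM, i) for i in (1, 2, 3)]
    H = np.zeros((8, 8), dtype=complex)
    for i in range(3):                       # charging term E_C (n - ng) c^{-1} (n - ng)
        for j in range(3):
            H += EC * cinv[i, j] * ((n_ops[i] - ng[i] * np.eye(8))
                                    @ (n_ops[j] - ng[j] * np.eye(8)))
    for (a, b), ej in EJ.items():            # tunneling: -E_J cos(phi_a - phi_b + gamma)
        D = island_op(LOW, b) if a == 0 else island_op(LOW.T, a) @ island_op(LOW, b)
        H -= 0.5 * ej * (np.exp(1j * gamma[(a, b)]) * D
                         + np.exp(-1j * gamma[(a, b)]) * D.conj().T)
    return H

# Numerical check of the effective time-reversal symmetry, Eq. (2)
phi, ng = np.array([0.3, 1.1, -0.4]), np.array([0.45, 0.52, 0.61])
assert np.allclose(hamiltonian(-phi, ng), hamiltonian(phi, ng).conj())
gap = np.diff(np.linalg.eigvalsh(hamiltonian(phi, ng)))[0]  # Weyl points sit at gap = 0
```

Weyl points can then be located as zeros of the gap between the two lowest bands as the fluxes are scanned.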
### Weyl points
To investigate exotic Weyl-point merging processes, one needs as many Weyl points in the configurational space as possible. To achieve this, we search for Weyl points in the vicinity of the nodal-loop control parameter point \(\mathbf{n}_{\rm g,loop}=\mathbf{1}/\mathbf{2}\). This is advantageous, as the nodal loop can be used as a source of Weyl points: upon a small perturbation \(\mathbf{n}_{\rm g,loop}+\delta\mathbf{n}_{\rm g}\), the nodal loop breaks into multiple Weyl points. The perturbation \(\mathbf{n}_{\rm g,loop}+t\mathbf{e}\) in the direction \(\mathbf{e}=(-4,1,9)/\sqrt{98}\) results in 8 Weyl points (4 time-reversal-symmetric Weyl-point pairs) for sufficiently small \(t\). For larger \(t\), the 8-point region curves away from the straight line.
Fig. 2c-e show 2D cuts of the Weyl phase diagram which reveal the characteristic shape of a swallowtail singularity, corresponding to the interaction of 4 Weyl points with alternating topological charges (the time-reversal partners are far away). In Fig. 2c, for \(n_{\rm g,3}=0.6\), the 8-point region (yellow) appears with a triangular shape with 3 different boundaries; in Fig. 2d this triangle shrinks, and in Fig. 2e it is absent. This corresponds to the 2D \((t_{2},t_{3})\) cuts of the swallowtail shown in Fig. 2b. The boundaries are fold lines, which correspond to the merger and annihilation of 2 oppositely charged Weyl points (see Fig. 2f-h). This can happen between the 2 leftmost, between the 2 middle, or between the 2 rightmost points. The mergers of the 2 leftmost and the 2 rightmost Weyl points are independent, hence these fold lines intersect at a point where the two mergers coincide (Fig. 2i). The merger of the 2 middle points and the merger of the 2 leftmost (rightmost) points are not independent: their fold lines touch each other at a cusp point, which corresponds to the merger of the 3 leftmost (rightmost) points, see Fig. 2j. The Weyl phase diagram is actually three-dimensional, with fold surfaces and cusp lines. The two cusp lines touch each other at the swallowtail point, where the 4 Weyl points merge at a single point. This is illustrated in Fig. 2d, where the triangular 8-point region has almost disappeared, and in Fig. 2k, where the corresponding Weyl-point configuration at the '+' marker shows 4 Weyl points close together. We found that the actual swallowtail point is at \(\mathbf{n}_{\rm g,swallowtail}=(0.418,0.481,0.735)\).
## IV Cusp and fold singularities in superconducting systems of class D
In the preceding sections, we focused on parameter-dependent \(n\times n\) Hermitian matrices with a 3D configurational space and an \(m\)-dimensional control space. Our considerations above, concerning the Weyl points of these matrices, hold for any pair of neighboring bands \((j,j+1)\), where \(1\leq j\leq n-1\). For these Hermitian matrices, the Weyl points are zero-dimensional objects in the 3D configuration space.
There are many quantum-mechanical models where the Hamiltonian is not a generic Hermitian matrix, but a constrained one. In particular, the tenfold-way classification of Altland and Zirnbauer defines Hamiltonian classes constrained by various combinations of time-reversal, particle-hole, and chiral symmetries [38]. In this section, we present our results corresponding to the Altland-Zirnbauer class D, which represents Hamiltonians (also called Bogoliubov-de Gennes Hamiltonians or BdG Hamiltonians) describing excitations in superconductors or hybrid normal-superconductor systems. A typical setup modelled by matrices of class D is a (possibly multi-terminal) Josephson junction in the presence of spin-orbit interaction and time-reversal-breaking magnetic fields, and in the absence of charging effects [39; 40; 41].
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline \(\alpha\beta\) & 01 & 02 & 03 & 12 & 13 & 23 \\ \hline \(E_{\rm J,\alpha\beta}\) (GHz) & 2 & 4 & 6 & 3 & 3 & 6 \\ \(C_{\alpha\beta}\) (fF) & 2 & 1 & 2 & 3 & 4 & 3 \\ \hline \end{tabular}
\end{table}
Table 2: Weyl–Josephson circuit parameters used in the numerical calculations yielding Fig. 2c-k. Gate capacitances are set to \(C_{\rm g,1}=C_{\rm g,2}=C_{\rm g,3}=0.1\,\mathrm{fF}\).
Non-interacting models of one-dimensional topological superconductors hosting Majorana zero modes also fall into class D [42].
Studying the properties of parameter-dependent class-D matrices is motivated by the intense experimental efforts on superconducting devices modelled by such matrices. However, here we focus on class D also because certain aspects of the singularity-theory analysis of their Weyl points can be visualised in a particularly straightforward manner using surface plots. To appreciate this, we first note that class D matrices have even dimension, i.e., \(n=2n_{s}\) with \(n_{s}\) a positive integer. Furthermore, the eigenvalue spectrum of class D matrices is symmetric with respect to zero. Finally, the generic eigenvalue degeneracies between bands \(n_{s}\) and \(n_{s}+1\), which necessarily happen at zero energy and are sometimes referred to as 'parity switches', are special in the sense that they appear for _single-parameter_ families of matrices, as opposed to Weyl points of Hermitian matrices, which require three parameters to be varied. In what follows, we will use the term 'zero-energy Weyl points' for parity switches of single-parameter class D matrix families.
Consider now a physical system described by a parameter-dependent class-D matrix, where the number of parameters is \(n_{\rm p}=3\), grouped into a single parameter defining the 1D configurational space and two other parameters forming the control space of dimension \(m=2\). We might be interested in the number of zero-energy Weyl points in the configurational space, and how that number changes as the control parameters are varied. This dependence is characterized by the zero-energy Weyl phase diagram. This zero-energy Weyl phase diagram has certain universal geometric properties, which follow from Whitney's theorem describing the singularities of mappings between 2D manifolds. Namely, the zero-energy Weyl phase diagram consists of extended regions of finite area where the number of zero-energy Weyl points is constant, and phase boundaries constructed from fold lines that might meet in cusp points.
Figure 2: Swallowtail singularity in the Weyl–Josephson circuit. (a) The layout of the Weyl–Josephson circuit. The system can be tuned by changing the magnetic fluxes \(\mathbf{\varphi}\) through the loops or the voltage differences (offset charges \(\mathbf{n}_{\rm g}\)) between the superconducting islands. All other parameters, such as the Josephson energies and capacitances, are kept constant. (b) Swallowtail singularity illustrated with the roots of the depressed quartic equation \(x^{4}+t_{1}x^{2}+t_{2}x+t_{3}=0\). The merger of (real-valued) roots corresponds to the characteristic self-intersecting surface in the 3D control parameter space of coefficients. (c-e) 2D cuts of the Weyl phase diagram of control parameters, showing the number of Weyl points in the configurational parameter space and revealing a similar structure. The triangular 8-point region disappears as \(n_{\rm g3}\) is increased. The generic boundaries between the regions are fold lines, which can cross each other and also touch at cusp points. These cusp points form two cusp lines in the 3D Weyl phase diagram, which touch at the swallowtail point. The possible Weyl-point mergers in the configurational space corresponding to the points marked in the Weyl phase diagram are shown in panels (f-k). The points are colored by their topological charge: red (blue) points have charge +1 (-1).

We now illustrate these universal geometric properties using a random-matrix approach. The BdG Hamiltonian depends on 3 parameters in the following way:
\[\begin{split} H(\alpha,\beta,\gamma) ={}& H_{1}\cos(\alpha)\cos(\beta)\cos(\gamma) + H_{2}\cos(\alpha)\cos(\beta)\sin(\gamma)\\ &+ H_{3}\cos(\alpha)\sin(\beta)\cos(\gamma) + H_{4}\cos(\alpha)\sin(\beta)\sin(\gamma)\\ &+ H_{5}\sin(\alpha)\cos(\beta)\cos(\gamma) + H_{6}\sin(\alpha)\cos(\beta)\sin(\gamma)\\ &+ H_{7}\sin(\alpha)\sin(\beta)\cos(\gamma) + H_{8}\sin(\alpha)\sin(\beta)\sin(\gamma),\end{split}\tag{4}\]
where \(H_{n}\) are random matrices with the structure
\[H_{n}=\begin{pmatrix}H_{0,n}&\Delta_{n}\\ -\Delta_{n}^{*}&-H_{0,n}^{*}\end{pmatrix}, \tag{5}\]
where \(\Delta_{n}\) is a skew-symmetric complex matrix and \(H_{0,n}\) is Hermitian. We constructed these matrices with \(\Delta_{n}=d_{n}-d_{n}^{\rm T}\) and \(H_{0,n}=h_{n}+h_{n}^{\dagger}\) where the entries are pseudo-random numbers between -1/2 and 1/2, defined via
\[\operatorname{Re}d_{n,kl} = \left\{\sqrt{2}k+\sqrt{3}l+\sqrt{5}n\right\}-\frac{1}{2}, \tag{6}\]
\[\operatorname{Im}d_{n,kl} = \left\{\sqrt{6}k+\sqrt{7}l+\sqrt{10}n\right\}-\frac{1}{2}, \tag{7}\]
\[\operatorname{Re}h_{n,kl} = \left\{\sqrt{11}k+\sqrt{13}l+\sqrt{14}n\right\}-\frac{1}{2}, \tag{8}\]
\[\operatorname{Im}h_{n,kl} = \left\{\sqrt{15}k+\sqrt{17}l+\sqrt{19}n\right\}-\frac{1}{2}, \tag{9}\]
where \(\{x\}\) denotes the fractional part of \(x\). In this example, the full BdG Hamiltonian is a \(12\times 12\) matrix, i.e., its dimension is \(n=12\).
The BdG Hamiltonian is skew-symmetric in the Majorana basis, resulting in a symmetric spectrum. It also has a so-called Pfaffian, for which \(\operatorname{pf}(H)^{2}=\det(H)\). The Pfaffian is a polynomial of the entries of the matrix. It changes sign when two energy levels cross at zero. Therefore, zero-energy degeneracies appear upon the fine-tuning of only 1 parameter. In a 3D parameter space they generally form a 2D manifold.
Fig. 3a shows the zero-energy degeneracy surface of the pseudo-random BdG Hamiltonian in the total parameter space. The figure is produced by calculating the Pfaffian on a \(100\times 100\times 100\) grid and highlighting points where the Pfaffian changes sign. We divided the total parameter space into the configurational parameter \(\gamma\) and the control parameters \((\alpha,\beta)\). For fixed \(\alpha\) and \(\beta\), we counted the sign changes of the Pfaffian along the \(\gamma\) axis; plotting these counts as a function of \(\alpha\) and \(\beta\) provides the zero-energy Weyl phase diagram shown in Fig. 3b. We created this phase diagram using a \(200\times 200\times 200\) grid.
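To make the recipe concrete, here is a minimal Python sketch of the construction with reduced grid sizes. The indexing conventions for \(k,l,n\) in Eqs. (6)-(9), the Majorana-basis convention, and the Schur-based Pfaffian-sign evaluation are our choices; this is an illustration of the procedure, not the exact code used for Fig. 3:

```python
import numpy as np
from scipy.linalg import schur

NS = 6                                   # h_n, Delta_n are NS x NS, so the BdG matrix is 12 x 12

def frac(x):
    return x - np.floor(x)               # fractional part {x} of Eqs. (6)-(9)

def build_terms():
    """The eight BdG blocks H_n of Eq. (5), with pseudo-random entries from Eqs. (6)-(9)."""
    k = np.arange(1, NS + 1)[:, None]
    l = np.arange(1, NS + 1)[None, :]
    terms = []
    for n in range(1, 9):
        d = (frac(np.sqrt(2)*k + np.sqrt(3)*l + np.sqrt(5)*n) - 0.5
             + 1j*(frac(np.sqrt(6)*k + np.sqrt(7)*l + np.sqrt(10)*n) - 0.5))
        h = (frac(np.sqrt(11)*k + np.sqrt(13)*l + np.sqrt(14)*n) - 0.5
             + 1j*(frac(np.sqrt(15)*k + np.sqrt(17)*l + np.sqrt(19)*n) - 0.5))
        Delta, H0 = d - d.T, h + h.conj().T          # skew-symmetric and Hermitian blocks
        terms.append(np.block([[H0, Delta], [-Delta.conj(), -H0.conj()]]))
    return terms

TERMS = build_terms()

def bdg(alpha, beta, gamma):
    """Parameter-dependent class-D Hamiltonian of Eq. (4)."""
    c, s = np.cos, np.sin
    w = [c(alpha)*c(beta)*c(gamma), c(alpha)*c(beta)*s(gamma),
         c(alpha)*s(beta)*c(gamma), c(alpha)*s(beta)*s(gamma),
         s(alpha)*c(beta)*c(gamma), s(alpha)*c(beta)*s(gamma),
         s(alpha)*s(beta)*c(gamma), s(alpha)*s(beta)*s(gamma)]
    return sum(wi*Hi for wi, Hi in zip(w, TERMS))

def pfaffian_sign(H):
    """Sign of the Pfaffian of H in a Majorana basis, via the real Schur decomposition."""
    n = H.shape[0] // 2
    I = np.eye(n)
    W = np.block([[I, I], [1j*I, -1j*I]]) / np.sqrt(2)
    A = np.real(-1j * (W @ H @ W.conj().T))          # real antisymmetric for class D
    A = 0.5 * (A - A.T)                              # clean up rounding errors
    T, Z = schur(A, output='real')                   # A = Z T Z^T, T antisymmetric block-diagonal
    lam = T[np.arange(0, 2*n, 2), np.arange(1, 2*n, 2)]
    return np.sign(np.linalg.det(Z)) * np.prod(np.sign(lam))

def phase_diagram(n_grid=20):
    """Zero-energy Weyl point count: Pfaffian sign changes along gamma for each (alpha, beta)."""
    angles = np.linspace(0.0, np.pi, n_grid, endpoint=False)
    gammas = np.linspace(0.0, 2*np.pi, 2*n_grid, endpoint=False)
    counts = np.zeros((n_grid, n_grid), dtype=int)
    for i, a in enumerate(angles):
        for j, b in enumerate(angles):
            s = np.array([pfaffian_sign(bdg(a, b, g)) for g in gammas])
            counts[i, j] = int(np.sum(s != np.roll(s, 1)))   # sign flips around the gamma circle
    return counts
```

With larger grids this reproduces the qualitative structure of Fig. 3b, including the fold lines and cusp points.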
The parameter dependence of the Hamiltonian in Eq. (4) is such that shifting any angle by \(\pi\) results in the negative of the Hamiltonian, e.g., \(H(\alpha,\beta,\gamma+\pi)=-H(\alpha,\beta,\gamma)\). Because the negative of the Hamiltonian has the same (symmetric) spectrum, the Weyl phase diagram is \(\pi\) periodic, and at generic points the Weyl-point number is divisible by 4.
The zero-energy Weyl phase diagram of the BdG Hamiltonian shows a rich structure with the stable singularities of maps between 2D manifolds: fold lines meeting in cusp points and crossing each other. Fig. 3 highlights the interval \(-1\leq\alpha,\beta\leq 1/2\), which resembles a 2D cut of the phase diagram of a swallowtail singularity. An additional angle parameter might complete the swallowtail singularity if the two cusp lines meet upon changing the new parameter. The total phase diagram is crowded with singularities. This structure of singularities becomes more complicated upon increasing the dimension of the Hilbert space (not shown), because this also increases the number of zero-energy Weyl points in the configurational space, leading to more possible mergers between them.
## V Discussion
### When does the set of Weyl points form a manifold?
In Sec. II.4, we have argued that the set of Weyl points in the total parameter space \(\operatorname{Cf}^{3}\times\operatorname{Ct}^{m}\) forms a manifold. Based on this precondition, we highlighted and exploited a strong connection between the Weyl-point merging processes and the stable mappings between manifolds of equal dimension. We discuss this precondition further in this subsection.
The \(n\times n\) Hermitian matrices form an \(n^{2}\)-dimensional real vector space. The subset of two-fold degenerate matrices is an \((n^{2}-3)\)-dimensional (codimension-3) submanifold [34; 35]. Furthermore, the set of matrices with a two-fold degeneracy between the \(i\)-th and \((i+1)\)-th eigenvalues and those with a two-fold degeneracy between the \((i+1)\)-th and \((i+2)\)-th eigenvalues meet at points with a three-fold degeneracy, of dimension \(n^{2}-8\). In the following we denote the two-fold degeneracy set between the \(i\)-th and \((i+1)\)-th eigenvalues by \(\Sigma\). Note that our arguments remain true for the whole two-fold degeneracy set.

Figure 3: Fold lines and cusp points as singularities in the zero-energy Weyl phase diagram of a random BdG Hamiltonian. (a) Due to the particle-hole symmetry, zero-energy degeneracies appear with the fine-tuning of a single parameter. Therefore, zero-energy degeneracies appear as surfaces in a 3D parameter space. (b) Zero-energy Weyl phase diagram corresponding to the vertical projection of the surface in (a), i.e., obtained by counting the degeneracy points along the vertical direction. The phase diagram exhibits the generic and robust singularities in 2D: fold lines and cusp points.
The Hamiltonian of a physical system is a map \(H:\mathrm{Cf}^{3}\times\mathrm{Ct}^{m}\rightarrow\mathrm{Herm}(n)\) from the total parameter space to the space of \(n\times n\) Hermitian matrices. The set of Weyl points corresponding to the two-fold degeneracy set between the \(i\)-th and \((i+1)\)-th eigenvalues is the pre-image \(H^{-1}(\Sigma)\). According to the transversality theorem [43], a generic Hamiltonian map \(H\) is transverse to \(\Sigma\) (intuitively, 'non-tangential'), and the pre-image \(H^{-1}(\Sigma)\) is a submanifold of codimension \(3\) in the total parameter space.
Based on the above considerations, we can envision situations when the set of Weyl points is _not_ a manifold. For example, this is the case when the image of the Hamiltonian map is tangential to the two-fold degeneracy set \(\Sigma\), i.e., the intersection is non-generic; or if the image of the Hamiltonian map intersects a multi-fold degeneracy set. The former case might arise in case of fine tuning or symmetries, i.e., it does not arise when the mapping is generic. The latter case is also non-generic if \(n_{\mathrm{p}}<8\), e.g., in the case \(n_{\mathrm{p}}=6\) studied in Sec. III. However, for \(n_{\mathrm{p}}\geq 8\), stable intersections of the image of the Hamiltonian map and the multi-fold degeneracy sets can arise, and the whole degeneracy set is not a manifold. In this case, our argument is still valid _locally_, in a small neighbourhood of a two-fold degeneracy with a finite gap from the other levels.
Note also that for our argument a further condition should hold as well, namely, the projection \(\pi\) has to be generic. Even though we assumed that \(H\) is generic, \(\pi\) is not necessarily a generic map. Without providing a full analysis of this condition, we note that if \(x\mapsto H(x,t)\) is generic as a deformation of \(x\mapsto H(x,t_{0})\) for every \(t_{0}\), then the condition is satisfied.
### Not all singularities appear on Weyl phase boundaries
In Secs. II, III, and IV, we have argued and illustrated that Weyl phase diagrams are pre-image phase diagrams of mappings between manifolds of equal dimension, and that each point of a phase boundary on a Weyl phase diagram belongs to a singularity type. This result raises the following natural question: are all singularity types realised as phase-boundary points of Weyl phase diagrams? The answer is no, as we show in this subsection.
Sec. IV shows an example where the Hamiltonian has a symmetry which lowers the codimension of a two-fold degeneracy at zero energy to \(1\). The corresponding configurational parameter space is therefore \(1\)-dimensional, with \(1\)D Weyl points. Similarly, for \(\mathcal{PT}\)-symmetric Hamiltonians the codimension of a two-fold degeneracy is \(2\); thus, the configurational space is \(2\)-dimensional, with \(2\)D Weyl points. We denote the codimension of the two-fold degeneracy by \(l\), with \(0<l\leq 3\).
The \(n_{\mathrm{p}}=l+m\)-dimensional total parameter space has an \(m\)-dimensional Weyl submanifold \(\mathrm{W}^{m}\). We defined the projection \(\pi:\mathrm{W}^{m}\rightarrow\mathrm{Ct}^{m}\) as the map that erases the first \(l\) (configurational) coordinates of the points of the Weyl manifold. Defining the 'total projection' \(\Pi:\mathrm{Cf}^{l}\times\mathrm{Ct}^{m}\rightarrow\mathrm{Ct}^{m}\) by the same formula \(\Pi(x,t)=t\) on the total parameter space, we get a mapping whose Jacobian has an \(l\)-dimensional kernel everywhere in the domain of \(\Pi\). Restricting the total projection \(\Pi\) to \(\mathrm{W}^{m}\) results in \(\pi=\Pi|_{\mathrm{W}^{m}}\). Therefore, the mapping \(\pi\) has corank smaller than or equal to \(l\). Recall that the corank of a map \(f\) at a point \(w\) of the domain is defined as the corank of the Jacobian matrix \(\mathrm{Jac}_{w}(f)\) of \(f\) at \(w\). Clearly, the corank of \(\pi\) at a point \(w\in\mathrm{W}^{m}\) is exactly the dimension of the intersection of the \(m\)-dimensional tangent space \(T_{w}\mathrm{W}^{m}\) of \(\mathrm{W}^{m}\) at \(w\) and the \(l\)-dimensional kernel of the Jacobian \(\mathrm{Jac}_{w}(\Pi)\) of \(\Pi\) at \(w\). Since this corank is at most \(l\), in a Weyl-point merging process only those singularities appear whose corank is less than or equal to the dimension \(l\) of the configurational parameter space.
Concrete examples of 'missing singularities' are the elliptic umbilic point and the hyperbolic umbilic point, listed as the last two entries in Table 1, which cannot appear as stable features in zero-energy Weyl phase diagrams of class-D systems.
As seen from Table 1, these singularities are characteristic of generic maps between manifolds of dimension \(m=4\), hence it is plausible to search for them in zero-energy Weyl phase diagrams of class-D systems controlled by \(5\) parameters (\(l=1\) configurational, \(m=4\) control). However, as seen from the corresponding canonical forms in Table 1, the corank of these singularities is \(2\). The consideration of the previous paragraph, on the other hand, implies that the corank of the projection map \(\pi\) is at most \(l\), which is \(1\) in this case. As a consequence, umbilic points do not appear on zero-energy Weyl phase diagrams of class D systems.
## VI Conclusions
To conclude, we have argued that singularities of maps between manifolds of equal dimension naturally appear in Weyl phase diagrams of parameter-dependent Hermitian matrices, and illustrated this by numerical results revealing the swallowtail singularity in the gate-voltage control parameter space of a Weyl Josephson junction. We have also illustrated singularities (fold and cusp) on the zero-energy Weyl phase diagram of parameter-dependent class-D Hermitian matrices, which describe superconducting nanostructures. Based on our arguments, we expect that the results generalise to a broad range of systems; for example, Weyl phase diagrams representing Weyl-point creation and annihilation in electron (phonon, magnon, photon) band structures show similar universal geometrical features characterised by singularities.
###### Acknowledgements.
We thank Z. Guba for useful discussions. This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-0004), and by NKFIH via the OTKA Grant No. 132146. Supported by the UNKP-22-3-II-BME-6 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund.
Weyl points (WPs) are robust spectral degeneracies that cannot be split by small perturbations, as they are protected by their non-zero topological charge. For larger perturbations, oppositely charged WPs can merge, and the resulting neutral degeneracy disappears. The neutral degeneracy is unstable, meaning that it requires fine-tuning of the perturbation. Fine-tuning more parameters can lead to more exotic WP mergers. In this work, we reveal a fundamental connection between WP mergers and singularity theory: phase boundary points of Weyl phase diagrams (i.e., control parameter values at which WP mergers occur) can be classified according to singularity classes of maps between manifolds of equal dimension. In a Weyl-Josephson circuit, the merger of 4 WPs draws a swallowtail singularity, and a random BdG Hamiltonian reveals a rich pattern of fold lines and cusp points.
2309.15715 | Wrinkling and Haefliger structures | Wrinkling is an $h$-principle technique, introduced by Eliashberg and
Mishachev, that can be used to prove statements of the form: "formal solutions
of a partial differential relation $\mathcal{R}$ can be deformed to
singular/wrinkled solutions". What a wrinkled solution is depends on the
context, but the overall idea is that it should be an object that fails to be a
solution due to the presence of mild/controlled singularities.
Much earlier, Haefliger structures were introduced by Haefliger as singular
analogues of foliations. Much like a foliation is locally modelled on a
submersion, a Haefliger structure is modelled on an arbitrary map. This implies
that Haefliger structures have better formal properties than foliations. For
instance, they can be pulled back by arbitrary maps and admit a classifying
space.
In [8], the second and third authors generalised the wrinkled embeddings of
Eliashberg and Mishachev to arbitrary order. This paper can be regarded as a
sequel in which we deal instead with generalisations of wrinkled submersions.
The main messages are that: 1) Haefliger structures provide a nice conceptual
framework in which general wrinkling statements can be made. 2) Wrinkling can
be interpreted as holonomic approximation into the \'etale space of solutions
of the relation $\mathcal{R}$.
These statements imply connectivity statements relating (1) $\mathcal{R}$ to
its \'etale space of solutions and (2) the classifying space for foliations
with transverse $\mathcal{R}$-geometry to its formal counterpart. | Anna Fokma, Álvaro del Pino, Lauran Toussaint | 2023-09-27T15:12:26 | http://arxiv.org/abs/2309.15715v1 | # Wrinkling and Haefliger structures
###### Abstract.
Wrinkling is an \(h\)-principle technique, introduced by Eliashberg and Mishachev, that can be used to prove statements of the form: "formal solutions of a partial differential relation \(\mathcal{R}\) can be deformed to singular/wrinkled solutions". What a wrinkled solution is depends on the context, but the overall idea is that it should be an object that fails to be a solution due to the presence of mild/controlled singularities.
Key words and phrases: h-principle, differential relations, Haefliger structures, wrinkling. 2020 Mathematics Subject Classification: Primary: 57R30, 57R32, 57R45. Secondary: 58H05
## 1. Introduction
In [20], F. Laudenbach and G. Meigniez present the \(h\)-principle for geometric structures as a two-step process. The first step consists of producing a Haefliger microbundle for the given geometry, out of a given formal geometric structure. The second step is the so-called _regularisation_: using homotopies/surgeries one makes the base manifold transverse to the Haefliger microbundle, yielding a genuine geometric structure. Depending on the geometric problem, difficulties appear in each of the steps (or in both). Classical work of M. Gromov shows that the second step can always be achieved if the manifold is open, a particular case being precisely the \(h\)-principle for foliations of A. Haefliger [16, 17].
In [20], the authors carry out the first step for symplectic and contact structures. However, their proof uses Moser stability and, as such, cannot be adapted to other geometric structures, since most of them have a much smaller automorphism group.
The observation that motivated us to write the present article is that wrinkling techniques can be used to implement the first step for any open (or, more generally, microflexible and locally integrable) Diff-invariant partial differential relation \(\mathcal{R}\). That is, every formal solution of \(\mathcal{R}\) can be suitably homotoped to produce a Haefliger microbundle endowed transversely with a solution of \(\mathcal{R}\). This is stated as Theorem 1.11 below. To get to this statement we first prove Theorem 1.1, saying that every formal solution can be approximated by a wrinkled submersion into the etale space of solutions of \(\mathcal{R}\). We think of these as wrinkled/singular solutions of \(\mathcal{R}\).
The key ingredient behind both theorems is a parametric application of holonomic approximation. This approach is very closely related to the main construction in Y. Eliashberg's and N.M. Mishachev's work on _wrinkled submersions_ [10], which has later been used in other h-principles [2].
### Motivation
#### 1.1.1. Parametric and relative version
We can also state a version of Theorem 1.1 that is parametric and relative in both parameter and domain:
**Theorem 1.5**.: _Let \(K\) be a compact manifold serving as parameter space. Let \(F_{k}:M\to J^{r}\Psi\) be a \(K\)-family of sections. Suppose that they are holonomic over a neighbourhood of a closed subset \(M^{\prime}\subset M\) and whenever \(k\) belongs to a neighbourhood of a closed subset \(K^{\prime}\subset K\)._
_Then, there exists a family of maps \(G_{k}:M\to J^{\text{germs}}\Psi\) such that:_
* _Each_ \(G_{k}\) _is smooth in the etale sense and the whole family is continuous for the Whitney topology._
* \(p_{r}\circ G_{k}\) _and_ \(F_{k}\) _are_ \(C^{0}\)_-close and agree on neighbourhoods of_ \(M^{\prime}\) _and_ \(K^{\prime}\)_._
* \(p_{b}\circ G_{k}:M\to M\) _is a wrinkled family of submersions that is_ \(C^{0}\)_-close to the identity and formally homotopic to it._
We will prove a slightly stronger statement, phrased instead in terms of a single fibered wrinkled submersion \(G:M\times K\to J^{\text{germs}}(\Psi\times K)\) which, in particular, is smooth in the etale sense. This will reduce Theorem 1.5 to the non-parametric case.
### Wrinkled solutions of differential relations
Consider now a partial differential relation \(\mathcal{R}\subset J^{r}\Psi\) of order \(r\). The etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) of solutions of \(\mathcal{R}\) is an \(n\)-dimensional submanifold of \(J^{\text{germs}}\Psi\). By construction, \(p_{r}:J^{\text{germs}}\Psi\to J^{r}\Psi\) takes \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) to \(\mathcal{R}\), but not necessarily surjectively, since a jet in \(\mathcal{R}\) may not extend to a germ of solution.
Our wrinkled holonomic approximation Theorem 1.1 immediately implies that:
**Corollary 1.6**.: _Suppose that \(\mathcal{R}\) is open. Let \(F:M\to\mathcal{R}\) be a formal solution. Then, there exists a wrinkled submersion \(G:M\to\operatorname{EtSol}_{\mathcal{R}}^{M}\) satisfying the conclusions of Theorem 1.1 and additionally \(p_{r}\circ G\) is homotopic to \(F\) within \(\mathcal{R}\)._
We also state the parametric and relative version:
**Corollary 1.7**.: _Suppose that \(\mathcal{R}\) is open. Let \(K\) be a compact manifold serving as parameter space. Let \(F_{k}:M\to\mathcal{R}\subset J^{r}\Psi\) be a \(K\)-family of sections. Suppose that they are holonomic over a neighbourhood of a closed subset \(M^{\prime}\subset M\) and whenever \(k\) belongs to a neighbourhood of a closed subset \(K^{\prime}\subset K\)._
_Then, there exists a family \(G_{k}:M\to\operatorname{EtSol}_{\mathcal{R}}^{M}\) as in the conclusions of Theorem 1.5 and additionally satisfying: the projected family \(p_{r}\circ G_{k}\) is homotopic to the family \(F_{k}\), as maps into \(\mathcal{R}\), relative to \(M^{\prime}\) and \(K^{\prime}\)._
It will be apparent from the proof of Theorem 1.1 that Corollary 1.6 also holds when \(\mathcal{R}\) is microflexible and locally integrable. The same applies to the parametric versions. We leave this to the reader.
### Haefliger microbundles
In order to state our next results we need to introduce some notation. We also need to assume here that \(\mathcal{R}\) is a \(\operatorname{Diff}\)-invariant relation of order \(r\) and dimension \(n\). This means (see Subsection 2.3 for details) that for each \(n\)-dimensional manifold \(N\) we have a bundle \(\Psi\to N\), endowed with an action of \(\operatorname{Diff}(N)\), and a subset of \(J^{r}\Psi\), still denoted by \(\mathcal{R}\), that is invariant under the lift of said action. In this case \(J^{\text{germs}}\Psi\) also inherits an action of \(\operatorname{Diff}(N)\) which is smooth and leaves \(\operatorname{EtSol}_{\mathcal{R}}^{N}\) invariant.
We then consider an \(m\)-dimensional manifold \(M\), with \(m\) possibly different from the dimension \(n\) in which \(\mathcal{R}\) lives.
**Definition 1.8**.: _A (smooth) \(\Gamma_{\mathcal{R}}\)**-microbundle** on \(M\) is a triple \((E,\mathcal{F},F)\), where:_
* \(E\to M\) _is a rank-_\(n\) _vector bundle._
* \(\mathcal{F}\) _is a germ of codimension-_\(n\) _foliation on_ \(E\)_, transverse to the fibres of_ \(E\)_._
* \(F\) _is a germ of transverse_ \(\mathcal{R}\)_-structure on_ \(\mathcal{F}\)_._
The meaning of the notation \(\Gamma_{\mathcal{R}}\) will become clearer in Subsection 1.5 below.
If we drop \(F\), we obtain a Haefliger microbundle \((E,\mathcal{F})\) in the classic sense [16, 17]; it has nothing to do with \(\mathcal{R}\). We define its **singular locus** to be the subset in which \(M\) fails to be transverse to \(\mathcal{F}\). If the singular locus is empty, we say that \((E,\mathcal{F})\) is **regular**. The geometry is introduced via \(F\), which is a solution of \(\mathcal{R}\) over the leaf space of \(\mathcal{F}\). Do note that this only makes sense due to Diff-invariance. Transverse \(\mathcal{R}\)-structures are introduced in Definition 4.1.
**Remark 1.9**.: Two names that we have actively tried to avoid are Haefliger structure and \(\Gamma\)-structure. The reason is that these terms are used to mean slightly different things across the literature.
Hence, what Haefliger originally [17] called a \(\Gamma\)-structure, we will call a principal \(\Gamma\)-bundle (Section 4). Some authors assume principal bundles to have submersive moment map, but we do not (and it is crucial that we do not).
Principal \(\Gamma_{\mathcal{R}}\)-bundles are not the same, although they are closely related, to the \(\Gamma_{\mathcal{R}}\)-microbundles of Definition 1.8. We will study the former via the latter, which is a standard idea in the foliation literature and goes back to [17]. \(\bullet\)
We are particularly interested in the case in which \(m=n\). We speak of **tangential** microbundles when \(E=TM\). The easiest (but most interesting) examples arise as follows: fixing a metric on \(M\), and thus an exponential map, allows us to assign a \(\Gamma_{\mathcal{R}}\)-microbundle
\[\exp(f):=(TM,\ker(d\exp),f\circ\exp)\]
to each solution \(f\) of \(\mathcal{R}\) on \(M\). The leaf space of (the germ of) \(\ker(d\exp)\) is simply \(M\). It follows that leaf spaces of other tangential microbundles may be regarded as singular replacements of \(M\).
One can define formal analogues of \(\Gamma_{\mathcal{R}}\)-microbundles as well (see for more details Subsection 4.5):
**Definition 1.10**.: _A (smooth) \(\Gamma_{\mathcal{R}}^{f}\)**-microbundle** on \(M\) is a pair \((E,F)\), where:_
* \(E\to M\) _is a rank-_\(n\) _vector bundle._
* \(F\) _is a smoothly varying family of_ \(r\)_-jets of solutions on the fibres of_ \(E\)_, along the zero section._
Since \(\mathcal{R}\) is open, we could also ask \(F\) to be a smooth family of fibrewise solutions of \(\mathcal{R}\) in the fibres of \(E\), or a family of germs thereof. Up to homotopy all these objects are equivalent (Subsection 4.5.2).
Every formal solution \(F\) of \(\mathcal{R}\) over \(M\) can be lifted to a tangential \(\Gamma_{\mathcal{R}}^{f}\)-microbundle \(\exp(F)\) as well. The following is proven in Section 5:
**Theorem 1.11**.: _Let \(M\) be an \(n\)-manifold. Let \(F:M\to\mathcal{R}\subset J^{r}\Psi\) be a formal solution of \(\mathcal{R}\). Then, there exists a path of \(\Gamma_{\mathcal{R}}^{f}\)-microbundles starting at \(\exp(F)\) and finishing at a \(\Gamma_{\mathcal{R}}\)-microbundle \((TM,\mathcal{F},G)\)._
Let us explain briefly the relation between Theorems 1.1 and 1.11. Roughly speaking, the second one follows from the first by pulling back the universal \(\Gamma_{\mathcal{R}}\)-microbundle in the etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\). Moreover, the non-Hausdorff manifold constructed in Corollary 1.4 can be assumed to be the leaf space of \((TM,\mathcal{F})\).
#### 1.3.1. Parametric and relative version
For etale spaces we emphasised the distinction between the etale and Whitney topologies. This was important in order to discuss parametric statements. For \(\Gamma_{\mathcal{R}}\)-microbundles one can similarly talk about **concordances** and **Whitney continuous families**. Parametric statements differ drastically between the two. We need the latter for the parametric analogue of Theorem 1.11:
**Theorem 1.12**.: _Let \(M\) be an \(n\)-manifold. Let \(K\) be a compact manifold serving as parameter space. Let \(F_{k}:M\to J^{r}\Psi\) be a \(K\)-family of sections. Suppose that they are holonomic over a neighbourhood of a closed subset \(M^{\prime}\subset M\) and whenever \(k\) belongs to a neighbourhood of a closed subset \(K^{\prime}\subset K\). This means that they restrict to a Whitney continuous family of \(\Gamma_{\mathcal{R}}\)-microbundles close to \(K^{\prime}\) and \(M^{\prime}\)._
_Then, there exists a Whitney continuous family of tangential \(\Gamma_{\mathcal{R}}\)-microbundles \((TM,\mathcal{F}_{k},G_{k})\) homotopic to \(\exp(F_{k})\), relative to \(M^{\prime}\) and \(K^{\prime}\)._
We dedicate the rest of the introduction to spelling out some variations and consequences of our main theorems.
### Connectivity of etale space
Consider an open relation \(\mathcal{R}\subset J^{r}\Psi\to M\), not necessarily Diff-invariant. Theorem 1.1 tells us that we can upgrade formal solutions (i.e. sections) of \(\mathcal{R}\) to wrinkled submersions into \(\operatorname{EtSol}_{\mathcal{R}}^{M}\). We think of these as being multi-valued sections. If we give up on sections altogether, we can consider instead arbitrary maps into \(\mathcal{R}\) (with domain not necessarily \(M\)) and ask whether these lift to etale space. We prove the following result in Subsection 3.5:
**Proposition 1.13**.: _The map on homotopy groups induced by \(p_{r}:\operatorname{EtSol}_{\mathcal{R}}^{M}\to\mathcal{R}\) is an isomorphism in degree \(i<\dim M\) and is surjective if \(i=\dim M\)._
### Classifying spaces of geometric structures
We now turn our attention, once more, to differential relations \(\mathcal{R}\) that are open and Diff-invariant of dimension \(n\). One may replace openness by microflexibility and local integrability in all upcoming statements; this is left to the reader.
In Section 4 we associate to every Diff-invariant differential relation \(\mathcal{R}\) a (possibly non-Hausdorff and non second countable) etale Lie groupoid \(\Gamma_{\mathcal{R}}\). This was first proven by Haefliger for various well-studied geometries (symplectic, contact, complex) [17], but we are not aware of a statement in the literature at the required level of generality. The groupoid \(\Gamma_{\mathcal{R}}\) has a classifying space \(B\Gamma_{\mathcal{R}}\). One can construct formal counterparts \(\Gamma_{\mathcal{R}}^{f}\) and \(B\Gamma_{\mathcal{R}}^{f}\) as well, and they come with natural scanning maps \(\Gamma_{\mathcal{R}}\to\Gamma_{\mathcal{R}}^{f}\) and \(B\Gamma_{\mathcal{R}}\to B\Gamma_{\mathcal{R}}^{f}\). Then:
**Theorem 1.14**.: _Let \(\mathcal{R}\) be an open and \(\operatorname{Diff}\)-invariant relation of dimension \(n\). Then, the map \(B\Gamma_{\mathcal{R}}\to B\Gamma_{\mathcal{R}}^{f}\) is \(n\)-connected._
This result is a more functorial incarnation of Theorem 1.1. It is then immediate that:
**Corollary 1.15**.: _Let \(M\) be a manifold of dimension \(m\leq n\). The scanning map_
\[\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\to\operatorname{Maps}(M,B \Gamma_{\mathcal{R}}^{f})\]
_is \((n-m)\)-connected._
We think of the domain as the space of so-called **principal \(\Gamma_{\mathcal{R}}\)-bundles** on \(M\). These are introduced in Definition 4.6 but for now the reader can think of them as singular foliations on \(M\) with transverse \(\mathcal{R}\)-geometry. The target is the formal analogue.
**Remark 1.16**.: In the contact and symplectic settings better connectivity statements are known. The reason is that these geometries exhibit Moser stability, have large automorphism groups, and both stabilise (upon multiplying by \(\mathbb{R}\)) to geometries (even-contact and odd-symplectic) that abide by the \(h\)-principle and that correspond precisely to line fields with transverse contact/symplectic structure. This is extremely special. Our expectation, which we aim to tackle in future work, is that the connectivity of Theorem 1.14 is sharp for most relations \(\mathcal{R}\). In fact, we expect the \(n\)th homotopy group of \(B\Gamma_{\mathcal{R}}\) to be very large (uncountable).
McDuff proved in [22] that one can obtain \((n+1)\)-connectivity (i.e. one more than Theorem 1.14) for the map \(B\Gamma_{\mathcal{R}}\to B\Gamma_{\mathcal{R}}^{f}\) when \(\mathcal{R}\) is the relation defining contact and symplectic structures. Recently, this was improved to \(n+2\), in the contact case, by Nariman [25]. \(\bullet\)
A crucial point about Corollary 1.15 is that families in \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\) are continuous in the etale sense. However, it is also natural to consider families of principal \(\Gamma_{\mathcal{R}}\)-bundles that vary continuously with respect to the Whitney topology. These correspond to maps into a space that we denote by \(\operatorname{Maps}^{\operatorname{Wh}}(M,B\Gamma_{\mathcal{R}})\). Such families turn out to be much more flexible:
**Theorem 1.17**.: _Let \(M\) be a manifold of dimension \(m\leq n\). The scanning map_
\[\operatorname{Maps}^{\operatorname{Wh}}(M,B\Gamma_{\mathcal{R}})\to \operatorname{Maps}(M,B\Gamma_{\mathcal{R}}^{f})\]
_is a weak equivalence._
This full \(h\)-principle is an analogue of Theorem 1.5 in the setting of principal \(\Gamma_{\mathcal{R}}\)-bundles. All these statements follow from Propositions 5.1 and 5.4, which are non-tangential generalisations of Theorems 1.11 and 1.12, respectively.
**Remark 1.18**.: For contact and symplectic structures Theorem 1.17 was already known, being the main result in [20]. \(\bullet\)
### Structure of the paper
We discuss partial differential relations, Diff-invariance, and etale spaces of solutions in Section 2. Our theorems about wrinkling into etale space are proven in Section 3. In Section 4 we discuss Haefliger's viewpoint of classifying groupoids and we develop the language needed to deal with arbitrary Diff-invariant relations. This formalism is then combined with wrinkling in Section 5 to prove our results about Haefliger microbundles and connectivity of the associated classifying space.
### Acknowledgements
The authors want to thank L. Accornero for providing insightful comments on a preliminary version of this article.
The third author is funded by the Dutch Research Council (NWO) via the project "proper Fredholm homotopy theory" (project number OCENW.M20.195) of the research programme Open Competition ENW M20-3. In the very early stages of this project the second author was funded by the NWO grant 016.Veni.192.013 "Topology of bracket-generating distributions".
## 2. Preliminaries: Diff-invariant relations
In this section we recall the definition and basic properties of partial differential relations encoded as subsets of jet spaces. We refer the reader to [15, 12] as the standard introductions to this viewpoint. We put particular emphasis on discussing:
* Etale spaces of solutions and the different (quasi)topologies that they admit (Subsection 2.2),
* Natural bundles and the nature of Diff-invariance (Subsection 2.3).
### Solutions and formal solutions
Let \(M\) be a manifold and \(\mathcal{R}\subset J^{r}\Psi\to M\) a partial differential relation.
We write \(\Gamma(M,\Psi)\), or sometimes \(\Gamma(\Psi)\), for the space of sections of \(\Psi\), endowed with the Whitney \(C^{\infty}\)-topology. If \(M\) is open we use the weak topology to discuss continuity of families of sections, but the very strong topology to discuss whether a subset of \(\Gamma(\Psi)\) is open. We explain why in the next paragraph.
Sections of \(\mathcal{R}\) are called **formal solutions**. A **solution** of \(\mathcal{R}\) is a section \(f:M\to\Psi\) whose \(r\)-jet extension \(j^{r}f:M\to J^{r}\Psi\) takes values in \(\mathcal{R}\). We can then write \(\operatorname{Sol}_{\mathcal{R}}\subset\Gamma(\Psi)\) for the subspace of solutions. If \(\mathcal{R}\) is open, then \(\operatorname{Sol}_{\mathcal{R}}\) is also open in the strong topology (but not necessarily in the weak). However, when we discuss families of solutions, we want to let them be continuous for the weak topology, since the strong topology forces continuous families to have compact support.
We also use the notation \(\operatorname{FSol}_{\mathcal{R}}=\Gamma(\mathcal{R})\) for the space of formal solutions. There is then a **scanning map**
\[\tau_{\mathcal{R}}:\operatorname{Sol}_{\mathcal{R}}\to\operatorname{FSol}_{ \mathcal{R}}\]
and the purpose of the h-principle is understanding its connectivity.
**Remark 2.1**.: It is common in the literature to use the \(C^{r}\)-topology in \(\operatorname{Sol}_{\mathcal{R}}\) and the \(C^{0}\)-topology in \(\operatorname{FSol}_{\mathcal{R}}\). This is equivalent to our setup, due to smoothing.
### Etale spaces of solutions
Observe that we can interpret \(\Gamma(\cdot,\Psi)\) as a functor from opens in \(M\) to \(\mathbf{Top}\). This sheaf can be composed with the forgetful functor \(\mathbf{Top}\to\mathbf{Set}\) to obtain a \(\mathbf{Set}\)-sheaf. Its associated **etale space** is denoted by \(J^{\mathrm{germs}}\Psi\).
Explicitly, as a set, \(J^{\mathrm{germs}}\Psi\) consists of germs of local sections of \(\Psi\) on \(M\). In other words:
\[J^{\mathrm{germs}}\Psi=\{[s]_{x}\mid s\in\Gamma(U,\Psi),x\in U\subset M\text{ open}\}.\]
A basis for its **etale topology** consists of all subsets \(\{[s]_{x}\mid x\in U\}\) indexed by some open \(U\subset M\) and some section \(s\in\Gamma(U,\Psi)\).
A relevant subset of \(J^{\mathrm{germs}}\Psi\) is the **etale space of (germs of) solutions** of \(\mathcal{R}\). We denote it by \(\operatorname{EtSol}^{M}_{\mathcal{R}}\). The superscript will only be important later on, when we deal with Diff-invariant relations.
#### 2.2.1. The Whitney topologies
It is also possible to endow \(J^{\mathrm{germs}}\Psi\), and hence \(\operatorname{EtSol}^{M}_{\mathcal{R}}\), with the \(C^{r}\)-topology for any \(r\in\mathbb{N}\), which is the coarsest topology such that the projection \(p_{r}:J^{\mathrm{germs}}\Psi\to J^{r}\Psi\) is continuous. The \(C^{\infty}\)-topology, which we will henceforth just call **Whitney topology**, is the union of all of them.
The Whitney topology is coarser than the etale one. In particular this entails that every continuous map into \(J^{\mathrm{germs}}\Psi\) endowed with the etale topology will also be continuous with respect to the Whitney topology.
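To see that the Whitney topology is in general strictly coarser, consider the trivial bundle \(\Psi=\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) and the germs at the origin of the sections \(f_{\epsilon}(x)=\epsilon x\). As \(\epsilon\to 0\), all jets of \(f_{\epsilon}\) at \(0\) converge to those of the zero section, so \([f_{\epsilon}]_{0}\to[0]_{0}\) in the Whitney topology. There is no convergence in the etale topology: a basic open \(\{[s]_{x}\mid x\in U\}\) containing \([0]_{0}\) contains exactly one germ based at \(0\), namely \([s]_{0}=[0]_{0}\), while \([f_{\epsilon}]_{0}\neq[0]_{0}\) for \(\epsilon\neq 0\).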
It is helpful to have the following observation in mind: a section of \(\mathrm{EtSol}^{M}_{\mathcal{R}}\) that is continuous for the etale topology is the lift of a solution, whereas a section that is Whitney continuous is the germ version of a formal solution.
#### 2.2.2. The representative quasi-topology
An unfortunate feature of the Whitney topology on \(J^{\mathrm{germs}}\Psi\) is that a continuous path of germs need not admit a continuous path of representatives. This is not a big problem up to a small perturbation (Lemma 2.3), but it motivates us to introduce a bit of extra language.
Following Gromov's approach to flexible sheaves [14], we now define a quasi-topology on \(J^{\mathrm{germs}}\Psi\). The quasi-topology formalism, due to Spanier and Whitehead [29], amounts to defining what continuous maps into \(J^{\mathrm{germs}}\Psi\) are. This is sufficient to define homotopy groups and weak equivalences and is thus well-suited to doing \(h\)-principles.
**Definition 2.2**.: _The **representative quasi-topology** on \(J^{\mathrm{germs}}\Psi(M)\) is defined by the following property:_
_For any topological space \(K\), a function \(f:K\to J^{\mathrm{germs}}\Psi(M)\) is continuous if there is an open \(U\subset K\times M\) and a continuous section \(g:U\to K\times\Psi(M)\) such that \(g\) restricted to \(U\cap(\{k\}\times M)\) is a representative of \(f(k)\) for all \(k\)._
Hence a map is continuous if we are able to take coherent representatives parametrically.
**Lemma 2.3**.: _Let \(\mathcal{R}\) be open. Then, the identity in \(\mathrm{EtSol}^{M}_{\mathcal{R}}\), as a map from the representative quasi-topology to the Whitney topology, is a weak equivalence._
Proof.: Given any family parametrised by a manifold, continuous for the Whitney topology, we can take arbitrary representatives (which need not vary continuously in the parameter) and then use smoothing. Since the infinite-jets did vary continuously, the smoothing may be assumed to be \(C^{\infty}\)-small in the vicinity of the basepoints. It follows that the smoothing remains in \(\mathcal{R}\). This argument is also relative in the parameter.
#### 2.2.3. The tautological solution
A very useful feature of \(\mathrm{EtSol}_{\mathcal{R}}\) is that it carries a naturally defined solution of \(\mathcal{R}\). Namely, observe that we can use \(p_{b}:\mathrm{EtSol}_{\mathcal{R}}\to M\) to lift \(\Psi\to M\) to a bundle \(p_{b}^{*}\Psi\) over \(\mathrm{EtSol}_{\mathcal{R}}\). We can then take jets and, since \(p_{b}\) is etale, there is a canonical isomorphism between \(J^{r}(p_{b}^{*}\Psi)\) and \(p_{b}^{*}(J^{r}\Psi)\). From this it follows that there is a well-defined lift \(p_{b}^{*}\mathcal{R}\subset J^{r}(p_{b}^{*}\Psi)\) of \(\mathcal{R}\). Then:
**Definition 2.4**.: _The **tautological solution**\(\tau:\operatorname{EtSol}_{\mathcal{R}}\to p_{b}^{*}\Psi\) is defined as \(\tau([f]_{x})=p_{b}^{*}f(x)\) for all possibly local solutions \(f\in\operatorname{Sol}_{\mathcal{R}}(U)\) and points \(x\in U\subset M\)._
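As a sanity check, note that if \(f\in\operatorname{Sol}_{\mathcal{R}}(M)\) and \([f]:M\to\operatorname{EtSol}_{\mathcal{R}}\), \(x\mapsto[f]_{x}\), denotes its etale lift, then \(p_{b}\circ[f]=\operatorname{id}_{M}\) and pulling back the tautological solution recovers the solution we started with:
\[[f]^{*}\tau=f.\]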
### Natural bundles and \(\operatorname{Diff}\)-invariance
In Sections 4 and 5, the key property we will need from our partial differential relations is that they should be intrinsically formulated, not depending on the particular manifold in which they live.
To this end, we use the notion of a natural fiber bundle [26]:
**Definition 2.5**.: _A **natural fiber bundle** of dimension \(n\) is a functor \(\Psi\) from the category \(\mathbf{Man}_{n}\) of \(n\)-manifolds (with embeddings as morphisms) to the category of fiber bundles (with fibered maps as morphisms), such that:_
* \(\Psi(M)\) _is a fiber bundle over_ \(M\) _for every manifold_ \(M\)_, and_
* \(\Psi(f):\Psi(M)\to\Psi(N)\) _covers_ \(f\) _for every embedding_ \(f:M\to N\) _between manifolds._
For notational convenience we will often just write \(\Psi\to M\) to mean \(\Psi(M)\). From the definition it follows that the pseudogroup of local diffeomorphisms \(\operatorname{Diff}_{\operatorname{loc}}(M)\) acts on \(\Psi(M)\).
A large class of examples can be derived from the tangent bundle. This includes the frame bundle, the cotangent bundle, and their wedge and symmetric products. An important operation that preserves the naturality of a fiber bundle is taking \(r\)-jets. Any morphism \(\Psi(f):\Psi(M)\to\Psi(N)\) lifts to a morphism \(J^{r}\Psi(f):J^{r}\Psi(M)\to J^{r}\Psi(N)\) by looking at its action on \(r\)-jet equivalence classes.
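For concreteness: for the tangent functor one sets \(\Psi(M)=TM\) and \(\Psi(f)=df:TM\to TN\) for an embedding \(f:M\to N\) in \(\mathbf{Man}_{n}\). Since such an \(f\) is an open embedding between \(n\)-manifolds, \(df\) is fibrewise invertible, and the cotangent functor is then given by
\[\Psi(M)=T^{*}M,\qquad\Psi(f)=(df^{-1})^{*}:T^{*}M\to T^{*}N.\]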
We write \(\Gamma(-,\Psi):\mathbf{Man}_{n}\to\mathbf{Top}\) for the sheaf associating to every \(n\)-manifold \(M\) the space \(\Gamma(M,\Psi)\) of sections of \(\Psi(M)\).
#### 2.3.1. \(\operatorname{Diff}\)-invariant relations
The notion of a natural fiber bundle allows us now to abstract the relation from the particular manifold in which it lives.
**Definition 2.6**.: _A \(\operatorname{Diff}\)**-invariant partial differential relation** of order \(r\) and dimension \(n\) is a triple \((\mathcal{R},\Psi,i)\) in which:_
* \(\mathcal{R}\) _and_ \(\Psi\) _are natural fibre bundles of dimension_ \(n\)_,_
* \(i:\mathcal{R}\to J^{r}\Psi\) _is a natural transformation,_
_such that, for all \(n\)-manifolds \(M\), \(i:\mathcal{R}(M)\to J^{r}\Psi(M)\) is an inclusion._
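To fix ideas, a standard example is the relation defining symplectic structures. Take \(n=2m\), \(\Psi(M)=\Lambda^{2}T^{*}M\), and
\[\mathcal{R}(M)=\{j_{x}^{1}\omega\in J^{1}\Psi(M)\mid(d\omega)_{x}=0\text{ and }\omega_{x}^{m}\neq 0\},\]
which is \(\operatorname{Diff}\)-invariant of order \(1\) and dimension \(2m\), since both conditions are intrinsic and depend only on the \(1\)-jet. Note that this relation is not open (closedness is a closed condition); it is one of the geometries for which, as remarked earlier, openness has to be replaced by microflexibility and local integrability.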
We then have the sheaf \(\operatorname{Sol}_{\mathcal{R}}:\mathbf{Man}_{n}\to\mathbf{Top}\) that sends every \(n\)-manifold \(M\) to the space of solutions \(\operatorname{Sol}_{\mathcal{R}}(M)\) of \(\mathcal{R}\) on it. It is a subsheaf of \(\Gamma(-,\Psi)\). Moreover, we have a natural transformation:
\[\tau_{\mathcal{R}}:\operatorname{Sol}_{\mathcal{R}}\to\operatorname{FSol}_{ \mathcal{R}},\]
where the right hand side denotes \(\Gamma(-,\mathcal{R})\).
#### 2.3.2. Back to etale space
Recall the etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) of germs of solutions of \(\mathcal{R}\) over \(M\). Note that now, due to \(\operatorname{Diff}\)-invariance, we can speak of \(\Psi\) being a bundle over \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) and we can write the tautological solution of Definition 2.4 as a section of \(\Psi\to\operatorname{EtSol}_{\mathcal{R}}^{M}\), without having to refer to pullbacks from \(M\).
## 3. Wrinkling into etale space
In this section we prove our statements about wrinkled solutions of partial differential relations. We give two proofs of the existence \(h\)-principle of Theorem 1.1. It is first proven in Section 3.1, using results from [8]. Then we give an alternative self-contained argument in Sections 3.2 and 3.3. We do this to emphasise the point that the argument is elementary. The parametric counterpart Theorem 1.5 is tackled in Section 3.4.
After that we cover some of the consequences of Theorem 1.1. In Subsection 3.5 we address the proof of Proposition 1.13, which deals with the connectivity of etale space. In Subsection 3.6 we recover the \(h\)-principle for folded symplectic structures due to Cannas da Silva. In Subsection 3.7 we recover a result about horizontal homotopy groups in jet spaces.
### A short proof of Theorem 1.1
In [8], the second and third authors generalised wrinkled embeddings [13] to higher order and proved [8, Theorem 1.2] the following result:
**Proposition 3.1**.: _Given any section \(F:M\to J^{r}\Psi\), there exists an embedding \(f:M\to J^{r}\Psi\) such that:_
* \(f\) _and_ \(F\) _are_ \(C^{0}\)_-close._
* \(f\) _is tangent to the Cartan distribution in_ \(J^{r}\Psi\) _and its singularities with respect to the front projection_ \(p_{0}\) _are_ \(r\)_-order cusps along codimension-\(1\) spheres._
These spheres of cusps come in pairs and together form what we called an \(r\)-order zig-zag wrinkle. A property of these singularities is that the projection to the base \(p_{b}\circ f:M\to M\) is a map with controlled singularities (paired-up folds, away from which it is a submersion) that is \(C^{0}\)-close to the identity and formally homotopic to it.
We now use this result to prove Theorem 1.1. One can thus say that the existence of wrinkled embeddings implies the existence of wrinkled solutions of open differential relations.
#### 3.1.1. Proof of Theorem 1.1
We first apply Proposition 3.1 to obtain \(f:M\to J^{r}\Psi\) tangent to the Cartan distribution. It is, in a suitable sense, a holonomic approximation of \(F\). Indeed, the spheres of singularities \(\{S_{i}\subset M\}\) divide (their complement in) \(M\) into open regions \(\{M_{j}\subset M\}\) such that \(f|_{M_{j}}\) is a holonomic section, since it is graphical over the base and tangent to the Cartan distribution. As such, each \(f|_{M_{j}}\) uniquely lifts to some \(G_{j}:M_{j}\to\operatorname{EtSol}_{\mathcal{R}}^{M}\).
The issue is that the lifts \(G_{j}\) do not glue to a global map \(G\). In fact, due to the nature of the zig-zag singularities of \(f\), one can verify that \(p_{r+1}\circ G_{j}\) goes to infinity as we approach the singularity locus.
What we must do is flatten the zig-zags of \(f\). By this we mean taking \(f|_{S_{i}}\), extending it to a small neighbourhood \(U_{i}\) as a holonomic section \(f_{i}\) that still approximates \(F\), and then modifying each \(f|_{M_{j}}\), holonomically, so that they agree with \(f_{i}\) close to \(S_{i}\). This is done by bumping off in the front projection and, as long as \(U_{i}\) is sufficiently small, the result will still approximate \(F\). This produces a global map into \(J^{r}\Psi\) that now lifts to etale space as desired.
We give some extra details in the next subsection.
#### 3.1.2. Proof of Corollary 1.4
We can now inspect the previous proof and observe that in the last step, when we modify the \(f|_{M_{j}}\), the following is possible: We can take bands \(S_{i}\times(-\delta,\delta)\subset U_{i}\) and ask that the chosen perturbations of the \(f|_{M_{j}}\) agree with \(f_{i}\) over the bands, while remaining disjoint in the complement.
That this can be arranged follows from the explicit nature of the zig-zag singularities of \(f\). Indeed, they are given by an explicit \(1\)-dimensional model called a _zig-zag bump function_ [8, Definition 7.12], which is then stabilised by a sphere. It is immediate that such a zig-zag bump function can be flattened in the way described.
We let \(G:M\to\operatorname{EtSol}_{\mathcal{R}}^{M}\) be the map resulting from this process. Then \(N\) is the space built by glueing the \(M_{j}\) to the bands \(S_{i}\times(-\delta,\delta)\). More concretely: a point \(x\in M_{j}\) is glued to \(y\in S_{i}\times(-\delta,\delta)\) if \(f|_{M_{j}}(x)=f_{i}(y)\). This results in a non-Hausdorff manifold, since two different \(M_{j}\) will be identified close to each \(S_{i}\). The construction implies that there is a unique map \(H:N\to\operatorname{EtSol}_{\mathcal{R}}^{M}\) whose restriction to each \(M_{j}\) is \(G|_{M_{j}}\).
The equivalence between \(M\) and \(N\) is built as follows. There is a unique map from \(M\) to \(N\) that is the identity in the pieces \(M_{j}\). We then map \(N\) to \(M\) via the projection \(p_{b}\circ H\). These two maps compose to folded submersions whose singularities come in pairs in cancelling position. As such, they are homotopic to the identity.
**Remark 3.2**.: The front projection \(p_{0}\circ f:M\to\Psi\), of the map constructed in [8], has \(r\)-order cusp singularities along codimension-1 spheres. These are chosen precisely so that their lift to \(J^{r}\Psi\) desingularizes to a smooth embedding. In particular, they cannot be lifted continuously to higher jets.
The point of the flattening procedure in the proofs above is then to replace the \(r\)-order cusps by "flat cusps" that do lift to \(J^{\mathrm{germs}}\Psi\). Flatness here has to be understood in the etale sense, meaning that the lift cannot possibly be an embedding close to the cusp locus. However, the manifold \(N\) we just built does embed into etale space. The reason is that we are precisely gluing the parts of \(M\) that fail to embed. \(\bullet\)
### Wrinkling in a cube
We now begin our second, and self-contained, proof of Theorem 1.1. The key ingredient behind it is the following proposition (and its parametric counterpart Proposition 3.5). It is inspired by the construction of Eliashberg and Mishachev [10, Lemma 2.3A] of wrinkled submersions.
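For the reader's convenience, we recall the local model of a wrinkle in the equidimensional setting of [10]: the map
\[w:\mathbb{R}^{n-1}\times\mathbb{R}\to\mathbb{R}^{n-1}\times\mathbb{R},\qquad w(y,z)=\bigl(y,\,z^{3}+3(|y|^{2}-1)z\bigr),\]
whose singular locus is the sphere \(\{z^{2}+|y|^{2}=1\}\); it consists of folds away from the equator \(\{z=0,\,|y|=1\}\), where cusps appear. A wrinkled submersion is, roughly, a map that is a submersion away from finitely many disjoint copies of this model.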
**Proposition 3.3**.: _Let \(\Psi\to I^{n}\) be a trivial bundle over the \(n\)-cube. Consider a section \(F:I^{n}\to J^{r}\Psi\) such that \(F=j^{r}f\) on a neighborhood \(V\) of \(\partial I^{n}\), for some \(f:I^{n}\to\Psi\). Then, there exists a wrinkled submersion \(G:I^{n}\to J^{\mathrm{germs}}\Psi\) such that:_
* \(p_{r}\circ G\) _and_ \(F\) _are_ \(C^{0}\)_-close,_
* \(G=[f]_{x}\) _for_ \(x\in V\)_,_
* _the wrinkled submersion_ \(p_{b}\circ G:I^{n}\to I^{n}\) _is_ \(C^{0}\)_-close to the identity and formally homotopic to it._
Proof.: Throughout the proof it will be convenient to assume that \(F\) is defined on an open neighborhood \(\mathcal{O}p(I^{n})\subset\mathbb{R}^{n}\). This can always be arranged by replacing \(F\) by its restriction to \([-1+\alpha,1-\alpha]^{n}\subset I^{n}\) for some small \(\alpha>0\). We let \(x=(\tilde{x},x_{n})\) denote the coordinates on \(I^{n}=I^{n-1}\times I\). We fix a metric in the cube and in \(\Psi\).
We define a 1-parameter family
\[F_{y}:\mathcal{O}p(I^{n-1}\times\{y\})\to J^{r}\Psi\]
by the formula \(F_{y}(x)=F(x)\). This family can be made holonomic by using the 1-parametric version of holonomic approximation [12] along the core \(I^{n-1}\times\{y\}\) of each strip \(\mathcal{O}p(I^{n-1}\times\{y\})\).
Recall that holonomic approximation constructs a family of diffeotopies \(h_{y,t}:\mathcal{O}p(I^{n-1}\times\{y\})\to\mathcal{O}p(I^{n-1}\times\{y\})\), that wiggle the domain of \(F_{y}\). In our case we impose that this takes place in the direction of the last variable \(x_{n}\). We point out that the amount of wiggling depends on the size of derivatives of \(F_{y}\). Thanks to compactness, we have uniform bounds for the derivatives and we can arrange for wiggling to be induced by a single diffeotopy.
That is: For any sufficiently small \(\eta>0\) there exists a diffeotopy
\[h_{t}:I^{n}\to I^{n}\qquad t\in[0,1]\]
and a smooth family of sections
\[\tilde{F}_{y}:\mathcal{O}p(h_{1}(I^{n-1}\times\{y\}))\to J^{r}\Psi\]
such that:
1. \(d_{C^{0}}(h_{t}(x),x)<\eta\) for all \(x\in\mathcal{O}p(I^{n})\),
2. \(h_{t}=\text{id}\) on \(\mathcal{O}p(\partial I^{n})\) and for \(t=0\),
3. \(\tilde{F}_{y}\) is holonomic for all \(y\in I\) and its domain lies in the domain of \(F_{y}\).
4. \(d_{C^{0}}(\tilde{F}_{y}(x),F_{y}(x))<\eta\) for \(x\in\mathcal{O}p(I^{n})\) and \(y\in I\),
5. \(\tilde{F}_{y}=F\) on \(\mathcal{O}p(\partial I^{n-1}\times\{y\})\) for all \(y\in I\), and on \(\mathcal{O}p(I^{n-1}\times\{y\})\) for \(y\in\mathcal{O}p(\partial I)\).
By compactness there exists \(\delta>0\) such that \(\tilde{F}_{y}\) is holonomic on \(h_{1}(I^{n-1}\times(y-\delta,y+\delta))\) for all \(y\in I\). Denote by \(f_{y}:h_{1}(I^{n-1}\times(y-\delta,y+\delta))\to\Psi\) the family of functions such that \(\tilde{F}_{y}=j^{r}f_{y}\). Taking germs we obtain a family of sections
\[s_{y}:h_{1}(I^{n-1}\times(y-\delta,y+\delta))\to J^{\mathrm{germs}}\Psi,\quad x\mapsto[f_{y}]_{x},\quad\forall y\in I.\]
Each \(f_{y}\) is a good holonomic approximation of our starting data, so the rest of the proof amounts to using wrinkling to "glue together" the family \(s_{y}\) in order to produce a single map \(I^{n}\to J^{\mathrm{germs}}\Psi\) whose projection to \(J^{r}\Psi\) is also a good (multivalued) holonomic approximation. This is illustrated in Figure 1.
At this point we can reparametrise the cube and just work in the coordinates given by \(h_{1}\). This allows us to assume that the maps \(\tilde{F}_{y}\), \(f_{y}\), and \(s_{y}\) all have \(I^{n-1}\times(y-\delta,y+\delta)\) as domain.
Let \(\ell\in\mathbb{N}\) and define \(\Delta_{i}\subset I\) as the interval of length \(\frac{9}{16\ell}\) centered around \(t_{i}=\frac{2i-1}{2\ell}\) for \(i=1,\dots,\ell\). Let \(\lambda:I\to I\) be a function such that
* \(\lambda(0)=0\) and \(\lambda(1)=1\),
* \(\lambda(t)=\frac{2i-1}{2\ell}\) on each \(\Delta_{i}\), and
* \(0<\frac{d\lambda}{dt}(t)<3\) if \(t\in I\setminus\cup_{i=1}^{\ell}\Delta_{i}\),
and let \(\gamma_{\delta,\ell}:I^{n}\to I^{n}\) be as in Lemma 3.4 below. Then we can define a wrinkled submersion \(G_{\ell}:I^{n}\to J^{\mathrm{germs}}\Psi\) by
\[G_{\ell}(x)=s_{\lambda(x_{n})}(\gamma_{\delta,\ell}(x)).\]
We check, analogously to what Eliashberg and Mishachev do in [10, Lemma 2.3A], that for \(\ell\) large enough
* \(G_{\ell}=[f]_{x}\) for \(x\in\mathcal{O}p(\partial I^{n})\),
* \(G_{\ell}|_{I^{n-1}\times(I\setminus\cup_{i}\Delta_{i})}\) _is a submersion, and_
* \(G_{\ell}|_{I^{n-1}\times\Delta_{i}}=s_{t_{i}}\circ\theta_{i}\), where \(\theta_{i}:I^{n-1}\times\Delta_{i}\to I^{n-1}\times\mathbb{R}\) is a one-dimensional wrinkled map in the \(n\)-th coordinate and the identity in the first \((n-1)\) coordinates (Eliashberg and Mishachev [10] call this a 'special wrinkle') for all \(i=1,\dots,\ell\).
Since the composition of a submersion and a wrinkled map does not need to be a wrinkled submersion due to possible self-intersections, we need to 'chop the wrinkles'. That is, we need to homotope the maps \(\theta_{i}\) into maps with more, but smaller wrinkles. Since this procedure is applied locally on \(\mathbb{R}^{n}\), the reasoning by Eliashberg and Mishachev [10, Lemma 2.1C] is also valid in our etale case.
Figure 1. A sketch of the wrinkling map \(G_{\ell}\): on the left we see its domain \(I^{n}\) with the domains of four \(s_{y}\). The grey ovals indicate where the wrinkling takes place. On the right we see the image of \(G_{\ell}\) in \(J^{\mathrm{germs}}\Psi\). We have drawn the wrinkles there in the color of the strip in which they take place. The dotted lines indicate the interpolation in between the strips. Note that the wrinkles do not overlap in the domain, but they do in the target when projected to \(M\).
We can now define \(G\) as the result of chopping the wrinkles of \(G_{\ell}\) for \(\ell\) large enough. The only property that we must verify is that \(p_{r}\circ G\) and \(F\) are \(C^{0}\)-close; all others are automatic. Since the chopping of the wrinkles results in a \(C^{0}\)-approximation, it suffices to argue that \(G_{\ell}\) approximates \(F\). Let \(\tilde{F}(x)=\tilde{F}_{x_{n}}(x)\) for \(x=(\tilde{x},x_{n})\). We may assume that \(d(\tilde{F},F)\) is arbitrarily small by taking \(\eta\) small. It remains to show that \(d(p_{r}\circ G_{\ell},\tilde{F})<\epsilon/2\).
We will argue this only for those \(x=(\tilde{x},x_{n})\in I^{n}\) for which \(\tilde{\gamma}_{\delta,\ell}(x)=x_{n}+\delta\sin(2\pi\ell x_{n})\), see Equation (1) in Lemma 3.4. In other cases the argument is similar, but then the bump functions \(\chi\) and \(\phi_{\ell}\) have to be taken into account.
In this case it holds that \(\lambda(x_{n})\in[x_{n}-\frac{23}{32\ell},x_{n}+\frac{23}{32\ell}]\) and that \(\tilde{\gamma}_{\delta,\ell}(x)\in[x_{n}-\delta,x_{n}+\delta]\). By compactness we can assume that \(\delta\) is small enough such that
\[\max_{\begin{subarray}{c}(z_{1},\dots,z_{n-1})\in I^{n-1}\\ y,y^{\prime}\in[a-\delta,a+\delta]\end{subarray}}d_{C^{0}}(\tilde{F}_{a}(z_{1 },\dots,z_{n-1},y),\tilde{F}_{a}(z_{1},\dots,z_{n-1},y^{\prime}))<\epsilon/2.\]
Since the family \(\{\tilde{F}_{y}\}\) is also smooth in the parameter we additionally have that for \(\ell\) large enough
\[\max_{\begin{subarray}{c}y\in I^{n}\\ |a-b|\leq\frac{23}{32\ell}\end{subarray}}d_{C^{0}}(\tilde{F}_{a}(y),\tilde{F} _{b}(y))<\epsilon/2.\]
This implies that
\[\begin{aligned} d_{C^{0}}(p_{r}G_{\ell}(x),\tilde{F}_{x_{n}}(x)) &= d_{C^{0}}(\tilde{F}_{\lambda(x_{n})}(\tilde{x},\tilde{\gamma}(x)),\tilde{F}_{x_{n}}(x))\\ &\leq d_{C^{0}}(\tilde{F}_{\lambda(x_{n})}(\tilde{x},\tilde{\gamma}(x)),\tilde{F}_{\lambda(x_{n})}(x))+d_{C^{0}}(\tilde{F}_{\lambda(x_{n})}(x),\tilde{F}_{x_{n}}(x))<\epsilon\end{aligned}\]
which concludes the proof.
Note that \(p_{0}\circ G\) is the wrinkled submersion from [10], but \(p_{1}\circ G\) does not consist of the wrinkled submersion together with its regularized differential.
**Lemma 3.4**.: _Given \(\ell\in\mathbb{N}\) we denote by \(\Delta_{i}\subset I\) the interval of length \(\frac{9}{16\ell}\) centered around \(t_{i}=\frac{2i-1}{2\ell}\) for \(i=1,\dots,\ell\). Then, for any \(\delta>0\) and \(\ell\in\mathbb{N}\) there exists a wrinkled submersion \(\gamma_{\delta,\ell}:I^{n}\to I^{n}\), fibered over \(I^{n-1}\), such that:_
* \(\gamma_{\delta,\ell}\) _has a single wrinkle on_ \(I^{n-1}\times\Delta_{i}\)_, and_ \(\frac{\mathrm{d}\gamma_{\delta,\ell}}{\mathrm{d}t}\geq\delta\ell\) _on_ \(I^{n}\setminus\bigcup_{i}I^{n-1}\times\Delta_{i}\)_,_
* \(d_{C^{0}}(\gamma_{\delta,\ell},\mathrm{id})<\delta\)_, and_ \(\gamma_{\delta,\ell}=\mathrm{id}\) _on a neighborhood of_ \(\partial I^{n}\)_,_
* \(\gamma_{\delta,\ell}\) _is homotopic to the identity relative to a given neighborhood_ \(V\) _of_ \(\partial I^{n}\)_._
Proof.: Choose a bump function \(\phi_{\ell}:I\to I\) such that \(\phi_{\ell}|_{[0,\frac{1}{16\ell}]\cup[1-\frac{1}{16\ell},1]}=0\) and \(\phi_{\ell}|_{[\frac{1}{8\ell},1-\frac{1}{8\ell}]}=1\). Let \(\chi:I^{n-1}\to I\) be another bump function such that \(\chi\) is equal to \(0\) on a neighborhood of \(\partial I^{n-1}\) and \(1\) outside a larger neighborhood of \(\partial I^{n-1}\). We also require that the superlevel sets \(\{x\in I^{n-1}\mid\chi(x)\geq t\}\) are convex for \(t\in[0,1]\), and that the boundary of the level set \(\{x\in I^{n-1}\mid\chi(x)=1\}\) is contained in \(V\). Then the desired map is defined by
\[\gamma_{\delta,\ell}:I^{n}\to I^{n},\quad(\tilde{x},x_{n})\mapsto(\tilde{x}, \tilde{\gamma}_{\delta,\ell}(x)),\]
where \(\tilde{\gamma}_{\delta,\ell}:I^{n}\to I\) equals
\[\tilde{\gamma}_{\delta,\ell}(x)=x_{n}+\chi(\tilde{x})\,\delta\phi_{\ell}(x_{n})\sin(2\pi\ell x_{n}).\qed \tag{1}\]
Next we prove the parametric version of Proposition 3.3.
**Proposition 3.5**.: _Let \(I^{\ell}\) serve as parameter space, and \(F_{k}:I^{n}\to J^{r}\Psi\) be an \(I^{\ell}\)-family of sections. Suppose that they are holonomic on a neighbourhood of \(\partial I^{n}\) and whenever \(k\) belongs to a neighbourhood of a closed subset \(K^{\prime}\subset I^{\ell}\)._
_Then, there exists a wrinkled submersion \(G:I^{n}\times I^{\ell}\to J^{\text{germs}}(\Psi\times I^{\ell})\), fibered over \(I^{\ell}\), such that:_
* \(p_{r}\circ G_{k}\) _and_ \(F_{k}\) _are_ \(C^{0}\)_-close and agree on neighbourhoods of_ \(\partial I^{n}\) _and_ \(K^{\prime}\)_._
* \(p_{b}\circ G_{k}:I^{n}\to I^{n}\) _is a wrinkled family of submersions that is_ \(C^{0}\)_-close to the identity and formally homotopic to it._
* _Each_ \(G_{k}\) _is smooth in the etale sense and the whole family is continuous for the Whitney topology._
Proof.: Consider the section \(F:I^{n}\times I^{\ell}\to J^{r}(\Psi)\) defined by \(F(x,k)=F_{k}(x)\). We choose an arbitrary lift
\[s:I^{n}\times I^{\ell}\to J^{r}(\Psi\times I^{\ell}) \tag{2}\]
and apply to it the proof of Proposition 3.3 to obtain the desired wrinkled submersion \(G:I^{n}\times I^{\ell}\to J^{\mathrm{germs}}(\Psi\times I^{\ell})\).
There are two parts of the proof that warrant further comment. Firstly, when using holonomic approximation to pass from \(F\) to an \(I^{\ell+1}\)-parameter family of holonomic sections on \(\mathcal{O}p(I^{n-1}\times\{(y,k)\})\). Here we point out that by wiggling in the \(I^{n}\)-direction, we can arrange the diffeotopy \(h_{t}\) on \(\mathcal{O}p(I^{n}\times I^{\ell})\) to be fibered over \(I^{\ell}\).
Secondly, we note that the wrinkles as constructed in the proof of Proposition 3.3 are fibered over \(I^{\ell}\). Hence the proof will result in a map \(G:I^{n}\times I^{\ell}\to J^{\mathrm{germs}}(\Psi\times I^{\ell})\) fibered over \(I^{\ell}\). These observations imply that all the requirements on the \(G_{k}\) hold automatically.
### Second proof of Theorem 1.1
Consider a section \(F:M\to J^{r}\Psi\) over an \(m\)-dimensional manifold \(M\). We choose a triangulation \(\mathcal{T}\) of \(M\), and denote its \(i\)-skeleton by \(\mathcal{T}^{(i)}\). Applying holonomic approximation we obtain a \(\delta\)-small (in the \(C^{0}\) sense) diffeotopy \(h_{t}:M\to M\) for \(t\in[0,1]\) and a formal solution \(\tilde{F}:M\to\mathcal{R}\) such that \(\tilde{F}\) is holonomic over \(\mathcal{O}p(h_{1}(\mathcal{T}^{(m-1)}))\) and such that \(d(\tilde{F}(x),F(x))<\epsilon\) for all \(x\in M\).
It remains to extend \(\tilde{F}\) to be holonomic on the top dimensional cells of \(h_{1}(\mathcal{T})\). To do so, we can work one \(m\)-simplex \(\Delta\) at a time. Our section \(\tilde{F}\) is holonomic in a neighbourhood of the boundary of \(\Delta\). We then parametrise an arbitrarily large part of \(\Delta\) by a cube. As such, we are in the situation of Proposition 3.3, which produces a wrinkled submersion over the cube. Repeating this for every simplex gives us the wrinkled submersion \(G:M\to J^{\mathrm{germs}}\Psi\).
### Proof of Theorem 1.5
As in Equation (2), let \(s:M\times K\to J^{r}(\Psi\times K)\) be some lift of the family \(F_{k}\). The manifold \(M\times K\) is foliated by the fibers of \(\pi_{K}\), and by Thurston's jiggling [31] there exists a triangulation \(\mathcal{T}\) of \(M\times K\) in general position with respect to this foliation. This means in particular that each top dimensional simplex \(\sigma\in\mathcal{T}^{(m+k)}\) is transverse to the leaves of the foliation.
Over the \((m+k-1)\)-skeleton we apply holonomic approximation [12], to get a (\(C^{0}\)-small) diffeotopy \(h_{t}:M\times K\to M\times K\), \(t\in[0,1]\), and a map \(\tilde{F}:M\times K\to J^{r}(\Psi\times K)\) that is holonomic in a neighborhood of \(h_{1}(\mathcal{T}^{(m+k-1)})\). Recall that the proof of holonomic approximation allows us to choose the direction in which \(h_{t}\) wiggles \(\mathcal{T}^{(m+k-1)}\). As such we can arrange that \(h_{1}(\mathcal{T})\) is still transverse to \(\ker\mathrm{d}\pi_{K}\).
It remains to extend \(\tilde{F}\) to the top dimensional simplices of \(h_{1}(\mathcal{T})\). As in the proof of Theorem 1.1 this can be done one simplex at a time, using Proposition 3.5. This results in a wrinkled submersion \(G:M\times K\to J^{\mathrm{germs}}(\Psi\times K)\). Since the Whitney topology is coarser than the etale one, each \(G_{k}\) is smooth in the etale sense, and the whole family \(G_{k}\) is continuous in the Whitney topology. We leave it to the reader to check that \(G\) satisfies the other properties in Theorem 1.5.
### Proof of Proposition 1.13
Let us prove surjectivity first for all \(i\leq\dim(M)\). A depiction is given in Figure 2.
Consider a map \(F:\mathbb{S}^{i}\to\mathcal{R}\) representing a given homotopy class. Triangulate \(\mathbb{S}^{i}\). We then use Thurston's jiggling to find a piecewise affine immersion \(G\) with respect to our triangulation of \(\mathbb{S}^{i}\) such that \(G\) is \(C^{0}\)-close to \(F\) and transverse to the fibres of the base projection \(p_{b}:\mathcal{R}\to M\). The existence of \(G\) is
stated as Lemma 3.6 below. This makes all simplices graphical over \(M\), so they can be regarded as formal solutions over some projected simplex in \(M\).
Inductively on the dimension of the simplices, we then argue as follows: Consider a simplex \(G(\Delta)\), graphical over a simplex \(\sigma\subset M\), and thus described by a formal section \(H:\sigma\to\mathcal{R}\). We can extend \(H\) to a section defined over a neighbourhood \(U\) and apply holonomic approximation to yield a nearby \(j^{r}h:U\to\mathcal{R}\). In doing so we can work relative to the smaller simplices in the boundary of \(\sigma\). Indeed, it is enough that we choose the extension of \(H\) to \(U\) to agree with the chosen extensions along the smaller simplices. This argument works all the way up to the top-dimensional simplices. For those, we apply Theorem 1.1, relative to the boundary. This introduces wrinkles in the top cells.
For injectivity we argue using a map \(F:\mathbb{D}^{i+1}\to\mathcal{R}\) together with a lift to \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) along the boundary. By a small perturbation we can assume that this lift extends to a collar. We can then triangulate relative to this collar and argue as above.
It remains to prove the following auxiliary technical lemma, which we just used in the proof.
**Lemma 3.6**.: _Let \(P\) be a compact simplicial complex and \((N,\xi)\) a manifold endowed with a distribution. Consider a map \(F:P\to(N,\xi)\)._
_Then, there is a subdivision \(\tilde{P}\) of \(P\) and a piecewise smooth map \(G:\tilde{P}\to(N,\xi)\) such that:_
* \(G\) _is_ \(C^{0}\)_-close to_ \(F\)_,_
* \(G\) _is piecewise embedded and piecewise transverse to_ \(\xi\)_._
Proof.: First we produce a piecewise smooth approximation \(F^{\prime}\) of \(F\).
The idea now is to apply Thurston's jiggling [31]. Jiggling perturbs an embedded polyhedron in \(N\) (up to subdivision), in order to yield a new polyhedron that is transverse to \(\xi\). The issue is that \(F^{\prime}\) need not be an embedding, so we cannot apply jiggling directly to \(F^{\prime}(P)\).
We address this as follows. Given any manifold \(S\), note that a map \(f:S\to N\) is immersed if and only if its graph is transverse to the horizontal distribution \(\xi_{H}\) in \(S\times N\) (consisting of the tangent spaces of the fibres of the projection onto \(N\)). Since jiggling produces approximations that are \(C^{1}\)-close, it can be applied to the graph of \(f\), yielding a map \(S\to S\times N\) that is a piecewise smooth section transverse to \(\xi_{H}\). This amounts to producing a piecewise smooth immersion \(g:S\to N\) approximating \(f\).
We apply this to \(F^{\prime}\), yielding a map \(F^{\prime\prime}:P^{\prime}\to N\) that is simplexwise immersed, for some subdivision \(P^{\prime}\). If \(P^{\prime}\) is fine enough, \(F^{\prime\prime}\) is an embedding on each simplex. We then apply jiggling once more, now with respect to \(\xi\), to conclude the proof.
### Application: folded symplectic structures
Our results recover as well the following theorem of A. Cannas da Silva on folded symplectic structures:
Figure 2. On the left, a map \(F\) into \(\mathcal{R}\). The fibres run vertically and, horizontally, we see the base manifold \(M\). In the middle, a piecewise perturbation of \(F\), now transverse to the fibres, thanks to jiggling. On the right, a "holonomic approximation" by a map into etale space. Along each top simplex we see wrinkling.
**Proposition 3.7**.: _Every formal symplectic manifold admits a folded symplectic structure._
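Recall that a **folded symplectic structure** on a \(2m\)-manifold is a closed \(2\)-form \(\omega\) such that \(\omega^{m}\) vanishes transversally along a hypersurface \(Z\) (the folding locus) and \(\omega|_{Z}\) has maximal rank \(2m-2\).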
Proof.: Recall that the etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) of symplectic structures over \(M\) has a canonical symplectic structure \(\omega\). Corollary 1.6 produces for us a wrinkled submersion \(G:M\to\operatorname{EtSol}_{\mathcal{R}}^{M}\). Using surgery of singularities [13, 8], we can replace \(G\) by a map \(H\) that is instead a folded submersion. Then \(H^{*}\omega\) is a folded symplectic structure. Moreover, its formal regularisation is formally homotopic to the given starting datum.
**Remark 3.8**.: The key point behind this result is that differential forms can be pulled back by maps and, in the symplectic case, one can provide a (local) normal form for said pullback as long as the maps under study have controlled singularities (like folds).
For general geometries, this is much more subtle. Indeed, in general there will be a continuum of possible models for the singularities of the pullback, depending on the relative positions of the geometric structure and the map. It is then meaningful to study whether one may be able to homotope the map and the structure further to reduce the possible models to a controlled collection. This can be helpful in order to perform the regularisation step we discussed above.
Indeed, this idea enters crucially into the \(h\)-principle for higher-dimensional contact structures. In [2] the authors use wrinkling to prove that any formal contact structure is homotopic to a contact structure with singularities isomorphic to a unique "universal hole". These can then be removed using an overtwisted disc. The \(h\)-principles for overtwisted Engel structures [6, 5, 9] similarly use the fact that one can produce Engel structures whose singularities are controlled (but it is not known whether a unique "universal hole" exists in the Engel setting).
In this paper we work with \(\mathcal{R}\) arbitrary and do not pursue the idea of adapting our wrinkles further to the geometric structures under consideration (since this is highly dependent on the nature of \(\mathcal{R}\)).
### Application: horizontal homotopy groups
A topic that has received some attention in the last few years [19, 33, 27, 28] is the study of the _horizontal homotopy groups_ \(\pi_{i}^{H}\). These consist of homotopy classes of maps tangent to a distribution \(\xi\) on a manifold \(N\), up to homotopies that are themselves tangent to \(\xi\). As we remarked above, maps into \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) project down via \(p_{r}\) to maps tangent to the Cartan distribution \(\xi_{\operatorname{can}}\) in \(\mathcal{R}\subset J^{r}\Psi\).
It follows that Proposition 1.13 recovers the following statement:
**Corollary 3.9**.: _The map \(\pi_{i}^{H}(\mathcal{R},\xi_{\operatorname{can}})\to\pi_{i}(\mathcal{R})\) induced by inclusion is surjective in degrees \(i\leq\dim M\)._
In fact, the projected maps we obtain in this manner are Lipschitz for any Carnot-Caratheodory metric on \(\xi_{\operatorname{can}}\), so we also get a surjection if we take the domain to be the Lipschitz homotopy group \(\pi_{i}^{L}(\mathcal{R},\xi_{\operatorname{can}})\).
This result, not quite stated like this, is classic and, as far as the authors know, a proof was first sketched by R. Thom [4]. The proof we have just presented suggests an interesting open question.
**Question 3.10**.: Observe that \(\pi_{r}^{r^{\prime}}:J^{r^{\prime}}\Psi\to J^{r}\Psi\) takes horizontal maps to horizontal maps and therefore defines a morphism of horizontal (and also Lipschitz) homotopy groups
\[\pi_{i}^{H}(J^{r^{\prime}}\Psi,\xi_{\operatorname{can}})\to\pi_{i}^{H}(J^{r} \Psi,\xi_{\operatorname{can}}).\]
What can be said about its connectivity (particularly in degree \(\dim M\))?
More generally, one can ask about the connectivity between a relation \(\mathcal{R}\subset J^{r}\Psi\) and its prolongation in \(J^{r^{\prime}}\Psi\). This is most interesting when \(\mathcal{R}\) is not open.
## 4. Classifying groupoids for \(\operatorname{Diff}\)-invariant relations
Our results about \(\Gamma_{\mathcal{R}}\)-microbundles are proven in Section 5. To this end, we now discuss the necessary language of groupoids, principal bundles, and their classifying spaces.
The new material in this section amounts to adapting classic ideas to our more general setup of arbitrary \(\operatorname{Diff}\)-invariant differential relations. We refer the reader to Haefliger's original articles [16, 17, 18], Thurston's work on the flexibility of foliations and its relation to diffeomorphism groups [30, 31, 32], as well as more modern references in Lie groupoid theory [24, 7, 3, 1] and \(h\)-principles for foliations [11, 23].
To motivate later definitions, we first discuss (Subsection 4.1) foliations with transverse geometry modelled on a \(\operatorname{Diff}\)-invariant differential relation \(\mathcal{R}\). This leads naturally to discussing classifying groupoids associated to differential relations (Subsection 4.2), their principal bundles (Subsection 4.3), and their associated microbundles (Subsection 4.4). We finish with Subsection 4.5, where formal counterparts of the previous concepts are introduced.
### Transverse geometries
We saw in Subsection 2.3.1 that a \(\operatorname{Diff}\)-invariant differential relation \(\mathcal{R}\) of dimension \(n\) is a certain functor with domain the category of \(n\)-dimensional manifolds. In non-commutative geometry, one thinks of leaf spaces of corank \(n\) foliations as generalisations of manifolds. Morally speaking, the following definition says that we can extend the domain of \(\mathcal{R}\) to the category of leaf spaces.
**Definition 4.1**.: _Let \(M\) be an \(m\)-dimensional manifold endowed with a foliation \(\mathcal{F}\) of corank \(n\). An \(\mathcal{R}\)**-transverse structure** on \((M,\mathcal{F})\) is a maximal collection consisting of:_
* _Opens_ \(\{U_{i}\}_{i\in I}\) _covering_ \(M\)_._
* _Submersions_ \(\phi_{i}:U_{i}\to\mathbb{R}^{n}\)_, for each_ \(i\in I\)_, with_ \(\ker(d\phi_{i})=T\mathcal{F}\)_._
* _Transition functions_ \(\rho_{ij}:\phi_{i}(U_{i}\cap U_{j})\to\phi_{j}(U_{i}\cap U_{j})\) _satisfying the cocycle condition_ \(\rho_{ij}\rho_{jk}=\rho_{ik}\) _for all_ \(i,j,k\in I\)_._
* _Solutions_ \(f_{i}:\phi_{i}(U_{i})\to\Psi(\phi_{i}(U_{i}))\) _of_ \(\mathcal{R}\) _satisfying compatibility_ \(\rho_{ij}^{*}f_{j}=f_{i}\) _for all_ \(i,j\in I\)_._
_Here \(\rho_{ij}^{*}\) denotes the action by pullback of local diffeomorphisms on sections of \(\Psi\)._
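Explicitly, for a local diffeomorphism \(\rho:V\to W\) between opens of \(\mathbb{R}^{n}\) and a section \(f\in\Gamma(W,\Psi)\), functoriality of \(\Psi\) yields
\[\rho^{*}f:=\Psi(\rho)^{-1}\circ f\circ\rho\in\Gamma(V,\Psi).\]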
The first three conditions simply say that we are taking a maximal cover of \(M\) by foliated charts of \(\mathcal{F}\). The last one relies on the \(\operatorname{Diff}\)-invariance of \(\mathcal{R}\) to define the \(f_{i}\) and the action of the \(\rho_{ij}\) on the \(f_{j}\). We interpret it as saying that the local solutions \(f_{i}\) glue to a global solution \(F\) of \(\mathcal{R}\) on the leaf space of \(\mathcal{F}\). The pair \((\mathcal{F},F)\) is said to be an \(\mathcal{R}\)**-foliation**.
Given a collection (not necessarily maximal) satisfying these conditions, it is always possible to extend it to a unique \(\mathcal{R}\)-foliation \((\mathcal{F},F)\). Such a collection is said to be a **cocycle representation** of \((\mathcal{F},F)\).
**Example 4.2**.: Let \(M\) be an \(n\)-dimensional manifold and let \(f:M\to\Psi\) be a solution of \(\mathcal{R}\). Given any submersion \(g:N\to M\), we can foliate \(N\) by (connected components of) fibers of \(g\) and pullback \(f\) to a transverse solution \(g^{*}f\) of \(\mathcal{R}\).
More generally, if \((M,\mathcal{F},F)\) is an \(\mathcal{R}\)-foliation and \(g:N\to M\) is a map transverse to \(\mathcal{F}\), we can pullback both \(\mathcal{F}\) and \(F\), yielding another \(\mathcal{R}\)-foliation \((N,g^{*}\mathcal{F},g^{*}F)\). \(\bullet\)
**Example 4.3**.: An important example, already encountered in the introduction and which serves to motivate \(\Gamma_{\mathcal{R}}\)-microbundles, is given by the exponential map \(\exp:TM\to M\). This is in general not a submersion due to the presence of the cut locus, but we can take \(\exp:N\to M\) to be its restriction to a sufficiently small neighbourhood \(N\) of the zero section. Then any solution \(f\) of \(\mathcal{R}\) over \(M\) pulls back to an \(\mathcal{R}\)-foliation on \(N\) that we call \(\exp(f)\).
If we now consider the standard bundle projection \(\pi:N\to M\), we can observe that its fibres are all transverse to the fibres of \(\exp\). This implies that \(\exp(f)\) yields an \(\mathcal{R}\)-structure when restricted to each fibre \(N_{x}\) of \(\pi\). Recalling the identification by \(\exp\) of \(N_{x}\) with a neighbourhood of \(x\in M\), we deduce that \(\exp(f)\) is a coherent manner of encoding the germs of \(f\) at all points \(x\in M\) simultaneously. These germs vary smoothly in the etale sense.
### Classifying groupoids
As we have just seen (and as Haefliger observed in [16, 17]), a foliation can be represented by a cocycle. It turns out that this is a cocycle for a Lie groupoid and not a Lie group. We write \(\mathfrak{s}\) and \(\mathfrak{t}\) for the source and target maps of all the upcoming groupoids.
**Definition 4.4**.: _We write \(\Gamma^{n}\rightrightarrows\mathbb{R}^{n}\) for the etale Lie groupoid of germs of local diffeomorphisms of \(\mathbb{R}^{n}\)._
The base \(\mathbb{R}^{n}\) is a smooth manifold in the usual sense and the space of arrows is endowed with the unique smooth structure turning the source and target maps into etale maps. In particular, source and target fibres carry the discrete topology.
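Explicitly, an arrow of \(\Gamma^{n}\) is the germ \([\varphi]_{x}\) at a point \(x\) of a local diffeomorphism \(\varphi\), and
\[\mathfrak{s}([\varphi]_{x})=x,\qquad\mathfrak{t}([\varphi]_{x})=\varphi(x),\qquad[\psi]_{\varphi(x)}\cdot[\varphi]_{x}=[\psi\circ\varphi]_{x}.\]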
More generally, given a Diff-invariant relation \(\mathcal{R}\), we can consider:
**Definition 4.5**.: _The **classifying groupoid**\(\Gamma_{\mathcal{R}}\rightrightarrows\mathrm{EtSol}_{\mathcal{R}}^{\mathbb{ R}^{n}}\) associated to \(\mathcal{R}\) is the action groupoid \(\mathrm{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\rtimes\Gamma^{n}\)._
The base is now a possibly non-Hausdorff and non-second countable \(n\)-manifold, which itself submerses onto \(\mathbb{R}^{n}\). By construction, the base encodes all possible germs of solutions of \(\mathcal{R}\) over \(\mathbb{R}^{n}\) and the arrows encode all the possible symmetries. Note that \(\Gamma_{\mathcal{R}}\) is etale and effective.
### Principal groupoid bundles
Our next goal is to discuss principal \(\Gamma_{\mathcal{R}}\)-bundles. A concrete example are the foliations with transverse \(\mathcal{R}\)-geometry of Definition 4.1. In general:
**Definition 4.6**.: _A \(\Gamma_{\mathcal{R}}\)**-cocycle** on a topological space \(S\) is a collection of:_
* _opens_ \(\{U_{i}\}_{i\in I}\) _covering_ \(S\) _and_
* _transition functions_ \(\rho_{ji}:U_{i}\cap U_{j}\to\Gamma_{\mathcal{R}}\) _verifying the cocycle condition_ \(\rho_{kj}\rho_{ji}=\rho_{ki}\)_._
Do note that each \(\rho_{ii}\) is, due to the cocycle condition, a map into the units of \(\Gamma_{\mathcal{R}}\) and can thus be regarded as a map \(f_{i}:U_{i}\to\mathrm{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\). I.e. over each \(U_{i}\) we are choosing germs of solutions of \(\mathcal{R}\) varying in an etale manner. Moreover, the cocycle condition then implies that \(\mathfrak{s}\circ\rho_{ji}=f_{i}\) and \(\mathfrak{t}\circ\rho_{ji}=f_{j}\). I.e. the \(\rho_{ji}\) glue the \(f_{i}\) to each other.
Haefliger [17] defined \(\Gamma_{\mathcal{R}}\)**-structures** to be maximal cocycles. This amounts to considering two cocycles equivalent if they are compatible. We will use instead the name **principal \(\Gamma_{\mathcal{R}}\)-bundles** for these maximal cocycles, even though we will not discuss their description as bundles over \(S\) with an action of \(\Gamma_{\mathcal{R}}\).
The set of all principal \(\Gamma_{\mathcal{R}}\)-bundles on \(S\) is denoted by \(H^{1}(S,\Gamma_{\mathcal{R}})\).
#### 4.3.1. Smoothness of principal bundles
When \(M\) is a smooth manifold, as will be the case for us, it is sensible to make use of those elements in \(H^{1}(M,\Gamma_{\mathcal{R}})\) that are **smooth**. These are defined as those principal bundles whose transition functions are smooth. One could also discuss structures of intermediate \(C^{r}\) regularity, but we do not need this.
**Lemma 4.7**.: _Let \(M\) be a smooth manifold of arbitrary dimension. Then, \(\mathcal{R}\)-foliations on \(M\) are in correspondence with smooth principal \(\Gamma_{\mathcal{R}}\)-bundles with all \(\rho_{ii}\) submersive._
In particular, when \(M\) is \(n\)-dimensional, smooth submersive cocycles are called \(\Gamma_{\mathcal{R}}\)**-atlases**. The maximal ones are in correspondence with solutions of \(\mathcal{R}\).
#### 4.3.2. Functoriality
As Lemma 4.7 points out, \(\mathcal{R}\)-foliations are \(\Gamma_{\mathcal{R}}\)-structures that additionally satisfy a differential condition, being submersive. It was precisely Haefliger's insight that dropping this condition produces a theory that is homotopy theoretical in nature. Indeed, we see from Definition 4.6 that principal \(\Gamma_{\mathcal{R}}\)-bundles can be pulled back by arbitrary (continuous) maps and be defined not just over manifolds, but over arbitrary topological spaces.
#### 4.3.3. The space of principal \(\Gamma_{\mathcal{R}}\)-bundles
Our overall goal in \(h\)-principle is to construct and classify principal \(\Gamma_{\mathcal{R}}\)-bundles by relating them to their "formal counterparts". In order to speak of classification, which we understand to be _up to homotopy_, we have to construct a _space_ of principal \(\Gamma_{\mathcal{R}}\)-bundles. Recall that \(H^{1}(S,\Gamma_{\mathcal{R}})\) is just a set.
We begin by explaining what a continuous family should be:
**Definition 4.8**.: _Let \(S\) and \(K\) be topological spaces, with \(K\) serving as parameter space. A \(K\)**-concordance** of principal \(\Gamma_{\mathcal{R}}\)-bundles over \(S\) is a principal \(\Gamma_{\mathcal{R}}\)-bundle on \(S\times K\)._
A \([0,1]\)-concordance (normally just called concordance) defines two principal \(\Gamma_{\mathcal{R}}\)-bundles on \(S\) by restricting to \(S\times\{0\}\) and \(S\times\{1\}\). These are said to be **concordant** to each other.
It was proven by Haefliger [17] that any topological groupoid, for us \(\Gamma_{\mathcal{R}}\), admits a **classifying space**\(B\Gamma_{\mathcal{R}}\) in the sense that:
**Proposition 4.9**.: _Let \(M\) be a manifold. There is a bijection:_
\[\frac{H^{1}(M,\Gamma_{\mathcal{R}})}{\text{concordance}}\,\cong\,[M,B\Gamma _{\mathcal{R}}]\]
_between concordance classes of principal bundles and homotopy classes of maps into classifying space._
The statement holds true for arbitrary spaces \(S\) instead of \(M\), as long as one restricts to numerable principal bundles. We do not need this generality.
Proposition 4.9 should be understood as a statement about the path-components of the space of principal \(\Gamma_{\mathcal{R}}\)-bundles over \(M\). This suggests taking \(\text{Maps}(M,B\Gamma_{\mathcal{R}})\) as the space of principal bundles over \(M\). This is further supported by the following consequence of Proposition 4.9:
\[\frac{H^{1}(M\times\mathbb{S}^{i},\Gamma_{\mathcal{R}})}{\text{concordance}}\, \cong\,[M\times\mathbb{S}^{i},B\Gamma_{\mathcal{R}}]\,\cong\,[\mathbb{S}^{i},\text{Maps}(M,B\Gamma_{\mathcal{R}})],\]
implying that the right-hand side (upon passing to the pointed setting) is computing \(\mathbb{S}^{i}\)-concordances up to concordance, which we interpret as being the \(i\)th homotopy group of the space of principal \(\Gamma_{\mathcal{R}}\)-bundles over \(M\).
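In symbols, and with the pointed-setting caveat just mentioned, this interpretation reads (a restatement of the two displays above, not an additional claim):

\[\pi_{i}\big(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\big)\,\cong\,\frac{H^{1}(M\times\mathbb{S}^{i},\Gamma_{\mathcal{R}})}{\text{concordance}},\qquad i\geq 0.\]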
Haefliger's construction of the classifying space of any topological groupoid is explicit and based on Milnor's join construction for the Lie group case. We will later use that the construction is functorial in the groupoid.
### Haefliger microbundles with transverse geometry
We defined \(\Gamma_{\mathcal{R}}\)-microbundles in the introduction (Definition 1.8). The reader should think of them as being the associated bundle, with fibre a germ of an open in \(\mathbb{R}^{n}\), of a principal \(\Gamma_{\mathcal{R}}\)-bundle.
We explain this idea next. The motto is that \(\Gamma_{\mathcal{R}}\)-microbundles are more convenient to work with because we can use differential topology arguments.
#### 4.4.1. Extending to arbitrary spaces
As we have seen, principal \(\Gamma_{\mathcal{R}}\)-bundles can be defined over arbitrary spaces. The same is true for \(\Gamma_{\mathcal{R}}\)-microbundles:
**Definition 4.10**.: _A \(\Gamma_{\mathcal{R}}\)**-microbundle** on a topological space \(S\) is a triple \((E,\mathcal{F},F)\), where:_
* \(E\to S\) _is a rank-_\(n\) _microbundle._
* \((\mathcal{F},F)\) _is a germ along the zero section of a principal_ \(\Gamma_{\mathcal{R}}\)_-bundle, restricting to each fibre as an_ \(\mathcal{R}\)_-foliation._
The notation \((\mathcal{F},F)\) for a principal \(\Gamma_{\mathcal{R}}\)-bundle may seem strange, but we keep it for consistency. It also feels natural if we think of \((\mathcal{F},F)\) as fibrewise \(\mathcal{R}\)-foliations, varying continuously (for the etale topology) with \(S\).
Even if \(M\) is smooth, this definition is more general than Definition 1.8. The difference is that the \(\Gamma_{\mathcal{R}}\)-microbundles introduced in Definition 1.8 are the _smooth_ ones. We discuss this in Subsection 4.4.4.
#### 4.4.2. Elementary observations
As observed in Example 4.3, restricting a \(\Gamma_{\mathcal{R}}\)-microbundle \((E,\mathcal{F},F)\) to a fibre \(E_{x}\) produces a germ of solution of \(\mathcal{R}\). Moreover, if the base is a manifold and \(\mathcal{F}\) is regular, we can also restrict \((\mathcal{F},F)\) to the zero section, yielding an \(\mathcal{R}\)-foliation. In that case, the holonomy of \(\mathcal{F}\) identifies the fibres of \(E\) with the normal bundle of \(\mathcal{F}|_{M}\). Because of this, it is customary to call \(E\) the **normal bundle** of \((E,\mathcal{F},F)\).
Much like cocycles, \(\Gamma_{\mathcal{R}}\)-microbundles \((E,\mathcal{F},F)\) admit arbitrary pullbacks \((g^{*}E,G^{*}\mathcal{F},G^{*}F)\) by continuous maps \(g:N\to S\). Effectively, we are pulling back the germ of \(\mathcal{R}\)-solution in \(E_{g(x)}\) and putting it over \(x\in N\).
We can once again speak of \(K\)**-concordances** as \(\Gamma_{\mathcal{R}}\)-microbundles over \(S\times K\) with underlying microbundle \(E\times K\).
#### 4.4.3. From principal bundles to microbundles
We now explain our earlier claim that microbundles are associated bundles of principal bundles.
**Lemma 4.11**.: _A principal \(\Gamma_{\mathcal{R}}\)-bundle on a space \(S\) is the restriction to \(S\) of some \(\Gamma_{\mathcal{R}}\)-microbundle._
Proof.: Write \(\pi\) for the submersion \(\operatorname{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\to\mathbb{R}^{n}\). Write \(\rho_{ij}:U_{i}\cap U_{j}\to\Gamma_{\mathcal{R}}\) for the transitions of the \(\Gamma_{\mathcal{R}}\)-structure. These are germs of solutions and diffeomorphisms and we can take representatives to yield:
* opens \(V_{i}\subset\mathbb{R}^{n}\),
* embeddings \(g_{ij}:V_{i}\to\mathbb{R}^{n}\),
* and solutions \(s_{i}:V_{i}\to\Psi(V_{i})\) of \(\mathcal{R}\),
such that \(\pi\circ\rho_{ii}(U_{i})\subset V_{i}\), \(\rho_{ij}\) is a lift of \(g_{ij}\) via \(\pi\), and \(g^{*}_{ij}s_{j}=s_{i}\).
We then define \(E=\sqcup_{i}U_{i}\times V_{i}/\sim\) where we define \((i,x,a)\sim(j,y,b)\) if and only if \(x=y\) and \(b=g_{ij}(a)\). By construction this space projects to \(S\) with disc fibres. Moreover, there is a continuous section \(\phi:S\to E\) given over \(U_{i}\) by \(\pi\circ\rho_{ii}\). Then \(E\) becomes a microbundle with zero section \(\phi(S)\).
In the aforementioned charts we immediately have maps into \(\Gamma_{\mathcal{R}}\) which piece together to a \(\Gamma_{\mathcal{R}}\)-cocycle around \(\phi(S)\). By construction, its restriction to \(\phi(S)\) is the cocycle we began with.
In the proof of Lemma 4.11 we built an actual bundle with an actual cocycle close to the zero section. Taking such representatives is often very useful, so we observe:
**Lemma 4.12**.: _Every \(\Gamma_{\mathcal{R}}\)-microbundle \((E,\mathcal{F},F)\) extends to a principal \(\Gamma_{\mathcal{R}}\)-bundle on a neighbourhood of the zero section._
#### 4.4.4. Smooth microbundles
In order to study spaces of principal \(\Gamma_{\mathcal{R}}\)-bundles, it turns out to be sufficient to work in the smooth setting:
**Lemma 4.13**.: _Let \(M\) be a smooth manifold. Any element in \(c\in H^{1}(M,\Gamma_{\mathcal{R}})\) is concordant to a smooth one \(c^{\prime}\)._
Proof.: We can first apply Lemmas 4.11 and 4.12 to \(c\), yielding (a representative of) a \(\Gamma_{\mathcal{R}}\)-microbundle \((E,\mathcal{F},F)\) restricting to \(c\). If we inspect the proof of Lemma 4.11, we see that \(E\) is smooth, but the zero section \(\phi(M)\) need not be. This is because the canonical section \(\phi\) is just continuous.
We address this by smoothing \(\phi\), which can be achieved while staying a section. Once that is done, it is moreover possible to use a fibrewise exponential to see \(E\) as a subset of a vector bundle. This produces a \(\Gamma_{\mathcal{R}}\)-microbundle in the sense of Definition 1.8. Restriction to \(M\) yields a smooth element \(c^{\prime}\) in \(H^{1}(M,\Gamma_{\mathcal{R}})\). Moreover, the interpolation between \(\phi\) and its smoothing provides a concordance between \(c\) and \(c^{\prime}\).
Observe that, since concordances are themselves principal \(\Gamma_{\mathcal{R}}\)-bundles, this lemma applies immediately to the parametric setting.
Moreover, one of the corollaries of the proof of Lemma 4.13 is that we can also smooth microbundles (and their concordances):
**Corollary 4.14**.: _Let \(M\) be a smooth manifold. Any \(\Gamma_{\mathcal{R}}\)-microbundle is concordant to a smooth one (i.e. to a microbundle in the sense of Definition 1.8)._
#### 4.4.5. The space of microbundles
We proved in Lemma 4.11 that we can associate a microbundle to every principal \(\Gamma_{\mathcal{R}}\)-bundle. The previous discussion says that the microbundle can be assumed to be smooth if the principal bundle is. One can interpret this more functorially as follows.
There is a topological groupoid morphism
\[\Gamma_{\mathcal{R}}\to\operatorname{GL}(n)\]
given by taking the differential of each germ of diffeomorphism. By functoriality of the classifying space construction, this yields a map
\[\eta:B\Gamma_{\mathcal{R}}\to B\operatorname{GL}(n)\]
that is precisely the classifying map of the normal bundle. For each manifold \(M\), this map specialises to
\[\eta:\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\to\operatorname{Maps}(M,B \operatorname{GL}(n)).\]
The space on the right has multiple components, each corresponding to an equivalence class of \(\operatorname{GL}(n)\)-bundles over \(M\). This motivates us to label each component as \(\operatorname{Maps}_{E}(M,B\operatorname{GL}(n))\), with \(E\) a rank-\(n\) vector bundle representing the corresponding class.
It follows that, if we are interested in \(\Gamma_{\mathcal{R}}\)-microbundles with a fixed underlying vector bundle \(E\), we should take the homotopy fibre of the map \(\eta\) over \(\operatorname{Maps}_{E}(M,B\operatorname{GL}(n))\). We denote the resulting space by \(\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})\). We can inspect the proof of Lemma 4.11 to show:
**Corollary 4.15**.: _Any map \(N\to\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})\) can be represented by a \(\Gamma_{\mathcal{R}}\)-microbundle on \(N\times M\) with underlying vector bundle \(N\times E\)._
**Remark 4.16**.: It is worth pointing out that \(\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})\) (the "space of \(\Gamma_{\mathcal{R}}\)-microbundle structures on \(E\)") is in general not weakly equivalent to the component of \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\) living over \(\operatorname{Maps}_{E}(M,B\operatorname{GL}(n))\), since the latter need not be contractible.
A geometric interpretation of this discrepancy is the following: Fix \(c\in H^{1}(M,\Gamma_{\mathcal{R}})\) represented by a smooth microbundle \((E,\mathcal{F},F)\). We see that \(c\) can also be represented by any \((E,\psi^{*}\mathcal{F},\psi^{*}F)\), with \(\psi\) a vector bundle automorphism of \(E\). The space \(\operatorname{Aut}(E)\) need not be contractible, so the fibre over \(c\) need not be weakly contractible either. \(\bullet\)
**Remark 4.17**.: When \(M\) is a point and thus there is a unique \(\mathbb{R}^{n}\)-bundle \(E\) over it, we have that \(\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})\) is the so-called classifying space \(B\overline{\Gamma_{\mathcal{R}}}\) of _framed_ principal \(\Gamma_{\mathcal{R}}\)-bundles. For the groupoid \(\Gamma^{n}\) of diffeomorphisms of \(\mathbb{R}^{n}\), it was classically studied by Mather [21], Thurston [30], and Haefliger, among others. \(\bullet\)
### Formal data
In this final subsection we define formal analogues of \(\Gamma_{\mathcal{R}}\), its principal bundles, and its microbundles. The reader should keep in mind that this is analogous to the passage from solutions of \(\mathcal{R}\) to formal solutions.
#### 4.5.1. The groupoid of formal solutions
We now want to define a groupoid classifying formal solutions of \(\mathcal{R}\). We consider four flavours that turn out to be equivalent. For this reason, we denote all of them by \(\Gamma_{\mathcal{R}}^{f}\). For some arguments we need a concrete model and, when that is the case, we state it explicitly.
**Definition 4.18**.: _The **large germ model** for \(\Gamma_{\mathcal{R}}^{f}\) is \(\Gamma_{\mathcal{R}}\) set-theoretically, endowed with the Whitney topology._
Let us elaborate. The base \(\operatorname{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\) is a space of germs of solutions, so it has a Whitney topology as discussed in Subsection 2.2. The space of arrows is the product \(\operatorname{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\times\Gamma^{n}\). Both terms are again spaces of germs so we take the Whitney topology in both. The result is a topological groupoid.
**Definition 4.19**.: _The (small) **germ model** for \(\Gamma^{f}_{\mathcal{R}}\) is the action groupoid:_
\[p_{r}^{-1}\mathcal{R}_{0}\rtimes J^{\mathrm{germ}}_{0}\Gamma^{n}.\]
I.e. the fibre of the large one over the origin in \(\mathbb{R}^{n}\). Observe that this is a complete transversal of the large one, so the two are Morita equivalent.
**Definition 4.20**.: _The **large jet model** for \(\Gamma^{f}_{\mathcal{R}}\) is the finite-dimensional action Lie groupoid \(\mathcal{R}(\mathbb{R}^{n})\rtimes{J^{r}\Gamma^{n}}\)._
I.e. it is the groupoid of \(r\)-jets of the large germ model.
**Definition 4.21**.: _The (small) **jet model** for \(\Gamma^{f}_{\mathcal{R}}\) is the finite-dimensional action Lie groupoid \(\mathcal{R}_{0}\rtimes{J^{r}_{0}\Gamma^{n}}\)._
I.e. it consists of \(r\)-jets at the origin of diffeomorphisms of \(\mathbb{R}^{n}\), acting on the fibre of \(\mathcal{R}\) over the origin. It is the fibre over \(0\in\mathbb{R}^{n}\) of the large jet model and, as such, it is a complete transversal and Morita equivalent to it.
These four groupoids have weakly equivalent spaces of principal bundles. We have already related the corresponding large and small models via Morita equivalence. To relate the germ models with the jet ones it suffices to observe there are topological groupoid morphisms between them, with weakly contractible fibres.
**Remark 4.22**.: This phenomenon of having different models for the groupoid of formal solutions, one much smaller than the others, applies as well to \(\Gamma_{\mathcal{R}}\). Indeed, if the solutions of \(\mathcal{R}\) have a local model (for instance, in the symplectic or contact settings), we can consider the groupoid of local automorphisms of said local model. This has now \(\mathbb{R}^{n}\) as base, instead of the whole of \(\operatorname{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\). It is not difficult to prove (see Subsection 4.6.2) that this simpler groupoid is Morita equivalent to \(\Gamma_{\mathcal{R}}\). \(\bullet\)
#### 4.5.2. The scanning map
The \(h\)-principle is interested in studying the **scanning map**
\[\tau_{\mathcal{R}}:\Gamma_{\mathcal{R}}\to\Gamma^{f}_{\mathcal{R}}, \tag{3}\]
which generalises the usual inclusion of solutions into formal solutions. For the large model, this is simply the inclusion. For the (small) model we translate everything to the origin. For the jet models we moreover take \(r\)-jets of each germ.
**Remark 4.23**.: We invite the reader to think of this map, in terms of the large model, as follows: The scanning map \(\tau_{\mathcal{R}}\) is just the identity, with the Whitney topology on the right and the etale one on the left. This is analogous to the inclusion map with target a Lie group \(G\) and source \(G^{\delta}\), the same group with the discrete (i.e. etale) topology. Principal \(G^{\delta}\)-bundles are \(G\)-bundles that are additionally flat. This is still true in our setting: principal \(\Gamma_{\mathcal{R}}\)-bundles are principal \(\Gamma^{f}_{\mathcal{R}}\)-bundles with a flat structure coming from the etale topology. This is particularly visible when we pass to \(\Gamma_{\mathcal{R}}\)-microbundles, which do have a foliation transverse to the fibres. \(\bullet\)
Due to the functoriality of the classifying space construction we also have a scanning map
\[B\Gamma_{\mathcal{R}}\to B\Gamma^{f}_{\mathcal{R}},\]
which we still denote by \(\tau_{\mathcal{R}}\) if needed. Similarly, we obtain classifying maps for the spaces of principal bundles and microbundles over a given manifold \(M\):
\[\operatorname{Maps}(M,B\Gamma_{\mathcal{R}}) \to\operatorname{Maps}(M,B\Gamma^{f}_{\mathcal{R}}), \tag{4}\] \[\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}}) \to\operatorname{Maps}_{E}(M,B\Gamma^{f}_{\mathcal{R}}). \tag{5}\]
Let us elaborate on the last item. Observe that \(\Gamma_{\mathcal{R}}\to\Gamma^{f}_{\mathcal{R}}\) factors the map \(\Gamma_{\mathcal{R}}\to\operatorname{GL}(n)\). Differently put, the associated microbundle (without \(\mathcal{R}\)-foliation) depends only on first order formal
data. This means that the map appearing in Equation 5 is the induced map on homotopy fibres over \(\operatorname{Maps}(M,B\mathrm{GL}(n))\) of the map in Equation 4. In particular, in order to understand the connectivity of Equation 4 we just need to understand the connectivity of Equation 5. Hence we reduce the study of principal \(\Gamma_{\mathcal{R}}\)-bundles to the study of \(\Gamma_{\mathcal{R}}\)-microbundles.
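Schematically, and merely restating the previous paragraph, the maps fit into a ladder over \(\operatorname{Maps}(M,B\mathrm{GL}(n))\): the middle row is Equation 4, the top row is Equation 5, and the top row is obtained from the middle one by taking homotopy fibres of the vertical columns:

\[\begin{array}{ccc}\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})&\longrightarrow&\operatorname{Maps}_{E}(M,B\Gamma^{f}_{\mathcal{R}})\\ \big\downarrow&&\big\downarrow\\ \operatorname{Maps}(M,B\Gamma_{\mathcal{R}})&\longrightarrow&\operatorname{Maps}(M,B\Gamma^{f}_{\mathcal{R}})\\ \big\downarrow&&\big\downarrow\\ \operatorname{Maps}(M,B\mathrm{GL}(n))&=&\operatorname{Maps}(M,B\mathrm{GL}(n))\end{array}\]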
#### 4.5.3. The Whitney topology in the space of principal \(\Gamma_{\mathcal{R}}\)-bundles
Using the scanning map in Equation 4, we can pullback the topology in \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}}^{f})\) to \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\). The resulting space we denote by \(\operatorname{Maps}^{\mathrm{Wh}}(M,B\Gamma_{\mathcal{R}})\). We say that this is the space of principal \(\Gamma_{\mathcal{R}}\)-bundles on \(M\), endowed with the **Whitney topology**. This space is in-between \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}}^{f})\) and \(\operatorname{Maps}(M,B\Gamma_{\mathcal{R}})\). Indeed, a family of maps into \(\operatorname{Maps}^{\mathrm{Wh}}(M,B\Gamma_{\mathcal{R}})\) consists of individual \(\Gamma_{\mathcal{R}}\)-bundles, but the family as a whole is only continuous in a Whitney sense (and thus possibly not continuous for the etale topology).
We can argue analogously for Equation 5 to produce the space \(\operatorname{Maps}_{E}^{\mathrm{Wh}}(M,B\Gamma_{\mathcal{R}})\) of \(\Gamma_{\mathcal{R}}\)-microbundles that vary continuously in a Whitney sense.
This leads to new scanning maps:
\[\operatorname{Maps}^{\mathrm{Wh}}(M,B\Gamma_{\mathcal{R}})\to\operatorname{ Maps}(M,B\Gamma_{\mathcal{R}}^{f}), \tag{6}\]
\[\operatorname{Maps}_{E}^{\mathrm{Wh}}(M,B\Gamma_{\mathcal{R}})\to\operatorname {Maps}_{E}(M,B\Gamma_{\mathcal{R}}^{f}), \tag{7}\]
where the second equation is obtained from the first by once again taking homotopy fibres over \(\operatorname{Maps}(M,B\mathrm{GL}(n))\). Recall that Theorem 1.17 states that Equation 6 is a weak equivalence. This will follow immediately once we show that Equation 7 is a weak equivalence.
It will be convenient to introduce the following notation: A family of \(\Gamma_{\mathcal{R}}^{f}\)-microbundles (or principal bundles) is **holonomic** over a set \(A\) (in parameter, domain, or both) if the restriction to \(A\) lifts to a Whitney family of \(\Gamma_{\mathcal{R}}\)-microbundles (or principal bundles).
### Tautological principal bundles
Some of our upcoming proofs make use of so-called _tautological principal bundles_ over our groupoids and classifying spaces, which we now introduce.
#### 4.6.1. The tautological \(\Gamma_{\mathcal{R}}\)-bundles
Recall that the etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) of solutions of \(\mathcal{R}\) on an \(n\)-manifold \(M\) carries a tautological solution \(\tau\) (Definition 2.4). It is immediate that \(\tau\) uniquely defines a **tautological principal \(\Gamma_{\mathcal{R}}\)-bundle** on \(\operatorname{EtSol}_{\mathcal{R}}^{M}\).
A concrete example of this is \(\operatorname{EtSol}_{\mathcal{R}}^{\mathbb{R}^{n}}\), the base of the groupoid \(\Gamma_{\mathcal{R}}\). In this case we readily see that the arrows act by automorphisms of the tautological solution. By inspecting the join construction we see that (the join model of) \(B\Gamma_{\mathcal{R}}\) inherits a tautological principal \(\Gamma_{\mathcal{R}}\)-bundle. Haefliger's classification statement Proposition 4.9 says that every principal \(\Gamma_{\mathcal{R}}\)-bundle over \(M\) is a pullback of the tautological one via a classifying map \(M\to B\Gamma_{\mathcal{R}}\).
#### 4.6.2. The groupoid over \(\operatorname{EtSol}_{\mathcal{R}}^{M}\)
The previous reasoning applies to any \(n\)-manifold \(M\), as we now explain. Consider the action groupoid \(\operatorname{EtSol}_{\mathcal{R}}^{M}\rtimes\operatorname{\overline{Diff}}^ {M}\) of germs of diffeomorphisms of \(M\) acting on germs of solutions. Let us denote it by \(\Gamma_{\mathcal{R}}^{M}\). When \(M=\mathbb{R}^{n}\) this is simply \(\Gamma_{\mathcal{R}}\). As above, we see that \(\Gamma_{\mathcal{R}}^{M}\) carries a tautological principal \(\Gamma_{\mathcal{R}}\)-bundle. One can check that this is in fact a bibundle providing a Morita equivalence between \(\Gamma_{\mathcal{R}}^{M}\) and \(\Gamma_{\mathcal{R}}\). This means that the spaces of principal bundles associated to the two of them are weakly equivalent.
#### 4.6.3. Tautological formal solutions
We now reason analogously in the formal setting. Consider \(\operatorname{EtSol}_{\mathcal{R}}^{M}\). Its tautological solution \(\tau\) induces a formal solution \(j^{r}\tau\), which we call the **tautological formal solution**. It is a principal \(\Gamma_{\mathcal{R}}^{f}\)-bundle for the jet models of \(\Gamma_{\mathcal{R}}^{f}\). If we want a principal \(\Gamma_{\mathcal{R}}^{f}\)-bundle for the germ models we can take \(\tau\) instead, but thinking of it as being continuous for the Whitney topology (and for the representative quasi-topology as well).
The tautological formal solution in \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) turns out to be the pullback of a **tautological principal \(\Gamma_{\mathcal{R}}^{f}\)-bundle** on \(\mathcal{R}(M)\). This requires using the jet models and follows immediately from the fact that
\(\mathcal{R}(M)\) is the base of the action groupoid \(\mathcal{R}(M)\rtimes J^{r}\mathrm{Diff}(M)\), which is Morita equivalent to the jet model of \(\Gamma^{f}_{\mathcal{R}}\). One can take this a step further and note that \(\mathcal{R}_{0}\), the fibre of \(\mathcal{R}(\mathbb{R}^{n})\) over the origin, carries a canonical \(\Gamma^{f}_{\mathcal{R}}\)-bundle because it is the base of the small jet model.
It follows that \(B\Gamma^{f}_{\mathcal{R}}\) has a principal \(\Gamma^{f}_{\mathcal{R}}\)-bundle that is tautological. The reason, as in the non-formal case, is the explicit nature of the join construction and the fact that the groupoid acts on the base by automorphisms of the principal bundle.
## 5. Wrinkling \(h\)-principles for Haefliger microbundles with transverse geometry
We now prove the results stated in the introduction regarding Haefliger microbundles, as well as their consequences regarding the space of principal \(\Gamma_{\mathcal{R}}\)-bundles and the connectivity of the classifying space \(B\Gamma_{\mathcal{R}}\).
### Existence of tangential \(\Gamma_{\mathcal{R}}\)-microbundles
Theorem 1.11 states that in the tangential case every \(\Gamma^{f}_{\mathcal{R}}\)-microbundle can be homotoped to produce a \(\Gamma_{\mathcal{R}}\)-microbundle when the starting datum comes from a formal solution in the base.
#### 5.1.1. Proof of Theorem 1.11
Consider the starting formal datum \(F:M\to\mathcal{R}\). It defines a principal \(\Gamma^{f}_{\mathcal{R}}\)-bundle. We write \(\exp(F)\) for the associated \(\Gamma^{f}_{\mathcal{R}}\)-microbundle.
According to Theorem 1.1 (and its Corollary 1.6), there is a homotopy of wrinkled submersions \((\psi_{t}:M\to M)_{t\in[0,1]}\) such that \(\psi_{0}\) is the identity and \(\psi_{1}\) lifts to a wrinkled submersion \(G:M\to J^{\mathrm{germ}}\Psi\) with \(p_{r}\circ G\) a holonomic approximation of \(F\) taking values in \(\mathcal{R}\). We can then find \(H_{t}:M\to\mathcal{R}\) such that \(H_{0}=F\), \(H_{t}\) lifts \(\psi_{t}\), and \(G\) lifts \(H_{1}\).
We can now pullback the tautological principal \(\Gamma^{f}_{\mathcal{R}}\)-bundle on \(\mathcal{R}\) via \(H_{t}\). This yields a principal \(\Gamma^{f}_{\mathcal{R}}\)-bundle on \(M\times[0,1]\) starting at \(\exp(F)\). Since \(G\) lifts \(H_{1}\), the principal \(\Gamma^{f}_{\mathcal{R}}\)-bundle on \(M\times\{1\}\) is the image via the scanning map of a principal \(\Gamma_{\mathcal{R}}\)-bundle. Lifting principal bundles to microbundles as in Lemma 4.11 proves the claim.
#### 5.1.2. A variation on the proof of Theorem 1.11
Consider the same \(\psi_{t}\) as above. We see it as a map \(\psi:[0,1]\times M\to M\) and use it to pullback the Haefliger microbundle \(\ker(d\exp)\) (without any additional formal \(\mathcal{R}\) structure). The result is a wrinkled concordance \(\mathcal{F}\), with \([0,1]\times TM\) as underlying vector bundle, starting at \(\ker(d\exp)\), and finishing at some \(\mathcal{F}_{1}\) having wrinkle singularities with respect to \(M\).
Sufficiently close to the zero section we have that \(\exp\circ\psi_{*}:[0,1]\times TM\to M\) is a submersion. By construction, the leaves of \(\mathcal{F}\) are the connected components of the fibres of the submersion. This is shown in Figure 3. Thanks to the submersivity condition, we can lift \(F\) to a \(\Gamma^{f}_{\mathcal{R}}\)-microbundle on \([0,1]\times M\) (i.e. fibrewise in \([0,1]\times TM\) we have, along the zero section, a fibrewise jet of solution of \(\mathcal{R}\)). We denote it by \(F_{t}\).
Simultaneously, since \(\psi_{1}\) lifts to \(G\), we can uniquely lift \(\exp\circ(\psi_{1})_{*}:TM\to M\) to a submersion \(\tilde{G}:TM\to J^{\mathrm{germ}}\Psi\). We use it to pullback the universal solution and thus yield a transverse \(\mathcal{R}\)-structure for \(\mathcal{F}_{1}\). We can now fibrewise homotope \(F_{1}\) to it using the fact that \(p_{r}\circ G\) was homotopic to \(F\) in \(\mathcal{R}\).
#### 5.1.3. Alternate proof of Corollary 1.4
Inspecting the previous proof we see that the leaf space \(N\) of \(\mathcal{F}_{1}\), if we restrict its domain to a sufficiently small neighbourhood of the zero section in \(TM\), is homotopy equivalent to \(M\). This follows from the explicit model of a wrinkled submersion. Moreover, \(N\) and \(M\) are, by construction, concordant via the leaf space of \(\mathcal{F}\). Lastly, \(N\) submerses onto \(M\) via the map \(\exp\circ(\psi_{1})_{*}\).
### Proof of Theorem 1.12: the full \(h\)-principle for tangential \(\Gamma_{\mathcal{R}}\)-microbundles
This theorem is the parametric and relative version of Theorem 1.11. Its proof amounts to repeating the argument in Subsection 5.1.1 with parameters, invoking instead Theorem 1.5, the parametric version of Theorem 1.1. There are nonetheless two key observations to be made.
The first is that Theorem 1.5 produces families in the etale space of solutions that are individually continuous for the etale topology and Whitney continuous in the parameter. This means that whenever we pullback the universal solution in \(\operatorname{EtSol}_{\mathcal{R}}^{M}\), we obtain families of principal \(\Gamma_{\mathcal{R}}\)-bundles that are continuous for the Whitney topology, but not necessarily for the etale one. This is however precisely what the statement claims.
The second observation is that the Haefliger microbundles we construct are nonetheless defined via pullback using a wrinkled family of submersions. It follows that the Haefliger microbundles, without \(\mathcal{R}\)-structure, form a concordance.
### Connectivity of classifying space
We will now tackle Theorem 1.14. The proof relies on a generalisation of Theorem 1.11, which amounts to dropping the tangential assumption.
#### 5.3.1. Existence of \(\Gamma_{\mathcal{R}}\)-microbundles
We show:
**Proposition 5.1**.: _Every \(\Gamma_{\mathcal{R}}^{f}\)-microbundle over a manifold \(M\) of dimension \(\dim(M)\leq n\) is homotopic to a \(\Gamma_{\mathcal{R}}\)-microbundle._
Proof.: It is known, due to work of Haefliger [17] on the connectivity of \(\Gamma\), that, under our dimensional assumptions, our starting \(\Gamma_{\mathcal{R}}^{f}\)-microbundle \((E,F)\) admits a flat connection (i.e. a foliation \(\mathcal{F}\) transverse to the fibres). However, compared to the proof of Theorem 1.11, the difference now is that \(\mathcal{F}\) need not be regular. It was transversality with respect to the zero section that allowed us to apply our wrinkling Theorem 1.1.
However, we can achieve transversality via Thurston's jiggling. As explained in the proof of Lemma 3.6, jiggling produces a piecewise smooth section \(s:M\to E\) that is now simplexwise transverse to \(\mathcal{F}\). We can then proceed as in the proof of Proposition 1.13. Inductively on the dimension of the simplices, we thicken each given simplex to a small disc still transverse to \(\mathcal{F}\). If the simplex is not top-dimensional, we can apply holonomic approximation to deform the formal datum \(F\) to a genuine \(\mathcal{R}\)-transverse structure. This is done relative to the smaller simplices and leaves the foliation itself untouched.
Figure 3. A depiction of the proof and conclusion of Theorem 1.11. On the left, the horizontal direction represents \(M\) and the vertical the etale space \(\operatorname{EtSol}_{\mathcal{R}}^{M}\). The zig-zag shown is the map \(G\) produced by Theorem 1.1. \(\operatorname{EtSol}_{\mathcal{R}}^{M}\) has a canonical Haefliger microbundle associated to its tangent bundle. A practical way to imagine it is as the pullback of \(TM\), foliated by the fibres of the exponential map. These are drawn as the little diagonal segments in turquoise. We can pullback this Haefliger microbundle to \(M\) itself, via the map \(G\). Since \(G\) is wrinkled, the pullback is not regular. It is depicted on the right, exhibiting wrinkled singularities. We can see \(G\) as a solution of \(\mathcal{R}\) in its leaf space.
If the simplex is top-dimensional, we apply Theorem 1.11, introducing additional wrinkles along the top-cells. To conclude, we smooth out \(s\). This will reintroduce all the singularities that \(M\) had with respect to the foliation when we began.
We will also need a relative version:
**Corollary 5.2**.: _Let \((E,F)\) be a \(\Gamma_{\mathcal{R}}^{f}\)-microbundle on \(M\). Assume that there is a closed \(M^{\prime}\subset M\) with a neighbourhood \(U\) such that \((E,F)|_{U}\) lifts to a \(\Gamma_{\mathcal{R}}\)-microbundle. Then \((E,F)\) is homotopic to a \(\Gamma_{\mathcal{R}}\)-microbundle relative to \(M^{\prime}\)._
Proof.: In the proof of Proposition 5.1, triangulate relative to a neighbourhood \(V\subset U\) of \(M^{\prime}\), leaving the structure there untouched.
Note that these statements imply:
**Corollary 5.3**.: _Let \(M\) be a manifold of dimension \(m\leq n\). The scanning map_
\[\operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}})\to\operatorname{Maps}_{E} (M,B\Gamma_{\mathcal{R}}^{f})\]
_is \((n-m)\)-connected._
And thus also Corollary 1.15.
#### 5.3.2. Proof of Theorem 1.14
We want to show that the scanning map \(\tau_{\mathcal{R}}:B\Gamma_{\mathcal{R}}\to B\Gamma_{\mathcal{R}}^{f}\) is \(n\)-connected, where \(n\) is the dimension of \(\mathcal{R}\).
We first prove surjectivity in the \(i\)th homotopy group, with \(i\leq n\). Represent a given homotopy class in \(B\Gamma_{\mathcal{R}}^{f}\) by a map \(\mathbb{S}^{i}\to B\Gamma_{\mathcal{R}}^{f}\). This map corresponds to a unique principal \(\Gamma_{\mathcal{R}}^{f}\)-bundle on \(\mathbb{S}^{i}\), up to homotopy. By Lemma 4.13 this principal bundle may be assumed to be smooth and represented by a \(\Gamma_{\mathcal{R}}^{f}\)-microbundle. We then apply Proposition 5.1 to find a homotopy to a \(\Gamma_{\mathcal{R}}\)-microbundle, which restricts to the desired principal \(\Gamma_{\mathcal{R}}\)-bundle.
The proof of injectivity for \(i<n\) is similar. We take a disc \(\mathbb{D}^{i+1}\) of principal \(\Gamma_{\mathcal{R}}^{f}\)-bundles that restricts to the boundary as a sphere family of \(\Gamma_{\mathcal{R}}\)-bundles. We can now argue as above, but using instead Corollary 5.2 relative to the boundary.
### Full \(h\)-principle for \(\Gamma_{\mathcal{R}}\)-microbundles with the Whitney topology
We will now prove Theorem 1.17. We begin by stating and proving the parametric and relative version of Proposition 5.1:
**Proposition 5.4**.: _Let \(M\) be a manifold with \(\dim(M)\leq n\). Let \(K\) be a compact manifold serving as parameter space. Let \((E,\mathcal{F}_{k},F_{k})\) be a \(K\)-family of \(\Gamma_{\mathcal{R}}^{f}\)-microbundles over \(M\) such that \(F\) is holonomic over a neighbourhood of a closed subset \(M^{\prime}\subset M\) and whenever \(k\) belongs to a neighbourhood of a closed subset \(K^{\prime}\subset K\)._
_Then, there exists a Whitney continuous family of \(\Gamma_{\mathcal{R}}\)-microbundles \((E,\mathcal{F}_{k}^{\prime},F_{k}^{\prime})\) that is homotopic to \((E,\mathcal{F}_{k},F_{k})\), relative to \(M^{\prime}\) and \(K^{\prime}\)._
Proof.: We prove this analogously to Proposition 5.1 by introducing parameters. The one subtlety is that we now triangulate the product \(M\times K\) in a manner that is transverse to the fibres of \(M\times K\to K\). This implies that the simplices have a nice fibered nature and we can thus apply holonomic approximation in all smaller dimensional cells. Theorem 1.12 is invoked to deal with the top cells.
Since this is a full \(h\)-principle, it implies:
**Corollary 5.5**.: _Let \(M\) be a manifold of dimension \(m\leq n\) and \(E\to M\) a vector bundle. The scanning map_
\[\operatorname{Maps}_{E}^{\operatorname{Wh}}(M,B\Gamma_{\mathcal{R}})\to \operatorname{Maps}_{E}(M,B\Gamma_{\mathcal{R}}^{f})\]
_is a weak equivalence._
By our discussion in Subsection 4.5.3, this immediately implies Theorem 1.17. | Wrinkling is an \(h\)-principle technique introduced by Eliashberg and Mishachev. It can be used to prove that formal solutions of a partial differential relation \(\mathcal{R}\) can be deformed into solutions with simple/controlled singularities. The precise shape of these singular solutions depends on the context, but the basic idea is that they are objects whose singularities are simple enough to be controlled. Haefliger structures were introduced by Haefliger as singular analogues of foliations. Foliations are locally modeled on submersions, whereas Haefliger structures are modeled on arbitrary maps. Because of this, Haefliger structures are better behaved than foliations: for example, they can be pulled back by arbitrary maps, and they also admit a classifying space. |
2309.03177 | 3D Object Positioning Using Differentiable Multimodal Learning | This article describes a multi-modal method using simulated Lidar data via
ray tracing and image pixel loss with differentiable rendering to optimize an
object's position with respect to an observer or some referential objects in a
computer graphics scene. Object position optimization is completed using
gradient descent with the loss function being influenced by both modalities.
Typical object placement optimization is done using image pixel loss with
differentiable rendering only, this work shows the use of a second modality
(Lidar) leads to faster convergence. This method of fusing sensor input
presents a potential usefulness for autonomous vehicles, as these methods can
be used to establish the locations of multiple actors in a scene. This article
also presents a method for the simulation of multiple types of data to be used
in the training of autonomous vehicles. | Sean Zanyk-McLean, Krishna Kumar, Paul Navratil | 2023-09-06T17:30:26 | http://arxiv.org/abs/2309.03177v1 | # 3D Object Positioning Using Differentiable Multimodal Learning
###### Abstract
This article describes a multi-modal method using simulated Lidar data via ray tracing and image pixel loss with differentiable rendering to optimize an object's position with respect to an observer or some referential objects in a computer graphics scene. Object position optimization is completed using gradient descent with the loss function being influenced by both modalities. Typical object placement optimization is done using image pixel loss with differentiable rendering only, this work shows the use of a second modality (Lidar) leads to faster convergence. This method of fusing sensor input presents a potential usefulness for autonomous vehicles, as these methods can be used to establish the locations of multiple actors in a scene. This article also presents a method for the simulation of multiple types of data to be used in the training of autonomous vehicles.
Differentiable Rendering, Inverse Rendering, Lidar, Object Position Optimization, Gradient Descent, Sensor Fusion
## I Introduction
Differentiable rendering is an emerging technique in computer graphics that enables the calculation of gradients with respect to parameters in a 3D rendering pipeline. Recent advances in physics-based differentiable rendering allow researchers to generate realistic images by accurately representing light propagation through a scene. Differentiable rendering enables solving complex inverse rendering problems such as the optimization of 3D scene parameters via gradient descent [1].
Simultaneously, the integration of data from multiple sensor modalities, termed multi-modal sensor fusion, has captured researchers' attention due to its significance in the development of autonomous vehicles [2]. For instance, researchers fuse vision systems and Lidar (light detection and ranging) to enhance autonomous driving capabilities. Lidar is a remote sensing technology that uses laser pulses to measure distances to objects and create precise 3D representations of the surrounding environment. It works by emitting a laser beam that bounces off objects and returns to the sensor, allowing the creation of high-resolution 3D maps of the environment.
Lidar technology can be used with 3D object detection algorithms to identify and classify objects in the environment. 3D object detection algorithms analyze the Lidar data to identify the location, size, and shape of objects in the environment, such as cars or pedestrians. Lidar data can be combined with other sensor data, such as vision systems and radar, to provide a more complete and accurate picture of the environment and to help identify and track objects in real time. This technology has numerous applications, including self-driving cars, robotics, and urban planning.
We introduce a method for optimizing the placement of objects in a 3D graphics scene relative to a specified viewpoint leveraging multi-modal sensor fusion and differentiable rendering. In this inverse rendering setup, an object starts in an initial position within a 3D graphics scene. The goal is to transform the object's position to a predefined target location through optimization. The optimization hinges on an image loss comparison between the current object's rendered position (in terms of pixel values) and the target position's rendered image.
We employ differentiable rendering, which moves the object from its starting point to its target by using gradients of the rendered image relative to scene parameters. Our approach augments conventional differentiable rendering by incorporating Lidar data, allowing for 3D object detection and distance measurements from the sensor to the object. This depth sensing enhances the optimization by conveying object distances and positions relative to the viewpoint. The optimized object can be any visible element in the scene, such as a car or light. Alternatively, we can optimize for the observer location (camera) while visible objects remain fixed. The multi-modal fusion of vision and Lidar sensing facilitates precise 3D object positioning to match desired viewpoints.
## II Related Works
Our multimodal differentiable rendering method builds upon prior work on inverse graphics and sensor fusion. Mitsuba [3] and PyTorch3D [4] support gradient-based optimization of inverse rendering with ray tracing. These tools are enhanced and made possible by many advances in gradient-based methods for rendering, including Differentiable Monte Carlo Ray Tracing through Edge Sampling [5].
Differentiable rendering has been used in many ways, for example in Zakharov et al. [6] differentiable rendering was used to predict shapes and poses from image patches. Their pipeline combines vision and geometry losses for multimodal optimization.
There is a rich literature on sensor fusion for 3D understanding. For instance, Perception-Aware Multi-Sensor Fusion for 3D Lidar Semantic Segmentation [7] fuses appearance information from RGB images and spatial-depth information from point clouds to improve semantic segmentation.
Our method integrates insights from prior work to address multimodal inverse rendering tasks. The flexibility of differentiable rendering enables joint optimization over multiple data sources and loss functions.
## III Methods
At the core of our optimization is Mitsuba, a research-oriented differentiable renderer. Mitsuba leverages automatic differentiation via Dr.Jit [8] to enable gradient-based inverse rendering. We use Mitsuba to simulate both RGB images and Lidar data from 3D scenes. The built-in optimizers allow joint training over multimodal losses. To render a scene in Mitsuba, users specify parameters including integrator, max depth, samples per pixel, and sampler type. We use path tracing with a bidirectional path tracer integrator. The number of samples per pixel is set to 16 to reduce Monte Carlo noise. The gradients from the renderer are used to iteratively refine scene parameters like camera pose, lighting, and materials using the Adam optimizer.
We use Mitsuba's Reparameterized Path Replay Backpropagation integrator [9] for differentiable rendering. This technique performs integration over high-dimensional lighting and material spaces. It provides an efficient path tracer that handles discontinuities in 3D scenes. We set the max path depth to 3 for our experiments and used 16 samples per pixel. These values balance computation time and rendering quality for our purposes. Using more samples and a higher max depth yields more photorealistic results at increased cost. We set the sampler type to independent, which draws uncorrelated noise samples. Experiment resources and code can be found on Github.1.
Footnote 1: [https://github.com/szanykmclean/differentiable_multimodal_learning](https://github.com/szanykmclean/differentiable_multimodal_learning)
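As a concrete illustration, the following is a minimal sketch of how such a configuration might look with Mitsuba 3's Python bindings. The integrator, sampler, sample counts, and film size follow the values stated above; the scene contents (mesh filename, emitter, camera target) are illustrative assumptions, not the exact scene used here.

```python
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")  # a differentiable Mitsuba 3 variant

# Reparameterized path replay backpropagation ("prb_reparam"), max depth 3,
# independent sampler, 16 samples per pixel, 300x200 film.
scene = mi.load_dict({
    "type": "scene",
    "integrator": {"type": "prb_reparam", "max_depth": 3},
    "sensor": {
        "type": "perspective",
        "to_world": mi.ScalarTransform4f.look_at(
            origin=[20, 13, 23],  # initial camera location from the text
            target=[0, 0, 0],     # assumed: camera aimed at the scene origin
            up=[0, 1, 0],
        ),
        "film": {"type": "hdrfilm", "width": 300, "height": 200},
        "sampler": {"type": "independent", "sample_count": 16},
    },
    "car": {"type": "ply", "filename": "car.ply"},  # assumed mesh file
    "light": {"type": "constant"},                  # homogeneous background
})

image = mi.render(scene, spp=16)
```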
We use a simple scene containing a car model on a homogeneous background for our experiments. This setup is motivated by autonomous driving applications, where 3D detectors are often trained to locate cars. The objective is to optimize the camera pose with respect to the stationary car object. We fix the car's position and orientation and optimize the camera location and orientation to match target renderings. This inverse rendering approach could also be applied to optimize object poses given fixed camera intrinsic and extrinsic parameters. While simple, this scene captures key challenges in multimodal inverse rendering. Optimizing camera pose requires reasoning about viewpoint, visibility, lighting, and materials. The homogeneous background enables the isolation of the car as the primary focus. More complex scenes could incorporate detailed environments and multiple objects. The differentiable rendering approach provides a principled methodology to handle complex scenarios with multiple objects, occlusion, and background clutter. Overall, this controlled setup provides a strong testbed for multifaceted inverse rendering of a central 3D object.
### _Lidar Data_
In order to generate Lidar data, we use the built-in ray-tracing functionality of the Reparameterized Path Replay Backpropagation Integrator. During rendering, the ray intersections at a depth of 0 are recorded and written to a text file. Each ray intersection is an instance of light bouncing off an object in the scene, and an \((x,y,z)\) coordinate is recorded. All the intersection points are used together to create a simple point cloud without intensity data. This point cloud effectively simulates Lidar data, a common data modality in autonomous driving. Lidar data allows for distance estimation. This data is fed to a large pre-trained 3D object detection network, allowing the system to utilize distance measures from the camera to the car.
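The paper records depth-0 intersections inside the integrator itself; the standalone sketch below reproduces the effect by casting a grid of rays explicitly. The angular ranges and the output file name are assumptions for illustration.

```python
import numpy as np
import mitsuba as mi

mi.set_variant("scalar_rgb")  # scalar variant: one ray at a time, plain floats

def simulate_lidar(scene, origin, n_azimuth=360, n_elevation=32):
    """Cast a grid of rays from `origin` and record first-hit (x, y, z) points."""
    points = []
    for el in np.linspace(-0.3, 0.3, n_elevation):  # elevation band (rad), assumed
        for az in np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False):
            d = (np.cos(el) * np.cos(az), np.sin(el), np.cos(el) * np.sin(az))
            ray = mi.Ray3f(o=mi.Point3f(*origin), d=mi.Vector3f(*d))
            si = scene.ray_intersect(ray)           # first surface interaction
            if si.is_valid():                       # the ray actually hit geometry
                points.append((si.p.x, si.p.y, si.p.z))
    return np.asarray(points)

# points = simulate_lidar(scene, origin=(20.0, 13.0, 23.0))
# np.savetxt("lidar_points.txt", points)  # one (x, y, z) triple per row
```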
### _3D Object Detection_
After the Lidar data is generated, a 3D object detection algorithm is used to generate a bounding box for the car located in the scene. The detection algorithm used is PointPillars [10] which is an encoder that utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). The algorithm is pre-trained on the KITTI 3D object detection dataset [11], a large and commonly used autonomous vehicle dataset made available by the Intelligent Systems Lab Organization [12]. This pre-training allows the algorithm to detect objects during inference, specifically cars, buses, and pedestrians. Point Pillars inference is applied to the text file containing Lidar data in \((x,y,z)\) format generated via the previous section.
The algorithm detects a car object in the scene and generates a bounding box around it. Detection quality varies with the target camera location and the resulting Lidar point cloud, and the algorithm will not successfully detect the car in every inference. This makes intuitive sense: a car that is far away, or at an unusual position with respect to the camera, produces altered Lidar data that may become unrecognizable to the system. In practice, the system works well in simple scenes, and the method presented utilizes the bounding box with the highest confidence score for a predicted car class. This establishes the \((x,y,z)\) location of the car in the scene and the distance from the camera location to the car.
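The exact output format of a PointPillars implementation varies between code bases, so the sketch below assumes generic arrays of boxes, labels, and scores and implements only the highest-confidence selection described above.

```python
import numpy as np

def select_car_box(boxes, labels, scores, car_label=0):
    """Return the (x, y, z) center of the highest-confidence car detection.

    boxes:  (N, 7) array of KITTI-style boxes (x, y, z, dx, dy, dz, yaw)
    labels: (N,) integer class ids; `car_label` is the id meaning "car"
    scores: (N,) confidence scores
    Returns None when no car is detected in the point cloud.
    """
    mask = labels == car_label
    if not np.any(mask):
        return None
    best = np.argmax(np.where(mask, scores, -np.inf))
    return boxes[best, :3]
```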
### _Initial and Target Camera Locations_
Fig. 1: Lidar data generated via Mitsuba ray-tracing. The variation in the colors of the points is used to improve the visualization of the Lidar data.

The experiment's goal is position optimization, which involves moving an object from an initial position to a target position with respect to the objects and observers in the scene. An initial and target object (camera) location was used with the simple 3D computer graphics scene containing one car object to experiment and show the utility of using multi-modal data for object position optimization. The initial camera location has an \((x,y,z)\) coordinate of \((20,13,23)\) and the target camera location is \((8,5,14)\). The target camera location was translated by \((12,8,9)\) to obtain the starting point of the optimization loop. The values are unitless as this is a simulated computer vision scene. In the images below, one can see that the initial camera location is much further away from the car and that the car is not centered in the camera view. The translation of the camera location from the target to the initial location at the start of the optimization moves the camera further away from the car in the scene. The target camera location is a distance of 16.93 units away from the center of the established bounding box for the car. The initial location is further away, with a distance of 32.51 units from the car, almost twice the distance from the target location to the car. This initial location was chosen to effectively show how the distance loss with Lidar data can improve the optimization. No rotations were applied to the target camera location in order to keep the optimization simple and allow for convergence.
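A quick sanity check of the stated coordinates (the bounding-box center itself is not reported, so only the translation and the camera-to-camera separation can be verified directly):

```python
import numpy as np

initial = np.array([20.0, 13.0, 23.0])
target = np.array([8.0, 5.0, 14.0])

# The initial location is the target translated by (12, 8, 9), as stated.
assert np.allclose(initial - target, [12.0, 8.0, 9.0])

# The two camera locations are exactly 17 units apart; by the triangle
# inequality this is consistent with the reported distances to the car's
# bounding-box center: |32.51 - 16.93| = 15.58 <= 17.
print(np.linalg.norm(initial - target))  # -> 17.0
```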
### _Object Position Optimization_
In order to move the object position to the target position, the system utilizes Adam [13], a first-order gradient-based optimization method, with a learning rate of 0.15. This learning rate was selected as it resulted in optimal convergence given the desired iteration limit. For the experiments, 30 iterations of gradient descent were used, and at each step, the object's position was moved based on the gradients of the loss function. In each iteration, the transformation computed from the previous gradients is first applied to the object's position, and a new image is rendered from this position using Mitsuba. The new \((x,y,z)\) position is then compared to the detected car bounding-box location: the distance from the current position to the car is computed and compared to the distance from the target position to the car. A loss function compares these two distances, and this distance loss is combined with a pixel-wise image loss between the target image and the image rendered at the current position. Together, the two losses steer the object toward the target at each step of gradient descent, weighing the optimization to consider not only the gradients of the pixels with respect to scene parameters but also the current distance of the object from the car, obtained from the simulated Lidar data. This helps the object avoid moving in the wrong direction during the optimization, as the camera can otherwise lose the car off-screen while searching for a lower image loss.
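A sketch of what this loop might look like with Mitsuba's built-in Adam optimizer; `render_at`, `target_image`, `bbox_center`, and `d_target` are hypothetical placeholders, and the loss helpers are defined in the next subsection.

```python
import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

opt = mi.ad.Adam(lr=0.15)                      # learning rate from the text
opt["cam_pos"] = mi.Point3f(20.0, 13.0, 23.0)  # start at the initial location

for it in range(30):                           # 30 gradient-descent iterations
    # `render_at` is a hypothetical helper that rebuilds the sensor pose
    # from the current camera position and renders the scene.
    image = render_at(scene, opt["cam_pos"])

    # Joint loss: pixelwise MSE against the target image plus the
    # Lidar-derived distance term (definitions in the next subsection).
    loss = joint_loss(image, target_image, opt["cam_pos"], bbox_center, d_target)

    dr.backward(loss)                          # reverse-mode AD through rendering
    opt.step()                                 # Adam update of the camera position
```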
### _Joint Loss Function_
One component of the joint loss function is distance loss, derived from the Euclidean distance formula. Two distances are calculated and compared. The first, denoted \(d_{c}\), is from the current object location \((x_{c},y_{c},z_{c})\) to the center of the bounding box \((x_{b},y_{b},z_{b})\) detected in the previous object detection stage. The second, denoted \(d_{t}\), is from the target object location \((x_{t},y_{t},z_{t})\) to the same bounding-box center and is computed with the same formula:
\[d_{c}=\sqrt{\left(x_{c}-x_{b}\right)^{2}+\left(y_{c}-y_{b}\right)^{2}+\left(z _{c}-z_{b}\right)^{2}}\]
Fig. 3: Initial camera location is shown in the top image. The target camera location is shown in the bottom image.
Fig. 2: PointPillars object detection using Lidar data. The bounding box and arrow show a detected car object oriented towards the direction of the arrow.
These two distances are compared using Root Mean Squared Error (RMSE), scaled by a scalar value \(\alpha\), to calculate the loss \(L_{d}\); note that for a single pair of distances the RMSE reduces to the absolute difference \(|d_{c}-d_{t}|\). The other component of the joint loss function is image loss. The number of pixels in an image rendered during the optimization is denoted as \(N\) and is simply defined as \(N=l\cdot w\), where \(l\) is the length of the image and \(w\) is the width. In the experiments, \(l=200\) and \(w=300\). At each step during the optimization, image loss is computed by comparing the currently rendered image to the target image at each corresponding pixel value. The image loss function outputs one scalar value, the Mean Squared Error (MSE). In the loss function, \(x_{j}\) is the current image pixel value at index \(j\) and \(\hat{x}_{j}\) is the target image pixel value at index \(j\). The resulting function for image loss is defined as \(L_{i}\). Both loss components are defined below:
\[L_{d}=\alpha\cdot\sqrt{(d_{c}-d_{t})^{2}}\]
\[L_{i}=\frac{\sum_{j=1}^{N}(x_{j}-\hat{x}_{j})^{2}}{N}\]
The scalar \(\alpha\) is used to help weigh the importance of distance during the optimization. Finally, the joint loss function is defined as:
\[L=L_{i}+L_{d}\]
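A direct transcription of these definitions into Dr.Jit-style Python (the argument types are assumed to be differentiable Dr.Jit arrays; the code mirrors the fact that the single-pair RMSE equals the absolute difference):

```python
import drjit as dr

def distance_loss(cam_pos, bbox_center, d_target, alpha=10.0):
    # d_c: Euclidean distance from the current camera position to the
    # detected bounding-box center. For a single pair of values the RMSE
    # sqrt((d_c - d_t)^2) is exactly the absolute difference.
    d_c = dr.norm(cam_pos - bbox_center)
    return alpha * dr.sqrt(dr.sqr(d_c - d_target))

def image_loss(image, target_image):
    # Pixelwise mean squared error over all N = l * w pixels.
    return dr.mean(dr.sqr(image - target_image))

def joint_loss(image, target_image, cam_pos, bbox_center, d_target, alpha=10.0):
    # L = L_i + L_d, exactly as defined above.
    return image_loss(image, target_image) + distance_loss(
        cam_pos, bbox_center, d_target, alpha
    )
```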
This loss function will be used in multiple experiments to assess the usefulness of the multi-modal method and will be compared with baseline optimization methods. One experiment will utilize this joint loss \(L\) during the entire optimization. Another experiment, shown in Fig. 4, will utilize a two-stage loss function: the joint loss is used while the object (camera) is more than a user-defined threshold away from the target distance, after which the optimization switches to a second stage where the simple image loss alone is used to optimize the location of the car in the image.
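A minimal sketch of the stage switch, assuming a helper consulted once per iteration; the 2.0-unit threshold is the value reported in the Results section below.

```python
def two_stage_alpha(d_c, d_target, threshold=2.0, alpha=10.0):
    # Stage 1: while the camera is further than `threshold` from the target
    # distance, keep the joint loss (alpha = 10 in the experiments).
    # Stage 2: once within `threshold`, set alpha to 0 so that only the
    # image loss guides the remaining iterations.
    return alpha if abs(d_c - d_target) > threshold else 0.0
```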
## IV Results
Experiments were conducted using the previously mentioned methods as well as baseline methods for comparison and establishing performance. After running four separate experiments with different loss functions, the two-stage loss method is able to converge in the shortest amount of iterations and to the best location. This experiment is described below and utilizes the presented method of multi-modal optimization.
### _Image Loss_
One experiment conducted was to use only image loss for the inverse rendering problem. This is a common out-of-the-box method and establishes a baseline performance. The results for this method clearly show that the optimization process will at first move further away from the target object location due to the gradients of the image with respect to the scene parameters. This is clearly sub-optimal behavior. Towards the end of the optimization, the camera location moves in the correct direction; however, it takes several iterations to begin converging in the appropriate direction.
### _Distance Loss_
Fig. 4: Two-stage joint loss diagram.

Fig. 5: Image loss optimization. The final distance from the car bounding box to the camera was 33.7 compared to a target distance of 16.9.

Another experiment conducted was to use only distance loss for the inverse rendering problem. If the translation from the target location to the initial location were simply an equally scaled move in every \((x,y,z)\) direction, then, in theory, this method would work very efficiently and be optimal. However, from the results, it is clear this is sub-optimal. The optimization only uses distance as the guiding metric for object position, so the location and pixels of the car are ignored; the object moves towards the target only to find the optimal distance from the car and is unable to find the target location.
### _Joint Loss_
To test the presented method of camera optimization, an experiment using the joint loss \(L\), which takes both image loss and distance loss into account, is presented here. Using both losses to optimize object position clearly leads to lower image loss and a more optimal camera location than the previous two methods, and it also leads to faster optimization. One clear issue with this method is that distance loss as a guide for optimization works well when the camera is relatively far from the target distance; when the camera is already close to it, this part of the joint loss seemingly forces the optimization out of the correct location. Distance loss prevents the system from making the necessary small \((x,y,z)\) coordinate adjustments that may slightly increase the distance loss but would allow the image loss to find the correct location. This is evident in the car being slightly out of frame in the optimized image in Fig. 7. The value of \(\alpha\) selected for this optimization is 10, chosen based on experimentation to balance the two components of the loss function. The image comparison heat map shows some overlap between the initial and target car locations; however, there is still room for improvement.
### _Two-Stage Joint Loss_
To solve the issues with the joint loss optimization, a new experiment was conducted using the two-stage loss. For the first part of the optimization, the joint loss method was used. Once the camera location reaches a distance that is less than a user-defined distance threshold, the optimization sets \(\alpha\) to 0 and is then guided only by the image loss method from the first experiment. Before the threshold is reached, the value of \(\alpha\) is again set to 10. The threshold distance where \(\alpha\) is set to 0 is also user selected and, for this experiment, it was set to 2.0 units. This threshold was selected after experimenting with thresholds in the range of 1.0 to 5.0 and discovering that this value led to optimal performance. Setting the threshold too high led to slower convergence, and setting it too low led to a similar performance as the Joint Loss method. This method avoids the issues of using distance loss from a close camera distance and allows for fast optimization with near-optimal image loss and therefore camera location. It is clear from the results that this method is the most effective for differentiable camera optimization. The image comparison heat map shown in Fig. 8 helps show the performance of this method, as the cars nearly perfectly overlap, achieving the best result of any of the experiments.

Fig. 6: Distance loss optimization. The final distance from the car bounding box to the camera was 17.2 compared to a target distance of 16.9.

Fig. 7: Joint loss optimization. The final distance from the car bounding box to the camera was 17.1 compared to a target distance of 16.9.
## V Discussion
The results in the previous section clearly show there are advantages to using a joint loss function that considers not only image pixel values but also a Euclidean distance metric. The results obtained vary heavily with scene selection, parameters, and object detection quality. For instance, while testing this method with more realistic images and scenes, the object position optimization would often fail to converge at all. This is an issue with non-uniform backgrounds and images, which make it extremely hard for optimal solutions to be found. This is suspected to be because comparing pixel values against non-homogeneous backgrounds can lead to image pixel losses that vary heavily and can significantly increase even when the object position optimization is moving towards the correct location. In other words, complicated, realistic scenes cause the data to be very noisy and lack a sufficient signal for optimization.
### _Scene Selection_
The scene used in the experiments and optimization used a translation to convert the target object position to the initial object position, however, no rotation was applied to the initial scene. This was purposefully left out to avoid too much complexity in the object optimization. Adding rotation in as a parameter to optimize for the camera can cause the optimization to fail where it otherwise might have successfully converged. Optimization of many scene parameters at once leads to difficulties and this is an area that could be explored further.
One important finding during experimentation is that object position optimization using only image data, and even using a multi-modal method, is more difficult in scenes with non-homogeneous backgrounds as well as scenes without sufficient lighting. These issues appear to stem from using the MSE of pixel data, which cannot establish useful loss values when most pixels are relatively dark or non-homogeneous and have similar values. For instance, the same car scene, given a much more realistic look with a background object and lighting, was difficult to converge on during experimentation; the object position optimization would often lose the reference object entirely. This highlights the importance of performing image processing and background masking with segmentation before object position optimization. Pre-processing of images is therefore a potential area for further exploration toward using this method in realistic scenes.
### _Hyperparameter Selection_
Results are also heavily affected by the hyperparameters. User-selected hyperparameters for these experiments include a learning rate of 0.15, a samples-per-pixel value of 16, \(\alpha = 10\), and a threshold of 2.0 for the two-stage loss. These hyperparameters were tuned by running many experiments and establishing baseline performance results. One further improvement related to this method would be establishing rules and metrics for choosing effective values of \(\alpha\) and of the threshold in the two-stage method. Finding the right proportion between the image and distance losses is important for optimal convergence: if one component is overweighted, the optimization will converge to a solution similar to using either the image loss or the distance loss alone. One difficulty with selecting \(\alpha\) is that image loss values can vary heavily from scene to scene.
Fig. 8: Two-stage joint loss optimization. The final distance from the car bounding box to the camera was 18.4 compared to a target distance of 16.9.
### _3D Object Detection_
PointPillars was used for object detection and bounding box generation, which is a very important part of this system. Establishing accurate bounding boxes on the simulated Lidar data matters because an inaccurate box around the car object in the scene leads to inaccurate distance measurements and convergence at a potentially incorrect location. For the purposes of the experiment, both the initial and target camera locations used the center bounding box location generated from the target camera location. This was done to avoid issues of incorrect 3D object detection when moving to the initial location. The PointPillars algorithm is heavily dependent on the point cloud data it is given: changing location and generating point clouds from the new viewpoint can cause the algorithm to miss objects, such as the car in the experiments.
The system could also work using 3D object detection and bounding box establishment from both locations; however, this is subject to more noise and potential differences in the locations or objects detected. Furthermore, the PointPillars model was pre-trained on the KITTI dataset, and it is clear from Fig. 2 that the bounding box does not perfectly enclose the car object. This is most likely due to differences between the training data and the Lidar data given at inference time. To further optimize this portion of the system, more robust 3D object detection algorithms could be tested and used. Another issue with these algorithms is that they can detect multiple instances of the same object where only one exists, which was observed during testing. To offset this, the reference object was taken to be the detection with the highest confidence and the correct classification.
## VI Conclusion
This paper presents a novel method for performing differentiable multi-modal object position optimization. The method utilizes both image data and synthesized Lidar data to inform the gradients during the optimization, leading to better convergence to the target object position in the experiments when compared with baseline methods. This method advances the performance of inverse rendering techniques and demonstrates ways to fuse multiple modalities to improve performance. Applications of this technology could include autonomous driving systems and robotics. These methods could improve state estimation and scene understanding for multiple vehicles in proximity to each other, especially if the optimization can be completed in a fast and computationally efficient manner on embedded devices.
This paper describes a multi-modal method for optimizing an object's position relative to an observer or a reference object through differentiable rendering, using simulated Lidar data together with an image pixel loss. The object position optimization is carried out by gradient descent, with a loss function informed by both modalities. Typical object placement optimization uses differentiable rendering alone; in this work, the use of a second modality (Lidar) achieves faster convergence. This approach to fusing sensor inputs presents potential applications for autonomous vehicles, and the method can be used to place multiple actors in a scene. The paper also presents methods for simulating the multiple kinds of data used to train autonomous vehicles.
2309.04618 | Simulation-driven engineering for the management of harmful algal and cyanobacterial blooms | Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and maritime waters, pose threats to natural environments by producing toxins that affect human and animal health. In the past, HABs have been assessed mainly by the manual collection and subsequent analysis of water samples and occasionally by automatic instruments that acquire information from fixed locations. These procedures do not provide data with the desirable spatial and temporal resolution to anticipate the formation of HABs. Hence, new tools and technologies are needed to efficiently detect, characterize and respond to HABs that threaten water quality. It is essential nowadays when the world's water supply is under tremendous pressure because of climate change, overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel framework for real-time monitoring and management of HABs. Its purpose is to support high-performance hazard detection with Model Based Systems Engineering (MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic environments. | José L. Risco-Martín, Segundo Esteban, Jesús Chacón, Gonzalo Carazo-Barbero, Eva Besada-Portas, José A. López-Orozco | 2023-09-08T22:13:48 | http://arxiv.org/abs/2309.04618v1 |

# Simulation-driven engineering for the management of harmful algal and cyanobacterial blooms
###### Abstract
Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and maritime waters, pose threats to natural environments by producing toxins that affect human and animal health. In the past, HABs have been assessed mainly by the manual collection and subsequent analysis of water samples and occasionally by automatic instruments that acquire information from fixed locations. These procedures do not provide data with the desirable spatial and temporal resolution to anticipate the formation of HABs. Hence, new tools and technologies are needed to efficiently detect, characterize and respond to HABs that threaten water quality. It is essential nowadays when the world's water supply is under tremendous pressure because of climate change, overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel framework for real-time monitoring and management of HABs. Its purpose is to support high-performance hazard detection with Model Based Systems Engineering (MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic environments.
Harmful Algal and Cyanobacterial Bloom, Modeling and Simulation, Cyber-Physical System, Internet of Things, Digital Twin, Discrete Event System Specification
## 1 Introduction
Harmful Algal and Cyanobacterial Blooms (HABs) constitute an especially relevant public health hazard and ecological risk, due to their frequent production of toxic secondary metabolites. Exposure to cyanotoxins, for instance, can cause severe health effects in humans and animals, as well as significant economic losses in local communities.
HABs typically emerge in a variety of freshwater ecosystems like reservoirs, lakes, and rivers [1]. Their intensity and frequency have increased globally during the last decade, mainly due to the current vulnerability of water resources to environmental changes, such as global warming, population growth, and eutrophication. For example, in 2014, a Microcystis HAB at the water treatment plant intake for Toledo (Ohio, USA) caused the distribution of non-potable water to more than 400,000 people during multiple days [2]. The danger is not limited to the closest water environment, since extracellular material from freshwater HABs has been observed in the water and the atmosphere at locations far beyond their edges.
During the last 30 years, the data needed to estimate the health of a water body and the possible existence of HABs have been obtained by specialized personnel through manual collection of water samples and subsequent analysis in the laboratory, and, in the best cases, by automatic instruments placed at fixed locations, that acquire data and, in very few cases, samples. Financial and personnel resource restrictions reduce the manual collection to the moments of the year when HABs are more likely to appear at a few geographical points and with minimal frequencies. The delay suffered by analytical results and the limited capacity to interpret the current scenario reduces the reaction (prediction, prevention, and mitigation) capability of the authorities responsible for the distribution of drinking water and its recreational uses [3]. This is critical when deploying Early-Warning Systems (EWSs), whose essential work is to collect water samples and identify the cyanobacterial cell or algae density as soon as possible. Hence, it is crucial to develop new cost-effective monitoring and early detection systems capable of predicting and anticipating when and where HABs form and produce toxins to provide support to water managers/authorities for guiding their policies and protecting the public health through the deployment of effective EWSs.
In this context, Modeling and Simulation (M&S) can be used to clarify the dynamics of HABs, as it has historically done in similar areas [4]. Numerical-based and data-driven machine learning models have been extensively used to simulate HABs in aquatic systems [5, 6]. These techniques try to reach accurate predictions through what we call _base models_. These models have been integrated into more generic software tools like the EE Modeling System (EEMS) [7]. Based on these models and tools, various countries have
attempted to build EWSs with the support of predictive systems [8].
Our vision is, however, oriented to a system-of-systems architecture, a more holistic and _integrative model_ that includes not only the use of the aforementioned _base models_ but also the infrastructure of the EWS. Figure 1 shows our conception of the simulation framework, tightly coupled to the represented Cyber-Physical Systems (CPS). As Figure 1 illustrates, our framework follows an Internet of Things (IoT)-based architecture through the use of Digital Twins (DTs). Water bodies are monitored in the edge layer by a set of sensors, including those onboard automated boats, hereafter called Unmanned Surface Vehicles (USVs), that continuously send data to the server at the nearest Ground Control Station (GCS) in the fog layer. There, domain experts can analyze the data, run models and tests, or plan the USVs' trajectories. The framework supports horizontal scalability, being able to add more water bodies with the support of a cloud layer, where authorities can compare different reports and make high-level decisions.
To simulate and operate this complex model, in this paper we propose DEVS-BLOOM, a novel M&S framework to enable real-time monitoring and hazard prediction of HABs. Our approach is based on the principles of Model Based Systems Engineering (MBSE): (i) model-based, since MBSE relies on models to represent and manage information about a system; (ii) system-centric, focusing on the system as a whole; (iii) an iterative and incremental process, which involves the development of models over time; (iv) collaboration between stakeholders, including system engineers, domain experts, etc.; (v) traceability between requirements, design, and implementation; (vi) reuse of models, components, and other artifacts to improve efficiency and reduce the risk of errors; and (vii) verification and validation to ensure that the system meets its requirements and operates as intended [9]. At the same time, we aim to provide high-performance real-time services, such as detecting outliers or executing complex forecasting methods. All this is achieved through the implementation of model-driven technologies and infrastructure based on the IoT and DT paradigms. As a result, we address three main topics in the sustainable management of water resources under the umbrella of model-driven technologies: (i) providing a robust interface to design intelligent HAB management system prototypes, (ii) providing vertical scalability, modeling the whole pyramidal structure from the sensors to the authorities, and (iii) providing horizontal scalability, being able to add more sensors and water bodies with the support of well-grounded M&S methodologies.
The main contributions of this work can be summarized as follows:
* We present a framework where we can model the water body, the infrastructure needed to monitor and manage HABs like sensors or USVs, the computing resources needed to control that infrastructure like workstations or cloud servers, and the actions performed by the human team like operators, domain experts, or water authorities.
* The model can be simulated in virtual mode to analyze the viability of the whole system; in hybrid mode, where some components are virtual, and others like actual sensors are co-simulated to test or calibrate these sensors; or in real mode, where the framework is not a simulator but a fully operational toolkit, where all the components are real.
* The framework supports horizontal scalability, allowing us to incorporate more water bodies, or vertical scalability, allowing us to elaborate more complex models. This is possible with the parallel or distributed execution of the framework, which the internal libraries automatically provide.
DEVS-BLOOM has been developed through the Discrete Event System Specification (DEVS) [10], a well known M&S formalism. To prove the feasibility of each scenario, the framework uses formal models. It can be fed with authentic or synthetic data. Following the MBSE methodology, DEVS-BLOOM has been designed with the main objective that any virtual component is built as a DT and can be replaced by its real-world counterpart [11].
The paper is organized as follows. In the following, we introduce the related work, focused on EWSs, models of HABs behavior, USVs trajectory planning, IoT simulators and all the elements required by the proposed framework. Next, we present the architecture of our framework based on a well-known M&S formalism. Then we illustrate the simulations performed to test our hypotheses and show the results obtained under different initial conditions. Finally, we draw some conclusions and introduce future lines of research.
## Related work
As stated above, HABs pose severe threats to natural environments. To properly detect, assess, and mitigate these threats in inland waters, it is essential to envision water management from the perspective of an integrative IoT-based early warning system. HAB-centric automated EWSs can effectively help to monitor and treat water bodies since, once deployed, mitigation techniques tailored to those systems can be better designed.
Current EWSs are supported by a comprehensive set of accurate _base models_ that describe the behavior of different elements, such as the dynamics of the water (due to currents and wind) and of the cyanobacteria (due to biological growth, their vertical displacements, and the water dynamics). There exists significant variability among base models. Eulerian models, for instance, have been used since 1970 to simulate eutrophication, water quality, and biogeochemical processes [12]. These models are composed of differential equations that simulate community dynamics in space. Lagrangian models introduce the possibility of adding different classes of particles with individualized properties, although conducting Lagrangian simulations with a large number of particles is a computationally intensive process [13]. Machine learning techniques can also be used to clarify the dynamics of HABs. Based on studies from 2008 to 2019, Chen _et al._ show in [14] numerous applications of machine learning models for predicting various water quality variables, such as salinity, pH, electrical conductivity, dissolved oxygen, ammonium nitrogen, etc. Finally, we may also find mechanistic or process-oriented aquatic models
based on knowledge of how target species respond to various ecosystem drivers like nutrient availability, thermal stratification, life cycle characteristics of species, etc. [15]. These models can be more appropriate than statistically based models for future predictions. However, they can be challenging because the incomplete knowledge introduced inside the models forces the incorporation of complex Bayesian networks, adding even more uncertainty to the models.
The previous base models are usually integrated inside more generic software tools with advanced Graphical User Interfaces (GUIs). For instance, EEMS [7] is a GUI that provides a broad range of pre-processing and post-processing tools to assist in developing, calibrating, and analyzing hydrodynamic, sediment-contaminant, and eutrophication models. MIKE Powered by DHI is a range of software products that enable us to accurately analyze, model and simulate any type of challenge in water environments [16]. Delft3D is a set of open source software tools that facilitates modeling subsystems like the hydrodynamic, morphodynamic, waves, water quality, or particle-based subsystems [17].
Finally, the aforementioned base models along with the GUIs are used in EWSs as forecasting tools [18, 19], helping GCS operators to make critical decisions. An example close to the authors of this paper is the Spanish Automatic Water Quality Information System, which is a network of nearly 200 automatic alert stations deployed in critical locations of the Spanish hydrographic system to (i) obtain frequent measurements of representative parameters such as water temperature, pH and dissolved oxygen; (ii) provide valuable information about the general quality of the water; and (iii) alert in real time about pollution episodes [20]. More examples of EWSs can be found in other places and settings. The Southeast Environmental Research Center Water Quality Monitoring Network, property of Florida International University, focuses on coastal monitoring of the southern tip of the Florida peninsula and includes some automatic measuring stations that are rotated between the different sampling sites [21]. United States Geological Survey's National Water Quality Monitoring Network combines data sources and techniques from 110 sites to monitor the U.S. inland waters [22]. Environment and Climate Change Canada, in collaboration with the provincial and territorial governments, runs the Freshwater Quality Monitoring and Surveillance program, which encompasses some manual and automatic monitoring networks distributed through the national territory [23].
The conception, design, and deployment of an EWS can present complex engineering and systems challenges. To properly monitor and foresee the formation of HABs, EWSs must cover large geographical areas, remain functional over long periods, and include a large variety of sensors, USVs, and data in general. Prediction of HABs also involves the use of a plethora of base models. A model-driven approach to designing such a complex and heterogeneous infrastructure would help researchers, domain experts, and water authorities to meet design requirements. It also
Figure 1: Conceptual model of the proposed framework.
would enable a model-driven control, reducing costs while increasing performance and scalability, and in general, all the benefits derived from applying a MBSE approach. There exist cases of success in other areas of research like flood detection [24], water treatment [25], or healthcare [26]. However, to our knowledge, this is the first research related to developing integrative model-driven solutions for HAB management. As mentioned above, our approach is integrative because we do not simulate only the water body but also combine the use of _base models_ with the help of models of the infrastructure like sensors, USVs, GCSs, the cloud layer, and even the operator's behavior through a simulation file, which is the main novelty with respect to other approaches in the literature.
## 0.4 System architecture and design
DEVS-BLOOM's model divides the HAB management system into the three classical IoT layers: edge, fog, and cloud. The _edge_ layer includes all the devices connected to the internet and can generate data. These devices can be sensors, wearables, and other smart devices deployed in the field. The edge layer collects and processes data locally and then sends it to the next layer for further processing. The _fog_ layer is an intermediate layer between the edge and the cloud. This layer includes devices with computing power and storage capabilities to perform basic data processing and analysis. The fog layer is responsible for processing data in real time and reducing the amount of data that needs to be sent to the cloud for further processing. The _cloud_ layer includes cloud servers and data centers that can store and process large amounts of data. The cloud layer performs complex data analytics and machine learning tasks that require significant computing power and storage capacity [27].
Figure 1 has already illustrated the general picture of the framework architecture. Our M&S framework is fed with data that may come from the actual water body or from a database, which can, in turn, store authentic or synthetic data. The virtual/real duality built into some components, modeled as DTs, allows DEVS-BLOOM to work in virtual, real, or hybrid modes. The framework works in virtual/simulation mode when data come entirely from the database. It works in real/controller mode when data come from the real water body, with actual sensors and USVs deployed and the system fed with real data. Currently, DEVS-BLOOM is mostly used for infrastructure analysis and design; thus, data usually come from the database and DEVS-BLOOM works in virtual/simulation mode. However, sometimes a prototype sensor or USV is tested for validation in the field, and then DEVS-BLOOM works in hybrid mode, where some virtual components are simulated while the actual ones are being controlled.
To clarify the specifics of the DEVS nomenclature, we first describe the basic principles of the formalism. Next, the DEVS-BLOOM system architecture is explained.
### The DEVS formalism
Parallel DEVS is a modular and hierarchical formalism for modeling discrete event systems based on set theory [10]. It includes two types of models, atomic and coupled, that have an interface consisting of input (\(X\)) and output (\(Y\)) ports to communicate with other models. Additionally, in atomic models, every model state (\(S\)) is associated with the time advance function \(ta\), which determines the duration in which the state remains unchanged.
Once the time assigned to the state has passed, an internal transition is triggered and the corresponding function (\(\delta_{\mathrm{int}}:S\to S\)) is invoked, producing a local state change (\(\delta_{\mathrm{int}}(s)=s^{\prime}\)). At that time, the results of the model execution are spread through the output ports of the model by activating an output function (\(\lambda\)).
Furthermore, external input events (received from other models) are collected in the input ports. An external transition function (\(\delta_{\mathrm{ext}}:S\times e\times X\to S\)) specifies how to react to those inputs, using the current state (\(s\)), the elapsed time since the last event (\(e\)) and the input value (\(x\)) (\(\delta_{\mathrm{ext}}((s,e),x)=s^{\prime}\)). Parallel DEVS introduces a confluent function (\(\delta_{\mathrm{con}}((s,ta(s)),x)=s^{\prime}\)), which decides the next state in cases of collision between external and internal transitions.
Coupled models are the aggregation/composition of two or more models (atomic and/or coupled), connected by explicit couplings. This makes DEVS closed under coupling and allows us to use networks of systems as components in larger coupled models, leading to hierarchical and modular designs.
Overall, DEVS provides a framework for information modeling that has several advantages in the analysis and design of complex systems: completeness, verifiability, extensibility, and maintainability.
Once a system is described according to DEVS theory, it can be easily implemented using one of the many DEVS M&S engines available [28].
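To make the formalism concrete, the following is a minimal, illustrative Python skeleton of a Parallel DEVS atomic model. It mirrors the functions defined above but is not the API of any particular engine; class and method names are ours:

```python
class AtomicModel:
    """Illustrative Parallel-DEVS atomic model skeleton (not the xDEVS API)."""

    def __init__(self):
        self.state = "passive"   # S: current state

    def ta(self):
        """Time advance: how long the current state persists."""
        return float("inf") if self.state == "passive" else 1.0

    def delta_int(self):
        """Internal transition, delta_int(s) = s'."""
        self.state = "passive"

    def delta_ext(self, elapsed, x):
        """External transition, delta_ext((s, e), x) = s'."""
        self.state = "active"

    def delta_con(self, x):
        """Confluent transition: resolves internal/external collisions."""
        self.delta_int()
        self.delta_ext(0.0, x)

    def lambda_out(self):
        """Output function, invoked just before the internal transition."""
        return {"out": self.state}
```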
DEVS-BLOOM is implemented and executed using xDEVS, a cross-platform DEVS simulator. This library includes a set of C, C++, C#, Go, Java, Python, and Rust repositories that provide equivalent DEVS interfaces. The project's final goal is to elaborate the fastest DEVS simulation interface with the capacity to simulate models in virtual and real-time and to run simulations in sequential (single-threaded), parallel (multi-threaded), and distributed (not shared memory) architectures. In particular, DEVS-BLOOM uses the xDEVS/Python module of the project. As in xDEVS, our framework can use virtual or real-time. It can run sequential or parallel simulations without modifying a single line of code in the underlying simulation model.
### Devs-Bloom
The DEVS-BLOOM root coupled model is depicted in Figure 2. The components included in this coupled model are: sensors and USVs at the edge layer, the fog coupled model, and the cloud atomic model.
There exists one singular atomic model, labeled _Simulation file_ in Figure 2. It is simply a source that reads from a text file all the events that will be injected into the simulation process through its output port. The output and explicit connections related to this atomic model are not represented in Figure 2 for simplicity, because it is connected to all the components of DEVS-BLOOM. Each entry in the simulation file represents an input event composed of: a time mark indicating the virtual instant at which the event will be triggered, the command type associated with the event, and the arguments each
command needs. As a result, this file replicates the set of external events that could happen in a real-world scenario. As the excerpt of Figure 2 illustrates, it always begins and ends with the triggering of the initialization and finalization of the simulation experiment (see START and STOP commands). Some services can be triggered in the middle, like outliers detection or HAB prediction. The simulation file is a pure virtual element, which does not have an exact match in the real world. In the following sections, we describe the rest of the components included in DEVS-BLOOM.
#### Edge layer
The atomic models in this layer represent edge devices such as environmental sensors, cameras placed at stationary positions, and USVs. Particularly, sensors are implemented as DTs and can process data from the actual sensor or the database mentioned above. A representation of an atomic sensor model is illustrated in Figure 2, labeled as _Digital Twin_. Data from its real counterpart is received through the \(d_{i}\) input port. In this case, data is just propagated without extra delays to the corresponding output port \(e_{i}\). On the other hand, data from the database is received by the \(d_{i}\) input port. Here the virtual sensor imitates the behavior of the actual sensor, introducing corresponding delays, noise, saturation errors, aging, etc. All these optional parameters are defined through a configuration file. Like most DEVS-BLOOM components, this is a passive atomic model, which is awakened when it receives a START event from the simulation file.
Each DT transmits, at their discretion, events that follow a predefined and generic structure that encapsulates the measurements, commands, or any other relevant information. That generic event structure, represented in Figure 3, carries a timestamp with the actual event time, a source and id that respectively identify the source and the cause of the event, and a payload which contains a set of key-value pairs with the actual measurements (e.g. 'Lat': 47.0, 'Lon': -122.0, 'Depth':-0.2, 'TEM': 19.0). Finally, any time an event is generated, it is transmitted through the corresponding output port \(e_{i}\), which in this case is connected to the fog coupled model of the water body, where the data will be curated and stored in the local fog database.
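A minimal sketch of this generic event structure in Python; the dataclass itself is illustrative, while the field names and the example payload follow the text:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Event:
    """Generic DT event: timestamp, source, id, and a key-value payload."""
    timestamp: str                       # actual event time
    source: str                          # identifies the emitting component
    id: str                              # identifies the cause of the event
    payload: Dict[str, float] = field(default_factory=dict)

# Example measurement event emitted by a sensor DT (source name is hypothetical):
e = Event(timestamp="2021-08-01T06:30:00", source="SensorTEM_1", id="measurement",
          payload={"Lat": 47.0, "Lon": -122.0, "Depth": -0.2, "TEM": 19.0})
```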
#### Fog layer
The fog layer is modeled through the _fog_ coupled model, which mainly represents the GCS associated with the water body. Here, operators and domain experts analyze data, make decisions, and take action. It is worthwhile to mention that DEVS-BLOOM can predict the bloom appearance, automatically guide USVs to the zone of interest, or take measurements. Still, all these actions must be validated or complemented by the operators. There can be as many fog-coupled models as water bodies being analyzed by the same cloud infrastructure. Figure 2 represents the first of them. As the Figure shows, the fog coupled model has several input ports that receive the events sent by the DTs located at the edge layer (sensors and USVs). It also has two output ports that send raw data collected by the sensors to the cloud and augmented or fixed sensor data using outliers detection or data analysis services, through \(d_{1}\) and \(\hat{d_{1}}\) ports, respectively. To reduce visual clutter, Figure 2 does not explicitly represent the coupling relations between fog and cloud. It is quite redundant and makes the Figure unnecessarily large. Basically, \(d_{1}\) and \(\hat{d_{1}}\) are connected through two additional external output couplings (from GCS\({}_{1}\) to Fog\({}_{1}\)) and two internal couplings (from Fog\({}_{1}\) to Cloud). The fog coupled model contains several atomic models, detailed below.
The _GCS atomic model_ represents the core of the computing infrastructure of the control station. It is usually a static workstation or laptop connected to the local network. This simplified DT receives simulation commands from the simulation file atomic model, which tell the computer when to start reading data, execute an outliers detection service, an inference over the HAB predictive models, USVs path planning, etc. When the simulation starts, sensor data are received through the \(e_{i}\) input ports and stored in the local database. These data are sent through the \(d_{1}\) fog output port, which is connected to the \(d_{1}\) cloud input port. On the other hand, when a service request is received from the simulation file, it is propagated through the output port \(req_{i}\), which is connected to the corresponding atomic model. This port is drawn in bold in Figure 2 because it represents a set of output ports. Fixed or predicted data are also stored in the local database and regularly sent through the \(\hat{d_{1}}\) output port, connected to the \(\hat{d_{1}}\) cloud input port.
The fog coupled model also has a set of atomic models in charge of executing services. They are currently part of the GCS\({}_{1}\) atomic model in the real system. Still, we have decided to implement them as external atomic models to separate the services, models, or functions that they incorporate. These atomic models receive commands from the _in_ input port and send the results through the _out_ output ports. These output ports are connected back to the GCS or the USV atomic models, controlling the navigation system of the USVs. We have currently deployed four services: one to detect and fix outliers, labeled as _Outliers services_ in Figure 2, another one to perform inference and compute the probability of HAB formation and location in the water body, labeled as _Inference service_, a third one to carry out data analysis over the database and generate reports, named _Data analysis service_, and the last one is the USVs path planner, as labeled in Figure 2, which taking the probabilities computed by the inference service calculates and sends the waypoints and trajectories that USVs must follow.
#### Cloud layer
Finally, the _cloud atomic model_ is located in the cloud layer. It receives all the data from different water bodies (raw and estimated, i.e., fixed or predicted) and stores them in the central cloud database. As in the fog coupled model, the cloud atomic model can run different services but is highly scaled to handle one or several water bodies. These services include executing big data analyses involving all the data stored in the central database or running training services to update current inference models located at the fog-coupled models. In any case, these actions are always triggered by the simulation file. We have not included dedicated atomic models to run services because they are always processes installed in docker containers, i.e., they have a distributed architecture. They do not need to be encapsulated as DEVS models, i.e., the cloud layer is viewed as a centralized entity.
We now present use cases built on the DEVS-BLOOM root coupled model in Figure 2, used to monitor a water body corresponding to an area of Lake Washington.
We provide more details of each atomic model instance included in Figure 4 throughout each use case.
### Monitoring use case
The monitoring scenario is relevant for operators and domain experts in charge of the GCS and local operative decisions, who monitor the HAB state and evolution through the use of a USV; i.e., it shows how DEVS-BLOOM is used to predict the next location of the HAB and to automatically steer the USV to follow that position and confirm the prediction.
In this case, the whole water body dataset is synthetic and generated with the EEMS tool, which incorporates an accurate model of Lake Washington and allows the artificial generation of HABs. As a result, DEVS-BLOOM receives EEMS input data (see Figure 4) that includes water speed, water temperature, oxygen and nitrate densities, and, for validation of our framework, algae concentration.
Additionally, as Figure 4 shows, we have included a virtual irradiance sensor, which generates synthetic irradiance data taken from PVGIS (see Footnote 7). Neither EEMS nor PVGIS provides stochastic data, so there is no need for Monte Carlo simulations.
Footnote 7: [https://re.jrc.ec.europa.eu/prg_tools](https://re.jrc.ec.europa.eu/prg_tools)
Our scenario has at the edge layer a USV that must monitor the water and transmit data to the fog and cloud layers. As Figure 4 depicts, the USV is instrumented with several sensors and units. Some of them take data from the water body to continuously monitor the state of the bloom and feed the inference model, and others from internal component models:
* Temperature sensor: is in charge of measuring the water temperature. This signal influences the calibration of other sensors and the growth dynamics of the bloom.
* Power unit: includes solar panels, chargers, and batteries in charge of recharging the boat's batteries when it receives solar radiation. For this scenario, we have included the following base model (a code sketch of this and the positioning model appears after this list): \[prop=K_{p}\cdot\sqrt{e_{lat}^{2}+e_{lon}^{2}}\] \[power=K_{e}+K_{s}\cdot sun-prop\] where \(K_{p}=30\) is the propulsion constant, \(K_{e}=-0.003\) represents the electronic power consumption, \(K_{s}=0.04\) is the sun power factor, \(prop\) is the resultant propulsion, \(e_{lat}\) and \(e_{lon}\) are the latitude and longitude errors of the USV with respect to the HAB position (computed by the USV planner atomic model), \(power\) is the battery energy level, and \(sun\) is the normalized irradiance value.
* Flow meter: measures the speed and direction of the water with respect to the ship. We may infer the water's speed and direction by discounting the ship's speed.
* Positioning unit: allows us to measure the position and speed of the ship, following these two equations: \[lat_{usv}=e_{lat}+K_{2d}\cdot wfv\] \[lon_{usv}=e_{lon}+K_{2d}\cdot wfu\] where \(K_{2d}=0.01\) is the 2D USV displacement constant, and \((wfv,wfu)\) is the water speed (north and east components).
* Dissolved oxygen probe: is in charge of measuring the dissolved oxygen density in the water. If there are high levels of oxygen, there may be a bloom of algae that produces oxygen by photosynthesis.
* Nitrogen probe: measures the density of dissolved nitrates in the water. Nitrate is the main food for algae. Therefore, the inference service uses this signal to predict the bloom's growth.
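The following Python sketch wraps the power and positioning base models given above into a single update step; the function itself is illustrative, with the constants taken from the text:

```python
import math

K_P, K_E, K_S = 30.0, -0.003, 0.04   # propulsion, electronics, sun power constants
K_2D = 0.01                           # 2D USV displacement constant

def usv_step(e_lat, e_lon, sun, wfv, wfu):
    """One update of the USV power and positioning base models above."""
    prop = K_P * math.sqrt(e_lat**2 + e_lon**2)   # resultant propulsion
    power = K_E + K_S * sun - prop                 # battery energy balance
    lat_usv = e_lat + K_2D * wfv                   # north displacement term
    lon_usv = e_lon + K_2D * wfu                   # east displacement term
    return power, lat_usv, lon_usv
```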
During the simulation, irradiance and USV sensors capture measurements and send them to the fog layer. We utilize the inference service in this layer, shown in Figure 4. It has a predictive model based on differential equations that, using water speed, temperature, coordinates, oxygen and nitrates densities, and solar irradiance, anticipates the emergence and displacement of HABs as follows:
\[\frac{dr(t)}{dt}=K_{1}\cdot photo(t)+K_{2}\cdot breath(t)-K_{3}\cdot(r(t)-r(0))\] \[\frac{dlat_{bloom}(t)}{dt}=K_{v}\cdot wfv(t)\] \[\frac{dlon_{bloom}(t)}{dt}=K_{v}\cdot wfu(t)\] \[photo(t)=sun(t)\cdot nox(t)\] \[breath(t)=dox(t)\cdot nox(t)\]
In the previous equations, \(r\) represents the bloom density, while \(photo\) and \(breath\) represent photosynthesis and respiration, respectively. Besides, \((lat_{bloom},lon_{bloom})\) are the coordinates (latitude and longitude) of the position of the bloom at a given height, whereas \((wfv,wfu)\) is the water velocity at the same coordinates. \(nox\) and \(dox\) are the nitrogen and oxygen concentrations, respectively (mg/l). Regarding the constants, \(K_{1}=5.0\) and \(K_{2}=0.05\) are the HAB growth constants, whereas \(K_{3}=0.17\) is the decay constant. \(K_{v}=0.0167\) represents the percentage of the water velocity transferred to the HAB. The values of the constants are initially obtained by training the system with the least squares method.
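As an illustration, a forward-Euler integration of the inference model above can be sketched as follows; the step function is ours, while the constants are those quoted in the text:

```python
K1, K2, K3, KV = 5.0, 0.05, 0.17, 0.0167   # growth, growth, decay, drift constants

def bloom_step(r, r0, lat, lon, sun, nox, dox, wfv, wfu, dt):
    """One forward-Euler step of the HAB inference model (illustrative)."""
    photo = sun * nox                  # photosynthesis term
    breath = dox * nox                 # respiration term
    dr = K1 * photo + K2 * breath - K3 * (r - r0)
    r_new = r + dt * dr                # bloom density update
    lat_new = lat + dt * KV * wfv      # bloom drift with the water velocity
    lon_new = lon + dt * KV * wfu
    return r_new, lat_new, lon_new
```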
Then the USVs planner in Figure 4 generates track points for the USV. In this preliminary version, the planner computes the error between USV and HAB positions as follows:
\[e_{lat}=lat_{bloom}-lat_{usv}\] \[e_{lon}=lon_{bloom}-lon_{usv}\]
To close the loop, the USV navigates to the track point and retakes measurements. During the simulation, all the data is saved into the fog and cloud databases, which can be plotted and analyzed in real time. The Data Analysis Service depicted in Figure 4 can be activated to automate this process. This atomic model executes a set of functions
to create all the figures and videos of interest for the operator or the domain expert. Details about implementing these automatically generated reports can be found in [30].
In the following, we show the simulation results. Figure 5 shows the lake area where HABs are forming. The lower part of the image shows how a channel flows into the lake in a shallow area. Such areas are known as incubators because they provide ideal conditions for forming blooms and accumulations of nitrates in areas with solar radiation. The inference model is initialized near the incubator at the beginning of the day. It is very likely that the bloom is born in this area, then grows with solar radiation, moves with the water currents, and disperses throughout the rest of the lake.
Figure 6 illustrates the simulation state while tracking a HAB. As mentioned above and depicted at the bottom of Figure 4, at this stage of the project, all the measured data from the water body are from EEMS, except for the irradiance values that are taken from PVGIS since EEMS does not include these. The rest of the data (USVs battery status, bloom displacement prediction, etc.) come from our models. Next, we describe each plot in Figure 6:
* The upper left graph shows the signals measured by the USV and the irradiance sensor as a function of the time of day: sun radiation (blue), water temperature (red), and the ship's electric power (black). At the time of the simulation shown in Figure 6, the solar panels have fully charged the ship's batteries.
* The lower left graph shows the map of water direction and velocity in the surface layer. The ship measures this signal at its position and reports it to the fog layer to estimate the bloom displacement. The simulator also uses the information from this map to perturb the ship dynamics.
* The top center graph shows the map of the dissolved oxygen density in the surface layer. The USV takes this measurement, and the inference model uses it to decide whether there is a bloom or not.
* The bottom middle graph shows the map of nitrate density on the surface. The inference model takes this measurement obtained by the USV to estimate the bloom growth.
* The right graph shows the HAB density map in the surface layer, the inferred bloom (red circle), and the USV position. The HAB density map is data directly taken from EEMS to validate that the inference model is correctly predicting the HAB dynamic.
The full simulation video can be found in [31].
As mentioned above, all the data used in this simulation are synthetic. Consequently, all the sensors work on virtual mode, as DTs. When a sensor must take a
Figure 4: DEVS-BLOOM root coupled model of the use case.
measurement, it searches the database (the EEMS file or the irradiance database), modifies the signal according to its technical characteristics, and generates a message with the signal value. The fog layer receives these signals to perform different calculations like the model inference and periodically uploads them to the cloud layer. Figure 7 shows the signal values recorded by all the sensors of this use case after several (virtual) days of simulation.
Figure 8 shows the evolution of the HAB inference model. The first plot shows a boolean value indicating whether the bloom has been detected or not. The second plot shows the estimated bloom density. The third and fourth plots show the displacement estimation: longitude and latitude. Figure 8 shows how blooms are detected and monitored almost every day. Some of these blooms have significant densities and move around significantly, requiring dynamic monitoring.
Finally, Figure 9 depicts the status of the USV model. The first graph shows the status of the power unit. The second plot shows the velocity of the USV. The third and fourth graphs show the position, longitude, and latitude. On August 30, the Figure shows that the USV runs out of battery since it has been tracking blooms to distant points for four consecutive days.
Figure 5: Lake Washington area.
Figure 6: Frame of bloom tracking simulation: (upper-left) USV measured signals, water temperature, and solar irradiance, (lower-left) water speed. (top-center) oxygen density, (bottom-middle) nitrate density, (right) HAB EEMS given density for validation, HAB prediction as a red circle and ship position as a star.
Figure 8: Bloom Inference model.
Figure 7: Sensors’ signals.
### Prediction use case
The second use case is relevant for water authorities. It consists of predicting HABs in the coming days based on weather forecasts. At the end of the day, the GCS in Figure 4 uploads all this information to the cloud layer. All the data history is available in this layer, allowing us to use the predictive model to analyze medium or long-term events.
To predict future blooms, a _Prediction Service_ atomic model has been implemented in the cloud layer. This service is responsible for predicting the occurrence of upcoming HABs and their evolution from weather forecasts. These predictions are highly dependent on local conditions, so they must be designed ad hoc. In our case, in this area of the lake, there is a source of nitrates or dissolved sediments which is activated by rainfall. At ideal water temperatures, these dissolved sediments and the sunlight are the main precursors of HABs. From these precursors, bloom growth can be predicted. On the other hand, surface water currents can be inferred from wind forecasts, which can be used to predict the HAB displacement.
Firstly, the state of the water and dissolved sediments is inferred from wind, sun, and rainfall forecasts. Figure 10 shows the results of this inference, comparing them with the results generated with EEMS. The first plot shows the rainfall forecast and the inference of dissolved sediments, which follows a simple exponential model (see the sketch below). The second plot shows the bloom precursor signal, Sun-Nitrates, with the values generated by EEMS and those inferred by the service. The third plot shows the wind forecast, and the fourth plot shows the inferred values for the water speed.
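A minimal sketch of such an exponential sediment model, assuming rainfall acts as an additive source; the rate constants are placeholders, since the calibrated values are not given in the text:

```python
import math

def sediment_step(s, rain, dt, k_decay=0.1, k_rain=1.0):
    """One step of a simple exponential dissolved-sediment model: rainfall
    injects sediment, which then decays exponentially over time."""
    return s * math.exp(-k_decay * dt) + k_rain * rain
```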
Next, the _Prediction Service_ atomic model computes the HAB state from the previous results. Figure 11 shows the final output, comparing it to the results simulated with EEMS. The plot on the left shows the HAB density generated by EEMS versus the density predicted by the atomic model; it correctly predicts 60% of the bloom cases. The graph on the right shows the trajectory of these HABs, accurately predicting where the bloom will move in most cases.
### Integration of real sensors and USV design
DEVS-BLOOM uses the xDEVS/Python library. xDEVS/Python can simulate models in real time [28]. A scaling factor can be provided, transforming hours into minutes, minutes into seconds, etc. This is important when incorporating hardware in the loop into the virtual framework [32] since, for instance, the previous use case handles periods of 30 minutes, but we may want to perform tests with sensors sending data every minute. Additionally, xDEVS can interrupt the real-time simulation upon the arrival of data sent by an external hardware device. To do this, the root coupled model must have an input port to inject data, and an atomic model must handle the arrival of this data through its external transition function.
To demonstrate the ability of DEVS-BLOOM to integrate actual sensors, we have used the xDEVS characteristics mentioned above with the irradiance sensor. Figure 12a depicts schematically how the real sensor is connected to the original atomic model shown in Figure 4. To this end, we use the input port \(d_{i}\) explained in Figure
Figure 9: USV model.
2, adding an input \(d_{i}\) port to the root coupled model. xDEVS/Python automatically manages the communication between the sensor and DEVS-BLOOM through a software handler. The procedure is relatively straightforward since the external transition function of the sensor DT is automatically triggered when the actual sensor injects data.
On the other hand, Figure 12b shows a picture of a real-time execution, where data received by the actual sensor is correctly logged by DEVS-BLOOM. This procedure also allows us to validate the virtual sensor model, tuning its parameters (delay, precision, noise, etc.) if necessary. The predictive algorithms automatically manage failures in sensors. There is an outliers detection phase before the prediction, where outliers and missing data are replaced by regression. An alarm is triggered in case of failure, and the domain expert can take action if necessary. The parallel DEVS formalism is of great help when dealing with these issues.
New sensors are acquired and tested through our framework as the project evolves. Currently, the most challenging part is the USV design. Figure 12c shows our first USV prototype with all the sensors embedded, and
Figure 11: Bloom prediction.
Figure 10: Water and dissolved sediments state inferred from wind, sun and rainfall forecasts.
Figure 12d depicts one of the controlled tests to validate the navigation system. As the USV evolves, the DEVS-BLOOM virtual model does the same to match the behavior of the real counterpart [33].
As can be seen, DEVS-BLOOM can help us design an integral EWS, considering different elements and exploring all the alternatives. Our M&S framework facilitates the elaboration of sustainable and efficient HAB management systems while saving costs through well-dimensioned instruments, USVs, and GCSs.
## Conclusion and future work
HABs induce severe threats to water quality. To properly detect, assess, and mitigate these threats to water infrastructures, it is necessary to envision well-structured and robust methods to perform continuous monitoring and to deploy efficient infrastructure and proactive strategies to reduce their adverse effects. CPS integrative M&S is crucial to reaching these objectives since it provides sustainable mechanisms to analyze algorithms and the infrastructure we may need to deploy such systems. However, current approaches do not combine the analysis of _base_ models and algorithms with the infrastructure.
In this paper, we have introduced DEVS-BLOOM, a novel M&S framework to enable real-time monitoring and hazard prediction of HABs while analyzing the effectiveness of infrastructure deployment. Our framework can automatically manage the design of advanced EWSs and propose decisions over the evolution of HABs. Our approach is based on solid principles of MBSE and the DEVS M&S formalism. Furthermore, the entire infrastructure can be modeled upon
Figure 12: Integration of real sensors and USV design.
the IoT and DT paradigms. DEVS-BLOOM allows an incremental design, assuring reliability and scalability to multiple water bodies and minimizing costs in the conception of the final installations. Additionally, all the predictive models designed in the M&S phase can be later used in the real infrastructure. Our framework also allows different resolution views, for the interpretation of a domain expert at the fog layer and the interpretation of water authorities at the cloud layer, following the IoT nomenclature.
Future work includes, on the one hand, the inclusion of new models (e.g., related to the USVs dynamics) into DEVS-BLOOM, the improvement of its visualization tools, or the validation of the current HAB models against a real scenario. On the other hand, we plan to incrementally replace all the elements in the simulated model with those in a real-world use case, complementing the virtual representation of the system introduced in this paper with its final deployment.
Finally, we want to highlight that having a scientific framework to predict HABs formation and to take management actions also provides an organizing principle for fundamental research. This framework will serve and benefit the engagement of theory with M&S foundations. Complementary HAB research on mathematical models or systems engineering can be easily integrated into our DEVS-BLOOM framework. It will improve the scientific exploitation of discoveries and support the development of new bases for forecasting future effects on water quality and other sustainable water ecological challenges such as wastewater recycling or smart agriculture.
## Acknowledgements
The authors would like to thank Mr. Giordy Alexander Andrade Aimara, who implemented the integration of actual sensors into DEVS-BLOOM as part of his master's thesis. This work has been supported by the Research Projects IA-GES-BLOOM-CM (Y2020/TCS-6420) of the Synergic program of the Comunidad Autonoma de Madrid, SMART-BLOOMS (TED2021-130123B-I00) funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR, and INSERTION (PID2021-127648OB-C33) of the Knowledge Generation Projects program of the Spanish Ministry of Science and Innovation.
Harmful Algal and Cyanobacterial Blooms (HABs) occur in inland and maritime waters and threaten natural environments by producing toxins that affect human and animal health. In the past, HABs have been assessed mainly by the manual collection of water samples and their subsequent laboratory analysis, and occasionally by automatic instruments that acquire information from fixed locations. These procedures do not provide data with the spatial and temporal resolution desirable for anticipating the formation of HABs. New tools and technologies are therefore needed to efficiently detect, characterize, and respond to HABs. This is especially important today, when the world's water supply is under tremendous pressure from climate change, overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel framework for real-time monitoring and management of HABs. Its purpose is to support high-performance hazard detection with Model Based Systems Engineering (MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic environments.
2309.13402 | ML Algorithm Synthesizing Domain Knowledge for Fungal Spores Concentration Prediction | The pulp and paper manufacturing industry requires precise quality control to ensure pure, contaminant-free end products suitable for various applications. Fungal spore concentration is a crucial metric that affects paper usability, and current testing methods are labor-intensive with delayed results, hindering real-time control strategies. To address this, a machine learning algorithm utilizing time-series data and domain knowledge was proposed. The optimal model employed Ridge Regression achieving an MSE of 2.90 on training and validation data. This approach could lead to significant improvements in efficiency and sustainability by providing real-time predictions for fungal spore concentrations. This paper showcases a promising method for real-time fungal spore concentration prediction, enabling stringent quality control measures in the pulp-and-paper industry. | Md Asif Bin Syed, Azmine Toushik Wasi, Imtiaz Ahmed | 2023-09-23T15:27:14 | http://arxiv.org/abs/2309.13402v1 |

# ML Algorithm Synthesizing Domain Knowledge for Fungal Spores Concentration Prediction
###### Abstract
The pulp and paper manufacturing industry requires precise quality control to ensure pure, contaminant-free end products suitable for various applications. Fungal spore concentration is a crucial metric that affects paper usability, and current testing methods are labor-intensive with delayed results, hindering real-time control strategies. To address this, a machine learning algorithm utilizing time-series data and domain knowledge was proposed. The optimal model employed Ridge Regression achieving an MSE of 2.90 on training and validation data. This approach could lead to significant improvements in efficiency and sustainability by providing real-time predictions for fungal spore concentrations. This paper showcases a promising method for real-time fungal spore concentration prediction, enabling stringent quality control measures in the pulp-and-paper industry.
Fungal Spores Concentration Prediction, Machine Learning
## 1 Introduction
The pulp-and-paper manufacturing industry plays a pivotal role in providing essential materials for various applications, including packaging, printing, and writing. Ensuring the production of high-quality, contaminant-free paper products is paramount in this sector. As such, stringent quality control measures are a fundamental aspect of the industry's operations [1].
Quality control in the pulp-and-paper manufacturing sector involves a comprehensive assessment of various parameters, with a vital focus on the fungal spore concentration. Fungal spores are microscopic particles that can have a detrimental impact on paper quality. When present in excessive amounts, they can lead to a range of issues, including reduced paper strength, increased susceptibility to degradation, and compromised printability. Consequently, addressing fungal spore concentration is essential to meet industry standards and customer expectations.
Current techniques for assessing fungal spore concentration primarily rely on labor-intensive laboratory tests. These methods involve collecting paper samples from the manufacturing process and subjecting them to meticulous analysis, often taking 1-2 days to obtain results. This delay can lead to production inefficiencies and increased costs, highlighting the need for more efficient and real-time monitoring solutions.
Machine learning (ML) has emerged as a promising technology to address these challenges in the pulp-and-paper industry. ML algorithms can be trained to analyze vast amounts of data, including environmental conditions, production parameters, and historical fungal spore data. By processing this information in real-time, ML models can predict fungal spore concentrations within the manufacturing process.
Delay in quality measurement hinders real-time control strategies, emphasizing the need for precise real-time fungal spore concentration predictions to maintain exceptional quality standards. This study showcases the outcome of a data challenge focused on devising a method to predict fungal spore concentration in the pulp-and-paper production process utilizing time-series data. We propose a machine learning algorithm that synthesizes domain knowledge, based on two crucial assumptions.
### _Our Contributions_
Our contributions are four-fold:
* We design and develop a novel machine learning-based method utilizing domain knowledge to effectively predict fungal spore concentration while respecting the problem constraints.
* Our model requires much less memory than deep learning models because it does not have as many parameters to train. This low memory requirement makes it easier to integrate into embedded systems with limited memory resources.
* Our model has a closed-form solution which results in a computationally efficient training process. This means that the training time for Ridge Regression is faster than deep learning techniques like recurrent neural networks (RNNs) or convolutional neural networks (CNNs).
* Our model is lightweight and easy to deploy on embedded systems because it does not require large processing power. This makes it ideal for implementation in resource-constrained devices such as embedded systems or Internet of Things (IoT) devices and sensors in the industry.
## 2 Data Description and Exploratory Data Analysis
For this study, we were provided with one training set and one testing set [3]. The training set comprises 1526 observations, containing 113 numerical variables, a single categorical variable, and the target variable, beginning from 2021-01-30 06:23:06. Additionally, the testing data set encompasses 752 observations with an identical variable count, starting from 2021-07-24 06:29:00. To gain an in-depth understanding of the data, we conducted an exploratory data analysis (EDA). The following subsection details the handling of missing values and observed trends for several key variables.
### _Problem Formulation and Constraints_
The task at hand involves predicting the target spore concentration for timestamps provided in a dataset. Notably, the timestamps in the test set may not necessarily occur after those in the training data, introducing an element of temporal unpredictability. A critical constraint in this problem lies in the temporal alignment: when predicting the target value for a given test set timestamp (t1), any training data with timestamps beyond t1 is expressly prohibited from being used [3]. This constraint reflects the real-world scenario where predictions must rely solely on historical information up to the prediction point, ensuring that the model's forecasting capabilities align with chronological precedence. This challenge falls within the realm of time-series forecasting, where the model's effectiveness in capturing temporal patterns and dependencies is pivotal to its predictive accuracy.
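To make this constraint concrete, the following minimal sketch (assuming pandas DataFrames with a `timestamp` column; all column names and values are illustrative, not taken from the challenge data) filters the training rows eligible for a given prediction timestamp:

```python
import pandas as pd

def eligible_training_rows(train_df: pd.DataFrame, t1: pd.Timestamp) -> pd.DataFrame:
    """Return only the training rows observed strictly before the
    prediction timestamp t1, enforcing the chronological constraint."""
    return train_df[train_df["timestamp"] < t1]

# Example usage with hypothetical data:
train_df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-30 06:23:06", "2021-02-01 08:00:00"]),
    "x1": [0.4, 0.7],
    "y": [10.0, 15.0],
})
t1 = pd.Timestamp("2021-01-31 00:00:00")
print(eligible_training_rows(train_df, t1))  # only the first row qualifies
```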
### _Missing Values_
In examining our training data, we assessed the presence of missing values, identifying several variables with missing data, as outlined in Fig. 1. We imputed these using most-frequent-value imputation.
### _Analysis of Time dependency of variables_
Besides missing values, we investigated the potential dependency on time. By observing the provided figure, we attempted to discern any time dependency among the features. As depicted in Fig. 2, we found no substantial time dependency for the majority of variables, with a few exceptions (e.g., variable 41).
## 3 Methodology
Upon reviewing the exploratory data analysis and considering the project constraint that prohibits using training data beyond t1 when predicting t1, we developed the algorithm depicted in the flow chart shown in Fig. 3. The following subsections discuss the components of the algorithm.
### _Train-test Splitting_
To evaluate model performance, we devised a distinctive data partitioning strategy. Initially, the first 40% of the dataset was allocated for training, ensuring sufficient data availability. Subsequently, the remaining 60% was divided into testing and training subsets, which were merged with the previous partition. This approach emulates the original problem, providing 750 initial data points for model training.
### _Model Architecture_
To mitigate noise, feature selection techniques were employed, including Principal Component Analysis, Random Forest Regressor, and SelectKBest, enabling identification of the most relevant features and enhancing model validity. One example is shown in Figure 5.
However, Figures 6 and 7 suggest that the model scores better with all features rather than only the top features.
Fig. 1: Missing Values in the dataset.
Fig. 2: Analysis of Time dependency of variables
### _Model Selection and Training_
We conducted experiments with prevalent machine learning algorithms, including Linear, Ridge, and Lasso regression, Random Forest, Extreme Gradient Boosting (XGBoost), and Adaptive Boosting. A constraint required training only on data up to t for predicting t+1. To accommodate this, model parameters were stored in a binary serialization file, enabling efficient parameter updates without retraining on the entire dataset.
We found that Ridge Regression [2] performed exceptionally well on our sensor dataset with a large number of input columns. This is likely because Ridge Regression's regularization reduces overfitting and handles multicollinearity, leading to stable coefficient estimates and better predictive performance on new data. Ridge Regression adds a penalty term to the cost function that shrinks the magnitude of the coefficients, limiting the model's complexity and making it less prone to overfitting. It performs well when there is multicollinearity between the independent variables, which occurs when two or more of them are highly correlated and can lead to unstable and unreliable coefficient estimates in ordinary linear regression; the penalty term stabilizes these estimates by reducing their variance. It can also provide a solution even when the data are inconsistent or when the number of samples is smaller than the number of features, because the bias it introduces helps overcome such ill-conditioning.
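One way the incremental updating described above can be realized, sketched below under the assumption of centered NumPy feature arrays with no intercept (the class name and shapes are our own illustration, not the challenge code), is to accumulate the sufficient statistics \(X^{T}X\) and \(X^{T}y\) of the closed-form ridge solution; these statistics can be serialized and folded together with new rows without revisiting the full history:

```python
import numpy as np

class IncrementalRidge:
    """Ridge regression via accumulated sufficient statistics.

    Stores A = X^T X and b = X^T y, so new observations can be folded in
    without retraining on the full history (data assumed centered, no
    intercept, for brevity)."""

    def __init__(self, n_features: int, alpha: float = 2.0):
        self.alpha = alpha
        self.A = np.zeros((n_features, n_features))
        self.b = np.zeros(n_features)

    def partial_fit(self, X: np.ndarray, y: np.ndarray) -> None:
        self.A += X.T @ X
        self.b += X.T @ y

    def coef(self) -> np.ndarray:
        # Closed-form ridge solution: (A + alpha I)^{-1} b
        return np.linalg.solve(self.A + self.alpha * np.eye(len(self.b)), self.b)

# Hypothetical usage: fold in a batch observed up to time t, then predict.
model = IncrementalRidge(n_features=113, alpha=2.0)
X_new, y_new = np.random.rand(10, 113), np.random.rand(10)
model.partial_fit(X_new, y_new)
y_pred = np.random.rand(1, 113) @ model.coef()
```

The alpha value of 2 mirrors the ablation study reported below.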
### _Prediction Synthesizing the Domain Knowledge_
From Figure 8, we can notice that the target variable appears to be an integer divisible by 5. Our initial predictions cluster close to multiples of 5; for example, values such as 10.5, 11, and 11.5 occur more often than values in the more uncertain 12-12.5 range. From this, we came to two assumptions, which we refer to as domain knowledge. The two assumptions are:
Fig. 4: Training Data Selection based on Problem Constrains
Fig. 5: Top features by a Random Forest Model
Fig. 3: Model Architecture
1. All the "y_var" are the multipliers of the 5. Our assumption is the measurement scale has the precision of 5.
2. As the concentration cannot be negative so we have chosen any negative prediction as zero.
The equation is:
\[f(x)=\begin{cases}0,&\text{if }x\leq 2.5,\ \text{or }x\text{ is an integer multiple of }5,\ \text{or }x<0,\\ 5\left\lfloor x/5\right\rfloor+5,&\text{if }(x-0.01)\text{ is an integer multiple of }5,\\ 5\left\lfloor x/5\right\rfloor,&\text{otherwise}.\end{cases}\tag{1}\]
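A minimal Python transcription of this post-processing might read as follows. Note that, taken literally, the zero branch of Eq. (1) for exact multiples of 5 would conflict with assumption 1, so the sketch below treats that branch as a transcription artifact and lets exact multiples fall through to the last case; the floating-point tolerance is our addition:

```python
import math

def snap_prediction(x: float, tol: float = 1e-9) -> float:
    """Post-process a raw regression output following Eq. (1): clamp
    small or negative values to zero and snap the rest to multiples of 5."""
    if x < 0 or x <= 2.5:
        return 0.0
    if abs((x - 0.01) % 5) < tol:        # just above a multiple of 5: round up
        return 5 * math.floor(x / 5) + 5
    return 5 * math.floor(x / 5)         # otherwise round down to a multiple of 5

print([snap_prediction(v) for v in [-3.0, 1.2, 11.0, 12.7]])  # [0.0, 0.0, 10, 10]
```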
## 4 Results
Upon experimenting with training data, partitioned into training and validation sets that emulate the original testing dataset, we observed the following results. Using all available variables as input, alongside features selected via algorithms such as Random Forest, PCA, and SelectKBest, the Random Forest selection demonstrated superior performance among the feature-selection methods in terms of MSE and MAE. Notably, the cleaned and preprocessed original dataset outperformed the feature-selected versions. A comparative analysis between the full-data and Random Forest models, with respect to MSE and MAE, is provided.
### _Ablation Study_
Ridge regression was selected due to its superior performance compared to other algorithms, including Linear Regression, Adaptive Boosting, XGBoost, and Extra Trees Regressor. Optimal hyperparameter tuning was achieved with an alpha value of 2, as it demonstrated enhanced performance relative to the alternatives (as shown in Table 2).
To mitigate overfitting, cross-validation was employed. The outcomes obtained with k=5 are presented, revealing satisfactory results.
## 5 Conclusion
In this study, we introduce an algorithm to predict fungal spore concentration, encompassing two components: a traditional machine learning approach for real-time training, and domain knowledge synthesis with predictions. Our chosen method effectively balances the bias-variance trade-off, as demonstrated by strong training results reflecting the problem's intrinsic model. Future research incorporating controlled deep learning approaches may yield even more accurate solutions.
The pulp and paper manufacturing industry requires precise quality control to produce pure, contaminant-free products suitable for a variety of applications. Fungal spore concentration is an important metric affecting paper usability, and current testing methods are labor-intensive with delayed results, hindering real-time control strategies. To address this, a machine learning algorithm utilizing time-series data and domain knowledge was proposed. The optimal model, using Ridge Regression, achieved an MSE of 2.90 on the training and validation data. This approach is expected to lead to significant improvements in efficiency and sustainability. The ability to predict fungal spore concentration through real-time predictions supports stringent quality control in paper product manufacturing. |
2309.03611 | Vectorial phase retrieval in super-resolution polarization microscopy | In single molecule orientation localization microscopy, valuable information
about the orientation and longitudinal position of each molecule is often
encoded in the shape of the point spread function (PSF). This shape, though,
can be affected significantly by aberrations and other imperfections in the
imaging system, leading to erroneous estimation of the measured parameters. A
basic solution is to model the aberrations as a scalar mask in the pupil plane
that is characterized through phase retrieval algorithms. However, this
approach is not suitable for cases involving polarization-dependent
aberrations, introduced either through unintentional anisotropy in the elements
or by using birefringent masks for PSF shaping. Here, this problem is addressed
by introducing a fully vectorial model in which the polarization aberrations
are represented via a spatially-dependent Jones matrix, commonly used to
describe polarization-dependent elements. It is then shown that these
aberrations can be characterized from a set of PSF measurements at varying
focal planes and for various polarization projections. This PZ-stack of PSFs,
which contains both phase and polarization projection diversity, is used in a
phase retrieval algorithm based on nonlinear optimization to determine the
aberrations. This methodology is demonstrated with numerical simulations and
experimental measurements. The pyPSFstack software developed for the modeling
and characterization is made freely available. | Rodrigo Gutiérrez-Cuevas, Luis A. Alemán-Castañeda, Isael Herrera, Sophie Brasselet, Miguel A. Alonso | 2023-09-07T10:10:07 | http://arxiv.org/abs/2309.03611v2 | # Characterization of polarization dependence in super-resolution fluorescent microscopy via phase retrieval
###### Abstract
In single molecule orientation localization microscopy, aberrations cause changes in the shape of the point spread function (PSF) generated by point-like dipolar sources, which can lead to an erroneous estimation of the source's position and orientation. A common strategy for addressing this issue is to model the aberrations as a scalar pupil phase mask and characterize them using a stack of PSFs for varying defocus using phase retrieval algorithms. However, this strategy fails when there are polarization-dependent aberrations, introduced either through unintentional anisotropy in the system or by using birefringent masks for PSF shaping. Here, a model and methodology for the proper characterization of polarization-dependent aberrations are introduced. The key components are the modeling of polarization aberrations via a spatially-dependent Jones matrix, commonly used to describe birefringent elements, and the introduction of polarization diversity for its correct estimation via a phase retrieval algorithm based on a nonlinear optimization. The software pyPSFstack used for the modeling and characterization is also made freely available.
## 1 Introduction
Fluorescence microscopy is a widely-used imaging modality in biological research [1] given its strong signal, selective labeling within complex systems [2] and compatibility with super-resolution methods [3]. Moreover, this technique also allows access to the sample's structural properties [4], making it very useful for studying biomechanics at the molecular level. For example, in single-molecule orientation localization microscopy (SMOLM), the 3D spatial localization can reach a precision of a few nanometers, while allowing simultaneously the characterization of the molecule's 3D orientational behavior, for example its mean orientation and degree of wobbling. Common SMOLM techniques include polarization channel splitting [5, 6, 7, 8] and point spread function (PSF) engineering [9, 10, 11, 12], which can be used together or separately. The shape of the PSF can change considerably with the emitter's orientation and longitudinal position, so it is crucial to take this into account to enable a full estimation of the parameters [13, 14, 11] and to avoid localization biases [15]. In that respect, PSF engineering techniques aim to enhance these shape changes. Nevertheless, any optical aberration, polarization distortion or misalignment in the imaging system can affect the final shape of the PSFs and thus lead to an inaccurate estimation of the parameters. In particular, polarization aberrations are delicate to correct for, as they require additional adaptive strategies that account for the vectorial nature of light propagation [16, 17].
A common solution to this problem is to perform a set of calibration measurements [18, 19], and use them in a phase retrieval optimization algorithm to determine the aberrations present in the system. For this approach to work, it is important to have an accurate model for light's propagation from a known source to the camera and to incorporate phase diversity, such as measurements at varying focal planes, both to avoid falling into local minima and to accelerate
convergence [20]. For single-molecule localization microscopes, the standard approach is to measure the PSFs generated by fluorescent nanobeads for varying focal planes and use this Z-stack of images in a phase retrieval algorithm [21]. Initial approaches relied on scalar models assuming a point source emitting a spherical wavefront along with a scalar pupil representing the aberrations. In this simplified case, Gerchberg-Saxton iterative algorithms [22, 23] can be used, thus reducing the complexity of the implementation. However, these algorithms are less flexible since not every parameter can be included in the retrieval procedure [18, 19, 24]. A more flexible approach is offered by casting the phase retrieval problem as a nonlinear optimization routine where any parameter in the model can be included, although this requires providing analytical formulas for the gradients in these parameters, hence complicating their implementation [23]. More accurate models that take into account the vector nature of the emitted light have also been proposed [25, 26]. They incorporate the effect of the interface between the medium embedding the fluorescent particles and the coverslip which causes extra aberrations, polarization-dependent transmission and supercritical angle fluorescence (SAF) radiation [26, 27]. However, these approaches assume a scalar mask to characterize all the remaining aberrations, thus preventing them from correcting any residual polarization-dependent effects. Moreover, the polarized components of the emitted fields are eventually summed in order to model an unpolarized measurement, which is not directly applicable to polarized PSFs in SMOLM.
Recently, new SMOLM techniques have been proposed that use birefringent elements either to encode efficiently the 3D orientation and 3D localization information of the emitting dipole into the shape of the two polarization components of the PSF [9, 10, 28, 11], or to understand the intensity and/or shape of their different polarization projections [7, 8, 29]. For such approaches, it is essential to take into account the true emission of the dipole source, its interaction with the interface between the embedded medium and coverslip, and polarization-dependent aberrations [30]. To address these issues, a model is used here where all vector aspects of the propagation of the light emitted by the source to the back focal plane (BFP) are taken into account, and where the aberrations are represented by a birefringent pupil distribution modeled with a spatially-varying Jones matrix [31]. It is shown that in order to properly characterize this birefringent pupil, it is necessary to introduce polarization diversity, obtained by projecting the PSFs onto various polarization states, in addition to the phase diversity given by changing the location of the focal plane. The PSF images generated with these two diversities form a _PZ-stack_ that is fed into a nonlinear optimization algorithm that allows retrieving the unknown birefringent pupil. This approach allows including many parameters that are necessary for a proper characterization, such as photo-bleaching amplitudes, the background illumination, or diversity-dependent tilts. Accurate and computationally amenable models for the light produced by fluorescent beads (commonly used for calibration measurements) are also included. These models take into account the unpolarized nature of the emitted light and the blurring due to the bead size [32]. The software package pyPSFstack used for the modeling of PZ-stacks and for the phase retrieval process can be found in [33]. The retrieval algorithm was implemented with the neural network framework PyTorch[34], which greatly simplifies its implementation and flexibility due to the automatic computation of all gradients, and its integration with GPU if available. While the emphasis of this work is on the characterization of birefringent pupils, both the theory and code are equally applicable to scalar aberration pupils as shown in the Supplement.
## 2 The point-spread function of a dipolar source
### Field at the back-focal plane
In order to properly characterize a given system, it is necessary to first derive an accurate model. Here, the situation depicted in Fig. 1 is considered: the incoherent light emitted by a fluorescent bead is collected by an immersion microscope objective with a high numerical aperture (NA). The bead is assumed to be placed at a distance \(z_{0}\) from the interface between its embedding
medium with index of refraction \(n_{i}\) and the coverslip assumed to have the same index of refraction \(n_{f}\) as that of the immersion liquid. The index mismatch between these media introduces extra aberrations (see Supplement for more details) and polarization-dependent transmission following the Fresnel coefficients. It also allows the coupling of evanescent components emitted by the bead when \(n_{f}>n_{i}\) leading to SAF radiation, which can make up a significant portion of the light detected by the camera [35, 36, 37]. All of these effects can be encapsulated into the Green tensor for a dipolar source at the BFP of the microscope objective, which can be written as [38, 39, 40, 27]
\[\mathbb{G}(\mathbf{u};z_{0},\Delta)=\exp\left\{\mathrm{i}kn_{f}|z_{0}|\left[ \frac{n_{i}}{n_{f}}\sqrt{1-\left(\frac{n_{f}u}{n_{i}}\right)^{2}}-\alpha\sqrt {1-u^{2}}\right]\right\}\exp\left(\mathrm{i}kn_{f}\Delta\sqrt{1-u^{2}}\right) \mathbb{g}(\mathbf{u}), \tag{1}\]
where \(\mathbb{g}\) is a \(2\times 3\) matrix (see Supplement for explicit form) that includes the effect of the Fresnel coefficients for the interface and depends only on the normalized pupil coordinates at the BFP, \(\mathbf{u}=(u_{x},u_{y})\) whose maximum value is limited by the NA through \(\|\mathbf{u}\|_{\mathrm{max}}=\mathrm{NA}/n_{f}\), \(z_{0}\) is the distance of the dipole from the coverslip, and \(k=2\pi/\lambda\) with \(\lambda\) being the wavelength of the emitted light. (Note that the definition of \(\mathbf{u}\) differs from that in Ref. [32] by a factor of \(n_{f}\).) For simplicity, it was assumed that the source is centered at the optical axis. The position of the focal plane \(z_{f}\) shown in Fig. 1 requires specifying two parameters: \(z_{f}=\Delta-\alpha|z_{0}|\), where \(\alpha\) is a dimensionless parameter fixing the position chosen for the reference focal plane (RFP) and \(\Delta\) the position of the focal plane with respect to the RFP. The parameter \(\alpha\) is generally taken to be the one producing the best focus when \(\Delta=0\), although its definition is not unique. As detailed in the Supplement, minimizing the root-mean-square spot size with fourth-order corrections for the wavefront difference between the SAF and defocus produces the simple expression
\[\alpha=\frac{n_{r}^{3}\left(32n_{r}^{2}+11\right)}{24n_{r}^{4}+16n_{r}^{2}+3}, \tag{2}\]
with \(n_{r}=n_{f}/n_{i}\), which gives satisfactory results for both water/oil (\(n_{r}=1.1391\)) and air/oil (\(n_{r}=1.515\)) interfaces. This value is taken as the default in this work, but it can be changed to match other experimental configurations.
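For reference, Eq. (2) is straightforward to evaluate numerically; a short sketch (the function name is ours):

```python
def alpha_rfp(n_f: float, n_i: float) -> float:
    """Dimensionless parameter fixing the reference focal plane, Eq. (2)."""
    n_r = n_f / n_i
    return n_r**3 * (32 * n_r**2 + 11) / (24 * n_r**4 + 16 * n_r**2 + 3)

print(alpha_rfp(1.515, 1.33))  # water/oil interface, n_r ≈ 1.1391
print(alpha_rfp(1.515, 1.0))   # air/oil interface, n_r = 1.515
```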
### Birefringence at the pupil plane
Figure 1: Schematic of the experimental setup for the collection and shaping of the emission by a source. (a) Position of the fluorescent nanobead of radius \(R_{b}\), embedded in a medium of index of refraction \(n_{i}\), with respect to the interface created by the coverslip and the immersion liquid of the microscope objective, with index of refraction \(n_{f}\), and the focal plane located at \(z_{f}\), \(z_{f}<0\) (\(z_{f}>0\)) if the focal plane is in the medium with index of refraction \(n_{i}\) (\(n_{f}\)). (b) Schematic of the collection arm composed of a microscope objective (MO), followed by a birefringent mask (BM) and a polarization analyzer (PA) at the back focal plane (BFP). The light at the BFP is then focused onto the camera by the tube lens (TL) of focal length \(f_{\mathrm{tl}}\).

As mentioned in the introduction, the goal of this work is to characterize any residual aberrations and polarization-dependent effects due to stress and/or interfaces, the use of masks aimed at tailoring the PSF, or a combination thereof. These residual effects produce a birefringent distribution at the BFP, shown as a mask in Fig. 1, which can be represented by a \(2\times 2\) space-dependent Jones matrix [31],
\[\mathbb{J}_{\mathrm{M}}(\mathbf{u})=\exp\left[\mathrm{i}2\pi W(\mathbf{u}) \right]\left(\begin{array}{cc}q_{0}(\mathbf{u})+\mathrm{i}q_{3}(\mathbf{u})&q _{2}(\mathbf{u})+\mathrm{i}q_{1}(\mathbf{u})\\ -q_{2}(\mathbf{u})+\mathrm{i}q_{1}(\mathbf{u})&q_{0}(\mathbf{u})-\mathrm{i}q_ {3}(\mathbf{u})\end{array}\right), \tag{3}\]
where \(W\) represents a scalar aberration function, and the scalar pupil functions \(q_{j}\) are real. This matrix can be made unitary by enforcing the condition \(\sum_{j}q_{j}^{2}=1\), and otherwise it includes the effect of apodization. The birefringence distribution \(\mathbb{J}_{\mathrm{M}}\) can be used to represent both a mask introduced to shape the PSFs, such as a stress-engineered optic (SEO) [41, 42, 11] or a q-plate [43, 44, 10], and the scalar and polarization aberrations of the system. (The simplest case of a scalar mask corresponds to \(q_{0}=1,q_{1}=q_{2}=q_{3}=0\).) This Jones matrix acts on the Green tensor of the dipolar source at the BFP and the result is then propagated to the image plane via
\[\mathbb{G}_{\mathrm{IP}}(\tilde{\mathbf{\rho}};z_{0},\Delta)=\iint\mathbb{J}_{ \mathrm{M}}(\mathbf{u})\cdot\mathbb{G}(\mathbf{u};z_{0},\Delta)\exp\left(- \mathrm{i}k\,\frac{n_{f}\tilde{\mathbf{\rho}}}{M}\cdot\mathbf{u}\right)\mathrm{d} ^{2}u, \tag{4}\]
where \(\tilde{\mathbf{\rho}}\) denotes the transverse position at the image plane, and \(M\) is the total magnification of the system. Note that for setups using a relay system the coordinates of \(\mathbb{G}\) should be flipped, \(\mathbf{u}\rightarrow-\mathbf{u}\).
For a fully polarized dipole oriented along the unit vector \(\tilde{\mathbf{\mu}}=(\mu_{x},\mu_{y},\mu_{z})\), the electric field distribution at the image plane is given by
\[\mathbf{E}_{\mathrm{IP}}(\mathbf{u};\mathbf{r}_{0})=\mathbb{G}_{\mathrm{IP}}( \tilde{\mathbf{\rho}};z_{0},\Delta)\cdot\tilde{\mathbf{\mu}}. \tag{5}\]
Therefore, the three columns of the Green tensor represent the field distribution produced by a dipole along each of the three coordinate axes. Since the information about the orientation of the dipole is encoded into the components of the Green tensor, in order to retrieve the dipole's orientation from the shape of the PSF, it is necessary to spatially separate its projections into two appropriately chosen orthogonal polarization states, such as horizontal and vertical linear, or left and right circular. These polarization projections can be represented by two matrices \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\); thus, for a fully polarized dipole, the pair of PSFs is given by
\[I_{\mathrm{IP},j}(\tilde{\mathbf{\rho}};\mathbf{r}_{0})=\left\|\mathbb{P}_{j} \cdot\mathbb{G}_{\mathrm{IP}}(\tilde{\mathbf{\rho}};\mathbf{r}_{0})\cdot\tilde{\bm {\mu}}\right\|^{2}, \tag{6}\]
with \(j=1,2\). For unpolarized emitters, such as fluorescent beads used for characterization, the PSF is given by the incoherent sum of the components of the final Green tensor which amounts to the incoherent sum of the dipoles oriented along the three coordinates axes. In this case the pair of PSFs are given by
\[I_{\mathrm{IP},j}(\tilde{\mathbf{\rho}};\mathbf{r}_{0})=\left\|\mathbb{P}_{j} \cdot\mathbb{G}_{\mathrm{IP}}(\tilde{\mathbf{\rho}};\mathbf{r}_{0})\right\|^{2}. \tag{7}\]
Note that if no polarization projection is made then the PSF is given by the sum of the pair of PSFs \(I_{\mathrm{IP},1}\) and \(I_{\mathrm{IP},2}\).
## 3 Forward model for characterizing a birefringent pupil
### Polarization aberrations
The goal of this work is to be able to determine from a set of calibration PSFs the system's birefringent pupil distribution, represented by a Jones matrix of the form in Eq. (3). The various functions in this expression must first be expanded in terms of a basis whose expansion coefficients
are then determined through a nonlinear optimization. Common choices are given by the Zernike polynomials [45] and pixel-based optimization [26], both of which can be implemented with comparable speed since the number of discrete Fourier transforms (the most costly operation) is the same for both cases.
In what follows, a decomposition in the Zernike polynomial basis is used given that its elements are simpler to interpret, they provide a complete basis on the unit disk and allow an accurate description with fewer parameters (although examples using the pixel-based approach are shown in the Supplement). Therefore, the components of the Jones matrix are decomposed as
\[W(\mathbf{u})=\sum_{l}^{\prime}c_{l}^{(W)}Z_{l}(\mathbf{u}/u_{\max}),\qquad \text{and}\qquad q_{j}(\mathbf{u})=\sum_{l}c_{l}^{(j)}Z_{l}(\mathbf{u}/u_{ \max}), \tag{8}\]
where \(j=0,\ldots,3\), and a single index notation was used for the basis elements \(Z_{l}\) (e.g. the Fringe notation). Note that \(\sum^{\prime}\) in the expression for \(W\) indicates that the terms corresponding to piston and defocus should be excluded. The piston term only fixes a global phase that is unimportant and cannot be determined from intensity measurements, while the defocus term is redundant with the more accurate defocus parameter \(\Delta\) in Eq. (1) which is also included as an optimization parameter. Also, note that any misalignment of the PSFs with respect to the optical axis is corrected by the scalar tilts present in \(W\). The number of Zernike polynomials to be included in the decomposition depends on the expected spatial dependence of the birefringent pupil. In this work, we found that using the first 15 polynomials is usually enough, even when retrieving discontinuous pupils such as the s-plate (see Fig. 3). It should also be mentioned that the smooth description provided by the Zernike model works despite the fast variations due to the SAF radiation since these are already included in the propagation model. This Zernike expansion is inspired by the Nijboer-Zernike theory [46, 47, 48] where a scalar mask would be separated into real and imaginary parts before decomposing them in terms of Zernike polynomials.
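As an illustration of Eqs. (3) and (8), the following NumPy sketch assembles the Jones pupil from Zernike coefficients. The basis is truncated to six unnormalized low-order terms for brevity, and the ordering, normalization, and function names are illustrative (pyPSFstack's actual implementation may differ):

```python
import numpy as np

def zernike_basis(ux, uy):
    """Six unnormalized low-order Zernike terms on the unit disk
    (piston, tilts, defocus, astigmatisms), in Cartesian form."""
    r2 = ux**2 + uy**2
    return np.stack([np.ones_like(ux), ux, uy, 2*r2 - 1, ux**2 - uy**2, 2*ux*uy])

def jones_pupil(ux, uy, cW, cq):
    """Assemble the birefringent pupil of Eq. (3) from Zernike coefficients
    cW (scalar aberration W) and cq of shape (4, n_modes) (pupils q_0..q_3),
    as in Eq. (8)."""
    Z = zernike_basis(ux, uy)                  # (n_modes, Ny, Nx)
    W = np.tensordot(cW, Z, axes=1)
    q = np.tensordot(cq, Z, axes=([1], [0]))   # (4, Ny, Nx)
    norm = np.sqrt((q**2).sum(axis=0))         # enforce unitarity (omit to allow apodization)
    q0, q1, q2, q3 = q / norm
    J = np.empty((2, 2) + W.shape, dtype=complex)
    J[0, 0] = q0 + 1j*q3
    J[0, 1] = q2 + 1j*q1
    J[1, 0] = -q2 + 1j*q1
    J[1, 1] = q0 - 1j*q3
    return np.exp(2j*np.pi*W) * J

# Hypothetical usage on a pupil grid:
u = np.linspace(-1, 1, 64)
ux, uy = np.meshgrid(u, u)
cW = np.array([0, 0, 0, 0, 0.1, 0.05])         # mild astigmatism in W
cq = np.zeros((4, 6)); cq[0, 0] = 1.0          # q_0 = 1: scalar pupil
J = jones_pupil(ux, uy, cW, cq)
```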
### Phase and polarization diversity
It is common practice in phase retrieval algorithms for optical microscopes to assume access to a stack of intensity images for varying focal distances \(\Delta_{\zeta}\) from the location of the best focus (a Z-stack). The phase diversity provided by the varying focal distances is taken into account by multiplying the Green tensor in Eq. (1) by the phase factor
\[D(\mathbf{u};\Delta_{\zeta})=\exp\left[\mathrm{i}kn_{f}\Delta_{\zeta}\sqrt{1-u ^{2}}\right]. \tag{9}\]
This additional information, referred to as phase diversity, helps the algorithm converge to an appropriate solution without falling into local minima, as well as discriminate between vortices with opposite topological charge.
While a Z-stack is sufficient to determine scalar masks and aberrations, it is not so for birefringent pupils since it does not discriminate between the true birefringent pupil given by \(\mathbb{J}\) and its unitary transformations \(\mathbb{J}_{\mathbb{U}}=\mathbb{U}\cdot\mathbb{J}\), where \(\mathbb{U}\) is a constant unitary matrix, as exemplified in the following section. Therefore, it is necessary to include information about the polarization dependence of the PSFs used for the retrieval. This additional information is obtained by introducing a polarization analyzer after the birefringent mask (see Fig. 1) composed of a combination of waveplates and polarizers, where at least one element rotates to generate various polarization projections of the output. This polarization diversity is modeled by a set of constant Jones matrices \(\mathbb{P}^{(P)}\) that are applied to the Green tensor at the BFP along with the defocus terms for the phase diversity in order to generate a PZ-stack of Green tensors
\[\mathbb{G}_{\mathrm{BFP}}^{(\zeta,p)}(\mathbf{u})=D^{(\zeta)}(\mathbf{u}) \mathbb{P}^{(P)}\cdot\mathbb{J}_{\mathbb{M}}(\mathbf{u})\cdot\mathbb{G}_{0}( \mathbf{u}). \tag{10}\]
It is important to notice that the constant matrices \(\mathbb{P}^{(P)}\) cannot be unitary, since a unitary matrix would amount to an overall unitary transformation and thus have no effect on the shape of the PSFs. The simplest nonunitary matrix to implement is a projection matrix obtained by placing a linear polarizer at the end of the waveplate sequence.
This PZ-stack of Green tensors is then propagated to the image plane via
\[\mathbb{G}_{\mathrm{IP}}^{(\zeta,\,p)}(\mathbf{\rho})=\iint\mathbb{G}_{\mathrm{BFP }}^{(\zeta,\,p)}(\mathbf{u})\exp\left(-\mathrm{i}k\frac{n_{f}\mathbf{\rho}}{M} \cdot\mathbf{u}\right)\mathrm{d}^{2}u, \tag{11}\]
and its components are added incoherently (modeling an unpolarized source) to obtain a PZ-stack of PSFs
\[I_{\mathrm{IP}}^{(\zeta,\,p)}(\mathbf{\rho})=\left\|\mathbb{G}_{\mathrm{IP}}^{( \zeta,\,p)}\right\|^{2}=\sum_{i=x,\,y}\sum_{j=x,\,y,\,z}|G_{\mathrm{IP},ij}^{( \zeta,\,p)}(\mathbf{u})|^{2}, \tag{12}\]
like the one shown in Fig. 2. The polarization projections \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) used to extract the orientation information can also be used to define the polarization diversity as discussed in Sec. 5. It is worth noting that while experimentally the polarization diversity happens at the BFP, computationally it is better to perform it at the image plane in order to avoid the computation of unnecessary fast Fourier transforms (FFT). However, as discussed in Sec. 6, there are some situations in which this is not possible.
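Schematically, Eqs. (10)-(12) amount to the following NumPy sketch, assuming the BFP Green tensor and the pupil are sampled as arrays of shape \((2,3,N,N)\) and \((2,2,N,N)\); FFT sign conventions and normalizations are omitted for brevity:

```python
import numpy as np

def pz_stack(G0, J, defocus_phases, projectors):
    """Minimal PZ-stack model, Eqs. (10)-(12): apply the pupil J and the
    phase/polarization diversities to the BFP Green tensor G0, propagate
    via FFT, and sum the component intensities incoherently."""
    GJ = np.einsum('ikxy,kjxy->ijxy', J, G0)                  # J · G0
    stack = []
    for D in defocus_phases:                                  # phase diversity
        for P in projectors:                                  # polarization diversity
            G = np.einsum('ik,kjxy->ijxy', P, D * GJ)
            G_ip = np.fft.fftshift(
                np.fft.fft2(np.fft.ifftshift(G, axes=(-2, -1)), axes=(-2, -1)),
                axes=(-2, -1))
            stack.append((np.abs(G_ip)**2).sum(axis=(0, 1)))  # Eq. (12)
    return np.array(stack)

# Hypothetical usage with random placeholders on a 64x64 pupil grid:
N = 64
G0 = np.random.randn(2, 3, N, N) + 1j*np.random.randn(2, 3, N, N)
J = np.tile(np.eye(2, dtype=complex)[:, :, None, None], (1, 1, N, N))
D = [np.ones((N, N), dtype=complex)]                          # single focal plane
P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]
psfs = pz_stack(G0, J, D, P)                                  # shape (2, N, N)
```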
### Modelling the measured PSFs
Before comparing the PZ-stack computed by the model presented thus far with one measured experimentally, it is necessary to account for other effects. First, depending on the size of the fluorescent bead it might be necessary to include a blurring effect. As we showed in Ref. [32], the exact three-dimensional blurring corresponds to a superposition of two-dimensional convolutions between the PSFs generated by point sources located along the longitudinal diameter of the bead with a kernel that weights more heavily the contributions near the center of the bead and vanishes for those at the poles. This exact blurring cannot be rewritten as a three-dimensional convolution due to the loss of the translation invariance along the axis of propagation, but it can be approximated through a semi-analytic method based on a Taylor expansion. By keeping the first term of the expansion, we obtain a two-dimensional blurring model based on a two-dimensional convolution of the PSFs. If instead we keep two terms in the expansion then we get a three-dimensional blurring model, that can model the blurring along the transverse and longitudinal directions, and that is computed via two-dimensional convolutions of the PSFs and their second-derivatives with respect to \(\Delta\). Both of these approximate models are computationally more efficient than the exact one and have been implemented in pyPSFstack. Moreover, they can be used during the pupil retrieval process, as shown in Sec. 5.2, albeit at a computational cost due to the supplementary Fourier transforms that need to be computed. Nonetheless, they allow for the analytic computation of the gradients which significantly limits the computational slowdown, as opposed to using readily available functions to blur images, such as the one used in [26] with Gaussian kernel, which require the gradients to be computed via finite differences.

Figure 2: PZ-stack for an unpolarized emitter shaped with an SEO element and modeled with pyPSFstack. Only the PSFs at the initial, middle and final values of the defocus parameter \(\Delta_{\zeta}\) used for the phase diversity in the retrieval of the pupil shown in Fig. 3 are shown. For each \(\Delta_{\zeta}\) we show the PSFs for all values of the polarization diversity. This polarization diversity is generated with a rotating quarter wave plate followed by a projection onto linear horizontal or vertical polarization states.
The other two effects that must be considered are the photobleaching of the fluorescent beads and the background illumination. The photobleaching causes the number of photons emitted by the nanobead to diminish with time. Its effect can be taken into account by implementing an overall amplitude factor \(a^{(p,\zeta)}\) which depends on both the phase and polarization diversities. The background illumination is then added incoherently to the photobleached PSF stack. The simplest model is to assume that the background illumination is determined by a constant term \(b^{(p,\zeta)}\) that depends on the diversities. Extra terms for a spatially-dependent background can be added if needed [49]. The modelled PZ-stack to be compared to the experimental measurements is then given by
\[I_{\mathrm{tot}}^{(\zeta,p)}(\mathbf{\rho})=a^{(p,\zeta)}\mathcal{B}\left[I_{ \mathrm{IP}}^{(\zeta,p)}(\mathbf{\rho});R_{b}\right]+b^{(p,\zeta)}, \tag{13}\]
where \(\mathcal{B}\) denotes the blurring operation that depends on the radius of the bead.
### Assessing the accuracy with a cost function
The last piece of the forward model to be considered is the choice of cost function used to tune the parameters so that the modeled PSFs, \(I_{\mathrm{tot}}^{(\zeta,p)}\), best fit the measured ones, \(I_{\mathrm{exp}}^{(\zeta,p)}\). In the absence of noise, any choice of cost function that has a minimum when the two quantities are the same should provide the same result. However, noise is always present in experimental measurements and thus must be taken into account. In single molecule fluorescent microscopy one is normally limited by shot noise following a Poisson distribution, in which case the log-likelihood cost function [50]
\[C=-\sum_{\zeta,p}\iint w(\mathbf{\rho})\,\left\{I_{\mathrm{exp}}^{(\zeta,p)}(\mathbf{ \rho})\log\left[I_{\mathrm{tot}}^{(\zeta,p)}(\mathbf{\rho})\right]-I_{\mathrm{tot }}^{(\zeta,p)}(\mathbf{\rho})\right\}\mathrm{d}^{2}\rho \tag{14}\]
should be used. Here, \(w\) denotes a binary window function used to represent the region considered for the optimization due to a smaller size of the experimental data, and/or to exclude bad pixels of the camera. Another common option for the cost function is the sum of differences squared, which is appropriate when the noise follows a Gaussian distribution. Both of these options are implemented in pyPSFstack. Note that for the choice of cost function to be consistent, the values of \(I_{\mathrm{exp}}^{(\zeta,p)}\) used must actually follow the assumed distribution. This means that the images should not be denoised and that the offset of the camera should be removed.
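For concreteness, a minimal PyTorch version of the Poisson log-likelihood cost of Eq. (14) might look as follows (the small constant guarding the logarithm is our addition):

```python
import torch

def neg_log_likelihood(I_model, I_exp, window, eps=1e-9):
    """Poisson negative log-likelihood of Eq. (14), summed over the
    PZ-stack and the windowed pixels; eps guards the logarithm."""
    ll = I_exp * torch.log(I_model + eps) - I_model
    return -(window * ll).sum()
```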
## 4 Implementing the nonlinear optimization
The goal of the nonlinear optimization routine is to find the set of optimization parameters in the forward model that minimize the cost function assessing the differences between the measured and modeled PZ-stacks. This is achieved by supplying an optimization algorithm, such as
Adam [51] or L-BFGS [52], with a function that uses all the current values of the parameters to compute the forward model all the way to the value of the cost function. Additionally, one should provide the optimization algorithm with another function that performs a backward computation to obtain the gradients of the cost function with respect to all the optimization parameters. These gradients are used to change the parameter values until a minimum is reached. The gradient computation is straightforward but tedious, and can be achieved by following the rules outlined in Ref. [53]. However, an advantage of implementing the nonlinear optimization with the neural network framework PyTorch is that only the forward model must be implemented explicitly, since the backward model for computing the gradients of the various parameters is automatically computed. This framework also offers the most common optimization algorithms. The Adam algorithm was chosen for the results presented here since it offers several advantages in terms of speed and memory use.
To implement the forward model, all calculations are performed numerically using fast Fourier transforms (FFTs), requiring all spatial quantities to be sampled consistently with the camera's pixel pitch \(p_{\text{cam}}\), so that the size at the BFP is fixed to \(L_{\text{pupil}}=\lambda M/(p_{\text{cam}}n_{f})\). The total size at the BFP is divided into \(N\) points used for the computation, and the resulting two-dimensional sampling is labelled with the lexicographic index \(\mathbf{\ell}\) which takes over all spatial dependence in terms of \(\mathbf{u}\) and \(\mathbf{\rho}\), for instance \(\mathbb{G}_{0}(\mathbf{u})\rightarrow\mathbb{G}_{0}(\mathbf{\ell})\). All the parameters and steps necessary to compute the forward model are summarized in Algorithm 1.
**Algorithm 1** Implementation of the forward model

**Require:** \(I_{\text{exp}}^{(\zeta,p)}\): measured PSFs; \(z_{0}\): distance to the coverslip; \(\alpha\): parameter defining the RFP; \(\Delta_{\zeta}\): \(N_{z}\) defocuses used for the phase diversity; \(\mathbb{P}^{(p)}\): \(N_{p}\) Jones matrices used for the polarization diversity.

**Optimization parameters:** \(\Delta\): defocus from the RFP; \(c_{l}^{(W)}\), \(c_{l}^{(j)}\): Zernike expansion coefficients; \(R_{b}\): radius of the bead used for the blurring; \(a^{(p,\zeta)}\): photobleaching amplitudes; \(b^{(p,\zeta)}\): background illumination.

**Procedure** (forward model):

1. \(\mathbb{G}_{J}(\mathbf{\ell})\leftarrow\mathbb{J}\left[\mathbf{\ell};c_{l}^{(W)},c_{l}^{(j)}\right]\cdot\mathbb{G}(\mathbf{\ell};z_{0},\alpha,\Delta)\) ⊳ Apply birefringent mask to Green tensor
2. \(\mathbb{G}^{(\zeta)}(\mathbf{\ell})\leftarrow D(\Delta_{\zeta})\,\mathbb{G}_{J}(\mathbf{\ell})\) ⊳ Apply phase diversity
3. \(\mathbb{G}_{\text{IP}}^{(\zeta)}(\mathbf{\ell})\leftarrow\mathrm{FFT}\left\{\mathbb{G}^{(\zeta)}(\mathbf{\ell})\right\}\) ⊳ Propagate to image plane
4. \(\mathbb{G}_{\text{IP}}^{(\zeta,p)}(\mathbf{\ell})\leftarrow\mathbb{P}^{(p)}\cdot\mathbb{G}_{\text{IP}}^{(\zeta)}(\mathbf{\ell})\) ⊳ Apply polarization diversity
5. \(I_{\text{IP}}^{(\zeta,p)}(\mathbf{\ell})\leftarrow\sum_{i}\sum_{j}|G_{\text{IP},ij}^{(\zeta,p)}(\mathbf{\ell})|^{2}\) ⊳ Compute intensity
6. \(I_{\text{blur}}^{(\zeta,p)}(\mathbf{\ell})\leftarrow\mathcal{B}\left[I_{\text{IP}}^{(\zeta,p)}(\mathbf{\ell}),R_{b}\right]\) ⊳ Apply blurring
7. \(I_{\text{tot}}^{(\zeta,p)}(\mathbf{\ell})\leftarrow a^{(p,\zeta)}I_{\text{blur}}^{(\zeta,p)}(\mathbf{\ell})+b^{(p,\zeta)}\) ⊳ Photobleaching and background illumination
8. \(C\leftarrow-\sum_{\mathbf{\ell},\zeta,p}w(\mathbf{\ell})\left\{I_{\text{exp}}^{(\zeta,p)}(\mathbf{\ell})\ln\left[I_{\text{tot}}^{(\zeta,p)}(\mathbf{\ell})\right]-I_{\text{tot}}^{(\zeta,p)}(\mathbf{\ell})\right\}\) ⊳ Compute cost function

**Return** \(C\)
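Schematically, the resulting retrieval loop can be set up in PyTorch as in the sketch below, where `forward_model` is a purely illustrative stand-in for steps 1-7 of Algorithm 1, and all shapes, initial values, and hyperparameters are placeholders rather than pyPSFstack's actual code:

```python
import torch

def forward_model(cW, cq, defocus, a, b):
    # Stand-in for Algorithm 1 (steps 1-7); a real implementation would
    # build the pupil from (cW, cq), apply the diversities, and FFT.
    base = torch.ones(4, 32, 32) * (1 + cW.sum()**2 + cq.sum()**2 + defocus**2)
    return a[:, None, None] * base + b[:, None, None]

I_exp = torch.ones(4, 32, 32) * 100.0        # placeholder measured PZ-stack
window = torch.ones_like(I_exp)              # binary window w
params = {
    "cW": torch.zeros(15, requires_grad=True),
    "cq": torch.zeros(4, 15, requires_grad=True),
    "defocus": torch.zeros((), requires_grad=True),
    "a": torch.ones(4, requires_grad=True),
    "b": torch.zeros(4, requires_grad=True),
}
opt = torch.optim.Adam(params.values(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    I_model = forward_model(**params)
    # Poisson negative log-likelihood of Eq. (14) (step 8):
    loss = -(window * (I_exp * torch.log(I_model + 1e-9) - I_model)).sum()
    loss.backward()                          # gradients via autograd
    opt.step()
```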
## 5 Numerical experiments
### Polarization diversity VS more phase diversity
To exemplify the implementation of the phase retrieval algorithm and, in particular, the need for polarization diversity to properly characterize a birefringent pupil, we consider the retrieval of two masks used recently for estimating the position and orientation of single emitters: an SEO (with its parameter set to \(c=1.25\pi\)) [41, 42] and an s-plate [43, 10, 44], both shown in Fig. 3. Following the strategy outlined thus far, pyPSFstack is used to model a PZ-stack for each birefringent pupil such as the one shown in Fig. 2 for the SEO. For the phase diversity images are taken from \(-250\)nm to \(250\)nm of the nominal focal plane with a step size of \(50\)nm. For the polarization diversity, a quarter wave plate (QWP) is rotated from \(0\) to \(3\pi/8\) with a step of \(\pi/8\) and is followed by a Wollaston prism that projects the output onto horizontal and vertical linear polarizations. This choice is inspired by the setup used in [11] where the SEO is followed by a QWP and a Wollaston prism to project the PSF into left and right circular polarizations. It is also assumed that \(10000\) photons make it on average to the camera to form the PSFs, to which an additional \(50\) photons per pixel are added as background. Noise following a Poisson distribution is then added to the images. For simplicity, we consider a small fluorescent bead with \(R_{b}=10\)nm so that spatial blurring is negligible. Nonetheless, the exact blurring model introduced in [32] was used to compute the PZ-stacks for testing the birefringent pupil retrieval algorithm. Moreover, a random error of the order of \(20\)nm was introduced to the distance between the bead and the coverslip, and one of the order of \(25\)nm to the location of the RFP.
The results of the procedure are shown in Fig. 3, where the Jones matrix for the ground truth is compared to the ones retrieved from the PZ-stacks. This figure also shows the PSFs formed by dipoles oriented along the three Cartesian axes and by an unpolarized one (constructed as an incoherent mixture of the previous three). The PSFs shown are modeled using the standard projectors \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) (Eqs. (6) and (7)) for each birefringent mask: for the SEO the output is projected onto left and right circular polarizations, while for the s-plate the output is projected onto the horizontal and vertical polarizations. For the SEO, the difference between the retrieved pupil and the corresponding PSFs cannot be distinguished visually from the one corresponding to the ground truth. However, for the s-plate there are appreciable differences between the original and the retrieved pupils. The deviation at the center is due to the chosen model, since the Zernike polynomials struggle to reproduce the singularity at the center of the s-plate. The other noticeable difference is that the phase between the two rows is not correct. This error happens only for birefringent masks that generate rotationally symmetric PSFs for unpolarized emitters for all polarization projections. Therefore, the algorithm cannot determine the global phase between the rows of the corresponding Jones matrix. This problem can be solved by placing the device that introduces the polarization diversity before the birefringent mask. Nonetheless, the reproduced PSFs for any dipole orientation are indistinguishable from the true ones and thus this difference is inconsequential for the purposes of single molecule fluorescence microscopy. Additionally, the error introduced for \(z_{0}\) and \(\alpha\) has no impact on the retrieval as long as the defocus \(\Delta\) is included as an optimization parameter.

Figure 3: Birefringent pupil retrieval with and without polarization diversity. (first row) Elements of the Jones matrix for the ground truth and the retrieved pupil with and without polarization diversity for (left) an SEO element and (right) an s-plate. Also shown are the PSFs generated by point dipoles (second to fourth row) oriented along each of the three Cartesian axes, and (last row) an unpolarized dipole for each of the standard projectors used for each type of birefringent window. For the SEO, the PSF are projected onto left- and right-circular polarizations, while for the s-plate they are projected onto linear horizontal and vertical polarizations.
For comparison, the retrieval for the same birefringent mask is also performed without using polarization diversity. To make this comparison consistent and fair, the number of photons reaching the camera and in the background illumination are doubled since there is no Wollaston prism to cut them in half in order to project onto a polarization state. Moreover, the phase diversity images are now taken from \(-250\)nm to \(250\)nm of the RFP with a reduced step size of \(5\)nm so that the total number of PSFs used is larger than in the previous case. The results are shown in Fig. 3, where the algorithm is seen to fail to retrieve the appropriate pupil, and thus it cannot generate the correct PSFs when they are projected onto a given polarization state. These results show the need for polarization diversity when characterizing a birefringent pupil.
Figure 4: Retrieval of a birefringent pupil from highly blurred PSFs. Comparison of PSFs for an SEO element at the nominal focal plane (\(\Delta=0\)) for the same polarization diversities as those shown in Fig. 2 for (first row) a point source and (second row) a fluorescent bead with radius \(R_{b}=150\)nm. (third row) Elements of the Jones matrix for the ground truth, and the retrieved birefringent pupils without blurring, with a two-dimensional blurring model, and a three-dimensional semi-analytic blurring model.
### Incorporating blurring due to size to the pupil retrieval
As an additional test of the present implementation, the retrieval of a birefringent pupil from a highly blurred PZ-stack like the one shown in Fig. 4 is considered. Using the SEO as an example, a PZ-stack is constructed as in the previous section with the significant difference of increasing the radius of the nanobead, which is chosen randomly from a uniform distribution between 150nm and 170nm. Here again the distance to the coverslip is taken to be equal to the radius of the bead. As shown in Fig. 4, this bead size causes a significant blur in the resulting PSFs. The blurred PSFs were computed with the exact blurring presented in [32], but this approach turns out to be computationally expensive and not necessary for the retrieval procedure as shown in what follows.
A first attempt to retrieve the pupil is performed by completely neglecting the blurring effect. However, as shown in Fig. 4, this approach fails to retrieve the appropriate pupil, showing that bead size effects cannot be neglected in this case. Next, the two approximate models presented in Ref. [32] were used for the retrieval where the radius of the bead is used as an optimization parameter for the blurring with an initial value of 150nm. The first of these approaches is a 2D convolution with a kernel given by a spherical Bessel function. This approach produces a satisfactory result, whose retrieved pupil has a correlation of 98.6% with the true one. The second is a semi-analytic model capable of reproducing the effects of the three-dimensional blurring, based on a Taylor expansion around the center of the bead and thus requiring the propagation of the first and second derivatives of the Green tensor with respect to \(\Delta\) which are then used for two-dimensional convolutions at the image plane. In this case, a pupil indistinguishable from the true one is again retrieved with a slightly better correlation of 99.5%. However, this small gain might not justify the extra computational resources needed to compute the propagation of the derivatives.
## 6 Characterization from experimental data
After validating the proposed retrieval procedure on simulated data, we apply it to retrieve a birefringent pupil distribution from a PZ-stack measured experimentally. We use a sparse sample of fluorescent nanobeads of diameter 20 nm (orange carboxylate-modified FluoSpheres), immobilized on the surface of a poly-L-lysine-coated coverslip and embedded in water. The sample is mounted on a XYZ piezo stage (Physik Instrumente), and is excited by a continuous wave laser emitting at 561 nm (Oxxius L4Cc) in a wide-field illumination configuration using an oil immersion objective lens (APO TIRF \(\times\)100, NA\(=1.49\), Nikon). The emitted fluorescence is collected by the same objective lens, and then passes through two multiband dichroics (Semrock, Rochester NY) and a fluorescence filter (Semrock, 605/40). A telescopic relay system (composed of two achromatic doublets with \(f=250\) mm so that the magnification is unity) is used to access the BFP of the objective. The different polarization projections are taken by using a QWP (AQWP05M-600, Thorlabs) placed on a rotating mount (Newport, PR50CC), followed by a quartz Wollaston polarizing \(2.2^{\circ}\) beamsplitter (Edmunds, 68-820). The final images are measured using an ORCA Fusion-Digital CMOS C14440-20 UP (1024\(\times\)1024 pixels, \(6.5\times 6.5\)\(\mu\)m pixel size, Hamamatsu). PZ-stacks were acquired with a step size of 50 nm, and a rotation step for the QWP of \(30^{\circ}\). Fig. 5 shows part of the experimental PZ-stack.
Before launching the retrieval on the experimental PZ-stack, it should be noted that there are several factors that lead to the introduction of a diversity-dependent phase tilt at the BFP. First, the use of a Wollaston prism to separate spatially the two polarization components into different sections of the camera might make it difficult to have the same center for the PSFs for each polarization component. Second, any slight wedge on the rotating QWP introduces a tilt that rotates with it. Finally, any slight misalignment between the stage moving the sample and the optical axis defined by the microscope objective leads to a defocus-dependent tilt. Therefore, it is best to introduce in the forward model extra optimization parameters to independently adjust
these tilts at the BFP for each combination of diversities. This change requires modifying the forward model outlined in Sec. 4. Specifically, steps 3 and 4 in Algorithm 1 should be reversed in order to apply the polarization diversity before propagating to the image plane, and the following additional step should be added after applying the polarization diversity:
1. Apply a phase tilt of the form \(T(\mathbf{\ell})=\exp\left(\mathrm{i}2\pi\mathbf{t}^{(\zeta,p)}\cdot\mathbf{\ell}\right)\) to each diversity. **Optimization parameters: \(\mathbf{t}^{(\zeta,p)}=(t_{x}^{(\zeta,p)},t_{y}^{(\zeta,p)})\)**.
The downside of including these extra parameters is that the number of FFTs that needs to be computed for each iteration is increased by a factor equal to the number of polarization diversities.
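This additional step is inexpensive to implement; a minimal sketch is given below (the grid conventions and function name are illustrative):

```python
import numpy as np

def apply_tilt(G_bfp, tx, ty):
    """Apply the diversity-dependent phase tilt T(u) = exp(i 2π t·u)
    to a BFP Green tensor sampled on a unit pupil grid of shape (..., N, N)."""
    n = G_bfp.shape[-1]
    u = np.linspace(-1, 1, n)
    ux, uy = np.meshgrid(u, u)
    return G_bfp * np.exp(2j * np.pi * (tx * ux + ty * uy))
```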
All the optimization parameters for the retrieval procedure were used, except for the bead radius (for the blurring) since it can be neglected due to the small bead size (\(R_{b}=10\)nm). The results of the retrieval process are shown in Fig. 5, showing strong agreement between the measured PZ-stack and the one modeled with the retrieved birefringent pupil. Moreover, the presence of an SEO is visible from the retrieved pupil. In this case, since it is known there is an SEO at the BFP, it is worth trying to separate the total retrieved pupil into a misaligned SEO and another contribution containing the scalar and polarization aberrations of the system. It should be noted that scalar tilts and the defocus term are not shown as part of the scalar aberrations. From this decomposition it can be seen that the largest contribution comes from the SEO element, but the aberrations are not negligible and must be taken into account. Note also that the polarization aberrations show larger variations than the scalar ones, which are almost flat, showing the need to consider polarization aberrations when using polarization-dependent systems.
Figure 5: Birefringent pupil retrieval from experimental data, taken from fluorescent nanobeads of diameter \(20\)nm. (first row) Measured and (second row) retrieved PZ-stacks, where only the PSFs at the initial, middle and final values of the defocus parameter \(\Delta\) used for the phase diversity are shown, for all polarization diversities. (last row) The retrieved elements of the Jones matrix for the birefringent pupil and its decomposition into a misaligned SEO element, and scalar and polarization aberrations.
## 7 Conclusions
A methodology and phase retrieval algorithm were presented for the characterization of birefringent distributions at the pupil plane from stacks of PSFs. In particular, it was shown that, for polarization-dependent systems, aberrations should be modeled as a birefringent pupil encoding both scalar and polarization deviations from the ideal system, and that the use of polarization diversity is essential for its proper characterization. The software program pyPSFstack created and used for the modeling and retrieval presented in this work is freely available. In particular, the birefringent pupil retrieval implementation based on PyTorch makes this software flexible and customizable, as shown in this work by incorporating several optimization parameters apart from those used to describe the birefringent pupil, as well as the blurring models presented in Ref. [32] while always keeping a manageable running time. Additionally, this software can also be used for the retrieval of scalar pupils, as shown in the Supplement for a mask known as the tetrapod [54].
While here it was assumed that the sources used for the characterization were unpolarized, the retrieval model and algorithm presented can be easily adapted to sources that are dipolar, partially polarized, or with a fixed orientation such as the ones present in molecules fixed on a surface [12] or in DNA origami [55, 56]. Likewise, this model can also be used to find optimal scalar or birefringent pupils minimizing a particular combination of the Cramer-Rao bounds for the parameters to be extracted from the shape of the PSFs. Finally, note that there are some modifications that could be implemented to the retrieval algorithm that might allow it to converge faster, such as the use of stochastic gradient descent, where only a given set of diversities are used during each iteration, or a spectral initialization [57].
## Funding
R.G.C. acknowledges funding from the Labex WIFI (ANR-10-LABX-24, ANR-10-IDEX-0001-02 PSL*). L.A.C. and M.A.A. acknowledge funding from ANR-21-CE24-0014 and S.B. from ANR-20-CE42-0003.
## Acknowledgments
R.G.C. acknowledges S. W. Paine and S. M. Popoff for useful discussions. L.A.C. acknowledges M. Sison for the experimental advice. The authors also thank T. G. Brown for supplying the SEO.
## Disclosures
The authors declare no conflicts of interest.
## Data Availability Statement
Data underlying the results presented in this paper are available in Ref. [33].
## Supplemental document
See Supplement 1 for supporting content.
## References
* [1] S. Shashkova and M. C. Leake, "Single-molecule fluorescence microscopy review: shedding new light on old problems," Biosci. Reports **37** (2017).
* [2] K. M. Dean and A. E. Palmer, "Advances in fluorescence labeling strategies for dynamic cellular imaging," Nat. Chem. Biol. **10**, 512-523 (2014).
* [3] B. Huang, M. Bates, and X. Zhuang, "Super-resolution fluorescence microscopy," Annu. Rev. Biochem. **78**, 993-1016 (2009).
* [4] S. Brasselet, "Polarization-resolved nonlinear microscopy: application to structural molecular and biological imaging," Adv. Opt. Photon. **3**, 205 (2011).
* [5] J. T. Fourkas, "Rapid determination of the three-dimensional orientation of single molecules," Opt. Lett. **26**, 211-213 (2001).
* [6] C. A. Valades Cruz, H. A. Shaban, A. Kress, N. Bertaux, S. Monneret, M. Mavrakis, J. Savatier, and S. Brasselet, "Quantitative nanoscale imaging of orientational order in biological filaments by polarized superresolution microscopy," Proc. Natl. Acad. Sci. **113**, E820-E828 (2016).
* [7] C. V. Rimoli, C. A. Valades-Cruz, V. Curcio, M. Mavrakis, and S. Brasselet, "4polar-storm polarized super-resolution imaging of actin filament organization in cells," Nat. Commun. **13**, 301 (2022).
* [8] M. Sison, C. A. Valades Cruz, C. S. Senthil Kumar, V. Curcio, L. A. Aleman-Castaneda, M. Mavrakis, and S. Brasselet, "4polar3D smolm : single molecule orientation and localization microscopy using a simple pupil diaphragm and ratiometric polarization splitting," (in preparation).
* [9] M. P. Backlund, M. D. Lew, A. S. Backer, S. J. Sahl, G. Grover, A. Agrawal, R. Piestun, and W. E. Moerner, "Simultaneous, accurate measurement of the 3D position and orientation of single molecules," Proc. Nat. Acad. Sci. **109**, 19087-92 (2012).
* [10] O. Zhang, W. Zhou, J. Lu, T. Wu, and M. D. Lew, "Resolving the three-dimensional rotational and translational dynamics of single molecules using radially and azimuthally polarized fluorescence," Nano Lett. **22**, 1024-1031 (2022).
* [11] V. Curcio, L. A. Aleman-Castaneda, T. G. Brown, S. Brasselet, and M. A. Alonso, "Birefringent fourier filtering for single molecule coordinate and height super-resolution imaging with dithering and orientation," Nat. Commun. **11** (2020).
* [12] C. N. Hulleman, R. O. Thorsen, E. Kim, C. Dekker, S. Stallinga, and B. Rieger, "Simultaneous orientation and 3D localization microscopy with a vortex point spread function," Nat. Commun. **12**, 5934 (2021).
* [13] T. Ding and M. D. Lew, "Single-molecule localization microscopy of 3D orientation and anisotropic wobble using a polarized vortex point spread function," The J. Phys. Chem. B **125**, 12718-12729 (2021).
* [14] T. Wu, J. Lu, and M. D. Lew, "Dipole-spread-function engineering for simultaneously measuring the 3D orientations and 3D positions of fluorescent molecules," Optica **9**, 505-511 (2022).
* [15] J. Enderlein, E. Toprak, and P. R. Selvin, "Polarization effect on position accuracy of fluorophore localization," Opt. Express **14**, 8111-8120 (2006).
* [16] Q. Hu, C. He, and M. J. Booth, "Arbitrary complex retarders using a sequence of spatial light modulators as the basis for adaptive polarisation compensation," J. Opt. **23**, 065602 (2021).
* [17] C. He and M. J. Booth, "Enhancing polarisation imaging through novel polarimetry and adaptive optics," Proc. SPIE **11963**, Polarized Light and Optical Angular Momentum for Biomedical Diagnostics 2022, 1196302 (2022).
* [18] B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase retrieval for high-numerical-aperture optical systems," Opt. Lett. **28**, 801 (2003).
* [19] B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase-retrieved pupil functions in wide-field fluorescence microscopy," J. Microsc. **216**, 32-48 (2004).
* [20] R. A. Gonsalves, "Phase diversity: math, methods and prospects, including sequential diversity imaging," in _Unconventional Optical Imaging_, vol. 10677 C. Fournier, M. P. Georges, and G. Popescu, eds., International Society for Optics and Photonics (SPIE, 2018), p. 1067715.
* [21] P. N. Petrov, Y. Shechtman, and W. E. Moerner, "Measurement-based estimation of global pupil functions in 3d localization microscopy," Opt. Express **25**, 7945-7959 (2017).
* [22] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of plane from image and diffraction pictures," Optik **35**, 237-246 (1972).
* [23] J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. **21**, 2758 (1982).
* [24] N. A. Clark, "Microscope characterization using phase retrieval applied to determine the spatial distribution of membrane-associated proteins in hematocytes," Ph.D. thesis, University of Rochester (2012).
* [25] N. H. Thao, O. Soloviev, and M. Verhaegen, "Phase retrieval based on the vectorial model of point spread function," J. Opt. Soc. Am. A **37**, 16 (2019).
* [26] B. Ferdman, E. Nehme, L. E. Weiss, R. Orange, O. Alalouf, and Y. Shechtman, "VIPR: vectorial implementation of phase retrieval for fast and accurate microscopic pixel-wise pupil estimation," Opt. Express **28**, 10179 (2020).
* [27] L. Novotny and B. Hecht, _Principles of Nano-Optics_ (Cambridge University Press, 2006).
* [28] O. Zhang, J. Lu, T. Ding, and M. D. Lew, "Imaging the three-dimensional orientation and rotational mobility of fluorescent emitters using the tri-spot point spread function," Appl. Phys. Lett. **113**, 031103 (2018).
* [29] E. Bruggeman, O. Zhang, L.-M. Needham, M. Korbel, S. Daly, M. Cheetham, R. Peters, T. Wu, A. S. Klymchenko, S. J. Davis, E. K. Paluch, D. Klenerman, M. D. Lew, K. O'Holleran, and S. F. Lee, "Polcam: Instant molecular orientation microscopy for the life sciences," bioRxiv (2023).
* [30] E. W. Hansen, "Overcoming polarization aberrations in microscopy," in _Polarization Considerations for Optical Systems_, R. A. Chipman, ed. (SPIE, 1988).
* [31] A. Vella and M. A. Alonso, "Poincare sphere representation for spatially varying birefringence," Opt. Lett. **43**, 379 (2018).
* [32] L. A. Aleman-Castaneda, S. Y.-T. Feng, R. Gutierrez-Cuevas, I. Herrera, T. G. Brown, S. Brasselet, and M. A. Alonso,
"Using fluorescent beads to emulate single fluorophores," J. Opt. Soc. Am. A **39**, C167 (2022).
* [33] R. Gutierrez-Cuevas, "pyPSFstack," [https://github.com/rodguti90/pyPSFstack](https://github.com/rodguti90/pyPSFstack).
* [34] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "Pytorch: An imperative style, high-performance deep learning library," in _Advances in Neural Information Processing Systems_, vol. 32 H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlche-Buc, E. Fox, and R. Garnett, eds. (Curran Associates, Inc., 2019).
* [35] E. H. Hellen and D. Axelrod, "Fluorescence emission at dielectric and metal-film interfaces," J. Opt. Soc. Am. B **4**, 337 (1987).
* [36] D. Axelrod, "Total internal reflection fluorescence microscopy in cell biology," Traffic **2**, 764-774 (2001).
* [37] D. Axelrod, "Evanescent excitation and emission in fluorescence microscopy," Biophys. J. **104**, 1401-1409 (2013).
* [38] B. Richards, E. Wolf, and D. Gabor, "Electromagnetic diffraction in optical systems, ii. structure of the image field in an aplanatic system," Proc. Royal Soc. London. Ser. A. Math. Phys. Sci. **253**, 358-379 (1959).
* [39] E. Wolf, "Electromagnetic diffraction in optical systems, i. an integral representation of the image field," Proc. Royal Soc. London. Ser. A. Math. Phys. Sci. **253**, 349-357 (1959).
* [40] M. A. Lieb, J. M. Zavislan, and L. Novotny, "Single-molecule orientations determined by direct emission pattern imaging," J. Opt. Soc. Am. B **21**, 1210 (2004).
* [41] A. K. Spilman and T. G. Brown, "Stress birefringent, space-variant wave plates for vortex illumination," Appl. Opt. **46**, 61 (2007).
* [42] A. K. Spilman and T. G. Brown, "Stress-induced focal splitting," Opt. Express **15**, 8411 (2007).
* [43] L. Marrucci, C. Manzo, and D. Paparo, "Optical spin-to-orbital angular momentum conversion in inhomogeneous anisotropic media," Phys. Rev. Lett. **96**, 163905 (2006).
* [44] A. Rubano, F. Cardano, B. Piccirillo, and L. Marrucci, "Q-plate technology: a progress review [invited]," J. Opt. Soc. Am. B **36**, D70 (2019).
* [45] N. Yamamoto, J. Kye, and H. J. Levinson, "Polarization aberration analysis using pauli-zernike representation," in _Optical Microlithography XX,_ D. G. Flagello, ed. (SPIE, 2007).
* [46] A. J. E. M. Janssen, "Extended nijboer-zernike approach for the computation of optical point-spread functions," J. Opt. Soc. Am. A **19**, 849 (2002).
* [47] J. J. M. Braat, P. Dirksen, A. J. E. M. Janssen, and A. S. van de Nes, "Extended nijboer-zernike representation of the vector field in the focal region of an aberrated high-aperture optical system," J. Opt. Soc. Am. A **20**, 2281 (2003).
* [48] J. J. Braat, P. Dirksen, A. J. Janssen, S. van Haver, and A. S. van de Nes, "Extended nijboer-zernike approach to aberration and birefringence retrieval in a high-numerical-aperture optical system," J. Opt. Soc. Am. A **22**, 2635 (2005).
* [49] A. Aristov, B. Lelandais, E. Rensen, and C. Zimmer, "ZOLA-3d allows flexible 3d localization microscopy over an adjustable axial range," Nat. Commun. **9** (2018).
* [50] R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A **9**, 1072 (1992).
* [51] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," (2017).
* [52] D. C. Liu and J. Nocedal, "On the limited memory bfgs method for large scale optimization," Math. Program. **45**, 503-528 (1989).
* [53] A. S. Jurling and J. R. Fienup, "Applications of algorithmic differentiation to phase retrieval algorithms," J. Opt. Soc. Am. A **31**, 1348 (2014).
* [54] Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, "Optimal point spread function design for 3d imaging," Phys. Rev. Lett. **113**, 133902 (2014).
* [55] A. K. Adamczyk, T. A. P. M. Huijben, M. Sison, A. Di Luca, G. Chiarelli, S. Vanni, S. Brasselet, K. I. Mortensen, F. D. Stefani, M. Pilo-Pais, and G. P. Acuna, "Dna self-assembly of single molecules with deterministic position and orientation," ACS Nano **16**, 16924-16931 (2022).
* [56] K. Hubner, H. Joshi, A. Aksimentiev, F. D. Stefani, P. Tinnefeld, and G. P. Acuna, "Determining the in-plane orientation and binding mode of single fluorescent dyes in dna origami structures," ACS Nano **15**, 5109-5117 (2021).
* [57] E. J. Candes, X. Li, and M. Soltanolkotabi, "Phase retrieval via wirtinger flow: Theory and algorithms," IEEE Transactions on Inf. Theory **61**, 1985-2007 (2015).
Characterization of polarization dependence in super-resolution fluorescent microscopy via phase retrieval: supplemental document
###### Abstract
This document presents supplementary information for the results presented in the main text. Section 1 gives the exact expression for the Green tensor at the back focal plane for a dipolar source. Section 2 presents various locations commonly used to define the best focus. Section 3 provides an example that uses the pixel-based approach for the retrieval of a birefringent pupil. Section 4 provides an example of the retrieval of a scalar phase mask.
## 1 Expressions for the Green tensor at the back focal plane
As outlined in [1], a closed-form for the Green tensor at the back-focal plane for a dipolar source placed close to an interface can be obtained. In particular the components of the \(\mathfrak{g}\) tensor in Eq. (1) of the main text are given by
\[\mathfrak{g}(\mathbf{u})=\frac{1}{\sqrt{\gamma_{f}(u)}}\left(\begin{array}{ccc}\cos^{2}\phi\,\gamma_{f}(u)\Phi_{2}+\sin^{2}\phi\,\Phi_{3}&\cos\phi\sin\phi\,(\gamma_{f}(u)\Phi_{2}-\Phi_{3})&-u\cos\phi\,\Phi_{1}\\ \cos\phi\sin\phi\,(\gamma_{f}(u)\Phi_{2}-\Phi_{3})&\sin^{2}\phi\,\gamma_{f}(u)\Phi_{2}+\cos^{2}\phi\,\Phi_{3}&-u\sin\phi\,\Phi_{1}\end{array}\right).\] (S1)
where
\[\Phi_{1}(u)=t^{p}(u)\frac{n_{f}^{2}\gamma_{f}(u)}{n_{i}^{2}\gamma_{i}(u)}, \qquad\Phi_{2}(u)=t^{p}(u)\frac{n_{f}}{n_{i}},\qquad\Phi_{3}(u)=t^{s}(u)\frac{ n_{f}\gamma_{f}(u)}{n_{i}\gamma_{i}(u)},\] (S2)
with
\[t^{s}(u)=\frac{2n_{i}\gamma_{i}(u)}{n_{i}\gamma_{i}(u)+n_{f} \gamma_{f}(u)},\qquad t^{p}(u)=\frac{2n_{i}\gamma_{i}(u)}{n_{f}\gamma_{i}(u)+n _{i}\gamma_{f}(u)},\] (S3)
being the Fresnel coefficients for \(\mathfrak{p}\) and \(\mathfrak{s}\) polarized light, and
\[\gamma_{f}(u)=\sqrt{1-u^{2}},\qquad\text{and}\qquad\gamma_{i}(u) =\sqrt{1-\left(\frac{n_{f}u}{n_{i}}\right)^{2}}.\] (S4)
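As a cross-check of these expressions, the following short Python sketch evaluates \(\mathfrak{g}\) at a given pupil point. The default indices \(n_{i}\) and \(n_{f}\) are illustrative assumptions (a water-like embedding medium and a standard coverslip), not values prescribed by the main text.

```
import numpy as np

def green_tensor_bfp(u, phi, n_i=1.33, n_f=1.52):
    """Evaluate the 2x3 Green tensor g(u) of Eq. (S1) at pupil coordinates (u, phi)."""
    gamma_f = np.sqrt(1 - u**2)                       # Eq. (S4)
    gamma_i = np.sqrt(1 - (n_f * u / n_i)**2 + 0j)    # complex beyond the critical angle (SAF)
    t_s = 2 * n_i * gamma_i / (n_i * gamma_i + n_f * gamma_f)   # Eq. (S3)
    t_p = 2 * n_i * gamma_i / (n_f * gamma_i + n_i * gamma_f)
    Phi1 = t_p * n_f**2 * gamma_f / (n_i**2 * gamma_i)          # Eq. (S2)
    Phi2 = t_p * n_f / n_i
    Phi3 = t_s * n_f * gamma_f / (n_i * gamma_i)
    c, s = np.cos(phi), np.sin(phi)
    g = np.array([
        [c**2 * gamma_f * Phi2 + s**2 * Phi3, c * s * (gamma_f * Phi2 - Phi3), -u * c * Phi1],
        [c * s * (gamma_f * Phi2 - Phi3), s**2 * gamma_f * Phi2 + c**2 * Phi3, -u * s * Phi1],
    ])
    return g / np.sqrt(gamma_f)                       # overall 1/sqrt(gamma_f) prefactor of Eq. (S1)
```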
## 2 Choosing the best focus location
The index mismatch between the embedding medium and the coverslip, assumed to be the same as that of the immersion liquid of the objective, leads to a shift of the paraxial focus and higher-order circularly symmetric aberrations, such as spherical. The presence of these aberrations makes the choice of the location producing the best focus to some extent subjective. There are several criteria that could be used, each leading to a different value of the parameter \(\alpha\) as defined in Eq. (3) in the main text. These criteria are based on compensating the phase terms coming from the propagation of light from the source to the interface by a defocus term. The difference between them defines the wavefront error which can be written in terms of \(\delta\) and \(\alpha\) as,
\[\Delta W(u)=\lambda n_{f}\delta\left[\frac{1}{n_{r}}\sqrt{1-(n_{ r}u)^{2}}-\alpha\sqrt{1-u^{2}}\right],\] (S5)
where \(n_{r}=n_{f}/n_{i}\) is the relative index of refraction between the embedding medium with index of refraction \(n_{i}\) and the coverslip with index of refraction \(n_{f}\). To simplify the analysis, the radius \(u_{\text{SAF}}=1/n_{r}\) at which SAF radiation starts appearing will be taken as the semi-aperture. The
wavefront error is used to define various criteria for best focus, such as minimizing the wavefront root-mean square (RMS) error,
\[\Delta W_{\text{rms}}^{2}=\frac{1}{\pi}\int_{0}^{2\pi}\int_{0}^{1/n_{r}}\left[ \Delta W(u^{\prime})-\overline{\Delta W}\right]^{2}u^{\prime}\text{d}u^{\prime }\text{d}\phi,\quad\text{where}\quad\overline{\Delta W}=\frac{1}{\pi}\int_{0} ^{2\pi}\int_{0}^{1/n_{r}}\Delta W(u^{\prime})u^{\prime}\text{d}u^{\prime}\text{ d}\phi,\] (S6)
with \(u^{\prime}=n_{r}u\), or minimizing the RMS spot size
\[\epsilon_{\text{rms}}^{2}=\int_{0}^{1/n_{r}}\int_{0}^{2\pi}\left[\epsilon_{x}(u )-\overline{\epsilon_{x}}\right]^{2}u\text{d}u\text{d}\phi,\quad\text{where} \quad\epsilon_{x}=\partial_{u_{x}}\Delta W(u),\] (S7)
where only one direction is used due to the rotational symmetry of \(\Delta W\).
In the following, several common options for the location of the best focus in terms of the parameter \(\alpha\) are given; a few of them are evaluated numerically in the short sketch after this list:
1. The actual location of the source, which leads to \(\alpha=1\).
2. The paraxial focus, which minimizes the RMS wavefront error when it is approximated up to second order which gives \(\alpha=n_{f}/n_{i}\).
3. The minimization of the RMS wavefront error when it is approximated up to fourth order, which gives \[\alpha=\frac{n_{r}^{3}\left(75n_{r}^{2}+19\right)}{2\left(30n_{r}^{4}+15n_{r}^ {2}+2\right)}.\] (S8)
4. The minimization of the RMS wavefront error when it is approximated up to sixth order, which gives \[\alpha=\frac{n_{r}^{5}\left(36624n_{r}^{4}+9352n_{r}^{2}+4269\right)}{5\left( 5376n_{r}^{8}+2688n_{r}^{6}+1568n_{r}^{4}+336n_{r}^{2}+81\right)}.\] (S9)
5. The minimization of the RMS wavefront error, which gives \[\alpha=\frac{7n_{r}^{3}-16\sqrt{n_{r}^{2}-1}\,n_{r}^{2}+16\sqrt{n_{r}^{2}-1}+9\left(n_{r}^{2}-1\right)^{2}\coth^{-1}(n_{r})-9n_{r}}{32n_{r}^{6}-32\sqrt{n_{r}^{2}-1}\,n_{r}^{5}-48n_{r}^{4}+32\sqrt{n_{r}^{2}-1}\,n_{r}^{3}+12n_{r}^{2}+2}.\] (S10)
6. The minimization of the RMS spot size when the wavefront error is approximated up to fourth order, which gives \[\alpha=\frac{n_{r}^{3}\left(32n_{r}^{2}+11\right)}{24n_{r}^{4}+16n_{r}^{2}+3}.\] (S11)
7. The minimization of the RMS spot size when the wavefront error is approximated up to sixth order, which gives \[\alpha=\frac{n_{r}^{5}\left(1460n_{r}^{4}+512n_{r}^{2}+297\right)}{960n_{r}^{8}+64 0n_{r}^{6}+480n_{r}^{4}+144n_{r}^{2}+45}.\] (S12)
8. The circle of least confusion, where the marginal ray intersects the caustic, for which \(\alpha\) is given by the solution of the following equation: \[\left(\frac{x}{n_{r}}\right)^{2/3}=\left(\frac{1}{n_{r}}-\frac{x}{n_{r}\sqrt{n _{r}^{2}+1}}\right)^{2/3}+1.\] (S13)
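As a rough illustration of how these criteria compare, the Python sketch below evaluates a few of the closed-form options above; the indices \(n_{i}=1.33\) and \(n_{f}=1.52\) are assumed example values, not parameters taken from the experiments.

```
def alpha_best_focus(n_i=1.33, n_f=1.52, criterion="spot4"):
    """Best-focus parameter alpha for a few of the criteria listed above."""
    nr = n_f / n_i
    if criterion == "source":       # item 1: actual source location
        return 1.0
    if criterion == "paraxial":     # item 2: paraxial focus
        return nr
    if criterion == "wavefront4":   # item 3, Eq. (S8)
        return nr**3 * (75 * nr**2 + 19) / (2 * (30 * nr**4 + 15 * nr**2 + 2))
    if criterion == "spot4":        # item 6, Eq. (S11), the pyPSFstack default
        return nr**3 * (32 * nr**2 + 11) / (24 * nr**4 + 16 * nr**2 + 3)
    raise ValueError(criterion)

for c in ("source", "paraxial", "wavefront4", "spot4"):
    print(c, alpha_best_focus(criterion=c))
```

For these assumed indices the criteria give nearby but distinct values of \(\alpha\), which is precisely the spread that motivates the comparison of the resulting PSFs.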
As shown in Fig. S1, the location minimizing the RMS spot size computed with a fourth-order approximation of the wavefront error provides a good starting point and a simple formula. Therefore, this value is used as the default in pyPSFstack and in all computations performed in the main text.
## 3 Pixel-based model
For a pixel-based model of the birefringent pupil, it is still worthwhile to first decompose it as in Eq. (5) of the main text. Then, instead of expanding the components \(W\) and \(q_{j}\) into Zernike polynomials, these are discretized using the same sampling outlined for the numerical implementation, and each value, labelled by the lexicographic index \(\ell\), becomes an optimization parameter, with all those falling outside the aperture being set to zero. A technical point of this model is that, for the optimization, the defocus parameter \(\Delta\) should be removed since it will be automatically taken care of. From the retrieved pupil the defocus term can then be removed by fitting the appropriate function to it.
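A minimal PyTorch sketch of this per-pixel parameterization is given below; the class name, grid size and pupil sampling are illustrative assumptions and do not reproduce the actual pyPSFstack implementation.

```
import torch

class PixelBirefringentPupil(torch.nn.Module):
    """Sketch of a pixel-based pupil: one trainable value per pixel for W and q_1..q_3."""
    def __init__(self, n_pix=64, radius=1.0):
        super().__init__()
        x = torch.linspace(-1.5, 1.5, n_pix)   # assumed pupil-plane sampling
        xx, yy = torch.meshgrid(x, x, indexing="ij")
        # mask that forces values outside the aperture to zero, as described above
        self.register_buffer("aperture", (xx**2 + yy**2 <= radius**2).float())
        self.W = torch.nn.Parameter(torch.zeros(n_pix, n_pix))
        self.q = torch.nn.Parameter(torch.zeros(3, n_pix, n_pix))

    def forward(self):
        return self.W * self.aperture, self.q * self.aperture
```

In a full retrieval these per-pixel parameters would simply be updated by the same gradient-based loop used for the Zernike-based model.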
This discretized model is also implemented in PyPSFstack, and Fig. S2 shows the results obtained when it is used for the retrieval of the SEO and s-plate with the same parameters as those used to obtain the results presented in Fig. 3 in the main text. While the general shape of the true pupils can be identified from the retrieved ones, there is a distinctive granularity with
Figure S2: Pixel-based model retrieval. (first row) Elements of the Jones matrix for the ground truth and the retrieved pupils with a pixel-based model for (left) an SEO and (right) an s-plate. Also shown are the PSFs generated by point dipoles (second to fourth rows) oriented along each of the three Cartesian axes, and (last row) an unpolarized dipole, for each of the standard projectors used for each type of birefringent window. For the SEO the PSFs are projected onto left- and right-circular polarizations, while for the s-plate they are projected onto linear horizontal and vertical polarizations.
fast variation from one pixel to the next. This effect is a consequence of the increased number of parameters used to retrieve the pupil, which makes it prone to falling into local minima or to overfitting. Nonetheless, in both cases they produce the correct PSFs since the fast variations send light outside the regions used to compute the cost function, which are controlled by \(w\) in Eq. (15) of the main text. Therefore, these retrieved pupils effectively behave as low-pass filtered versions of themselves, which incidentally would be closer to the true ones.
## 4 Retrieval of a scalar phase mask
As mentioned in the main text, pyPSFstack can also handle the retrieval of scalar pupils using both Zernike- and pixel-based models, which are equivalent to those mentioned for birefringent pupils when \(q_{j}=0\) for \(j=1,2,3\). Nonetheless, they have been implemented as separate models. As an example, the retrieval of a tetrapod phase mask [2] is considered. Figure S3 shows the phase mask, which was designed to optimize the localization for defocuses ranging from \(-1.5\mu\)m to \(1.5\mu\)m, and the Z-stack used for the retrieval. Also shown are the results of the retrieval with both the Zernike- and pixel-based models, along with the Z-stacks modeled with the retrieved pupils. We see that both models provide accurate results, with the slight difference of obtaining a smoother pupil for the Zernike-based model. Additionally, it should be mentioned that when using the same sampling, the runtime for both models is quite similar since, as mentioned in the main manuscript, the main bottleneck is the number of FFTs required, which is the same for both models, and the gradients for all parameters are computed analytically.
2309.15073 | Focusing and Diffraction of Light by Periodic Si Micropyramidal Arrays | This research was devoted to modeling of the optical properties of Si micropyramids aimed at designing optimal structures for applications as light concentrators in mid-wave infrared (MWIR) focal plane arrays (FPAs). It is shown that completely different optical properties of such structures can be realized using two types of boundary conditions (BCs): i) periodical and ii) perfectly matched layer. The first type (periodical BC) allowed us to describe the Talbot effect under plane wave coherent illumination conditions. This effect was experimentally demonstrated in the proposed structures. The second type (perfectly matched layer BC) allows describing the optical properties of individual micropyramids concentrating or focusing light on the photodetector. The optimal geometries of micropyramids required for maximizing the intensity of photonic nanojets emerging from their truncated tips are determined. | Grant W. Bidney, Amstrong R. Jean, Joshua M. Duran, Gamini Ariyawansa, Igor Anisimov, Kenneth W. Allen, Vasily N. Astratov | 2023-09-26T17:09:17 | http://arxiv.org/abs/2309.15073v1

# Focusing and Diffraction of Light by Periodic Si Micropyramidal Arrays
###### Abstract
This research was devoted to modeling of the optical properties of Si micropyramids aimed at designing optimal structures for applications as light concentrators in mid-wave infrared (MWIR) focal plane arrays (FPAs). It is shown that completely different optical properties of such structures can be realized using two types of boundary conditions (BCs): i) periodical and ii) perfectly matched layer. The first type (periodical BC) allowed us to describe the Talbot effect under plane wave coherent illumination conditions. This effect was experimentally demonstrated in the proposed structures. The second type (perfectly matched layer BC) allows describing the optical properties of individual micropyramids concentrating or "focusing" light on the photodetector. The optimal geometries of micropyramids required for maximizing the intensity of "photonic nanojets" emerging from their truncated tips are determined.
_Keywords-- infrared photodetectors, light concentrators, dielectric resonance_
## I Introduction
In the recent few years, we proposed that anisotropic wet etching of Si can be used as a novel way to fabricate light concentrators for mid-wave infrared (MWIR) focal plane arrays (FPAs) [1-3]. Previously, this technology was used by the microelectromechanical (MEMS) community and its optical applications were rather limited. This method enables fast and parallel fabrication of large-scale micropyramidal arrays with smooth sidewall surfaces, which is attractive for optical applications. In our previous work, however, the analysis of the optical properties was limited to a fixed geometry of microcones with a 14 \(\upmu\)m larger base and the smaller base varying around 4 \(\upmu\)m [3, 4]. This raised the question of what role the micropyramids' geometrical parameters play in their optical properties. Since micropyramidal arrays diffract light beams, it also raises the broader question of how one can model these "grating" properties. On the other hand, in a final application, each micropyramid focuses light onto its own photodetector and the intensity enhancement factors (IEFs) on the detectors need to be estimated. In this final application, the incident light is typically incoherent, and the role of diffraction effects is reduced - each micropyramid concentrates light onto its own photodetector, however some crosstalk cannot be completely excluded.
In this work, the answers to these questions were obtained by carefully selecting the boundary conditions (BCs) for the problem. This approach allowed the modeling to predict the Talbot effect in the fabricated structures and stimulated its experimental observation. It also allowed optimization of the micropyramid geometry required for achieving maximal IEFs of the "photonic nanojets" produced near the tips of the truncated micropyramids.
## II Simulation and Experimental Results
### _Role of the Boundary Conditions (BCs)_
Finite-difference time-domain (FDTD) software by Lumerical was used to run the computer simulations. The modeled object was a truncated Si micropyramid with the refractive index \(n\)=3.5. Since the slope of the sidewall surface was fixed at 54.7\({}^{\circ}\) by etching, the variable parameters were the sizes of the micropyramid bases in Fig. 1. The source of plane waves was embedded in a Si wafer.
Fig. 1: EM field distributions calculated at normal incidence for truncated Si micropyramids (\(n\)=3.5) with 15 \(\upmu\)m large base and 6.5 \(\upmu\)m small base. In the case of periodic BC there are multiple EM peaks due to the Talbot effect. In the case of perfectly matched layer BC, there is a single “photonic nanojet” which appears similar to the focusing of light by a lens.
It was found that the BCs play a significant role in the results of such electromagnetic (EM) modeling. Selection of the periodic BCs means that the calculations effectively represent the collective properties of an infinite periodic array, as illustrated in the left side of Fig. 1. It is seen that the EM field distribution exhibits periodic peaks due to diffraction and interference effects introduced by the micropyramidal array, similar to the case of a diffraction grating. This phenomenon is called the Talbot effect and it is generally known for periodic structures [5].
On the other hand, selection of the perfectly matched layer BCs means that the EM waves which reach the boundary of the computational area are allowed to escape the area. Thus, this type of BC describes the behavior of individual micropyramids without considering contributions from neighboring structures due to diffraction and interference effects. As a result, the calculations show a single EM peak, as illustrated in the right part of Fig. 1. By analogy with the case of dielectric microspheres, such EM peaks can be termed "photonic nanojets" [6], and this terminology has become widely accepted for microscale structures with different shapes. It is seen that the position of this peak depends on the wavelength (\(\lambda\)); see the results calculated for \(\lambda\)=3, 4, and 5 \(\upmu\)m in the right part of Fig. 1. This case is closer to the practical operation of micropyramids integrated with photodetectors.
### _Talbot Effect Modeling: Periodic BC_
To prove that EM peaks observed using periodic BC are due to the Talbot effect, we studied the dependence of the calculated EM maps on the period of the array (\(A\)), which is equal to the size of the micropyramid's large base (the neighboring micropyramids are touching). This dependence is illustrated in Fig. 2 for \(A=5\), 8, and 11 \(\upmu\)m. According to the theory of the Talbot effect, the distance between the neighboring EM maxima should be equal to the Talbot length (\(x_{\mathrm{T}}\)) [5]:
\[x_{\mathrm{T}}=\frac{\lambda}{1-(1-\lambda^{2}/A^{2})^{1/2}}. \tag{1}\]
It was found that the distance between the neighboring EM peaks in Fig. 2 follows the Talbot length in good agreement with Eq. (1), thus confirming that the peaks are due to the Talbot effect. The fact that, in some cases, the EM field peaks appear with multiple maxima is likely due to the complicated shape of the truncated micropyramids compared to the simplest model consisting of a single-period sinusoidal diffraction grating.
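For a quick numerical illustration, a few lines of Python suffice to evaluate Eq. (1) for the array periods of Fig. 2 (assuming the \(\lambda=3\) \(\upmu\)m illumination used elsewhere in the modeling):

```
import numpy as np

lam = 3.0  # assumed wavelength in micrometers
for A in (5.0, 8.0, 11.0):
    xT = lam / (1 - np.sqrt(1 - lam**2 / A**2))   # Eq. (1)
    print(f"A = {A:4.1f} um  ->  x_T = {xT:5.1f} um")
```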
### _Experimental Observation of the Talbot Effect_
Experimentally, the Talbot effect was studied using a setup illustrated in Fig. 3(a). Illumination was provided by an Er:YAG laser at \(\lambda\)=2.96 \(\upmu\)m, slightly focused to a \(\sim\)0.5 mm spot size on the micropyramidal array to increase the intensity. It can be viewed as a quasi-plane wave illumination similar to our theoretical model. The transversal intensity distributions at different imaging planes were projected on the MWIR Spiricon beam profiler using a Ge or CaF\({}_{2}\) lens transparent in the MWIR range. Scanning of the 3-D intensity distribution was performed by translating the lens along the optical axis of the system with micrometer precision.
Scanning the imaging plane along the optical axis (\(x\)) revealed multiple positions where the sharply focused peaks were observable, as illustrated in Fig. 3(b). These results were obtained using a micropyramidal array with a 30 \(\upmu\)m pitch and a 16 \(\upmu\)m smaller base. The brightest image takes place at the focusing plane located close to the tips of the micropyramids - termed the zero position in Fig. 3(b). It is illustrated by the image in a grey frame. It is repeated with the Talbot period with progressively smaller intensities, as illustrated by the grey bars in Fig. 3(b). Another subset of images, shifted by half a Talbot period, is illustrated by the red bars in Fig. 3(b), with the representative image shown in a red frame. The peak positions are \(\pi\)-shifted in this subset, as expected for the Talbot effect. Generally, these results show that diffraction and interference grating effects are experimentally observable with Si micropyramidal arrays when illuminated with a coherent source.
Fig. 3: (a) Experimental setup including Er:YAG laser source, Si micropyramidal array, Ge or CaF\({}_{2}\) lens, and Spiricon MWIR beam profilometer. (b) Positions of the focusing planes and relative intensities of the peaks obtained from the array with the 30 \(\upmu\)m pitch and 16 \(\upmu\)m smaller micropyramidal base are indicated by the vertical bars. There are two subsets of images where the peak positions are shifted effectively by \(\pi\) as shown by the grey and red bars. For each subset the separation between the neighboring focusing planes is equal to \(x_{\mathrm{T}}\), whereas the shift between positions of two subsets is equal to \((1/2)x_{\mathrm{T}}\).
Fig. 2: EM field distributions calculated for truncated micropyramids with the size of the small base equal to 4 μm and the size of the large base equal to \(A=5\), 8, and 11 μm, respectively. The period along \(x\)-axis is approximately equal to the Talbot length represented by Eq. (1).
### _Talbot Effect: Experiment and Modeling Comparison_
In order to further demonstrate agreement between the theoretical and experimental observations of the Talbot effect, the modeled and experimental intensity distributions along the first Talbot image were characterized. The first Talbot image was calculated via modeling and imaged experimentally for micropyramids with 11.0 and 16.0 \(\upmu\)m top sizes, both with a 30.0 \(\upmu\)m pitch, as illustrated in Fig. 4(b). Periodic BCs were used in the modeling. The experimental imaging setup was the same as in Fig. 3(a). A Gaussian distribution was fitted to these intensity profiles to determine their full width at half maximum (FWHM). It should be noted that the resolution limit of the experimental setup was \(\sim\lambda/(2\text{NA})=2.1\)\(\upmu\)m, where the numerical aperture of the lens is \(\text{NA}=1/\sqrt{2}\).
To compare the intensity distributions, the intensity enhancement factors (IEFs) were determined; they can be defined as IEF = \(I_{\text{pyramid}}/I_{\text{ref}}\), where \(I_{\text{pyramid}}\) is the peak intensity produced by a micropyramid, and \(I_{\text{ref}}\) is the uniform intensity measured without the micropyramid, as shown in Fig. 4(b). The experimental IEFs were calculated based on the experimental image obtained by the MWIR Spiricon beam profiler. The micropyramid with the 11.0 \(\upmu\)m top has a peak experimental IEF = 1.2 with FWHM = 6.5 \(\upmu\)m, while the micropyramid with the 16.0 \(\upmu\)m top has a peak experimental IEF = 2.5 with FWHM = 7.9 \(\upmu\)m.
Fig. 4(a) displays the results from the electromagnetic field monitor of the 11.0 \(\upmu\)m top with 30.0 \(\upmu\)m pitch micropyramid at \(\lambda=3.0\)\(\upmu\)m, where the labeled power monitor spans the full 30.0 \(\upmu\)m pitch. The power monitor's adjustable position is placed at the point of highest intensity outside the micropyramid at the first Talbot image. The modeling results show the 11.0 \(\upmu\)m top micropyramid has a Gaussian FWHM = 4.9 \(\upmu\)m with an IEF maximum of 6.6, while the 16.0 \(\upmu\)m top micropyramid has a Gaussian FWHM = 6.9 \(\upmu\)m with an IEF maximum of 6.2. The modeled IEFs of the 11.0 and 16.0 \(\upmu\)m top micropyramids become increasingly large while simultaneously exhibiting smaller FWHMs as the pyramid top shrinks in size, consistent with the experimentally observed values shown in rows 3 and 4 of Fig. 4(b).
Therefore, the intensity distributions of the first Talbot image are found to be in reasonable agreement regarding both their FWHMs and their IEFs. It is worth noting that the experimental values display larger FWHMs and lower IEFs compared to the modeling results, but this discrepancy can be attributed to limitations in the imaging system as well as the finite number of pixels in the MWIR Spiricon beam profiler.
### _Light Concentrator Modeling: Perfectly Matched Layer BC_
As discussed above, the perfectly matched layer BCs remove the grating properties and instead allow us to study the light focusing properties of individual micropyramids. Most of the incident power can be delivered to the smaller base, which defines the power enhancement factors (PEFs) of microcones [3, 4].
In each case, the field monitor was placed at the position \(x\) corresponding to the maximal intensity of the photonic nanojet easily identifiable in the calculated images on the left side of Fig. 5 due to the color bar. The dependences of the IEF and the FWHM of the photonic nanojets on the size of the smaller base, as seen in Fig. 5, show that the positions of the IEF maxima correlate with the FWHM minima. This fact is not surprising since the total photon flux proportional to (Peak IEF) \(\times\) (FWHM)\({}^{2}\) is preserved along \(x\). The maximal IEF\(\sim\)7 values can be achieved with a fairly large size of the smaller base equal to 3.7\(\lambda=11.1\)\(\upmu\)m. The photonic nanojet has FWHM\(\sim\)\(\lambda=3\)\(\upmu\)m under these conditions. This is a useful result because such micropyramids are easy to fabricate in practice and, in principle, they can be integrated with various front-illuminated photodetectors.
Fig. 4: (a) Electromagnetic field map for a 30.0 \(\upmu\)m pitch with 11.0 \(\upmu\)m top Si micropyramid when concentrating light into air. The wavelength is 3.0 \(\upmu\)m, and the monitor position changes depending upon the micropyramid’s geometry. (b) Table containing four rows consisting of experimental images, modeling images, experimental intensity profile, and modeling intensity profile for micropyramids with 11.0 \(\upmu\)m top and 16.0 \(\upmu\)m top micropyramids with 30.0 \(\upmu\)m pitch. The intensity enhancement factor (IEF) is defined as the intensity in the first maximum (closest to micropyramids) of the Talbot series in interference maxima divided by the uniform reference intensity without micropyramids present. The experimental images were obtained with a Spiricon camera where the micropyramids were illuminated from the backside with a 2.96 \(\upmu\)m wavelength Sheuemann Er:YAG laser.
This optimization, however, is not complete since we did not vary the pitch of the array (the size of the larger base). We plan to complete this optimization analysis in our future work. In addition, such optimization can be performed with the tips of micropyramids directly touching a high-index slab, which would mimic the performance of the practical photodetector FPA when, for example, the photodetector material such as PbSe is deposited directly on top of the small base of micropyramids. It would be a similar situation to that considered in our recent publications [7, 8], but with the perfectly matched layer BC more adequately describing the focusing performance of such devices.
## III Conclusion
The results of this research were three-fold:
(i) Theoretical description of the Talbot effect in micropyramidal arrays by numerical modeling with periodic BC, (ii) experimental observation of the Talbot effect in micropyramidal arrays, and (iii) estimation of the IEFs provided by micropyramids using perfectly matched layer BCs. The results demonstrate good agreement of the experimentally observed Talbot images with the theory. It is found that the photonic jets produced by the individual pyramids have typical wavelength-scale dimensions.
## Acknowledgment
This work was supported by Center for Metamaterials, an NSF I/U CRC, award number 1068050. G.W.B. and V.N.A. received support from the AFRL Summer Faculty Fellowship Program.
2301.01624 | Pattern Recognition Experiments on Mathematical Expressions | We provide the results of pattern recognition experiments on mathematical expressions. We give a few examples of conjectured results, none of which was thoroughly checked for novelty. We did not attempt to prove all the relations found and focused on their generation. | David Naccache, Ofer Yifrach-Stav | 2022-12-21T10:53:32 | http://arxiv.org/abs/2301.01624v1

# Pattern Recognition Experiments on Mathematical Expressions
###### Abstract
We provide the results of pattern recognition experiments on mathematical expressions.
We give a few examples of conjectured results, none of which was thoroughly checked for novelty. We did not attempt to prove all the relations found and focused on their generation.
## 1 Introduction
Pattern recognition is a process that involves identifying rules in data and matching them with particular case information. Pattern recognition can be seen as a type of machine learning, as it uses machine learning algorithms to recognize patterns in data. This process is characterized by the ability to learn from data, recognize familiar patterns, and recognize patterns even if they are partially visible.
Very schematically, there are three main types of pattern recognition heuristics: statistical pattern recognition, syntactic pattern recognition, and neural pattern recognition.
* Statistical pattern recognition involves using particular case data to learn from examples and generalize rules to new observations.
* Syntactic pattern recognition (a.k.a structural pattern recognition), involves identifying patterns based on simpler sub-patterns called primitives. For example, opcodes can be seen as primitives that connect to form programs.
* Neural pattern recognition relies on artificial neural networks, which are made up of many simple processors and their connections. These networks can learn complex nonlinear input-output relationships and adapt to data through sequential training procedures.

Most pattern recognition heuristics proceed by two steps:
* An Explorative Stage that seeks to identify patterns
* A Descriptive Stage that categorizes patterns found during exploration
In this work we provide the results of the explorative stage of syntactic pattern recognition on mathematical expressions. Given the nature of the objects we work on (conjectures) the descriptive stage is left to a human.
We give a few examples of conjectured results. None of which was thoroughly checked for novelty. We did not attempt to prove all the relations found and focused on their generation.
## 2 The Pattern Recognition Algorithm
The pattern recognition algorithm has two components, called the generalizer and the identifier.
The generalizer departs from a known continued fraction or a mathematical expression (a particular case) and automatically parameterizes parts of it. The parameterized parts are target ingredients tagged by the user. For each set of particular parameter values (taken over a search space), approximated values of the formula are collected for later analysis.
Target ingredients are replaced by progressions, denoted by \(\mu_{\mathbf{u}}(i)\), which can be constant, (alternating) arithmetic, geometric, harmonic or exponential depending on the parameter choices. Those are captured by the general formula:
\[\mu_{\mathbf{u}}(i)=u_{4}i^{u_{5}}+(u_{0}+iu_{1})^{u_{3}}u_{2}^{i}\]
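For concreteness, here is a minimal Python sketch of this progression; the sample parameter choices in the usage lines are illustrative only:

```
def mu(i, u0, u1, u2, u3, u4, u5):
    """The progression mu_u(i) = u4*i^u5 + (u0 + i*u1)^u3 * u2^i."""
    return u4 * i**u5 + (u0 + i * u1)**u3 * u2**i

# constant, arithmetic and geometric progressions as special cases
print([mu(i, 2, 0, 1, 1, 0, 0) for i in range(5)])   # constant 2
print([mu(i, 1, 3, 1, 1, 0, 0) for i in range(5)])   # arithmetic 1 + 3i
print([mu(i, 1, 0, 2, 1, 0, 0) for i in range(5)])   # geometric 2^i
```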
For instance, the Ramanujan Machine Project [2, 4, 5] re-discovered an already known relation involving \(e^{\pi}\). Namely, that the continued fraction defined by \(b_{n}=n^{2}+4\) and \(a_{n}=2n+1\) converges to:
\[\frac{2\left(e^{\pi}+1\right)}{e^{\pi}-1}=1+\frac{1^{2}+4}{3+\frac{2^{2}+4}{5 +\frac{3^{2}+4}{7+\frac{4^{2}+4}{9+\ddots}}}}\]
A natural tagging query of this identity for search by the user might hence be:
\[Q(\mathbf{u})=\mu_{\mathbf{u}}(0)+\frac{\mu_{\mathbf{v}}(0)}{\mu_{ \mathbf{u}}(1)+\frac{\mu_{\mathbf{v}}(1)}{\mu_{\mathbf{u}}(2)+\frac{\mu_{ \mathbf{v}}(2)}{\mu_{\mathbf{u}}(3)+\frac{\mu_{\mathbf{v}}(3)}{\mu_{\mathbf{u}}( 4)+\ddots}}}}\]
With
\[\mathbf{u}=\{\mathbb{Q},\mathbb{Q},1,1,0,0\}\ \ \text{and}\ \ \mathbf{v}=\{ \mathbb{Z},0,1,1,\mathbb{Q},\mathbb{N}\}\]
That is:
\[\mu_{\mathbf{u}}(i)=(\mathbb{Q}+i\mathbb{Q})\ \ \text{and}\ \ \mu_{\mathbf{v}}(i)=\mathbb{Q}i^{\mathbb{N}}+\mathbb{Z}\]
When this is done, the program varies the progressions' parameters over the chosen search spaces and collects sequences of resulting values. The tests that we list here are of course non limitative and many other variants can be added to the proposed heuristic.
Remark 1: Obviously, we are quickly limited by the increasing complexity due to nested loops running over the parameters of the expressions (i.e. the \(u_{i}\)s).
Remark 2: At the risk of overlooking some gold nuggets, when we explore \(\mathbb{Q}\) we start by exploring \(\mathbb{N}\) and, if the search is conclusive, we refine it by increments of \(1/6\), which have the advantage of exploring units, halves and thirds at the cost of a small multiplicative factor of \(6\). If interesting results are found with increments of \(1/6\), the step is refined to \(1/30\) and to Farey sequences.
The sequences obtained by varying those parameters are fed into the identifier for possible recognition. To detect conjectures the identifier performs a number of tests on the obtained sequences. Tests belong to two categories: morphological tests and serial tests. Morphological tests are applied to very few individual results and try to spot their characteristics. Serial tests are applied to more results and seek to discover relationships between them.
**Algebraic number identification (ANI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\) and, using LLL [3], check if any of those \(Q_{i}\)s is the root of a small degree (\(\leq 10\)) polynomial. If so, check that RNI failed before returning true to avoid multiple alerts as rationals are also algebraic. This
is a morphological test. The degree 10 was chosen arbitrarily and can be changed at wish (provided that the precision is matched to the degree).
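A minimal Python sketch of the ANI test can be written with the integer relation tools of mpmath; here findpoly plays the role of the LLL step, and the sample limit is illustrative:

```
from mpmath import mp, findpoly, sqrt

mp.dps = 50   # the working precision must be matched to the degree searched

# a convergence limit suspected to be algebraic, e.g. (sqrt(5) - 1)/2
Q = (sqrt(5) - 1) / 2
# search for an integer polynomial of small degree annihilating Q;
# this recovers the quadratic Q^2 + Q - 1 = 0 (up to sign)
print(findpoly(Q, 2))
```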
**Rational number identification (RNI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\) and, using LLL, check if any of those \(Q_{i}\)s is a good approximation of a rational number having a (abnormally) small numerator and a small denominator. This is a morphological test.
**Constant presence identification (CPI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Consider the 45 pairs \(P_{1},P_{2}\) formed from those \(Q_{i}\)s. Using LLL, check the assumption that there is at least one pair of the form:
\[P_{1}=\frac{a_{1}+b_{1}U}{c_{1}+d_{1}U}\ \ \mbox{and}\ \ P_{2}=\frac{a_{2}+b_{2}U}{c_{2}+d _{2}U}\]
Where \(U\not\in\mathbb{Q}\) and \(a_{1},b_{1},c_{1},d_{1},a_{2},b_{2},c_{2},d_{2}\in\mathbb{Z}\).
Solving for \(U\) and equating we get:
\[a_{2}b_{1}-a_{1}b_{2}+(b_{2}c_{1}-a_{2}d_{1})P_{1}+(a_{1}d_{2}-b_{1}c_{2})P_{2 }+(c_{2}d_{1}-c_{1}d_{2})P_{1}P_{2}=0\]
Hence, when called on input \(1,P_{1},P_{2},P_{1}P_{2}\), LLL will return an abnormally short vector if the coefficients are small (as is usually the case in remarkable identities). This is a morphological test.
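The following mpmath sketch illustrates this relation hunt with PSLQ in place of LLL; \(P_{2}\) is an invented second limit, chosen here only so that both limits share the hidden constant \(U=e^{\pi}\):

```
from mpmath import mp, pslq, exp, pi

mp.dps = 40

U  = exp(pi)                       # the shared hidden constant
P1 = 2 * (U + 1) / (U - 1)         # the e^pi limit of Section 3
P2 = (1 + 2 * U) / (3 + U)         # an assumed second limit, for the sketch
# an integer relation among {1, P1, P2, P1*P2} signals the shared constant;
# here pslq recovers 2 + 3*P1 + 4*P2 - 4*P1*P2 = 0 (up to sign)
print(pslq([1, P1, P2, P1 * P2]))
```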
**Constant to exponent identification (CEI)**: Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Consider the 7 quadruples \(P_{1},P_{2},P_{3},P_{4}\) formed by successive \(Q_{i}\)s.
Footnote 1: namely: \(\{0,1,2,3\}\),\(\{1,2,3,4\}\),\(\{2,3,4,5\}\),\(\{3,4,5,6\}\),\(\{4,5,6,7\}\),\(\{5,6,7,8\}\),\(\{6,7,8,9\}\)
Here we assume that at successive ranks the limits are of the form:
\[P_{k}=\frac{a_{k}+b_{k}U^{k}}{c_{k}+d_{k}U^{k}}\]
Which implies that:
\[U^{k}=\frac{a_{k}-c_{k}P_{k}}{d_{k}P_{k}-b_{k}}\]
It follows that:
\[U=\frac{(a_{k+1}-c_{k+1}P_{k+1})(d_{k}P_{k}-b_{k})}{(d_{k+1}P_{k+1}-b_{k+1})(a_{k}-c_{k}P_{k})}\]
Equating the expressions of \(U\) obtained at ranks \(k\) and \(k+2\) gives:
\[\frac{(a_{k+3}-c_{k+3}P_{k+3})(d_{k+2}P_{k+2}-b_{k+2})}{(d_{k+3}P_{k+3}-b_{k+3 })(a_{k+2}-c_{k+2}P_{k+2})}=\frac{(a_{k+1}-c_{k+1}P_{k+1})(d_{k}P_{k}-b_{k})}{ (d_{k+1}P_{k+1}-b_{k+1})(a_{k}-c_{k}P_{k})}\]
Let:
\[S_{1}=\{P_{k},P_{k+1},P_{k+2},P_{k+3}\}\]
\[S_{2}=\{P_{k}P_{k+1},P_{k}P_{k+2},P_{k+1}P_{k+2},P_{k}P_{k+3},P_{k+1}P_{k+3},P_{k +2}P_{k+3}\}\]
\[S_{3}=\{P_{k}P_{k+1}P_{k+2},P_{k}P_{k+1}P_{k+3},P_{k}P_{k+2}P_{k+3},P_{k+1}P_{k+2 }P_{k+3}\}\]
\[S=S_{1}\cup S_{2}\cup S_{3}\cup\{1,P_{k}P_{k+1}P_{k+2}P_{k+3}\}\]
When called on input \(S\), LLL will return an abnormally short vector (as is usually the case in remarkable identities). This is a morphological test.
Remark 3: Both CPI and CEI can be generalized to detect the presence of multiple unknown constants in an expression (i.e. \(U_{1},U_{2},\ldots\)) or even the presence of common constants in different continued fractions. We did not implement this generalization. Following those tests we can compute a numerical approximation of \(U\) and attempt to look it up.
**Known constant identification (KCI)**: Let \(L\) be the following set of usual constants:
\[L=\{1,\sqrt{\pi},\pi,\pi^{2},\pi^{3},\zeta(3),\zeta(5),\zeta(7),\sqrt{e},e,e^ {2},e^{3},\phi^{2},\gamma,G,\ln 2,\ln 3,\ln 5\}\]
Collect 10 convergence limits \(Q_{0},Q_{1},\ldots Q_{9}\). Check using LLL if any of the \(Q_{i}\) is a number of the form:
\[Q_{i}\sum_{j}a_{j}L_{j}=\sum_{j}b_{j}L_{j}\ \ \mathrm{for}\ \ a_{1},a_{2},\ldots,b_{1},b_{2}\ldots\in\mathbb{Z}\]
If the solution only involves 1, a false is returned. Note that as \(L\) increases the required precision must also be increased to prevent spotting artefacts. In practice we (manually) select only a subset of \(L\) before running the KCI test according to the nature of the constants appearing the in the particular case. Note that KCI and CPI can have overlapping responses.
**Rational fraction progression (RFP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a ratio of two polynomials in \(\bar{u}\) with integer coefficients. This is done by a non linear model fit. The fit residuals serve as a measure of the verdict's likelihood. This is a serial test.
**Exponential function progression (EFP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(ba^{\bar{u}}\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Inverse exponential progression (IEP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(ba^{1/\bar{u}}\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Power plus constant progression (PCP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(b\bar{u}^{a}+c\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b,c\). The fit residuals serve as a measure of the verdict's likelihood. If \(b=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
**Root plus constant progression (RCP)**: In this test we seek to see if when all \(u_{i}\) except one (say \(\bar{u}\)) are kept constant, the continued fraction's limit \(Q(\bar{u})\) is a function of the form \(b\sqrt[a]{\bar{u}}+c\) with rational coefficients. This is done by a non linear model fit and rationality detection on \(a,b,c\). The fit residuals serve as a measure of the verdict's likelihood. If \(ab=0\) return false to avoid reporting the same result as the RFP. This is a serial test.
## 3 Continued Fractions Converging to \(2u(e^{u\pi}+1)/(e^{u\pi}-1)\)
It appears that the relation:
\[\frac{2\left(e^{\pi}+1\right)}{e^{\pi}-1}=1+\frac{1^{2}+4}{3+\frac{2^{2}+4}{5+ \frac{3^{2}+4}{7+\frac{4^{2}+4}{9+\ddots}}}}\]
is the first in an infinite family:

\[\frac{2u\left(e^{u\pi}+1\right)}{e^{u\pi}-1}=1+\frac{1^{2}+4u^{2}}{3+\frac{2^{2}+4u^{2}}{5+\frac{3^{2}+4u^{2}}{7+\frac{4^{2}+4u^{2}}{9+\ddots}}}}\]
Indeed (RCP), linear variations of the tagged additive constant \(4u^{2}\) in the numerators cause identifiable square-root variations in the limit. This is because very quickly:
\[\lim_{u\rightarrow\infty}\frac{e^{u\pi}+1}{e^{u\pi}-1}=1\]
This has the somewhat adverse effect of making the RNI positive very quickly as well.
The final form is detected thanks to the CEI test.
By-product: Because this holds for \(u\in\mathbb{C}^{*}\), we get a few seemingly "mysterious" corollary identities such as:
\[\frac{2\left(e+1\right)}{\pi(e-1)}=1+\frac{1^{2}+4/\pi^{2}}{3+\frac{2^{2}+4/ \pi^{2}}{5+\frac{3^{2}+4/\pi^{2}}{7+\frac{4^{2}+4/\pi^{2}}{9+\ddots}}}}\]
\[\frac{6\ln 2}{\pi}=1+\frac{1^{2}+4\ln^{2}2/\pi^{2}}{3+\frac{2^{2}+4\ln^{2}2/ \pi^{2}}{5+\frac{3^{2}+4\ln^{2}2/\pi^{2}}{7+\frac{4^{2}+4\ln^{2}2/ \pi^{2}}{9+\ddots}}}}\]
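As a sanity check, the second corollary can be verified numerically with a few lines of Python (a rough sketch; the truncation depth mirrors the Mathematica implementation below):

```
from math import log, pi

def cf(c, depth=20000):
    """Evaluate 1 + K_{n>=1} (n^2 + c)/(2n + 1) by backward recurrence."""
    x = 0.0
    for n in range(depth, 0, -1):
        x = (n * n + c) / (2 * n + 1 + x)
    return 1 + x

u = log(2) / pi            # this choice gives e^(u*pi) = 2
print(cf(4 * u * u))       # continued fraction value
print(6 * log(2) / pi)     # target 6 ln(2)/pi; the two agree to several digits
```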
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Test Results
## Implementation
```
f[x_, {m_, d_}] := m/(d + x);
(* t = 0 makes the closed form 0/0 (its limit is the classical 4/Pi fraction), so start at t = 1/2 *)
For[t = 1/2, t <= 5,
 den = Table[2 n + 1, {n, 1, 20000}];
 num = Table[n^2 + (2 t)^2, {n, 1, 20000}];
 r = 1 + (Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]);
 e = 2 t (1 + (E^Pi)^t)/((E^Pi)^t - 1);
 Print[{e, 2 n + 1, n^2 + (2 t)^2, N[{r, e}, 20]}];
 t += 1/2];
```
## 4 Continued Fractions Converging to Polynomial Roots
It is very well known that:
\[\frac{\sqrt{5}-1}{2}=\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{1+}\cdots\]
We tag:
Footnote 3: We add a leading 1+ for convenience; it does not change anything about the convergence.
\[Q(\mathbf{u})=1+\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)+}\;\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)+}\;\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)+}\;\frac{\mu_{\mathbf{u}}(0)}{\mu_{\mathbf{u}}(0)+}\cdots\]
With:
\[\mathbf{u}=\{\mathbb{Q},0,1,1,0,0\}\Rightarrow\mu_{\mathbf{u}}(i)=\mathbb{Q}\]
It appears that for \(u\in\mathbb{Q}\setminus[-4,0]\) LLL identifies that the limit is a root of a second degree polynomial, namely:
\[Q(u)=1+\frac{u}{u+}\;\frac{u}{u+}\;\frac{u}{u+}\;\frac{u}{u+}\cdots\]
where the fractional part \(x=Q(u)-1\) satisfies
\[x^{2}+u(x-1)=0,\]
which is trivial to prove by pushing the \(u\) into the continued fraction.
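A five-line numerical check of this relation (a sketch; any positive rational \(u\) can be substituted):

```
def cf_tail(u, depth=200):
    """Limit of the tail u/(u + u/(u + ...)), i.e. x = Q(u) - 1."""
    x = 0.0
    for _ in range(depth):
        x = u / (u + x)
    return x

for u in (1, 2, 5):
    x = cf_tail(u)
    print(u, x, x * x + u * (x - 1))   # the residual is ~ 0
```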
The CPI is positive because for \(u=1\) and \(u=5\) the respective values of \(Q(u)\) both involve the common constant \(\sqrt{5}\).
## 5 Continued Fractions Converging to \(e^{2/\kappa}\)
The following relations are well-known:
Footnote 4: [https://link.springer.com/content/pdf/bbm:978-94-91216-37-4/1.pdf](https://link.springer.com/content/pdf/bbm:978-94-91216-37-4/1.pdf)
\[e=2+\frac{1}{1+}\;\frac{1}{2+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{4+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{6+}\cdots\]
\[\sqrt{e}=1+\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{5+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{9+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{13+}\cdots\]
\[\sqrt[3]{e}=1+\frac{1}{2+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{8+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{14+}\;\frac{1}{1+}\;\frac{1}{1+}\;\frac{1}{20+}\cdots\]
We hence tag the 1s as constants and the remaining progression as arithmetic, and let the algorithm monitor the evolution of the limits.
Let \(b_{n}=1\). Define \(\mu(u)=\kappa(u+1/2)-1\) for \(\kappa\in\mathbb{R}\) and:
\[a_{n}=\begin{cases}\mu(n/3)=\frac{\kappa(2n+3)}{6}-1&\text{ if }n\bmod 3\equiv 0 \\ 1&\text{ otherwise.}\end{cases}\]
In other words, \(a_{n}\) is the sequence:
\[a_{n}=\{\mu(0),1,1,\mu(1),1,1,\mu(2),1,1,\mu(3),1,1,\mu(4),1,1,\cdots\}\]
Then we detect that the continued fraction generated by \(a_{n},b_{n}\) converges to \(e^{2/\kappa}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✓ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 2: Test Results
The CEI is positive because, for instance, \((e^{2/\kappa})^{2}=e^{2/\kappa^{\prime}}\) implies that \(\kappa^{\prime}=\kappa/2\), which is satisfied for several pairs of values in the search space.
**Implementation**
```
1f[x_,{m_,d_}]:=m/(d+x);
2For[k=-10,k<=10,
3phi=Table[kn+k/2-1,{n,0,2000-1}];
4num=Table[1,{n,1,2000}];
5den=Take[
6Flatten[Table[{phi[[i]],{1,1}},{i,1,Floor[2000/3]+1}]],{1,
72000}];
8r=1+(Fold[f,Last@num/Last@den,Reverse@Most@Transpose@{num,den}]);
9v=E^(2/k);
10Print[{k,v,N[{r,v},20]}];
11k+=1/2];
```
## 6 Continued Fractions Involving Catalan's Constant
It is well known that:
\[2G=2-\frac{1^{2}}{3+}\;\frac{2^{2}}{1+}\;\frac{2^{2}}{3+}\;\frac{4^{2}}{1+}\;\frac{4^{2}}{3+}\;\frac{6^{2}}{1+}\;\frac{6^{2}}{3+}\;\frac{8^{2}}{1+}\;\frac{8^{2}}{3+}\cdots\]
We define:
\[\Delta(u,v)=\frac{1}{2v}\times\left(\frac{1^{2}}{u+}\;\frac{2^{2}}{v+}\;\frac{2^{2}}{u+}\;\frac{4^{2}}{v+}\;\frac{4^{2}}{u+}\;\frac{6^{2}}{v+}\;\frac{6^{2}}{u+}\;\frac{8^{2}}{v+}\;\frac{8^{2}}{u+}\cdots\right)\]
For all the following we observe that \(\Delta(u,v)=\Delta(v,u)\).
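The following Python sketch evaluates \(\Delta(u,v)\) by backward recurrence and can be used to check both the claimed symmetry and the tabulated values below (the truncation depth mirrors the Mathematica implementation of Section 6.2):

```
from mpmath import mp, catalan, mpf

mp.dps = 30

def delta(u, v, depth=40000):
    """Delta(u, v): numerators 1^2, 2^2, 2^2, 4^2, 4^2, ...; denominators u, v, u, v, ..."""
    x = mpf(0)
    for k in range(depth, 0, -1):
        num = 1 if k == 1 else (2 * (k // 2)) ** 2
        den = u if k % 2 == 1 else v
        x = num / (den + x)
    return x / (2 * v)

print(delta(1, 3), delta(3, 1))   # the symmetry claimed in the text
print(catalan - mpf(8) / 9)       # tabulated value -8/9 + G for Delta(1, 3)
```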
### For \(u=1\)
An exploration for \(\mathbf{u}=\{0,\mathbb{N},\mathbb{N},\mathbb{N},\mathbb{Z},0\}\) reveals that for \(u_{0}=0,u_{1}=2,u_{2}=1,u_{3}=2,u_{4}=-1,u_{5}=0\) we get identities when \(v=4i^{2}-1\) with the convergence values given in Table 4:
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 3: Test Results
Where the general formula, valid for \(i\geq 0\), is:
\[\Delta(1,4i^{2}-1)=(-1)^{i}\left(\sum_{k=0}^{i}\frac{(-1)^{k}}{(2k+1)^{2}}-G\right)\]
Remark 4: Note that the denominators of the numbers:
\[\eta(i)=\sum_{k=0}^{i-1}\frac{(-1)^{k}}{(2k+1)^{2}}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(u\) & \(i\) & \(v=4i^{2}-1\) & \(\Delta(u,4i^{2}-1)=\Delta(1,4i^{2}-1)\) \\ \hline \hline
1 & 0 & -1 & \(1-G\) \\ \hline
1 & 1 & 3 & \(-8/9+G\) \\ \hline
1 & 2 & 15 & \(209/225-G\) \\ \hline
1 & 3 & 35 & \(-10016/11025+G\) \\ \hline
1 & 4 & 63 & \(91369/99225-G\) \\ \hline
1 & 5 & 99 & \(-10956424/12006225+G\) \\ \hline
1 & 6 & 143 & \(1863641881/2029052025-G\) \\ \hline \end{tabular}
\end{table}
Table 4: The first convergence values for \(u=1\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 5: Test Results
are interesting in their own right. At first sight they might seem to be perfect squares but in reality some may contain very small prime factors to an odd power.
### For \(u=3\)
The exploration in this section is interesting. It was done manually, but we would never have had the idea to probe in that specific direction without the insight from the case \(u=1\) produced in the previous section.
The sequence \(f(i)\) is nearly the absolute value of the OEIS sequence A0063095:
Footnote 5: [https://oeis.org/A006309](https://oeis.org/A006309).
\[1,5,21,33,65,85,133,161,261,341,481,533,645,705,901,12803,1281,\] \[1541,1633,1825,14615,11537,2581,3201,3333\ldots\]
An unexplained phenomenon occurs for the "abnormally large" OEIS sequence A006309 values 12803, 14615 and 11537, which remain unmatched by any \(\eta(i)\) value. We have no explanation for this phenomenon, which requires further research.
#### Implementation
The following implementation was purposely left unoptimized for the sake of clarity. We start by generating the target values for \(u=3\) and store them in an array. Then we re-generate the values for \(u=1\) and match them against the array's contents.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(u\) & \(i\) & \(f(i)\) & \(\Delta(3,f(i))\) \\ \hline \hline \(3\) & \(0\) & \(1\) & \(\Delta(1,\,-1)\) \\ \hline \(3\) & \(1\) & \(5\) & \(\Delta(1,\,\,\,\,3)\) \\ \hline \(3\) & \(2\) & \(21\) & \(\Delta(1,\,\,\,35)\) \\ \hline \(3\) & \(3\) & \(33\) & \(\Delta(1,\,\,\,63)\) \\ \hline \(3\) & \(4\) & \(65\) & \(\Delta(1,143)\) \\ \hline \(3\) & \(5\) & \(85\) & \(\Delta(1,255)\) \\ \hline \end{tabular}
\end{table}
Table 6: The first convergence values for \(u=3\)
```
AbsA06309 =
  Abs[{1, 5, -21, 33, -65, 85, -133, 161, 261, -341, -481, 533,
    -645, 705, 901, -12803, -1281, -1541, 1633, -1825}];
t = {};
f[x_, {m_, d_}] := m/(d + x);
(* generate and store the target values Delta(3, v) *)
For[i = 1, i <= Length[AbsA06309],
 {u, v} = {3, AbsA06309[[i]]};
 num = Take[
   Prepend[Flatten[Table[{(2 n)^2, (2 n)^2}, {n, 1, 40000}]], 1],
   40000];
 den = Flatten[Table[{u, v}, {n, 1, 40000/2}]];
 r = Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]/2/v;
 AppendTo[t, {AbsA06309[[i]], N[r, 30]}];
 i++];

(* re-generate the values Delta(1, 4 i^2 - 1) and match them against t;
   num is reused from the loop above, since the numerators do not change *)
For[j = 1, j <= Length[AbsA06309],
 If[t[[j, 1]] == 12803,
  Print["Exception, the value 12803 is skipped."],
  For[i = 1, i <= 1000000,
   {u, v} = {1, 4 i^2 - 1};
   den = Flatten[Table[{u, v}, {n, 1, 40000/2}]];
   r = Fold[f, Last@num/Last@den, Reverse@Most@Transpose@{num, den}]/2/v;
   val = (-1)^(i + 1) (Sum[(-1)^k/(2 k + 1)^2, {k, 0, i - 1}] - Catalan);
   If[Abs[t[[j, 2]] - r] < 10^(-6),
    Print[{i, N[r, 30], N[t[[j, 2]], 30]}, " Entry ", j, ": ", val,
     " matched with Delta[3, ", t[[j, 1]], "]"];
    i = Infinity];
   i++]];
 j++];
```
### Subsequent \(u\) values.
Table 7 provides some additional examples for various \(u,v\) combinations.
### Variations in the numerator.
Let, for instance, \((u,v)=(1,3)\). Removing the \(1/(2v)\) factor in \(\Delta\) and replacing the \((2n)^{2}\) by \((n-i)^{2}\) we get convergence to:
\[1,\frac{4}{5},\frac{31}{51},\frac{16}{33},\frac{355}{883},\frac{11524}{33599}, \frac{171887}{575075},\frac{10147688}{3832636},\ldots\]
The limits are reached quickly, after a constant number of terms of the continued fraction.
## 7 Generalized Cloitre Series
In an unpublished note [1], Benoit Cloitre gives a beautiful BBP formula for \(\pi^{2}\) based on the identity:
\[\sum_{k=1}^{\infty}\frac{\cos(ik\pi)\left(2\cos(j\pi)\right)^{k}}{k^{2}}=(\ell \pi)^{2}\]
Here are some \(i,j,\ell\) combinations detected automatically:
A simple rule that allows generating many identities consists in fixing a fractional step \(1/u\), letting \(i=\kappa/u\) for \(\pi/3\leq i\leq 2\pi/3\) and calculating the limit for \(\{i,j\}=\{\kappa/u,2-\kappa/u\}\) (e.g. Table 8). However, limits for which \(i+j\neq 2\) exist as well (e.g. Table 9).
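The following sketch numerically evaluates the sum for one triple from Table 8, namely \((i/\ell,j/\ell,1/\ell)=(11,5,8)\), i.e. \(\ell=1/8\); the truncation point is an arbitrary assumption, and agreement is only checked by printing both sides.

```python
# Sketch: evaluate the Cloitre-type series for one detected triple from Table 8.
import numpy as np

l = 1.0 / 8.0
i, j = 11 * l, 5 * l          # i + j = 2 for this family
k = np.arange(1, 500)          # |2 cos(j pi)| < 1, so the series converges fast
S = np.sum(np.cos(i * k * np.pi) * (2 * np.cos(j * np.pi)) ** k / k ** 2)
print(S, (l * np.pi) ** 2)     # should agree up to truncation error
```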
## 8 Conclusion & further research
The results given in this paper show that pattern matching can be of help in detecting new mathematical conjectures. The very basic processes described in the previous sections can be improved and generalized
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(u\) & \(v\) & \(\Delta(u,v)\) \\ \hline \hline
5 & 7 & \(\Delta(1,\ 15)\) \\ \hline
5 & 39 & \(\Delta(1,143)\) \\ \hline
5 & 51 & \(\Delta(1,255)\) \\ \hline
7 & 9 & \(\Delta(1,\ 35)\) \\ \hline
9 & 11 & \(\Delta(1,\ 63)\) \\ \hline
11 & 13 & \(\Delta(1,\ 99)\) \\ \hline
13 & 15 & \(\Delta(1,143)\) \\ \hline \end{tabular}
\end{table}
Table 7: Other convergence values.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline ANI & RNI & CPI & CEI & KCI & RFP & EFP & IEP & PCP & RCP \\ \hline \hline ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 10: Test Results
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(i/\ell\) & \(j/\ell\) & \(1/\ell\) \\ \hline \hline
76 & 16 & 30 \\ \hline
46 & 10 & 18 \\ \hline
41 & 9 & 16 \\ \hline
26 & 6 & 10 \\ \hline
21 & 5 & 8 \\ \hline \end{tabular}
\end{table}
Table 9: Example relations for which \(i+j\neq 2\)
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(i/\ell\) & \(j/\ell\) & \(1/\ell\) \\ \hline \hline
11 & 5 & 8 \\ \hline
14 & 6 & 10 \\ \hline
23 & 9 & 16 \\ \hline
26 & 10 & 18 \\ \hline
22/5 & 8/5 & 3 \\ \hline
31 & 9 & 20 \\ \hline
28 & 8 & 18 \\ \hline
19 & 5 & 12 \\ \hline
16 & 4 & 10 \\ \hline
13 & 3 & 8 \\ \hline \end{tabular}
\end{table}
Table 8: Example relations for which \(i+j=2\)
in a number of ways. The first is enriching the collection of tests. The second is deeper exploration, which is highly dependent on the computational capabilities at hand. Finally, the interpretation of results and the early pruning of less probable branches of the potential conjecture tree can also bring efficiency and pertinence to the discovered relations.
| We present the results of pattern-recognition experiments on mathematical expressions. Numerous conjectures are presented, none of which turned out to be entirely new. Rather than attempting to prove all of the discovered relations, we focused on their generation. |
2309.05260 | Generalized Graphon Process: Convergence of Graph Frequencies in
Stretched Cut Distance | Graphons have traditionally served as limit objects for dense graph
sequences, with the cut distance serving as the metric for convergence.
However, sparse graph sequences converge to the trivial graphon under the
conventional definition of cut distance, which makes this framework inadequate
for many practical applications. In this paper, we utilize the concepts of
generalized graphons and stretched cut distance to describe the convergence of
sparse graph sequences. Specifically, we consider a random graph process
generated from a generalized graphon. This random graph process converges to
the generalized graphon in stretched cut distance. We use this random graph
process to model the growing sparse graph, and prove the convergence of the
adjacency matrices' eigenvalues. We supplement our findings with experimental
validation. Our results indicate the possibility of transfer learning between
sparse graphs. | Xingchao Jian, Feng Ji, Wee Peng Tay | 2023-09-11T06:34:46 | http://arxiv.org/abs/2309.05260v1 | # Generalized Graphon Process: Convergence of Graph Frequencies in Stretched Cut Distance
###### Abstract
Graphons have traditionally served as limit objects for dense graph sequences, with the cut distance serving as the metric for convergence. However, sparse graph sequences converge to the trivial graphon under the conventional definition of cut distance, which makes this framework inadequate for many practical applications. In this paper, we utilize the concepts of generalized graphons and stretched cut distance to describe the convergence of sparse graph sequences. Specifically, we consider a random graph process generated from a generalized graphon. This random graph process converges to the generalized graphon in stretched cut distance. We use this random graph process to model the growing sparse graph, and prove the convergence of the adjacency matrices' eigenvalues. We supplement our findings with experimental validation. Our results indicate the possibility of transfer learning between sparse graphs.
Generalized graphons, sparse graph sequence, convergent graph frequencies.
## I Introduction
Modern data analysis usually involves complex structures like graphs. In order to model and process signals on graphs, graph signal processing (GSP) has established a set of tools for a variety of tasks, including sampling, reconstruction and filtering [1, 2, 3]. Besides, by introducing non-linearity, graph neural networks (GNNs) provide a deep learning architecture and have been studied extensively. These methods usually perform well by exploiting the graph information when the underlying graph structure is known. In addition, they usually have good computational properties such as distributed implementation [4] and robustness to perturbation [5, 6].
In practice, designing signal processing techniques separately on different graphs can be computationally expensive. For example, in order to learn a graph filter, the eigendecomposition
of the graph shift operator (GSO) can be computationally prohibitive when the graph is large. Therefore, it is natural to consider learning a graph filter or GNN on a graph of small or moderate size and then transferring it to other graphs, which can be large. The success of such strategies relies on the inherent similarity between the graphs used for training and testing. For example, the paper [7] studied the transferability of GNNs by modeling the graphs as down-sampled versions of a topological space, and the graph signals as samples of a function on the space. The paper [8] studied the transferability of graph filters in the Cayley smoothness space.
In recent years, an emerging way to explain the transferability of graph filters and GNNs is through the _graphon_ method [9, 10]. The graphon method can be understood in two ways: i) as the number of nodes tends to infinity, it is assumed that the graph sequence converges to a graphon in cut distance, i.e., the graphon is a _limit object_ of the graph sequence. This was proved to imply the convergence of graph frequencies [10]. ii) the graphs of interest are generated from the same probabilistic model (graphon). This model was utilized to bound the difference between the outputs of a GNN with fixed parameters on two different graphs sampled from the same graphon [11, 12].
Graphons are suitable for modeling the limit of dense graph sequences under the cut distance. However, if the graph sequence is sparse, then this framework is no longer appropriate. By saying a graph sequence \((G_{n})_{n\geq 1}\) is _sparse_, we mean that \(\lim\limits_{n\rightarrow\infty}\frac{|E(G_{n})|}{|V(G_{n})|^{2}}=0\), where \(V(G_{n})\) and \(E(G_{n})\) are the vertex and edge sets of \(G_{n}\). In this case, the cut norm of \((G_{n})_{n\geq 1}\) converges to \(0\) since it equals \(\frac{2|E(G_{n})|}{|V(G_{n})|^{2}}\), i.e., all sparse sequences \((G_{n})_{n\geq 1}\) converge to the zero graphon if we use the standard definitions of graphon and cut distance. Therefore, in order to discuss the transferability of filters or GNNs on sparse graphs, we need alternative concepts of graphon and cut distance for sparse graph sequences. These concepts are the main focus of this paper. Our main contributions are:
1. We introduce the notions of generalized graphon and stretched cut distance for sparse graph convergence in place of the standard graphon and cut distance, which are more suitable for dense graph convergence. In particular, we introduce the notion of a generalized graphon process associated with the generalized graphon. This process converges to the generalized graphon in stretched cut distance. We model a sparse graph sequence as a subsequence of this random graph process.
2. We prove that under the generalized graphon process model, the graph frequencies of the process have a linear relationship with the square root of the number of edges as the graph size grows asymptotically.
3. We compare the fitness of our theories and the standard graphon's theories on a real dataset to show the better fitness of the generalized graphon process model and the correctness of our theoretical result.
The rest of this paper is organized as follows. In Section II we introduce a graph generating process based on a generalized notion of graphon. In Section III we prove the convergence of graph frequencies of this process. In Section IV we corroborate our result by numerical experiments. We conclude the paper in Section V.
_Notations._ For any set \(A\), we use \(I_{A}\) to denote the indicator function on it. We write \(\mathbb{R}_{+}\) as the set of non-negative real numbers. We denote cut norm by \(\|\cdot\|_{\Box}\). For two functions \(f_{1}\) and \(f_{2}\), we write their composition as \(f_{1}\circ f_{2}\). For a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), we define
\[f\times f:\mathcal{X}\times\mathcal{X} \rightarrow\mathcal{Y}\times\mathcal{Y}\] \[(x_{1},x_{2}) \mapsto(f(x_{1}),f(x_{2})).\]
For two sets \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\), we define the projection \(\pi_{i}\) (\(i=1,2\)) as
\[\pi_{i}:\mathcal{X}_{1}\times\mathcal{X}_{2} \rightarrow\mathcal{X}_{i}\] \[(x_{1},x_{2}) \mapsto x_{i}.\]
## II Generalized Graphon Process and Stretched Cut Distance
In this section, we introduce the concepts of generalized graphon and generalized graphon process as a generating model for random sparse graph sequences. We then introduce the stretched cut distance to characterize the convergence of sparse graph sequences.
### _Generalized Graphon and Graphon Process_
In this subsection, we introduce a generalized definition of graphon, and an associated graph generating model studied in [13]. We assume that the growing graphs are generated by the following two components: an underlying feature space \(\mathscr{S}=(S,\mathcal{S},\mu)\) which is a \(\sigma\)-finite measure space and a symmetric function \(W:S\times S\mapsto[0,1]\) such that \(W\in L^{2}(S\times S)\). As we will see in the ensuing content, \(\mathscr{S}\) contains the features that will be utilized by \(W\) to determine the edges between the vertices of an infinite graph. We refer to the tuple \(\mathcal{W}=(W,\mathscr{S})\) as a _generalized graphon_. In [14] the most general exchangeable random graph model contains two more components representing isolated and star structures. Here, for ease of analysis, we omit
them and consider the version in [13], i.e., \(\mathcal{W}=(W,\mathscr{S})\). Note that when \(\mathscr{S}=[0,1]\), \(\mathcal{W}\) is the graphon commonly used in the existing graphon signal processing literature [10, 15]. We refer to these specific graphons as _standard graphons_. In the rest of this paper, we always make the following assumption unless otherwise stated.
**Assumption 1**.: _The measure spaces \(\mathscr{S}\) under consideration are \(\sigma\)-finite, Borel, and atom-less._
In order to model the growing process of graphs, we need to introduce the time dimension encoded by \(\mathbb{R}_{+}\). To be specific, we assign a Poisson point process \(\Gamma\) on \(\mathbb{R}_{+}\times S\). We denote each point of this point process as \(v=(t,x)\), where \(t\in\mathbb{R}_{+}\) denotes time and \(x\in S\) denotes the feature. The process \(\Gamma\) induces an infinite graph \(\tilde{G}\) with vertex set \(V\), the set of all points generated by \(\Gamma\). The edges of \(\tilde{G}\) are randomly generated such that two different vertices \(u=(t,x)\) and \(u^{\prime}=(t^{\prime},x^{\prime})\) are connected with probability \(W(x,x^{\prime})\). We do not assign any self-loops on \(\tilde{G}\). From the nature of the generating process, the graph \(\tilde{G}\) can be regarded as containing all vertices that will ever arise. The growing graph at time instance \(t\) is then \(\tilde{G}_{t}\), the subgraph of \(\tilde{G}\) with vertex set \(\tilde{V}_{t}=\{(t^{\prime},x)\in V\,:\,t^{\prime}\leq t\}\). We denote by \(G_{t}\) the graph obtained by removing all isolated vertices from \(\tilde{G}_{t}\). In this paper, we will mainly focus on the sequence \((G_{t})_{t\geq 0}\), and refer to it as the _generalized graphon process_. We write \(G_{t}=(V_{t},E_{t})\) where \(|V_{t}|:=N_{t}\). According to [13], \((G_{t})_{t\geq 0}\) converges to \(\mathcal{W}\) in stretched cut distance, and is hence suitable for modeling sparse growing networks (see Section II-B for details).
Under Assumption 1, we know that \(\mathscr{S}\) is isomorphic to \([0,\mu(S))\) by [13, Lemma 33]. Therefore, if \(\mu(S)<\infty\), then \(\mathscr{S}\) can be identified with a bounded interval. In this paper, we allow \(\mu(S)=\infty\), in which case \(\mathscr{S}\) can be identified with \(\mathbb{R}_{+}\).
In graphon theories, the graphs can be associated with a canonical graphon [9, Section 7.1]:
**Definition 1**.: _Given a finite simple graph \(G\) with vertex set \(\{v_{i}:i=1,\ldots,n\}\) and edge set \(E\). The canonical graphon associated with \(G\) is defined as a step function_
\[W^{G}=\sum_{(v_{i},v_{j})\in E}I_{A_{ij}}, \tag{1}\]
_where \(A_{ij}\) is the square \(\left[\dfrac{i-1}{n},\dfrac{i}{n}\right)\times\left[\dfrac{j-1}{n}, \dfrac{j}{n}\right)\). We write \(\mathcal{W}^{G}:=(W^{G},\mathbb{R}_{+})\)._
Note that \(W^{G}\) can vary with the labeling of the vertex set, but this variation will not affect the evaluation of cut distance. Here we recall that the cut norm \(\|\cdot\|_{\Box}\) and cut distance \(\delta_{\Box}\) are
defined as [13, Definition 5]
\[\|W\|_{\square,S,\mu} =\sup_{U,V\in\mathcal{S}}\biggl{|}\int_{U\times V}W(x,y)\,\mathrm{d }\mu(x)\,\mathrm{d}\mu(y)\biggr{|},\] \[\delta_{\square}(\mathcal{W}_{1},\mathcal{W}_{2}) =\inf_{\hat{\mu}\in\mathcal{C}(\mu_{1},\mu_{2})}\|W_{1}\circ(\pi_{ 1}\times\pi_{1})-W_{2}\circ(\pi_{2}\times\pi_{2})\|_{\square,S_{1}\times S_{2},\hat{\mu}},\]
where \(\mathcal{C}(\mu_{1},\mu_{2})\) is the set of all couplings of \(\mu_{1}\) and \(\mu_{2}\). For a measure \(\mu\) over \((S_{1}\times S_{2},\mathcal{S}_{1}\times\mathcal{S}_{2})\), if \(\mu\) has \(\mu_{1}\) and \(\mu_{2}\) as its marginal, then \(\mu\) is called a _coupling_ of \(\mu_{1}\) and \(\mu_{2}\). Mathematically, this means \(\mu(A\times S_{2})=\mu_{1}(A)\) for all \(A\in\mathcal{S}_{1}\) and \(\mu(S_{1}\times B)=\mu_{2}(B)\) for all \(B\in\mathcal{S}_{2}\). We omit the notions of \(S\) and \(\mu\) in the subscript of cut norm if they are clear in the context.
### _Stretched Cut Distance_
In this subsection, we introduce a rescaled version of cut distance to describe the convergence of sparse graph sequences. It has been explained in Section I that the notion of standard cut distance is not able to describe the convergence in the sparse setting. In order to find a reasonable limit of a sparse graph sequence, the _stretched cut distance_ is defined as the cut distance between rescaled versions of graphons:
**Definition 2**.: _[_13_, Definition 11]_ _For a graphon \(\mathcal{W}=(W,\mathscr{S})\), define \(\mathcal{W}^{s}=(W,\hat{\mathscr{S}})\) where \(\hat{\mathscr{S}}=(S,\mathcal{S},\hat{\mu})=(S,\mathcal{S},\|W\|_{1}^{-\frac{ 1}{2}}\mu)\). We refer to \(\mathcal{W}^{s}\) as the stretched graphon. The stretched cut distance between two graphons \(\mathcal{W}_{1}\) and \(\mathcal{W}_{2}\) is defined as \(\delta_{\square}^{s}(\mathcal{W}_{1},\mathcal{W}_{2}):=\delta_{\square}( \mathcal{W}_{1}^{s},\mathcal{W}_{2}^{s})\). A sequence of graphs \((G_{n})\) is called convergent to a graphon \(\mathcal{W}\) if their canonical graphons \((W^{G_{n}})\) converge to \(\mathcal{W}\) in stretched cut distance, i.e., \(\lim\limits_{n\to\infty}\delta_{\square}^{s}(\mathcal{W}^{G_{n}},\mathcal{W })=0\)._
Note that, according to the construction in Definition 2, \(\|W\|_{\square,S,\hat{\mu}}=\|W\|_{1,S,\hat{\mu}}\equiv 1\). Therefore, it is more suitable to describe the convergence of sparse graph sequences using \(\delta_{\square}^{s}\). It is known that the generalized graphon process \((G_{t})\) generated from \(\mathcal{W}\) converges to \(\mathcal{W}\) almost surely in stretched cut distance [13, Theorem 28]. In addition, for any graphon \(\mathcal{W}=(W,\mathbb{R}_{+})\) on \(\mathbb{R}_{+}\), if we define \(\mathcal{W}^{s^{\prime}}:=(W(\|W\|_{1}^{\frac{1}{2}}x_{1},\|W\|_{1}^{\frac{1} {2}}x_{2}),\mathscr{S})\), then \(\delta_{\square}(\mathcal{W}^{s^{\prime}},\mathcal{W}^{s})=0\)[13]. Therefore in this case we can identify \(\mathcal{W}^{s}\) with \(\mathcal{W}^{s^{\prime}}\). Specifically, if we consider a canonical graphon \(\mathcal{W}^{G}\) induced by a graph \(G\), then we can view \(\mathcal{W}^{G,s}\) as a step function on \(\mathbb{R}_{+}\times\mathbb{R}_{+}\), in which the width and length of each square step is \(\frac{1}{\sqrt{2|E(G)|}}\). Recall that the width and length of each square step in \(\mathcal{W}^{G}\) is \(\frac{1}{|V(G)|}\). In the rest of the paper we will always view the stretched canonical graphon in this way. In Example 1 we provide an example of a sparse graph sequence converging to a generalized graphon in stretched cut distance.
**Example 1**.: _Consider a sequence of graph \((G_{n})\) such that \(|V_{n}|=n\). Let \(\alpha\in(0,1)\) be a constant. We choose \(\lfloor n^{\frac{1+\alpha}{2}}\rfloor\) vertices to form a complete subgraph, and the rest \(n-\lfloor n^{\frac{1+\alpha}{2}}\rfloor\) vertices are set as isolated. In this case, \(|E_{n}|=\frac{\lfloor n^{\frac{1+\alpha}{2}}\rfloor(\lfloor n^{\frac{1+\alpha}{ 2}}\rfloor-1)}{2}\). Therefore, we can label the vertices such that \(\mathcal{W}^{G_{n},s}\equiv 1\) on the region \(\left[0,\frac{\lfloor n^{\frac{1+\alpha}{2}}\rfloor}{\sqrt{\lfloor n^{\frac{1+ \alpha}{2}}\rfloor(\lfloor n^{\frac{1+\alpha}{2}}\rfloor-1)}}\right)^{2}\), and equals \(0\) elsewhere. It can be shown that \(\lim\limits_{n\to\infty}\delta_{\square}(\mathcal{W}^{G_{n},s},I_{[0,1]^{2}})=0\), i.e., \((G_{n})\) converges to \(I_{[0,1]^{2}}\) in stretched cut distance. On the other hand, \((G_{n})\) is a sparse graph sequence, hence will converge to a zero graphon in cut distance._
## III Convergence of Graph Frequencies
In this section, we prove the convergence of graph frequencies (i.e., the eigenvalues of graph adjacency matrices) of a generalized graphon process \((G_{t})\) generated from a generalized graphon \(\mathcal{W}\).
As in the graphon literature, we consider the integral operator \(\mathbf{T}_{\mathcal{W}}\) with integral kernel \(W\):
\[\mathbf{T}_{\mathcal{W}}:L^{2}(S) \to L^{2}(S)\] \[g \mapsto\int_{S}W(x,x^{\prime})g(x^{\prime})\,\mathrm{d}\mu(x^{ \prime}).\]
Since \(W\in L^{2}(S\times S)\), the operator \(\mathbf{T}_{\mathcal{W}}\) is a self-adjoint Hilbert-Schmidt operator. We denote the eigenvalues and orthonormal eigenvectors of \(\mathbf{T}_{\mathcal{W}}\) as \(\{\lambda_{j}(\mathcal{W}):j\in\mathbb{Z}\backslash\{0\}\}\) and \(\{\varphi_{j}(x;\mathcal{W}):j\in\mathbb{Z}\backslash\{0\}\}\). The eigenvalues are ordered such that \(\lambda_{1}(\mathcal{W})\geq\lambda_{2}(\mathcal{W})\geq\cdots\geq 0\) and \(\lambda_{-1}(\mathcal{W})\leq\lambda_{-2}(\mathcal{W})\leq\cdots\leq 0\). Then it can be shown by [16, Theorem 4.2.16] that \(W\) can be decomposed as
\[W(x,x^{\prime})=\sum_{j\in\mathbb{Z}\backslash\{0\}}\lambda_{j}(\mathcal{W}) \varphi_{j}(x;\mathcal{W})\varphi_{j}(x^{\prime};\mathcal{W}). \tag{2}\]
Provided the convergence of \(W^{G_{t},s}\), we can prove the convergence of the graph frequencies of \((G_{t})\). To be specific, we will prove the convergence of the eigenvalues of \(W^{G_{t},s}\) to those of \(W\) up to a scaling factor.
**Theorem 1**.: _Define_
\[D_{W}(x):=\int_{S}W(x,x^{\prime})\,\mathrm{d}\mu(x^{\prime}),\]
_and assume that \(D_{W}(x)\in L^{p},\forall\,p\geq 1\). Then the graph frequencies of \((G_{t})\) converges in the following way:_
\[\lim\limits_{t\to\infty}\frac{\lambda_{j}(G_{t})}{\sqrt{2|E(G_{t})|}}=\frac{ \lambda_{j}(W)}{\sqrt{\|W\|_{1}}}, \tag{3}\]
_where \(\lambda_{-1}(G_{t})\leq\lambda_{-2}(G_{t})\leq\cdots\leq 0\leq\cdots\leq \lambda_{2}(G_{t})\leq\lambda_{1}(G_{t})\) are eigenvalues of \(G_{t}\)'s adjacency matrix._
Proof.: Given two simple graphs \(F\) and \(G\), we say a map \(\phi:V(F)\to V(G)\) is an adjacency preserving map if \((v_{i},v_{j})\in E(F)\) implies \((\phi(v_{i}),\phi(v_{j}))\in E(G)\). Let \(\hom(F,G)\) be the number of adjacency preserving maps between \(F\) and \(G\). Define
\[h(F,G)=\frac{\hom(F,G)}{(2|E(G)|)^{\frac{|V(F)|}{2}}}.\]
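As an aside, not needed for the proof: when \(F=C_{k}\) is a cycle, \(h(C_{k},G)\) is easy to evaluate in practice, because \(\hom(C_{k},G)=\operatorname{tr}(A^{k})=\sum_{j}\lambda_{j}(G)^{k}\) for the adjacency matrix \(A\). A minimal sketch follows; the random test graph is an assumption used only for illustration.

```python
# Sketch: h(C_k, G) = tr(A^k) / (2|E|)^{k/2}, computed from graph frequencies.
import numpy as np

def h_cycle(A, k):
    two_E = A.sum()                   # 2|E| for a 0/1 symmetric adjacency matrix
    eig = np.linalg.eigvalsh(A)       # the graph frequencies
    return (eig ** k).sum() / two_E ** (k / 2)

rng = np.random.default_rng(0)
n, p = 200, 0.05
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(float)   # Erdos-Renyi test graph, no self-loops
print(h_cycle(A, 4))
```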
More generally, for a graphon \(\mathcal{W}\), we define
\[h(F,\mathcal{W})=\|W\|_{1}^{-\frac{|V(F)|}{2}}\int_{S^{|V(F)|}} \prod_{(v_{i},v_{j})\in E(F)}W(x_{i},x_{j})\,\mathrm{d}x_{1}\ldots\,\mathrm{d}x _{|V(F)|}.\]
It can be shown that, for a canonical graphon \(\mathcal{W}^{G}\), we have \(h(F,G)=h(F,\mathcal{W}^{G})\). Besides, according to [13, Proposition 30 (ii)], we have \(\lim\limits_{t\to\infty}h(F,G_{t})=h(F,\mathcal{W})\). Therefore,
\[\lim\limits_{t\to\infty}h(F,\mathcal{W}^{G_{t}})=h(F,\mathcal{W}). \tag{4}\]
Let \(F=C_{k}\) be a \(k\)-cycle, \(k\geq 3\). Then we have
\[\begin{split} h(C_{k},\mathcal{W})&=\|W\|_{1}^{- \frac{k}{2}}\int_{S^{k}}\prod_{(v_{i},v_{j})\in E(F)}W(x_{i},x_{j})\,\mathrm{d} x_{1}\ldots\,\mathrm{d}x_{k}\\ &=\|W\|_{1}^{-\frac{k}{2}}\sum_{j\in\mathbb{Z}\setminus\{0\}} \lambda_{j}(\mathcal{W})^{k},\end{split} \tag{5}\]
where the second equality can be obtained by replacing the integrand by (2) and using the orthogonality of \(\mathcal{W}\)'s eigenvectors. Combining (4) and (5), we have
\[\lim\limits_{t\to\infty}\sum_{j\in\mathbb{Z}\setminus\{0\}} \left(\frac{\lambda_{j}(\mathcal{W}^{G_{t}})}{\sqrt{\|W^{G_{t}}\|_{1}}} \right)^{k}=\sum_{j\in\mathbb{Z}\setminus\{0\}}\left(\frac{\lambda_{j}( \mathcal{W})}{\sqrt{\|W\|_{1}}}\right)^{k},\forall\,k\geq 3. \tag{6}\]
Note that
\[\lambda_{j}(\mathcal{W}^{G_{t}})=\frac{\lambda_{j}(G_{t})}{|V(G_{ t})|},\|W^{G_{t}}\|_{1}=2\frac{|E(G_{t})|}{|V(G_{t})|^{2}}, \tag{7}\]
hence
\[\frac{\lambda_{j}(\mathcal{W}^{G_{t}})}{\sqrt{\|W^{G_{t}}\|_{1}}} =\frac{\lambda_{j}(G_{t})}{\sqrt{2|E(G_{t})|}}. \tag{8}\]
We next prove (3) from (6) by contradiction. In the rest of this proof, we assume there exists \(k_{0}\in\mathbb{Z}\backslash\{0\}\) such that \(\{\frac{\lambda_{k_{0}}(\mathcal{W}^{G_{t}})}{\sqrt{\|W^{G_{t}}\|_{1}}}:t\in \mathbb{R}_{+}\}\) does not converge to \(\frac{\lambda_{k_{0}}(\mathcal{W})}{\sqrt{\|W\|_{1}}}\) when \(t\to\infty\).
We first observe that \(\{\frac{\lambda_{j}(\mathcal{W}^{G_{t}})}{\sqrt{\|W^{G_{t}}\|_{1}}}\,:\,t\in \mathbb{R}_{+}\}\) is a bounded set for all \(j\in\mathbb{Z}\backslash\{0\}\). The argument goes as follows: let \(k=4\). For any graphon \(\mathcal{W}^{\prime}\) we have
\[\sum_{j=1}^{m}\Biggl{(}\frac{\lambda_{j}(\mathcal{W}^{\prime})}{\sqrt{\|W^{ \prime}\|_{1}}}\Biggr{)}^{4}\leq\sum_{j\in\mathbb{Z}\backslash\{0\}}\Biggl{(} \frac{\lambda_{j}(\mathcal{W}^{\prime})}{\sqrt{\|W^{\prime}\|_{1}}}\Biggr{)}^{ 4}=h(C_{4},\mathcal{W}^{\prime}).\]
Note that \(\{\lambda_{j}(\mathcal{W}^{\prime})\,:\,j=1,2,\ldots\}\) is a non-increasing sequence. Therefore,
\[\frac{\lambda_{m}(\mathcal{W}^{\prime})}{\sqrt{\|W^{\prime}\|_{1}}}\leq \biggl{(}\frac{h(C_{4},\mathcal{W}^{\prime})}{m}\biggr{)}^{\frac{1}{4}}.\]
Similarly, we have
\[\frac{\lambda_{-m}(\mathcal{W}^{\prime})}{\sqrt{\|W^{\prime}\|_{1}}}\geq- \biggl{(}\frac{h(C_{4},\mathcal{W}^{\prime})}{m}\biggr{)}^{\frac{1}{4}}.\]
Note that \(\{h(C_{4},\mathcal{W}^{G_{t}})\}\) is a convergent sequence when \(t\to\infty\), hence bounded, so there exists a \(B>0\) such that
\[\frac{|\lambda_{j}(\mathcal{W}^{G_{t}})|}{\sqrt{\|W^{G_{t}}\|_{1}}}\leq \biggl{(}\frac{B}{|j|}\biggr{)}^{\frac{1}{4}},\forall\,j\in\mathbb{Z}\backslash \{0\},\]
i.e., the set \(\{\frac{\lambda_{j}(\mathcal{W}^{G_{t}})}{\sqrt{\|W^{G_{t}}\|_{1}}}\,:\,t\in \mathbb{R}_{+}\}\) is bounded for all \(j\in\mathbb{Z}\backslash\{0\}\).
According to our assumption, there exists a sequence \((t_{n})\to\infty\) such that \(\Bigg{(}\frac{\lambda_{k_{0}}(\mathcal{W}^{G_{t_{n}}})}{\sqrt{\|W^{G_{t_{n}}} \|_{1}}}\Bigg{)}\) does not converge to \(\frac{\lambda_{k_{0}}(\mathcal{W})}{\sqrt{\|W\|_{1}}}\) when \(n\to\infty\). For simplicity, we denote the double sequence \(\Bigg{(}\frac{\lambda_{j}(\mathcal{W}^{G_{t_{n}}})}{\sqrt{\|W^{G_{t_{n}}}\|_ {1}}}\Bigg{)}\) as \((a_{j,n})\) and write \(\frac{\lambda_{j}(\mathcal{W})}{\sqrt{\|W\|_{1}}}\) as \(b_{j}\). Note that since \((a_{k_{0},n})\) is bounded, we can assume without loss of generality that \(\lim\limits_{n\to\infty}a_{k_{0},n}\) exists and does not equal \(b_{k_{0}}\). We next construct a subsequence \((n_{k})\) such that \(\lim\limits_{k\to\infty}a_{j,n_{k}}\) exists for every \(j\in\mathbb{Z}\backslash\{0\}\) as follows:
1. step 1: find an increasing sequence \((r_{1,i})_{i=1}^{\infty}\subset\mathbb{N}\) such that \(\lim\limits_{i\to\infty}a_{1,r_{1,i}}\) exists.
2. step 2: suppose we have constructed a sequence \((n_{i})\) such that \(\lim\limits_{i\to\infty}a_{j,n_{i}}\) exists for all \(1\leq j<s\). Then we find a sequence \((r_{s,i})_{i=1}^{\infty}\subset(n_{i})\) such that \(\lim\limits_{i\to\infty}a_{s,r_{s,i}}\) exists.
3. step 3: by construction of step 1 and 2, we have obtained a double sequence \((r_{j,i})\) such that \(\lim\limits_{i\to\infty}a_{j,r_{j,i}}\) exists for all \(j\geq 1\), and \((r_{s,i})\subset(r_{s-1,i})\). Therefore, if we consider the sequence \((r_{i,i})\), we will have \(\lim\limits_{i\to\infty}a_{j,r_{i,i}}\) exists for all \(j\geq 1\).
4. step 4: by repeating the above procedure, we can find a subsequence of \((r_{i,i})\), denoted as \((n_{k})\), such that \(\lim\limits_{k\to\infty}a_{j,n_{k}}\) exists for all \(j\leq-1\). This completes the construction.
For simplicity, we write \((a_{j,n_{k}})\) as \((a_{j,n})\), and denote \(\lim\limits_{n\to\infty}a_{j,n}\) as \(a_{j}\). According to (6), we have
\[\lim\limits_{n\to\infty}\sum\limits_{j\in\mathbb{Z}\backslash\{0\}}a_{j,n}^{k}= \sum\limits_{j\in\mathbb{Z}\backslash\{0\}}b_{j}^{k},\forall\,k\geq 3. \tag{9}\]
Note that if \(k>4\), then the infinite sum \(\sum\limits_{j\in\mathbb{Z}\backslash\{0\}}\left(\dfrac{B}{|j|} \right)^{\frac{k}{4}}\) converges. This implies that the sum in the left-hand side (L.H.S.) of (9) converges absolutely, so we can switch the summation with the limit there:
\[\sum\limits_{j\in\mathbb{Z}\backslash\{0\}}a_{j}^{k}=\sum\limits_{j\in \mathbb{Z}\backslash\{0\}}b_{j}^{k},\forall\,k>4. \tag{10}\]
We are going to prove that \(a_{j}=b_{j}\) for all \(j\in\mathbb{Z}\backslash\{0\}\) from (10). To achieve this, we rearrange the sequences as \((a_{j_{l}})\) and \((b_{j_{l}})\) such that \((|a_{j_{l}}|)\) and \((|b_{j_{l}}|)\) are non-increasing. Then (10) can be rewritten as
\[\sum\limits_{l=1}^{\infty}a_{j_{l}}^{k}=\sum\limits_{l=1}^{\infty}b_{j_{l}}^{k },\forall\,k>4. \tag{11}\]
Then it suffices to prove \(a_{j_{l}}=b_{j_{l}}\). We prove this by induction on \(l\). Suppose we have proved \(a_{j_{l}}=b_{j_{l}}\) for \(l<m\). Then we have
\[\sum\limits_{l=m}^{\infty}\lvert a_{j_{l}}\rvert^{k}=\sum\limits_{l=m}^{\infty }\lvert b_{j_{l}}\rvert^{k}, \tag{12}\]
where \(k\) is even and \(k>4\). We first prove \(|a_{j_{m}}|=|b_{j_{m}}|\). If \(|a_{j_{m}}|>|b_{j_{m}}|\), then dividing both sides of (12) by \(|b_{j_{m}}|^{k}\) and letting \(k\to\infty\) through even numbers, the L.H.S. tends to infinity while the right-hand side (R.H.S.) remains finite, leading to a contradiction; hence \(|a_{j_{m}}|\leq|b_{j_{m}}|\). Similarly it can be shown that \(|b_{j_{m}}|\leq|a_{j_{m}}|\), thus \(|a_{j_{m}}|=|b_{j_{m}}|\).
Suppose \(b_{j_{m}}\) appears \(p\) times in \((b_{j})\) and \(q\) times in \((a_{j})\); \(-b_{j_{m}}\) appears \(p^{\prime}\) times in \((b_{j})\) and \(q^{\prime}\) times in \((a_{j})\). Then (10) can be rewritten as
\[(q+(-1)^{k}q^{\prime})b_{j_{m}}^{k}+\sum\limits_{l>m}a_{j_{l}}^{k}=(p+(-1)^{k }p^{\prime})b_{j_{m}}^{k}+\sum\limits_{l>m}b_{j_{l}}^{k}.\]
Dividing both sides by \(b_{j_{m}}^{k}\) and letting \(k\to\infty\) through odd numbers, we have \(q-q^{\prime}=p-p^{\prime}\). Similarly, by letting \(k\to\infty\) through even numbers, we have \(q+q^{\prime}=p+p^{\prime}\). Thus \(p=q\) and \(p^{\prime}=q^{\prime}\), which indicates that \(a_{j_{m}}=b_{j_{m}}\) and concludes the induction. Therefore, \(a_{j}=b_{j}\) for all \(j\in\mathbb{Z}\backslash\{0\}\), which contradicts the assumption that \(a_{k_{0}}\neq b_{k_{0}}\).
Theorem 1 implies that the graph frequencies of the generalized graphon process asymptotically scale linearly with the square root of the number of edges. For a graph sequence converging to a standard graphon, the graph frequencies asymptotically scale linearly with the number of nodes [10, Lemma 4]. In Section IV we will compare the validity of these hypotheses on a real dataset.
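To illustrate Theorem 1, the following Python sketch simulates a generalized graphon process for the assumed rank-one kernel \(W(x,x^{\prime})=e^{-x-x^{\prime}}\) on \(\mathbb{R}_{+}\); the feature space is truncated at `x_max`, an ad hoc numerical choice that slightly biases the sample. For this kernel \(\lambda_{1}(W)/\sqrt{\|W\|_{1}}=1/2\), so the printed ratio should stabilize near \(0.5\) as \(t\) grows.

```python
# Sketch of a generalized graphon process with W(x, y) = exp(-x - y);
# x_max (truncation of R_+) and the time points are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sample_G(t, x_max=8.0):
    m = rng.poisson(t * x_max)                 # Poisson points born by time t
    x = rng.uniform(0.0, x_max, size=m)        # their features
    P = np.exp(-x[:, None] - x[None, :])       # edge probabilities W(x_i, x_j)
    A = np.triu(rng.random((m, m)) < P, k=1).astype(float)
    A = A + A.T                                # symmetric, no self-loops
    keep = A.sum(axis=1) > 0                   # drop isolated vertices
    return A[np.ix_(keep, keep)]

for t in [50, 100, 200]:
    A = sample_G(t)
    lam1 = np.linalg.eigvalsh(A)[-1]
    print(t, A.shape[0], lam1 / np.sqrt(A.sum()))   # ratio -> 1/2 for this kernel
```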
## IV Numerical Experiment
In this section, we corroborate our results on the ogbn-arxiv dataset1, where every node represents a paper, and every directed edge represents one paper citing another. We make all edges undirected in this experiment. The entire graph is denoted as \(G_{\mathrm{all}}=(V_{\mathrm{all}},E_{\mathrm{all}})\). We generate a growing graph sequence \((G_{n})\subset G_{\mathrm{all}}\) as follows:
Footnote 1: [https://ogb.stanford.edu/docs/nodeprop/](https://ogb.stanford.edu/docs/nodeprop/)
1. we start with an empty graph \(\tilde{G}_{0}\) with no nodes or edges.
2. given \(\tilde{G}_{n}=(\tilde{V}_{n},\tilde{E}_{n})\), we randomly select \(200\) nodes from \(V_{\mathrm{all}}\backslash\tilde{V}_{n}\) without replacement, and add them into \(\tilde{V}_{n}\) to obtain \(\tilde{V}_{n+1}\). By letting \(\tilde{E}_{n+1}=E_{\mathrm{all}}\bigcap(\tilde{V}_{n+1}\times\tilde{V}_{n+1})\) we obtain \(\tilde{G}_{n+1}=(\tilde{V}_{n+1},\tilde{E}_{n+1})\). We iterate this step \(90\) times to get \((\tilde{G}_{n})_{n=1}^{90}\).
3. By omitting all isolated vertices in every \(\tilde{G}_{n}\), we obtain the sequence \((G_{n})_{n=1}^{90}\).
We model the sequence \((G_{n})\) as a subsequence of a generalized graphon process \((G_{t})\) generated from a graphon, i.e., \((G_{n})=(G_{t_{n}})\) with \(t_{n}\rightarrow\infty\). Then according to Theorem 1, \(\{\lambda_{j}(G_{n}):n=1,2,\ldots\}\) should have a linear relationship with \(\sqrt{|E_{n}|}\) as \(n\rightarrow\infty\) for all \(j\in\mathbb{Z}\backslash\{0\}\). On the other hand, if \((G_{n})\) converges to a standard graphon in cut distance, then according to [10, Lemma 4], \(\{\lambda_{j}(G_{n}):n=1,2,\ldots\}\) should have a linear relationship with \(|V_{n}|\) as \(n\rightarrow\infty\) for all \(j\in\mathbb{Z}\backslash\{0\}\). Finally, if \((G_{n})\) has bounded degree as assumed by [17], then it can be shown that the set \(\{\lambda_{j}(G_{n}):n=1,2,\ldots,j\in\mathbb{Z}\backslash\{0\}\}\) is bounded. In order to verify which of these models fits the data best, we fit linear models (with zero intercept) for the pairs \(\{(\sqrt{|E_{n}|},\lambda_{j}(G_{n}))\}\) and \(\{(|V_{n}|,\lambda_{j}(G_{n}))\}\) and test their fitness by mean-squared error (MSE).
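A sketch of the fitting step follows; the helper computes the MSE of a least-squares line through the origin, and the arrays are hypothetical stand-ins for the measured \(|E_{n}|\), \(|V_{n}|\) and \(\lambda_{1}(G_{n})\).

```python
# Sketch of the zero-intercept fits compared in Table I; data are placeholders.
import numpy as np

def mse_fit_through_origin(x, y):
    slope = (x @ y) / (x @ x)          # least-squares slope with zero intercept
    return np.mean((y - slope * x) ** 2)

E = np.array([1.0e3, 4.0e3, 9.0e3, 1.6e4, 2.5e4])       # |E_n| (hypothetical)
V = np.array([800.0, 1500.0, 2100.0, 2600.0, 3000.0])   # |V_n| (hypothetical)
lam1 = np.array([16.1, 31.8, 48.2, 63.5, 80.1])         # lambda_1 (hypothetical)

print(mse_fit_through_origin(np.sqrt(E), lam1))          # fit against sqrt(|E_n|)
print(mse_fit_through_origin(V, lam1))                   # fit against |V_n|
```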
From Table I and Fig. 1 we see that the linear model for \(\{(\sqrt{|E_{n}|},\lambda_{j}(G_{n}))\}\) has better fitness than that for \(\{(|V_{n}|,\lambda_{j}(G_{n}))\}\). Due to the sparsity of the graph sequence (see Fig. 2), the standard graphon can be inappropriate as a meaningful limit object. In addition, it appears that the magnitudes of the eigenvalues keep increasing instead of being clearly bounded by some constant. Therefore, the generalized graphon process model has the best fitness among all models on this dataset.
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{MSE of linear fit} \\ \hline & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) & \(\lambda_{4}\) & \(\lambda_{5}\) \\ \hline Generalized graphon process & 5.98 & 1.74 & 0.91 & 0.54 & 0.43 \\ Standard graphon & 19.33 & 6.83 & 4.49 & 3.57 & 3.05 \\ \hline & \(\lambda_{-1}\) & \(\lambda_{-2}\) & \(\lambda_{-3}\) & \(\lambda_{-4}\) & \(\lambda_{-5}\) \\ \hline Generalized graphon process & 6.21 & 1.94 & 1.01 & 0.60 & 0.44 \\ Standard graphon & 18.86 & 6.45 & 4.08 & 3.29 & 2.68 \\ \hline \end{tabular}
Fig. 1: Fitness under generalized graphon process and standard graphon models.
## V Conclusion
In this paper, we have introduced the notions of generalized graphon, stretched cut distance, and generalized graphon process to describe the convergence of sparse graph sequences. To be specific, we studied a generalized graphon process that is known to converge to the generalized graphon in stretched cut distance, and proved the convergence of the associated adjacency matrices' eigenvalues, which are known as graph frequencies in GSP. This work lays the foundation for transfer learning on sparse graphs. Possible future work includes proving the convergence of the graph Fourier transform (GFT) and of filters for the generalized graphon process.
| Graphons have traditionally been used as limit objects for dense graph sequences, with the cut distance serving as the metric of convergence. However, sparse graph sequences converge to the trivial graphon under the conventional definition of cut distance, which makes this framework inadequate for many practical applications. In this paper, we use the concepts of generalized graphons and stretched cut distance to describe the convergence of sparse graph sequences. In particular, we consider a random graph process generated from a generalized graphon, which converges to the generalized graphon in stretched cut distance. We use this random graph process to model growing sparse graphs and prove the convergence of the eigenvalues of the adjacency matrices. Our results indicate the possibility of transfer learning between sparse graphs. |
2309.11215 | Critical Point from Shock Waves Solution in Relativistic Anisotropic
Hydrodynamics | Solutions of shock waves in anisotropic relativistic hydrodynamics in the
absence of refraction of the flow passing through the shock wave are
considered. The existence of a critical value of the anisotropy parameter is
shown. This value is the upper limit below which an adequate description of
shock waves is possible. Shock wave solutions also provide a mechanism for
system isotropization. | Aleksandr Kovalenko | 2023-09-20T11:10:26 | http://arxiv.org/abs/2309.11215v2 | # Critical Point from Shock Waves Solution in Relativistic Anisotropic Hydrodynamics
###### Abstract
Solutions of shock waves in anisotropic relativistic hydrodynamics in the absence of refraction of the flow passing through the shock wave are considered. The existence of a critical value of the anisotropy parameter is shown. This value is the upper limit below which an adequate description of shock waves is possible. Shock wave solutions also provide a mechanism for system isotropization.
## 1 Introduction
Description of the evolution of expanding dense and hot hadronic matter using the relativistic anisotropic hydrodynamics approach is promising in terms of modeling experimental data [1, 2] and large pressure anisotropies [3, 4] that appear at the early stages of heavy-ion collisions due to the rapid longitudinal expansion. The large difference between the longitudinal and transverse pressures leads to the necessary consideration of high-order gradients in dissipative hydrodynamic theories [5]. In relativistic anisotropic hydrodynamics, anisotropy is introduced explicitly as an appropriate parameter in the one-particle distribution function. The introduction of this parameter amounts to a resummation of the gradient expansion of the theory in a special way, which can give interesting results.
One of the main applications of fluid dynamics is the description of sound propagation and related effects. Sound phenomena in quark-gluon and nuclear matter have been studied mainly in the context of the formation of shock waves [6, 7]. Jet-quenching phenomena have induced interest in considering the generation of the Mach cone [8, 9]. Early work has shown that transverse shock waves in hot quark-gluon matter can be generated by initial fluctuations in the local energy density (hot spots), which are the result of a large number of QCD interactions [10].
The problem in dissipative hydrodynamic theories is the inability to adequately describe the shock wave phenomenon. For the Israel-Stewart theory, shock waves can be generated only for small Mach numbers [11, 12]. However, it is possible to obtain discontinuous shock wave solutions in relativistic anisotropic hydrodynamics by analogy with the isotropic case of an ideal fluid [13]. Previously, analytical solutions in the ultrarelativistic case
for the longitudinal and transverse shock waves were obtained, and numerical calculations for an arbitrary polar angle were presented [14]. The main assumption was the introduction of a constant anisotropy. In this regime the anisotropy does not change for the flow moving through the shock wave. For the polar angle \(0<\alpha<\pi/2\) this behavior entails effects associated with the deflection of the flow (\(\alpha^{\prime}\neq\alpha\)), as well as the acceleration of the flow for certain values of \(\sigma=P^{{}^{\prime}}/P\) and \(\xi\). Such effects may raise certain questions in practice. Since it is natural to expect the isotropization process for the evolution of hot hadronic matter, the assumption \(\xi^{\prime}=\xi\) may cause the loss of information about the evolution of matter.
Therefore, instead of fixing the anisotropy parameter, one can assume \(\alpha^{\prime}=\alpha\) for the shock waves solutions, which brings us back to the more familiar behavior of the downstream and upstream flows in isotropic case. The present paper describes the derivation of shock wave solutions for such a case. The restrictions on the anisotropy parameter for the existence of shock waves and the system isotropization mechanism are discussed.
## 2 Basic equations
The framework of anisotropic hydrodynamics is based on the kinetic theory approach [15, 16, 17], where the ansatz for the distribution function \(f\) has the Romatschke-Strickland form
\[f(x,p)=f_{\rm iso}\Bigg{(}\frac{\sqrt{p^{\mu}\Xi_{\mu\nu}(x)p^{\nu}}}{\Lambda (x)}\Bigg{)}, \tag{2.1}\]
where \(\Lambda(x)\) is a temperature-like momentum scale and \(\Xi_{\mu\nu}(x)\) - momentum anisotropy tensor. We consider one-dimensional anisotropy such that \(p^{\mu}\Xi_{\mu\nu}p^{\nu}={\bf p}^{2}+\xi(x)p_{\parallel}^{2}\) in the local rest frame (LRF), where \(\xi(x)\) - anisotropy parameter.
The energy-momentum tensor \(T^{\mu\nu}\) can be described in terms of four-velocity vector \(U^{\mu}\) and space-like longitudinal vector \(Z^{\mu}\) as follows
\[T^{\mu\nu}=(\varepsilon+P_{\perp})U^{\mu}U^{\nu}-P_{\perp}g^{\mu\nu}+(P_{ \parallel}-P_{\perp})Z^{\mu}Z^{\nu}, \tag{2.2}\]
where \(P_{\parallel}\) and \(P_{\perp}\) are the longitudinal (along the anisotropy direction) and transverse pressures, respectively, and
\[U^{\mu}=(u_{0}\cosh\vartheta,u_{x},u_{y},u_{0}\sinh\vartheta), \tag{2.3}\] \[Z^{\mu}=(\sinh\vartheta,0,0,\cosh\vartheta). \tag{2.4}\]
Here \(\vartheta\) is the longitudinal rapidity, \(u_{x},u_{y}\) are the transverse velocities, and \(u_{0}=\sqrt{1+u_{x}^{2}+u_{y}^{2}}\).
It is important to note that the dependence on the anisotropy parameter \(\xi\) can be factorized:
\[\varepsilon=R(\xi)\varepsilon_{\rm iso}(\Lambda), \tag{2.5}\] \[P_{\perp,\parallel}=R_{\perp,\parallel}(\xi)P_{\rm iso}(\Lambda), \tag{2.6}\]
where the anisotropy-dependent factors \(R_{\perp}(\xi)\) and \(R_{\parallel}(\xi)\) are [15]
\[R_{\perp}(\xi)=\frac{3}{2\xi}\Bigg{(}\frac{1+(\xi^{2}-1)R(\xi)}{1+\xi} \Bigg{)},\hskip 14.226378ptR_{\parallel}(\xi)=\frac{3}{\xi}\Bigg{(}\frac{(\xi+1 )R(\xi)-1}{1+\xi}\Bigg{)}, \tag{2.7}\]
\[R(\xi)=\frac{1}{2}\Bigg{(}\frac{1}{1+\xi}+\frac{\arctan\sqrt{\xi}}{\sqrt{\xi}} \Bigg{)}. \tag{2.8}\]
The equation of state for a massless gas, \(\varepsilon_{\rm iso}=3P_{\rm iso}\), leads to the following relation between the anisotropic functions: \(2R_{\perp}(\xi)+R_{\parallel}(\xi)=3R(\xi)\).
We focus on the shock wave solution in the ideal fluid characterized by the energy-momentum tensor (2.2). The shock wave in zero-order hydrodynamics is described by a discontinuous solution of the equations of motion [13, 18, 19].
Energy-momentum conservation leads to the following matching condition linking the downstream and upstream projections of the energy-momentum tensor onto the direction perpendicular to the discontinuity surface:
\[T_{\mu\nu}N^{\mu}=T^{{}^{\prime}}_{\mu\nu}N^{\mu}, \tag{2.9}\]
where \(N^{\mu}\) is the unit vector normal to the discontinuity surface, and \(T_{\mu\nu},\ T^{{}^{\prime}}_{\mu\nu}\) are the upstream and downstream energy-momentum tensors, respectively.
Consider a flow moving at an angle \(\alpha\) to the direction of the beam axis. In the case of normal shock waves, the components of the vector normal to the discontinuity surface are \(N_{\mu}=(0,\sin\alpha,0,\cos\alpha)\), where \(\alpha\) is the polar angle. As discussed earlier, it is assumed that \(\alpha^{\prime}=\alpha\) for the shock wave solutions. Then we have three equations for \(v,v^{\prime},\xi^{\prime}\) with input parameters \(\sigma,\alpha,\xi\).
Equation (2.9) gives the following system
\[\Bigg{[}\frac{R_{1}(\xi)}{1-v^{2}}+\frac{R_{2}(\xi)}{1-v^{2}\cos^ {2}\alpha}\cos^{2}\alpha\Bigg{]}v-\Bigg{[}\frac{R_{1}(\xi^{\prime})}{1-v^{\prime 2 }}+\frac{R_{2}(\xi^{\prime})}{1-v^{\prime 2}\cos^{2}\alpha}\cos^{2}\alpha \Bigg{]}\sigma v^{\prime}=0, \tag{2.10}\] \[\Bigg{[}R_{\perp}(\xi)-\sigma R_{\perp}(\xi^{\prime})+\frac{R_{1 }(\xi)v^{2}}{1-v^{2}}-\sigma\frac{R_{1}(\xi^{\prime})v^{\prime 2}}{1-v^{\prime 2 }}\Bigg{]}\sin\alpha=0,\] (2.11) \[\Bigg{[}R_{\perp}(\xi)-\sigma R_{\perp}(\xi^{\prime})+\frac{R_{1 }(\xi)v^{2}}{1-v^{2}}-\sigma\frac{R_{1}(\xi^{\prime})v^{\prime 2}}{1-v^{ \prime 2}}+\frac{R_{2}(\xi)}{1-v^{2}\cos^{2}\alpha}-\sigma\frac{R_{2}(\xi^{ \prime})}{1-v^{\prime 2}\cos^{2}\alpha}\Bigg{]}\cos\alpha=0, \tag{2.12}\]
where, in turn,
\[R_{1}(\xi)=(R_{\parallel}(\xi)+3R_{\perp}(\xi)),\quad R_{2}(\xi)=(R_{ \parallel}(\xi)-R_{\perp}(\xi)).\]
It can be seen from equations (2.10)-(2.12) that at the boundary values \(\alpha=0,\ \pi/2\) only two equations remain. However, for any non-boundary value of \(\alpha\) we should consider the parts in square brackets in equations (2.11)-(2.12). To maintain the continuity of solutions, we must omit \(\sin\alpha\) in (2.11) and \(\cos\alpha\) in (2.12) for \(\alpha=0,\ \pi/2\).
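For concreteness, the bracketed equations can be solved numerically. The sketch below uses scipy's fsolve; the parameter values \((\sigma,\alpha,\xi)\) and the initial guess are ad hoc assumptions, so convergence to a physical root is not guaranteed and the output should be checked against the constraints \(0<v^{\prime}<v<1\), \(\xi^{\prime}>0\).

```python
# Sketch: solve the bracketed forms of (2.10)-(2.12) for (v, v', xi') at fixed
# (sigma, alpha, xi); parameters and the initial guess are assumptions.
import numpy as np
from scipy.optimize import fsolve

def R(xi):
    return 0.5 * (1 / (1 + xi) + np.arctan(np.sqrt(xi)) / np.sqrt(xi))

def R_perp(xi):
    return 1.5 / xi * (1 + (xi ** 2 - 1) * R(xi)) / (1 + xi)

def R_par(xi):
    return 3.0 / xi * ((xi + 1) * R(xi) - 1) / (1 + xi)

def R1(xi):
    return R_par(xi) + 3 * R_perp(xi)

def R2(xi):
    return R_par(xi) - R_perp(xi)

def equations(unknowns, sigma, alpha, xi):
    v, vp, xip = unknowns
    c2 = np.cos(alpha) ** 2
    eq1 = ((R1(xi) / (1 - v ** 2) + R2(xi) * c2 / (1 - v ** 2 * c2)) * v
           - (R1(xip) / (1 - vp ** 2)
              + R2(xip) * c2 / (1 - vp ** 2 * c2)) * sigma * vp)
    eq2 = (R_perp(xi) - sigma * R_perp(xip)
           + R1(xi) * v ** 2 / (1 - v ** 2)
           - sigma * R1(xip) * vp ** 2 / (1 - vp ** 2))
    eq3 = eq2 + R2(xi) / (1 - v ** 2 * c2) - sigma * R2(xip) / (1 - vp ** 2 * c2)
    return [eq1, eq2, eq3]

sol = fsolve(equations, x0=[0.7, 0.5, 0.3], args=(1.5, np.pi / 4, 0.5))
print(dict(zip(["v", "v_prime", "xi_prime"], sol)))
```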
## 3 Critical point
Considering the boundary cases for polar angle \(\alpha\), one finds that the transverse case (\(\alpha=\pi/2\)) gives
\[R_{\parallel}(\xi^{\prime})-R_{\perp}(\xi^{\prime})=\frac{R_{ \parallel}(\xi)-R_{\perp}(\xi)}{\sigma}. \tag{3.1}\]
Solving equation (3.1) for \(\xi^{\prime}\) always gives two roots \(\xi^{\prime}_{1}<\xi\) and \(\xi^{\prime}_{2}>\xi\), except for the case when, for \(\sigma=1\), we have \(\xi^{\prime}=\xi=\xi_{\rm crit}(\alpha=\pi/2)\), where \(\xi_{\rm crit}(\pi/2)\simeq 2.62143...\) is a solution to the equation
\[\frac{\partial R_{\parallel}(\xi)}{\partial\xi}=\frac{\partial R_{\perp}(\xi )}{\partial\xi}. \tag{3.2}\]
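Numerically, \(\xi_{\rm crit}(\pi/2)\) can be located as the stationary point of \(R_{\parallel}-R_{\perp}\). The sketch below uses a central finite difference (the step size `h` is an assumption) and a bracketing interval chosen around the value quoted in the text.

```python
# Sketch: locate xi_crit(pi/2) from d/dxi [R_par - R_perp] = 0.
import numpy as np
from scipy.optimize import brentq

def R(xi):
    return 0.5 * (1 / (1 + xi) + np.arctan(np.sqrt(xi)) / np.sqrt(xi))

def gap(xi):
    r_perp = 1.5 / xi * (1 + (xi ** 2 - 1) * R(xi)) / (1 + xi)
    r_par = 3.0 / xi * ((xi + 1) * R(xi) - 1) / (1 + xi)
    return r_par - r_perp

def dgap(xi, h=1e-6):
    # central finite difference; h is an ad hoc choice
    return (gap(xi + h) - gap(xi - h)) / (2 * h)

print(brentq(dgap, 1.0, 5.0))   # ~ 2.62143, as quoted in the text
```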
It is natural to expect that at \(\sigma=1\) the shock wave does not exist, which corresponds to the solution \(v^{\prime}=v,\;\xi^{\prime}=\xi\). Choosing one of the two roots of equation (3.1) for \(\sigma>1\), we want the solution to satisfy \(\xi^{\prime}\rightarrow\xi\) when \(\sigma\to 1\). Thus, the point \(\xi_{\rm crit}\) separates two solution spaces. If \(\xi<\xi_{\rm crit}\), then for the continuous limit \(\sigma\to 1\) we must choose the left solution \(\xi^{\prime}<\xi_{\rm crit}\), since only in this case the condition \(\xi^{\prime}\rightarrow\xi\) for \(\sigma\to 1\) is satisfied. Conversely, if \(\xi>\xi_{\rm crit}\), then we must choose the right solution \(\xi^{\prime}>\xi_{\rm crit}\) to ensure the same condition. If \(\xi=\xi_{\rm crit}\), then both solutions are possible. That is, for \(\sigma>1\) we lose the continuity of the solution \(\xi^{\prime}\), which shows that there is no adequate description of shock waves for an arbitrary anisotropy parameter \(\xi\) in the case of a fixed polar angle \(\alpha^{\prime}=\alpha\).
Therefore, for the limit \(\xi\to 0\) one finds that the left solution satisfies \(\xi^{\prime}\to 0\). However, for the case \(\xi>\xi_{\rm crit},\;\sigma>1\) we cannot pass to the isotropic limit \(\xi\to 0\), since we work in another solution space. Moreover, in the case of \(\xi>\xi_{\rm crit}\) the solutions for the velocities exhibit rarefaction shock wave behavior, i.e. \(v^{\prime}>v\). This behavior and the absence of an isotropic limit clearly contradict an adequate description of an anisotropic system. As a result, one can conclude that the critical point \(\xi_{\rm crit}\) is an upper bound on the anisotropy parameter for the considered shock wave solutions.
Solving the system of equations (2.10)-(2.12) for the polar angle \(\alpha=0\), one finds the existence of an analogous critical point \(\xi_{\rm crit}(\alpha=0)\simeq 5.47941\).
Fig. 1 shows the values \(\xi_{\rm crit}(\alpha)\) for an arbitrary polar angle \(\alpha\). The function \(\xi_{\rm crit}(\alpha)\) is monotonically decreasing, going from \(\xi_{\rm crit}(\alpha=0)\) to \(\xi_{\rm crit}(\alpha=\pi/2)\). For the generation of such shock waves we therefore have a constraint on the anisotropy parameter \(\xi\) for any polar angle \(\alpha\).
The evolution of the anisotropy parameter can be obtained by solving the equations of motion both in the boost-invariant case [15, 20] and in the non-boost-invariant one [21]. It follows from these solutions that there may be regions where \(\xi>\xi_{\rm crit}\). Moreover, with the initial condition \(\xi_{0}=0\), the anisotropy parameter reaches its maximum \(\xi_{\rm max}\) at a certain value of the proper time \(\tau=\tau^{*}\). This maximum depends on the shear viscosity to entropy density ratio \(\eta/S\). Low values of \(\eta/S\) correspond to low values of \(\xi_{\rm max}\). The formation
of shock waves is possible throughout the evolution of matter, if \(\xi_{\rm max}\leqslant\xi_{\rm crit}\). For instance, in the case of purely longitudinal expansion (with initial condition \(\xi_{0}=0\)), the maximum value of the anisotropy parameter \(\xi_{\rm max}=\xi_{\rm crit}(\pi/2)\simeq 2.62143\) corresponds to \(\eta/S\simeq 0.17179\) and \(\xi_{\rm max}=\xi_{\rm crit}(0)\simeq 5.47941\) corresponds to \(\eta/S\simeq 0.35619\).
Consider the angular dependence of the anisotropy parameter \(\xi^{\prime}(\alpha,\sigma)\). Fig. 2 shows the polar angle dependence of \(\xi^{\prime}(\alpha|\sigma)\), where we consider the domain \(\xi\in[0,\xi_{\rm crit}(\pi/2)]\). It can be seen that the system can be isotropized through the generation of shock waves. For the transverse case (\(\alpha=\pi/2\)) the anisotropy drop is larger than for the longitudinal case.
## 4 Conclusion
Solutions of shock waves were obtained in the absence of refraction of the passing flow (for the polar angles we have \(\alpha=\alpha^{\prime}\)). With a continuous change of the anisotropy parameter \(\xi\) and of \(\sigma\), the solution for \(\xi^{\prime}\) remains continuous only if we choose \(\xi\leqslant\xi_{\rm crit}\) or \(\xi\geqslant\xi_{\rm crit}\). Thus, this critical point \(\xi_{\rm crit}\) separates two solution spaces for shock waves. Choosing a solution at \(\xi\geqslant\xi_{\rm crit}\) leads to the absence of an isotropic limit \(\xi\to 0\), and the behavior of the shock wave solutions provides a rarefaction shock wave pattern. Hence it follows that the consideration of shock wave solutions is only possible for \(\xi\leqslant\xi_{\rm crit}\). Thus, the existence of shock waves is possible for small anisotropies. The connection between the evolution of the anisotropy parameter \(\xi\) and the shear viscosity to entropy density ratio \(\eta/S\) can also lead to an estimation of the values of \(\eta/S\) at which the formation of shock waves is possible. The found values of the critical point \(\xi_{\rm crit}\) show that low values of \(\eta/S\) are necessary for the existence of shock waves during the entire evolution of the system at zero initial anisotropy (\(\xi_{0}=0\)).
It was also shown that the generation of shock waves can lead to isotropization of the system since \(\xi^{\prime}\leqslant\xi\). The mechanism of isotropization in the case of transverse shock waves is stronger than for longitudinal shock waves. | Anisotropic relativistischer hydrodynamikにおける衝撃波の解について、流体の伝播経路を通過する衝撃波の屈折がない場合に検討します。この値は衝撃波の適切な記述が可能な上限です。衝撃波の解は、システムの同性化メカニズムを提供します。 |
2308.16653 | Sketches, moves and partitions: counting regions of deformations of
reflection arrangements | The collection of reflecting hyperplanes of a finite Coxeter group is called
a reflection arrangement and it appears in many subareas of combinatorics and
representation theory. We focus on the problem of counting regions of
reflection arrangements and their deformations. Inspired by the recent work of
Bernardi, we show that the notion of moves and sketches can be used to provide
a uniform and explicit bijection between regions of (the Catalan deformation
of) a reflection arrangement and certain non-nesting partitions. We then use
the exponential formula to describe a statistic on these partitions such that
distribution is given by the coefficients of the characteristic polynomial.
Finally, we consider a sub-arrangement of type C arrangement called the
threshold arrangement and its Catalan and Shi deformations. | Priyavrat Deshpande, Krishna Menon | 2023-08-31T11:51:53 | http://arxiv.org/abs/2308.16653v1 | # Sketches, moves and partitions: counting regions of deformations of reflection arrangements
###### Abstract.
The collection of reflecting hyperplanes of a finite Coxeter group is called a reflection arrangement and it appears in many subareas of combinatorics and representation theory. We focus on the problem of counting regions of reflection arrangements and their deformations. Inspired by the recent work of Bernardi, we show that the notion of moves and sketches can be used to provide a uniform and explicit bijection between regions of (the Catalan deformation of) a reflection arrangement and certain non-nesting partitions. We then use the exponential formula to describe a statistic on these partitions such that distribution is given by the coefficients of the characteristic polynomial. Finally, we consider a sub-arrangement of type C arrangement called the threshold arrangement and its Catalan and Shi deformations.
## 1. Introduction
A _hyperplane arrangement_\(\mathcal{A}\) is a finite collection of affine hyperplanes (i.e., codimension \(1\) subspaces and their translates) in \(\mathbb{R}^{n}\). A _flat_ of \(\mathcal{A}\) is a nonempty intersection of some of the hyperplanes in \(\mathcal{A}\); the ambient vector space is a flat since it is an intersection of no hyperplanes. Flats are naturally ordered by reverse set inclusion; the resulting poset is called the _intersection poset_ and is denoted by \(\operatorname{L}(\mathcal{A})\). The _rank_ of \(\mathcal{A}\) is the dimension of the span of the normal vectors to the hyperplanes. An arrangement in \(\mathbb{R}^{n}\) is called _essential_ if its rank is \(n\). A _region_ of \(\mathcal{A}\) is a connected component of \(\mathbb{R}^{n}\setminus\bigcup\mathcal{A}\). A region is said to be _bounded_ if its intersection with the subspace spanned by the normal vectors to the hyperplanes is bounded. Counting the number of regions of arrangements using diverse combinatorial methods is an active area of research.
The _characteristic polynomial_ of \(\mathcal{A}\) is defined as \(\chi_{\mathcal{A}}(t):=\sum\mu(\hat{0},x)\,t^{\dim(x)}\) where \(x\) runs over all flats in \(\operatorname{L}(\mathcal{A})\), \(\mu\) is its the Mobius function and \(\hat{0}\) corresponds to the flat \(\mathbb{R}^{n}\). Using the fact that every interval of the intersection poset of an arrangement is a geometric lattice, we have
\[\chi_{\mathcal{A}}(t)=\sum_{i=0}^{n}(-1)^{n-i}c_{i}t^{i} \tag{1}\]
where \(c_{i}\) is a non-negative integer for all \(0\leq i\leq n\)[18, Corollary 3.4]. The characteristic polynomial is a fundamental combinatorial and topological invariant of the arrangement and plays a significant role throughout the theory of hyperplane arrangements.
In this article, our focus is on the enumerative aspects of (rational) arrangements in \(\mathbb{R}^{n}\). In that direction we have the following seminal result by Zaslavsky.
**Theorem 1.1** ([20]).: _Let \(\mathcal{A}\) be an arrangement in \(\mathbb{R}^{n}\). Then the number of regions of \(\mathcal{A}\) is given by_
\[r(\mathcal{A})=(-1)^{n}\chi_{\mathcal{A}}(-1)=\sum_{i=0}^{n}c_{i}\]
We consider the _Weyl group_ of \(\Phi\), where \(\Phi\) is a finite root system.
as bounded regions of these arrangements. Bijective proofs for the number of regions of the type \(C\) Catalan arrangement have already been established in [10] and [13]. However, the proofs we present for the other arrangements seem to be new.
The idea used for the bijections is fairly simple but effective. This was used by Bernardi in [6, Section 8] to obtain bijections for the regions of several deformations of the braid arrangement. This idea, which we call 'sketches and moves', is to consider an arrangement \(\mathcal{B}\) whose regions we wish to count as a sub-arrangement of an arrangement \(\mathcal{A}\). This is done in such a way that the regions of \(\mathcal{A}\) are well-understood and are usually total orders on certain symbols. These total orders are what we call _sketches_. Since \(\mathcal{B}\subseteq\mathcal{A}\), the regions of \(\mathcal{B}\) partition the regions of \(\mathcal{A}\) and hence define an equivalence on sketches. We define operations called _moves_ on sketches to describe the equivalence classes. In regions of \(\mathcal{A}\), moves correspond to crossing hyperplanes in \(\mathcal{A}\setminus\mathcal{B}\).
Apart from Bernardi's results, the results in [4] and [15] can also be viewed as applications of the sketches and moves idea to count regions of hyperplane arrangements.
When studying an arrangement, another interesting question is whether the coefficients of its characteristic polynomial can be combinatorially interpreted. By Theorem 1.1, we know that the sum of the absolute values of the coefficients is the number of regions. Hence, one could ask if there is a statistic on the regions whose distribution is given by the coefficients of the characteristic polynomial. The characteristic polynomial of the braid arrangement in \(\mathbb{R}^{n}\) is \(t(t-1)\cdots(t-n+1)\)[18, Corollary 2.2]. Hence, the coefficients are the Stirling numbers of the first kind. Consequently, the distribution of the statistic 'number of cycles' on the set of permutations of \([n]\) (which correspond to the regions of the arrangement) is given by the coefficients of the characteristic polynomial.
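For small \(n\), both observations are easy to check by direct computation. The following is a minimal Python sketch (the encoding and helper names are ours and purely illustrative): it expands \(t(t-1)\cdots(t-n+1)\), verifies Zaslavsky's count, and compares the coefficients with the distribution of the cycle statistic.

```python
from collections import Counter
from itertools import permutations
from math import factorial

def braid_char_poly(n):
    """Coefficients, lowest degree first, of t(t-1)...(t-n+1)."""
    coeffs = [0, 1]                      # the polynomial t
    for k in range(1, n):                # multiply by (t - k)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c              # t * (c t^i)
            new[i] -= k * c              # -k * (c t^i)
        coeffs = new
    return coeffs

def cycle_count(perm):
    """Number of cycles of a permutation of {0, ..., n-1}."""
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

n = 5
coeffs = braid_char_poly(n)
# Theorem 1.1: the number of regions is the sum of |coefficients|.
assert sum(abs(c) for c in coeffs) == factorial(n)
# The cycle statistic is distributed like the coefficients.
dist = Counter(cycle_count(p) for p in permutations(range(n)))
assert all(abs(coeffs[k]) == dist[k] for k in range(1, n + 1))
```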
The paper is structured as follows: In Section 2, we describe the sketches and moves idea mentioned above. We also use it to study the regions of some simple arrangements in Section 3. In Section 4, we reprove the results in [13] about the type \(C\) Catalan arrangement with a modification inspired by [2]. We then use the sketches and moves idea in Section 5 to obtain bijections for the regions of the Catalan arrangements of other types. In Section 6, we describe statistics on the regions of the arrangements we have studied whose distribution is given by the corresponding characteristic polynomials. Finally, in Section 7, we use similar techniques to study an interesting arrangement called the threshold arrangement as well as some of its deformations.
## 2. Sketches, moves and trees: a quick overview of Bernardi's bijection
In his paper [6], Bernardi describes a method to count the regions of any deformation of the braid arrangement using certain objects called _boxed trees_. He also obtains explicit bijections with certain trees for several deformations. The general strategy to establish the bijection is to consider an arrangement \(\mathcal{B}\) whose regions we wish to count as a sub-arrangement of an arrangement \(\mathcal{A}\) whose regions are well-understood. The regions of \(\mathcal{B}\) then define an equivalence on the regions of \(\mathcal{A}\). This is done by declaring two regions of \(\mathcal{A}\) to be equivalent if they lie inside the same region of \(\mathcal{B}\). Now counting the number of regions of \(\mathcal{B}\) is the same as counting the number of equivalence classes of this equivalence on the regions of \(\mathcal{A}\). This is usually done by choosing a canonical representative for each equivalence class, which also gives a bijection between the regions of \(\mathcal{B}\) and certain regions of \(\mathcal{A}\).
In particular, a (transitive) deformation of the braid arrangement is a sub-arrangement of the (extended or) \(m\)-Catalan arrangement (for some large \(m\)) in \(\mathbb{R}^{n}\), whose hyperplanes are
\[\{x_{i}-x_{j}=k\mid 1\leq i<j\leq n,k\in[-m,m]\}.\]
The regions of these arrangements are known to correspond to labeled \((m+1)\)-ary trees with \(n\) nodes (see [6, Section 8.1]). Using the idea mentioned above, one can show that the regions of a deformation correspond to certain trees. We should mention that while Bernardi obtains direct combinatorial arguments to describe this bijection for some transitive deformations (see [6, Section 8.2]), the proof for the general bijection uses much stronger results (see [6, Section 8.3]).
We now come back to the general strategy, which we aim to generalize in order to apply it to deformations of other types. It is clear that any two equivalent regions of \(\mathcal{A}\) have to be on the same side of each hyperplane of \(\mathcal{B}\). However, it turns out that this equivalence is the transitive closure of a simpler relation. This follows from the fact that one can reach a region in an arrangement from another by crossing exactly one hyperplane at a time with respect to which the regions lie on opposite sides. We now prove this result, for which we require the following definition.
**Definition 2.1**.: _Let \(R\) be a region of an arrangement \(\mathcal{A}\). A determining set of \(R\) is a sub-arrangement \(\mathcal{D}\subseteq\mathcal{A}\) such that the region of the arrangement \(\mathcal{D}\) containing \(R\), denoted \(R_{\mathcal{D}}\), is equal to \(R\)._
Note that a region of \(\mathcal{A}\) always has the entire arrangement \(\mathcal{A}\) as a determining set. Also, if a region \(R^{\prime}\) is on the same side as a region \(R\) for each hyperplane in a determining set of \(R\), then we must have \(R=R^{\prime}\).
Before going forward, we explicitly describe regions of an arrangement. First note that any hyperplane \(H\) in \(\mathbb{R}^{n}\) is a set of the form
\[\{\mathbf{x}\in\mathbb{R}^{n}\mid P_{H}(\mathbf{x})=0\}\]
Figure 1. Bold lines form \(\mathcal{B}\) and the dotted lines form \(\mathcal{A}\setminus\mathcal{B}\). Equivalent \(\mathcal{A}\) regions can be connected by changing one \(\mathcal{A}\setminus\mathcal{B}\) inequality at a time.
where \(P_{H}(\mathbf{x})=a_{1}x_{1}+a_{2}x_{2}+\cdots+a_{n}x_{n}+c\) for some constants \(a_{1},\ldots,a_{n},c\in\mathbb{R}\). Also, the regions of an arrangement \(\mathcal{A}\) are precisely the non-empty intersections of sets of the form
\[\{\mathbf{x}\in\mathbb{R}^{n}\mid P_{H}(\mathbf{x})>0\}\text{ or }\{\mathbf{x}\in\mathbb{R}^{n} \mid P_{H}(\mathbf{x})<0\}\]
where we have one set for each \(H\in\mathcal{A}\). Hence, crossing exactly one hyperplane \(H\) in an arrangement corresponds to changing the inequality chosen for \(H\) in this description of the region.
**Theorem 2.2**.: _If \(\mathcal{D}\) is a minimal determining set of a region \(R\) of an arrangement \(\mathcal{A}\), then changing the inequality in the definition of \(R\) of exactly one \(H\in\mathcal{D}\), and keeping all other inequalities of hyperplanes in \(\mathcal{A}\) the same, describes a non-empty region of \(\mathcal{A}\)._
Before proving this, we will see how it proves the fact mentioned above. Start with two distinct regions \(R\) and \(R^{\prime}\) of an arrangement \(\mathcal{A}\). We want to get from \(R\) to \(R^{\prime}\) by crossing exactly one hyperplane at a time with respect to which the regions lie on opposite sides.
1. Let \(\mathcal{D}\) be a minimal determining set of \(R\).
2. Since \(R\neq R^{\prime}\) there is some \(H\in\mathcal{D}\) for which \(R^{\prime}\) is on the opposite side as \(R\).
3. Change the inequality corresponding to \(H\) in \(R\), call this new region \(R^{\prime\prime}\).
4. The number of hyperplanes in \(\mathcal{A}\) for which \(R^{\prime\prime}\) and \(R^{\prime}\) lie on opposite sides is less than that for \(R\) and \(R^{\prime}\).
5. Repeat this process to get to \(R^{\prime}\) by changing one inequality at a time.
Proof of Theorem 2.2.: Let \(H\in\mathcal{D}\). Since \(\mathcal{D}\) is a minimal determining set, \(\mathcal{E}=\mathcal{D}\setminus\{H\}\) is not a determining set. So \(R\) is strictly contained in \(R_{\mathcal{E}}\). This means that the hyperplane \(H\) intersects \(R_{\mathcal{E}}\) and splits it into two open convex sets, one of which is \(R\).
So we can choose a point \(p\in H\) that lies inside \(R_{\mathcal{E}}\) and an \(n\)-ball centered at \(p\) that does not touch any other hyperplanes of \(\mathcal{A}\) (since \(\mathcal{A}\) is finite). One half of the ball lies in \(R\) and the other half lies in a region \(R^{\prime}\) of \(\mathcal{A}\). Since \(R^{\prime}\) can be reached from \(R\) by just crossing the hyperplane \(H\), we get the required result.
To sum up, we start with an arrangement \(\mathcal{B}\subseteq\mathcal{A}\). We know the regions of \(\mathcal{A}\) and usually represent them by combinatorial objects we call 'sketches'. We then define 'moves' on these sketches that correspond to changing exactly one inequality of a hyperplane in \(\mathcal{A}\setminus\mathcal{B}\). We define sketches to be equivalent if one can be obtained from another through a series of moves. We then count the number of equivalence classes to obtain the number of regions of \(\mathcal{B}\). Before using this method to study the Catalan arrangements of various types, we first look at some simpler arrangements.
## 3. Counting regions of reflection arrangements
In this section, as a warmup exercise, we illustrate the 'sketches and moves' idea by studying sub-arrangements of the type \(C\) arrangement. Hence, in the spirit of Bernardi [6], we will define certain sketches corresponding to the regions of the type \(C\) arrangement and, for any sub-arrangement, choose a canonical sketch from each region.
### The type C arrangement
This arrangement in \(\mathbb{R}^{n}\) is the set of reflecting hyperplanes of the root system \(C_{n}\). The defining equations of hyperplanes are
\[2x_{i} =0\] \[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). Though we could write \(x_{i}=0\) for the first type of hyperplanes, we think of them as \(x_{i}+x_{i}=0\) to define sketches.
We can write the hyperplanes of the type \(C\) arrangement as follows:
\[x_{i} =x_{j},\qquad 1\leq i<j\leq n\] \[x_{i} =-x_{j},\quad i,j\in[n].\]
Hence, any region of the arrangement is given by a _valid_ total order on
\[x_{1},\dots,x_{n},-x_{1},\dots,-x_{n}.\]
A total order is said to be valid if there is some point in \(\mathbb{R}^{n}\) that satisfies it. We will represent \(x_{i}\) by \(\overset{+}{i}\) and \(-x_{i}\) by \(\overset{-}{i}\) for all \(i\in[n]\).
**Example 3.1**.: _The region \(-x_{2}<x_{3}<x_{1}<-x_{1}<-x_{3}<x_{2}\) is represented as \(\overset{-}{2}\overset{+}{3}\overset{+}{1}\overset{-}{1}\overset{-}{3} \overset{+}{2}\)._
It can be shown that words of the form
\[\overset{w_{1}}{i_{1}}\overset{w_{2}}{i_{2}}\quad\cdots\overset{w_{n}}{i_{n} }\overset{-w_{n}}{i_{n}}\quad\cdots\overset{-w_{2}}{i_{2}}\overset{-w_{1}}{i_{ 1}}\]
where \(\{i_{1},\dots,i_{n}\}=[n]\) are the ones that correspond to regions. Such orders are the only ones that can correspond to regions since negatives reverse order. Also, choosing \(n\) distinct negative numbers, it is easy to construct a point satisfying the inequalities specified by such a word. Hence the number of regions of the type \(C\) arrangement is \(2^{n}n!\). We will call such words _sketches_ (which are basically signed permutations). We will draw a line after the first \(n\) symbols to denote the reflection and call the part of the sketch before the line its first half and similarly define the second half.
**Example 3.2**.: \(\overset{+}{3}\overset{-}{1}\overset{-}{2}\overset{+}{4}\overset{-}{4} \overset{+}{2}\overset{+}{1}\overset{-}{3}\) _is a sketch._
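The witness point mentioned above ('choosing \(n\) distinct negative numbers') can be made explicit. The following is a minimal Python sketch (the encoding is ours): it assigns the values \(-n,\dots,-1\) along the first half of each sketch, reads back the region, and confirms the count \(2^{n}n!\) for \(n=3\).

```python
from itertools import permutations, product

def witness(first_half):
    """Realize the sketch with first half (w_1 i_1)...(w_n i_n) by
    setting w_k * x_{i_k} = -(n - k), i.e. n distinct negative values."""
    n = len(first_half)
    x = [0] * (n + 1)                          # 1-indexed coordinates
    for k, (w, i) in enumerate(first_half):
        x[i] = w * (-(n - k))
    return x

def region_of(x):
    """Sort the 2n symbols (sign, index) by the value sign * x_index."""
    n = len(x) - 1
    syms = [(s, i) for s in (+1, -1) for i in range(1, n + 1)]
    return sorted(syms, key=lambda si: si[0] * x[si[1]])

n, count = 3, 0
for perm in permutations(range(1, n + 1)):
    for signs in product([+1, -1], repeat=n):
        first_half = list(zip(signs, perm))
        word = region_of(witness(first_half))
        assert word[:n] == first_half          # prescribed first half
        assert word[n:] == [(-w, i) for (w, i) in reversed(first_half)]
        count += 1
print(count)                                   # 2^3 * 3! = 48
```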
We now study some sub-arrangements of the type \(C\) arrangement. For each such arrangement, we will define the moves that we can apply to the sketches (which represent changing exactly one inequality corresponding to a hyperplane not in the arrangement) and then choose a canonical representative from each equivalence class. By Theorem 2.2, this gives a bijection between these canonical sketches and the regions of the sub-arrangement.
### The Boolean arrangement
One of the first examples one encounters when studying hyperplane arrangements is the Boolean arrangement. The Boolean arrangement in \(\mathbb{R}^{n}\) has hyperplanes \(x_{i}=0\) for all \(i\in[n]\). It is fairly straightforward to see that the number of regions is \(2^{n}\). We will do this using the idea of moves on sketches.
The hyperplanes missing from the type \(C\) arrangement in the Boolean arrangement are
\[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). Hence, the Boolean moves are as follows:
1. Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{j}\) as well as \(\overset{+}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
2. Swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
The first kind of move corresponds to changing the inequality corresponding to the hyperplane \(x_{i}+x_{j}=0\) and keeping all the other inequalities the same. Similarly, the second kind of move corresponds to changing only the inequality corresponding to \(x_{i}-x_{j}=0\).
**Example 3.3**.: _We can use a series of Boolean moves on a sketch as follows:_
\[\overset{-}{4}\overset{+}{1}\overset{+}{2}\overset{-}{3}\overset{+}{3}\overset{-}{2}\overset{-}{1}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{+}{2}\overset{+}{1}\overset{-}{3}\overset{+}{3}\overset{-}{1}\overset{-}{2}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{+}{2}\overset{-}{3}\overset{+}{1}\overset{-}{1}\overset{+}{3}\overset{-}{2}\overset{+}{4}\longrightarrow\overset{-}{4}\overset{-}{3}\overset{+}{2}\overset{+}{1}\overset{-}{1}\overset{-}{2}\overset{+}{3}\overset{+}{4}\]
It can be shown that for any sketch, we can use Boolean moves to convert it to a sketch where the order of absolute values in the second half is \(1,2,\ldots,n\) (since adjacent transpositions generate the symmetric group). Also, since the signs of the numbers in the second half do not change, there is exactly one such sketch in each equivalence class. Hence the number of Boolean regions is the number of ways of assigning signs to the numbers \(1,2,\ldots,n\), which is \(2^{n}\).
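Computationally, the invariant behind this argument is the tuple of signs carried by \(1,\dots,n\) in the second half, which no Boolean move can change. A minimal Python sketch (encoding ours) counting the distinct invariants:

```python
from itertools import permutations, product

def boolean_key(first_half):
    """The sign carried by each of 1..n in the second half; Boolean
    moves permute letters but never change these signs."""
    second_half = [(-w, i) for (w, i) in reversed(first_half)]
    return tuple(w for (w, i) in sorted(second_half, key=lambda wi: wi[1]))

n = 4
keys = set()
for perm in permutations(range(1, n + 1)):
    for signs in product([+1, -1], repeat=n):
        keys.add(boolean_key(list(zip(signs, perm))))
print(len(keys))   # 2^4 = 16 Boolean regions
```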
### The type D arrangement
The type \(D\) arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[x_{i}+x_{j} =0\] \[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). The hyperplanes missing from the type \(C\) arrangement are
\[2x_{i}=0\]
for all \(i\in[n]\). Hence a type \(D\) move, which we call a \(D\) move, is swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
**Example 3.4**.: \(\overset{+}{4}\overset{+}{1}\overset{-}{3}\overset{+}{2}\overset{-}{2}\overset{+}{3}\overset{-}{1}\overset{-}{4}\ \xrightarrow{D\text{ move}}\ \overset{+}{4}\overset{+}{1}\overset{-}{3}\overset{-}{2}\overset{+}{2}\overset{+}{3}\overset{-}{1}\overset{-}{4}\)
In a sketch the only such pair is the last term of the first half and the first term of the second half. Hence \(D\) moves actually define an involution on the sketches. Hence the number of regions of the type \(D\) arrangement is \(2^{n-1}n!\). We could also choose a canonical sketch in each type \(D\) region to be the one where the first term of the second half is positive.
### The braid arrangement
The braid arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[x_{i}-x_{j} =0\]
for \(1\leq i<j\leq n\). The hyperplanes missing from the type \(C\) arrangement are
\[2x_{i} =0\] \[x_{i}+x_{j} =0\]
for all \(1\leq i<j\leq n\). Hence the braid moves are as follows:
1. (\(D\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
2. Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{j}\) as well as \(\overset{+}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
**Example 3.5**.: _We can use a series of braid moves on a sketch as follows:_
\[\overset{-}{2}\overset{+}{3}\overset{+}{1}\overset{-}{1}\overset{-}{3}\overset{+}{2}\longrightarrow\overset{+}{3}\overset{-}{2}\overset{+}{1}\overset{-}{1}\overset{+}{2}\overset{-}{3}\longrightarrow\overset{+}{3}\overset{-}{2}\overset{-}{1}\overset{+}{1}\overset{+}{2}\overset{-}{3}\]

Since the regions of the braid arrangement correspond to the \(n!\) total orders on \(x_{1},\dots,x_{n}\), each equivalence class under braid moves consists of the sketches inducing a fixed order on \(x_{1},\dots,x_{n}\). Hence the number of regions of the braid arrangement is \(n!\).

## 4. The type C Catalan arrangement

Fix \(n\geq 1\). The type \(C\) Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes

\[2X_{i} =-1,0,1\] \[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]

for all \(1\leq i<j\leq n\). We study the arrangement obtained by performing the translation \(X_{i}=x_{i}+\frac{1}{2}\) for all \(i\in[n]\). The translated arrangement, which we call \(\mathcal{C}_{n}\), has hyperplanes

\[2x_{i} =-2,-1,0\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]

for all \(1\leq i<j\leq n\). The regions of \(\mathcal{C}_{n}\) are given by valid total orders on

\[\{x_{i}+s\mid i\in[n],\ s\in\{0,1\}\}\cup\{-x_{i}-s\mid i\in[n],\ s\in\{0,1\}\}.\]
Such orders will be represented by using the symbol \(\alpha_{i}^{(s)}\) for \(x_{i}+s\) and \(\alpha_{-i}^{(-s)}\) for \(-x_{i}-s\) for all \(i\in[n]\) and \(s\in\{0,1\}\). Let \(C(n)\) be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in\{0,1\}\}\cup\{\alpha_{i}^{(s)}\mid-i\in[ n],\ s\in\{-1,0\}\}.\]
Hence, we use orders on the letters of \(C(n)\) to represent regions of \(\mathcal{C}_{n}\).
**Example 4.1**.: _The total order_
\[x_{1}<-x_{2}-1<x_{1}+1<x_{2}<-x_{2}<-x_{1}-1<x_{2}+1<-x_{1}\]
_is represented as \(\alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\)._
Considering \(-x_{i}\) as \(x_{-i}\), the letter \(\alpha_{i}^{(s)}\) represents \(x_{i}+s\) for any \(\alpha_{i}^{(s)}\in C(n)\). For any \(\alpha_{i}^{(s)}\in C(n)\), we use \(\overline{\alpha_{i}^{(s)}}\) to represent the letter \(\alpha_{-i}^{(-s)}\), which we call the _conjugate_ of \(\alpha_{i}^{(s)}\).
**Definition 4.2**.: _A symmetric sketch is an order on the letters in \(C(n)\) such that the following hold for any \(\alpha_{i}^{(s)},\alpha_{j}^{(t)}\in C(n)\):_
1. _If_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_, then_ \(\overline{\alpha_{j}^{(t)}}\) _appears before_ \(\overline{\alpha_{i}^{(s)}}\)_._
2. _If_ \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{j}^{(t-1)}\)_, then_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_._
3. \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{i}^{(s)}\)_._
**Proposition 4.3**.: _An order on the letters of \(C(n)\) corresponds to a region of \(\mathcal{C}_{n}\) if and only if it is a symmetric sketch._
Proof.: The idea of the proof is the same as that of [2, Lemma 5.2]. It is clear that any order that corresponds to a region must satisfy the properties in Definition 4.2 and hence be a symmetric sketch. For the converse, we show that there is a point in \(\mathbb{R}^{n}\) satisfying the inequalities given by a symmetric sketch.
We prove this using induction on \(n\), the case \(n=1\) being clear. Let \(n\geq 2\) and \(w\) be a symmetric sketch. Without loss of generality, we can assume that the first letter of \(w\) is \(\alpha_{n}^{(0)}\). Deleting the letters with subscript \(n\) and \(-n\) from \(w\) gives a symmetric sketch \(w^{\prime}\) in the letters \(C(n-1)\). Using the induction hypothesis, we can choose a point \(\mathbf{x}^{\prime}\in\mathbb{R}^{n-1}\) satisfying the inequalities given by \(w^{\prime}\). Suppose the letter before \(\alpha_{n}^{(1)}\) in \(w\) is \(\alpha_{i}^{(s)}\) and the letter after it is \(\alpha_{j}^{(t)}\). We choose \(x_{n}\neq-1\) such that \(x_{i}^{\prime}+s<x_{n}+1<x_{j}^{\prime}+t\), in such a way that \(x_{n}+1\) is also in the correct position with respect to \(0\) as specified by \(w\). This is possible since \(\mathbf{x}^{\prime}\) satisfies \(w^{\prime}\).
We show that \((x_{1}^{\prime},\ldots,x_{n-1}^{\prime},x_{n})\) satisfies the inequalities given by \(w\). We only have to check that \(x_{n}\) and \((x_{n}+1)\) are in the correct relative position with respect to the other letters since property (1) of Definition 4.2 will then show that \(-x_{n}\) and \(-x_{n}-1\) are also in the correct relative position. By the choice of \(x_{n}\), we see that \(x_{n}+1\) is in the correct position. We have to show that \(x_{n}\) is less than \(\pm x_{i^{\prime}}^{\prime}\) and \(\pm(x_{i^{\prime}}^{\prime}+1)\) for all \(i^{\prime}\in[n-1]\). If \(x_{n}>x_{1}^{\prime}\), then \(x_{n}+1>x_{1}^{\prime}+1\) and since \(x_{n}+1\) satisfies the inequalities specified by \(w\), \(\alpha_{1}^{(1)}\) must be before \(\alpha_{n}^{(1)}\) in \(w\). But by property (2) of Definition 4.2, this means that \(\alpha_{1}^{(0)}\) must be before \(\alpha_{n}^{(0)}\) in \(w\), which is a contradiction. The same logic can be used to show that \(x_{n}\) satisfies the other inequalities given by \(w\).
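Definition 4.2 is also straightforward to check mechanically. The following is a minimal Python sketch (letters encoded, by our own convention, as pairs \((i,s)\) standing for \(\alpha_{i}^{(s)}\)); it tests the three properties and accepts the order of Example 4.1.

```python
def is_symmetric_sketch(word, n):
    """Check Definition 4.2; letters are pairs (i, s) for alpha_i^(s)."""
    letters = ({(i, s) for i in range(1, n + 1) for s in (0, 1)} |
               {(-i, s) for i in range(1, n + 1) for s in (-1, 0)})
    if len(word) != 4 * n or set(word) != letters:
        return False
    pos = {a: p for p, a in enumerate(word)}
    conj = lambda a: (-a[0], -a[1])
    up = lambda a: (a[0], a[1] + 1)
    for a in letters:
        # (3): alpha_i^(s-1) appears before alpha_i^(s)
        if up(a) in letters and pos[a] >= pos[up(a)]:
            return False
        for b in letters:
            # (1): a before b forces conj(b) before conj(a)
            if pos[a] < pos[b] and pos[conj(b)] >= pos[conj(a)]:
                return False
            # (2): shifting both letters up must preserve the order
            if (up(a) in letters and up(b) in letters
                    and pos[a] < pos[b] and pos[up(a)] >= pos[up(b)]):
                return False
    return True

# the order of Example 4.1 (n = 2) is a symmetric sketch
w = [(1, 0), (-2, -1), (1, 1), (2, 0), (-2, 0), (-1, -1), (2, 1), (-1, 0)]
assert is_symmetric_sketch(w, 2)
```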
We now derive some properties of symmetric sketches. A symmetric sketch has \(4n\) letters, so we call the word made by the first \(2n\) letters its first half. Similarly we define its second half.
**Lemma 4.4**.: _The second half of a symmetric sketch is completely specified by its first half. In fact, it is the 'mirror' of the first half, i.e., it is the reverse of the first half with each letter replaced with its conjugate._
Proof.: For any symmetric sketch, the letter \(\alpha_{i}^{(s)}\) is in the first half if and only if the letter \(\overline{\alpha_{i}^{(s)}}\) is in the second half. This property can be proved as follows: Suppose there is a pair of conjugates in the first half of a symmetric sketch. Since conjugate pairs partition \(C(n)\), this means that there is a pair of conjugates in the second half as well. But this would contradict property (1) of a symmetric sketch in Definition 4.2.
Hence, the set of letters in the second half are the conjugates of the letters in the first half. The order in which they appear is forced by property (1) of Definition 4.2, that is, the conjugates appear in the opposite order as the corresponding letters in the first half. So if the first half of a symmetric sketch is \(a_{1}\cdots a_{2n}\) where \(a_{i}\in C(n)\) for all \(i\in[2n]\), the sketch is
\[a_{1}\quad a_{2}\quad\cdots\quad a_{2n}\quad\overline{a_{2n}}\quad\cdots\quad \overline{a_{2}}\quad\overline{a_{1}}.\]
We draw a vertical line between the \(2n^{th}\) and \((2n+1)^{th}\) letter in a symmetric sketch to indicate both the mirroring and the change in sign (note that if the \(2n^{th}\) letter is \(\alpha_{i}^{(s)}\), we have \(x_{i}+s<0<-x_{i}-s\) in the corresponding region).
**Example 4.5**.: \(\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}.\)__
A letter in \(C(n)\) is called an \(\alpha\)-_letter_ if it is of the form \(\alpha_{i}^{(0)}\) or \(\alpha_{-i}^{(-1)}\) where \(i\in[n]\). The other letters are called \(\beta\)-_letters_. The \(\beta\)-letter 'corresponding' to an \(\alpha\)-letter is the one with the same subscript. Hence, in a symmetric sketch, an \(\alpha\)-letter always appears before its corresponding \(\beta\)-letter by property (3) in Definition 4.2. The order in which the subscripts of the \(\alpha\)-letters appear is the same as the order in which the subscripts of the \(\beta\)-letters appear by property (2) of Definition 4.2. The proof of the following lemma is very similar to that of the previous lemma.
**Lemma 4.6**.: _The order in which the subscripts of the \(\alpha\)-letters in a symmetric sketch appear is of the form_
\[\begin{matrix}i_{1}&i_{2}&\cdots&i_{n}&-i_{n}&\cdots&-i_{2}&-i_{1}\end{matrix}\]
_where \(\{|i_{1}|,\ldots,|i_{n}|\}=[n]\)._
Using Lemmas 4.4 and 4.6, to specify the sketch, we only need to specify the following:
1. The \(\alpha,\beta\)-word corresponding to the first half.
2. The signed permutation given by the first \(n\)\(\alpha\)-letters.
The \(\alpha,\beta\)-word corresponding to the first half is a word of length \(2n\) in the letters \(\{\alpha,\beta\}\) such that the \(i^{th}\) letter is an \(\alpha\) if and only if the \(i^{th}\) letter of the symmetric sketch is an \(\alpha\)-letter.
There is at most one sketch corresponding to a pair of an \(\alpha,\beta\)-word and a signed permutation. This is because the signed permutation tells us, by Lemma 4.6, the order in which the subscripts of the \(\alpha\)-letters (and hence \(\beta\)-letters) appears. Using this and the \(\alpha,\beta\)-word, we can construct the first half and, by Lemma 4.4, the entire sketch.
**Example 4.7**.: _To the symmetric sketch_
\[\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}\]
_we associate the pair consisting of the following:_
1. \(\alpha,\beta\)_-word:_ \(\alpha\beta\alpha\alpha\beta\alpha\)_._
2. _Signed permutation:_ \(-3\quad 1\quad-2\)_._
_If we are given the \(\alpha,\beta\)-word and signed permutation above, the unique sketch corresponding to it is the one given above._
The next proposition characterizes the pairs of \(\alpha,\beta\)-words and signed permutations that correspond to symmetric sketches.
**Proposition 4.8**.: _A pair consisting of_
1. _an_ \(\alpha,\beta\)_-word of length_ \(2n\) _such that any prefix of the word has at least as many_ \(\alpha\)_-letters as_ \(\beta\)_-letters and_
2. _any signed permutation_
_corresponds to a symmetric sketch and all symmetric sketches correspond to such pairs._
Proof.: By property (3) of Definition 4.2, any \(\alpha,\beta\)-word corresponding to the first half of a sketch should have at least as many \(\alpha\)-letters as \(\beta\)-letters in any prefix.
We now prove that given such a pair, there is a symmetric sketch corresponding to it. If the given \(\alpha,\beta\)-word is \(l_{1}l_{2}\cdots l_{2n}\) and the given signed permutation is \(i_{1}i_{2}\cdots i_{n}\), we construct the symmetric sketch as follows:
1. Extend the \(\alpha,\beta\)-word to the one of length \(4n\) given by \[l_{1}\quad l_{2}\quad\cdots\quad l_{2n}\quad\overline{l_{2n}}\quad\cdots \quad\overline{l_{2}}\quad\overline{l_{1}}\] where \(\overline{l_{i}}=\alpha\) if and only if \(l_{i}=\beta\) for all \(i\in[2n]\).
2. Extend the signed permutation to the sequence of length \(2n\) given by \[i_{1}\quad i_{2}\quad\cdots\quad i_{n}\quad-i_{n}\quad\cdots\quad-i_{2}\quad- i_{1}.\]
3. Label the subscripts of the \(\alpha\)-letters of the extended \(\alpha,\beta\)-word in the order given by the extended signed permutation and similarly label the \(\beta\)-letters.
If we show that the word constructed is a symmetric sketch, it is clear that it will correspond to the given \(\alpha,\beta\)-word and signed permutation. We have to check that the constructed word satisfies the properties in Definition 4.2.
The way the word was constructed, we see that it is of the form
\[a_{1}\quad a_{2}\quad\cdots\quad a_{2n}\quad\overline{a_{2n}}\quad\cdots \quad\overline{a_{2}}\quad\overline{a_{1}}\]
where \(a_{i}\in C(n)\) for all \(i\in[2n]\). Since the conjugate of the \(i^{th}\)\(\alpha\) is the \((2n-i+1)^{th}\)\(\beta\) and vice-versa, the first half of the word cannot have a pair of conjugates. Hence the word has all letters of \(C(n)\). This shows that property (1) of Definition 4.2 holds. Property (2) is taken care of since, by construction, the subscripts of the \(\alpha\)-letters appear in the same order as those of the \(\beta\)-letters.
To show that property (3) holds, it suffices to show that any prefix of the word has at least as many \(\alpha\)-letters as \(\beta\)-letters. This is already true for the first half. To show that this is true for the entire word, we consider \(\alpha\) as \(+1\) and \(\beta\) as \(-1\). Hence, the condition is that any prefix has a non-negative sum. Since any prefix of size greater than \(2n\) is of the form
\[l_{1}\quad l_{2}\quad\cdots\quad l_{2n}\quad\overline{l_{2n}}\quad\cdots \quad\overline{l_{k}}\]
for some \(k\in[2n]\), the sum is \(l_{1}+\cdots+l_{k-1}\geq 0\). So property (3) holds as well and hence the constructed word is a symmetric sketch.
We use this description to count symmetric sketches.
**Lemma 4.9**.: _The number of \(\alpha,\beta\)-words of length \(2n\) having at least as many \(\alpha\)-letters as \(\beta\)-letters in any prefix is \(\binom{2n}{n}\)._
Proof.: We consider these \(\alpha,\beta\)-words as lattice paths. Using the step \(U=(1,1)\) for \(\alpha\) and the step \(D=(1,-1)\) for \(\beta\), we have to count those lattice paths with each step \(U\) or \(D\) that start at the origin, have \(2n\) steps, and never fall below the \(x\)-axis.
Using the reflection principle (for example, see [11]), we get that the number of such lattice paths that end at \((2n,2k)\) for \(k\in[0,n]\) is given by
\[\binom{2n}{n+k}-\binom{2n}{n+k+1}.\]
The (telescoping) sum over \(k\in[0,n]\) gives the required result.
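For small \(n\), the count in Lemma 4.9 is easily confirmed by brute force, as in the following minimal Python sketch (encoding ours: \(+1\) for \(\alpha\), \(-1\) for \(\beta\)).

```python
from itertools import product
from math import comb

n = 5
count = 0
for word in product([+1, -1], repeat=2 * n):
    height, ok = 0, True
    for step in word:
        height += step
        if height < 0:            # a prefix with more betas than alphas
            ok = False
            break
    if ok:
        count += 1
assert count == comb(2 * n, n)
print(count)                      # C(10, 5) = 252
```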
The above lemma and Proposition 4.8 immediately give the following.
**Theorem 4.10**.: _The number of symmetric sketches and hence regions of \(\mathcal{C}_{n}\) is_
\[2^{n}n!\binom{2n}{n}.\]
In [2], Athanasiadis obtains bijections between several classes of non-nesting partitions and regions of certain arrangements. We will mention the one for the arrangement \(\mathcal{C}_{n}\), which gives a bijection between the \(\alpha,\beta\)-words associated to symmetric sketches and certain non-nesting partitions.
**Definition 4.11**.: _A symmetric non-nesting partition is a partition of \([-2n,2n]\setminus\{0\}\) such that the following hold:_
1. _Each block is of size_ \(2\)_._
2. _If_ \(B=\{a,b\}\) _is a block, so is_ \(-B=\{-a,-b\}\)_._
3. _If_ \(\{a,b\}\) _is a block and_ \(c,d\in[-2n,2n]\setminus\{0\}\) _are such that_ \(a<c<d<b\)_, then_ \(\{c,d\}\) _is not a block._
Symmetric non-nesting partitions are usually represented using arc-diagrams. This is done by using \(4n\) dots to represent the numbers in \([-2n,2n]\setminus\{0\}\) in order and joining dots in the same block using an arc. The properties of these partitions imply that there are no nesting arcs and that the diagram is symmetric, which we represent by drawing a line after \(2n\) dots.
**Example 4.12**.: _The arc diagram associated to the symmetric non-nesting partition of \([-6,6]\setminus\{0\}\)_
\[\{-6,-3\},\{-5,-1\},\{-4,2\},\{-2,4\},\{1,5\},\{3,6\}\]
_is given in Figure 2._
Figure 2. The symmetric non-nesting partition of Example 4.12.
It can also be seen that there are exactly \(n\) pairs of blocks of the form \(\{B,-B\}\), with no block containing both a number and its negative. Also, the first \(n\) blocks, read in order of their smallest elements, do not contain a pair of the form \(\{B,-B\}\). Hence, we can label the first \(n\) blocks with a signed permutation and label the block \(-B\) with the negative of the label of \(B\) to obtain a labeling of all blocks. We call such objects _labeled symmetric non-nesting partitions_. In the arc diagram, the labeling is done by replacing the dots representing the elements in a block with its label.
We can obtain a labeled symmetric non-nesting partition from a symmetric sketch by joining the letters \(\alpha_{i}^{(0)}\) and \(\alpha_{i}^{(1)}\) and similarly \(\alpha_{-i}^{(-1)}\) and \(\alpha_{-i}^{(0)}\) with arcs and replacing each letter in the sketch with its subscript. It can be shown that this construction is a bijection between symmetric sketches and labeled symmetric non-nesting partitions. In particular, the \(\alpha,\beta\)-words associated with symmetric sketches are in bijection with symmetric non-nesting partitions.
**Example 4.13**.: _To the symmetric sketch_
\[\alpha_{3}^{(0)}\alpha_{2}^{(0)}\alpha_{-1}^{(-1)}\alpha_{3}^{(1)}\alpha_{1} ^{(0)}\alpha_{2}^{(1)}|\alpha_{-2}^{(-1)}\alpha_{-1}^{(0)}\alpha_{-3}^{(-1)} \alpha_{1}^{(1)}\alpha_{-2}^{(0)}\alpha_{-3}^{(0)}\]
_we associate the labeled symmetric non-nesting partition in Figure 3._
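This construction is mechanical; the following minimal Python sketch (letters encoded as pairs \((i,s)\), as before) recovers the labeled blocks of Example 4.12 from the sketch of Example 4.13.

```python
def sketch_to_blocks(sketch, n):
    """Letters are pairs (i, s) for alpha_i^(s); positions are the
    integers of [-2n, 2n] without 0, read left to right."""
    positions = [p for p in range(-2 * n, 2 * n + 1) if p != 0]
    where = {letter: positions[k] for k, letter in enumerate(sketch)}
    blocks = {}
    for i in range(1, n + 1):
        blocks[i] = tuple(sorted((where[(i, 0)], where[(i, 1)])))
        blocks[-i] = tuple(sorted((where[(-i, -1)], where[(-i, 0)])))
    return blocks

# the symmetric sketch of Example 4.13
sk = [(3, 0), (2, 0), (-1, -1), (3, 1), (1, 0), (2, 1),
      (-2, -1), (-1, 0), (-3, -1), (1, 1), (-2, 0), (-3, 0)]
blocks = sketch_to_blocks(sk, 3)
# the labeled blocks of the partition in Example 4.12
assert blocks[3] == (-6, -3) and blocks[-3] == (3, 6)
assert blocks[2] == (-5, -1) and blocks[-2] == (1, 5)
assert blocks[-1] == (-4, 2) and blocks[1] == (-2, 4)
```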
We now describe another way to represent the regions. We have already seen that a sketch corresponds to a pair consisting of an \(\alpha,\beta\)-word and a signed permutation. We represent the \(\alpha,\beta\)-word as a lattice path just as we did in the proof of Lemma 4.9. We specify the signed permutation by labeling the first \(n\) up-steps of the lattice path.
**Example 4.14**.: _The lattice path associated to the symmetric sketch_
\[\alpha_{-3}^{(-1)}\ \alpha_{-3}^{(0)}\ \alpha_{1}^{(0)}\ \alpha_{-2}^{(-1)}\ \alpha_{1}^{(1)}\ \alpha_{2}^{(0)}\ |\ \alpha_{-2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{2}^{(1)}\ \alpha_{-1}^{(0)}\ \alpha_{3}^{(0)}\ \alpha_{3}^{(1)}\]
_is given in Figure 4._
These representations for the regions of \(\mathcal{C}_{n}\) also allow us to determine and count which regions are bounded.
Figure 4. Lattice path associated to the symmetric sketch in Example 4.14.
Figure 3. Arc diagram associated to the symmetric sketch in Example 4.13.
**Theorem 4.15**.: _The number of bounded regions of the arrangement \(\mathcal{C}_{n}\) is_
\[2^{n}n!\binom{2n-1}{n}.\]
Proof.: First note that the arrangement \(\mathcal{C}_{n}\) has rank \(n\) and is hence essential. From the bijection defined above, it can be seen that the arc diagram associated to any region \(R\) of \(\mathcal{C}_{n}\) can be obtained by plotting a point \((x_{1},\ldots,x_{n})\in R\) on the real line. This is done by marking \(x_{i}\) and \(x_{i}+1\) on the real line using \(i\) for all \(i\in[n]\) and then joining them with an arc and similarly marking \(-x_{i}-1\) and \(-x_{i}\) using \(-i\) and joining them with an arc.
This can be used to show that a region of \(\mathcal{C}_{n}\) is bounded if and only if the arc diagram is 'interlinked'. For example, Figure 3 shows an arc diagram that is interlinked and Figure 5 shows one that is not. In terms of lattice paths, the bounded regions are those whose corresponding lattice path never touches the \(x\)-axis except at the origin.
This shows that the number of bounded regions of \(\mathcal{C}_{n}\) is \(2^{n}n!\) times the number of unlabeled lattice paths of length \(2n\) that never touch the \(x\)-axis apart from at the origin. Deleting the first step (which is necessarily an up-step) gives a bijection between such paths and those of length \(2n-1\) that never fall below the \(x\)-axis. Using the same idea as in the proof of Lemma 4.9, it can be checked that the number of such paths is \(\binom{2n-1}{n}\). This proves the required result.
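The path count used in this proof can likewise be confirmed by brute force for small \(n\) (a minimal Python sketch, encoding ours).

```python
from itertools import product
from math import comb

n = 5
count = 0
for word in product([+1, -1], repeat=2 * n):
    height, positive = 0, True
    for step in word:
        height += step
        if height <= 0:           # the path returns to the x-axis
            positive = False
            break
    if positive:
        count += 1
assert count == comb(2 * n - 1, n)
print(count)                      # C(9, 5) = 126
```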
**Remark 4.16**.: _In [13], the authors study the type \(C\) Catalan arrangement directly, i.e., without using the translation \(\mathcal{C}_{n}\) mentioned above. Hence, using the same logic, they use orders on the letters_
\[\{\alpha_{i}^{(s)}\mid i\in[-n,n]\setminus\{0\},\ s\in\{0,1\}\}\]
_to represent the regions of the type \(C\) Catalan arrangement. They claim that these orders are those such that the following hold for any \(i,j\in[-n,n]\setminus\{0\}\) and \(s\in\{0,1\}\):_
1. _If_ \(\alpha_{i}^{(0)}\) _appear before_ \(\alpha_{j}^{(0)}\)_, then_ \(\alpha_{i}^{(1)}\) _appears before_ \(\alpha_{j}^{(1)}\)_._
2. \(\alpha_{i}^{(0)}\) _appears before_ \(\alpha_{i}^{(1)}\)_._
3. _If_ \(\alpha_{i}^{(0)}\) _appears before_ \(\alpha_{j}^{(s)}\)_, then_ \(\alpha_{-j}^{(0)}\) _appears before_ \(\alpha_{-i}^{(s)}\)_._
_Though this can be shown to be true, the method used in [13] to construct a point satisfying the inequalities given by such an order does not seem to work in general. We describe their method and then exhibit a case where it does not work._
_Let \(w=w_{1}\cdots w_{4n}\) be an order satisfying the properties given above. Then construct \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) as follows: Let \(z_{0}=0\) (or pick \(z_{0}\) arbitrarily). Then define \(z_{p}\) for \(p=1,2,\ldots,4n\) in order as follows: If \(w_{p}=\alpha_{i}^{(0)}\) then set \(z_{p}=z_{p-1}+\frac{1}{2n+1}\) and \(x_{i}=z_{p}\), and if \(w_{p}=\alpha_{i}^{(1)}\) then set \(z_{p}=x_{i}+1\). Here we consider \(x_{-i}=-x_{i}\) for any \(i\in[n]\). Then \(\mathbf{x}\) satisfies the inequalities given by \(w\)._
Figure 5. Arc diagram associated to the symmetric sketch of Example 4.7.
_The following example shows that this method does not always work; in fact \(\mathbf{x}\) is not always well-defined. Consider the order \(w=\alpha_{-2}^{(0)}\alpha_{1}^{(0)}\alpha_{-2}^{(1)}\alpha_{1}^{(1)}\alpha_{-1}^ {(0)}\alpha_{2}^{(0)}\alpha_{-1}^{(1)}\alpha_{2}^{(1)}\). Following the above procedure, we would get that \(x_{1}\) is both \(\frac{2}{5}\) as well as \(-1-\frac{3}{5}\)._
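The clash is easy to reproduce computationally. The following is a minimal Python transcription of the procedure (encoding ours), with the convention \(x_{-i}=-x_{i}\) checked explicitly; it reports the two conflicting values of \(x_{1}\).

```python
from fractions import Fraction

# the order w from the remark; (i, s) stands for alpha_i^(s)
w = [(-2, 0), (1, 0), (-2, 1), (1, 1), (-1, 0), (2, 0), (-1, 1), (2, 1)]
n = 2
eps = Fraction(1, 2 * n + 1)      # 1/(2n+1) = 1/5

x = {}                            # x[i]; a negative i encodes x_{-i}
z = Fraction(0)
for (i, s) in w:
    if s == 0:
        z = z + eps
        x[i] = z
    else:                         # s == 1
        z = x[i] + 1
    # the convention x_{-i} = -x_i must hold; here it fails
    if -i in x and x[-i] != -x[i]:
        print(f"x_{abs(i)} forced to both {x[abs(i)]} and {-x[-abs(i)]}")
        break
# prints: x_1 forced to both 2/5 and -8/5
```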
### Extended type C Catalan
Fix \(m,n\geq 1\). The type \(C\)\(m\)-Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[2X_{i} =0,\pm 1,\pm 2,\ldots,\pm m\] \[X_{i}+X_{j} =0,\pm 1,\pm 2,\ldots,\pm m\] \[X_{i}-X_{j} =0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\). We will study the arrangement obtained by performing the translation \(X_{i}=x_{i}+\frac{m}{2}\) for all \(i\in[n]\). The translated arrangement, which we call \(\mathcal{C}_{n}^{(m)}\), has hyperplanes
\[2x_{i} =-2m,-2m+1,\ldots,0\] \[x_{i}+x_{j} =-2m,-2m+1,\ldots,0\] \[x_{i}-x_{j} =0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\). Note that \(\mathcal{C}_{n}=\mathcal{C}_{n}^{(1)}\). The regions of \(\mathcal{C}_{n}^{(m)}\) are given by valid total orders on
\[\{x_{i}+s\mid i\in[n],\ s\in[0,m]\}\cup\{-x_{i}-s\mid i\in[n],\ s\in[0,m]\}.\]
Just as we did for \(\mathcal{C}_{n}\), such orders will be represented by using the symbol \(\alpha_{i}^{(s)}\) for \(x_{i}+s\) and \(\alpha_{-i}^{(-s)}\) for \(-x_{i}-s\) for all \(i\in[n]\) and \(s\in[0,m]\). Let \(C^{(m)}(n)\) be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in[0,m]\}\cup\{\alpha_{i}^{(s)}\mid-i\in[n],\ s\in[-m,0]\}.\]
For any \(\alpha_{i}^{(s)}\in C^{(m)}(n)\), \(\overline{\alpha_{i}^{(s)}}\) represents \(\alpha_{-i}^{(-s)}\) and is called the conjugate of \(\alpha_{i}^{(s)}\). Letters of the form \(\alpha_{i}^{(0)}\) or \(\alpha_{-i}^{(-m)}\) for any \(i\in[n]\) are called \(\alpha\)-letters. The others are called \(\beta\)-letters.
**Definition 4.17**.: _An order on the letters in \(C^{(m)}(n)\) is called a symmetric \(m\)-sketch if the following hold for all \(\alpha_{i}^{(s)},\alpha_{j}^{(t)}\in C^{(m)}(n)\):_
1. _If_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_, then_ \(\overline{\alpha_{j}^{(t)}}\) _appears before_ \(\overline{\alpha_{i}^{(s)}}\)_._
2. _If_ \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{j}^{(t-1)}\)_, then_ \(\alpha_{i}^{(s)}\) _appears before_ \(\alpha_{j}^{(t)}\)_._
3. \(\alpha_{i}^{(s-1)}\) _appears before_ \(\alpha_{i}^{(s)}\)_._
The following result can be proved just as Proposition 4.3.
**Proposition 4.18**.: _An order on the letters in \(C^{(m)}(n)\) corresponds to a region of \(\mathcal{C}_{n}^{(m)}\) if and only if it is a symmetric \(m\)-sketch._
Similar to Lemma 4.6, it can be shown that the order in which the subscripts of the \(\alpha\)-letters appear in a symmetric \(m\)-sketch is of the form
\[i_{1}\quad i_{2}\quad\cdots\quad i_{n}\quad-i_{n}\quad\cdots\quad-i_{2}\quad-i _{1}\]
where \(\{|i_{1}|,\ldots,|i_{n}|\}=[n]\). Just as in the case of symmetric sketches, we associate an \(\alpha,\beta\)-word and signed permutation to a symmetric \(m\)-sketch which completely determines it.
**Example 4.19**.: _To the symmetric 2-sketch_
\[\alpha_{2}^{(0)}\alpha_{-1}^{(-2)}\alpha_{2}^{(1)}\alpha_{-1}^{(-1)}\alpha_{ 1}^{(0)}\alpha_{-2}^{(-2)}\mid\alpha_{2}^{(2)}\alpha_{-1}^{(0)}\alpha_{1}^{(1) }\alpha_{-2}^{(-1)}\alpha_{1}^{(2)}\alpha_{-2}^{(0)}\]
_we associate the pair consisting of the following:_
1. \(\alpha,\beta\)_-word:_ \(\alpha\alpha\beta\beta\alpha\alpha\)_._
2. _Signed permutation:_ \(2\)__\(-1\)_._
The set of \(\alpha,\beta\)-words associated to symmetric \(m\)-sketches for \(m>1\) does not seem to have a simple characterization like the one for symmetric sketches (see Proposition 4.8). However, looking at symmetric \(m\)-sketches as labeled non-nesting partitions as done in [2], we see that such objects have already been counted bijectively (see [10]).
**Definition 4.20**.: _A symmetric \(m\)-non-nesting partition is a partition of \([-(m+1)n,(m+1)n]\setminus\{0\}\) such that the following hold:_
1. _Each block is of size_ \((m+1)\)_._
2. _If_ \(B\) _is a block, so is_ \(-B\)_._
3. _If \(a,b\) are in some block \(B\) with \(a<b\), and no element of \(B\) lies strictly between \(a\) and \(b\), then no two elements \(c,d\) with \(a<c<d<b\) lie in the same block._
Just as we did for the \(m=1\) case, we can obtain a labeled symmetric \(m\)-non-nesting partition from a symmetric \(m\)-sketch by joining the letters \(\alpha_{i}^{(0)},\alpha_{i}^{(1)},\ldots,\alpha_{i}^{(m)}\) and similarly \(\alpha_{-i}^{(-m)},\alpha_{-i}^{(-m+1)},\ldots,\alpha_{-i}^{(0)}\) with arcs and labeling each such chain with the subscript of the letters being joined.
**Example 4.21**.: _To the symmetric 2-sketch in Example 4.19, we associate the labeled 2-non-nesting partition of Figure 6._
Various classes of non-nesting partitions have been counted bijectively. In terms of [10] or [2], the symmetric \(m\)-non-nesting partitions defined above are called type \(C\) partitions of size \((m+1)n\) of type \((m+1,\ldots,m+1)\), where this \(n\)-tuple records the sizes of the (nonzero) block pairs \(\{B,-B\}\). The number of such partitions is
\[\binom{(m+1)n}{n}.\]
Hence we get the following theorem.
**Theorem 4.22**.: _The number of symmetric \(m\)-sketches, which is the number of regions of \(\mathcal{C}_{n}^{(m)}\) is_
\[2^{n}n!\binom{(m+1)n}{n}.\]
## 5. Catalan deformations of other types
We will now use 'sketches and moves', as in [6], to count the regions of Catalan arrangements of other types. Depending on the context, we represent the regions of arrangements
Figure 6. A labeled 2-non-nesting partition
using sketches, arc diagrams, or lattice paths and frequently make use of the bijections identifying them. We usually use sketches to define moves and use arc diagrams and lattice paths to count regions as well as bounded regions.
### Type D Catalan
Fix \(n\geq 2\). The type \(D\) Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\) for all \(i\in[n]\), we get the arrangement \(\mathcal{D}_{n}\) with hyperplanes
\[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for \(1\leq i<j\leq n\). Figure 7 shows \(\mathcal{D}_{2}\) as a sub-arrangement of \(\mathcal{C}_{2}\). It also shows how the regions of \(\mathcal{D}_{2}\) partition the regions of \(\mathcal{C}_{2}\).
We use the idea of moves to count the regions of \(\mathcal{D}_{n}\) by considering it as a sub-arrangement of \(\mathcal{C}_{n}\). The hyperplanes from \(\mathcal{C}_{n}\) that are missing in \(\mathcal{D}_{n}\) are
\[2x_{i}=-2,-1,0\]
Figure 7. The arrangement \(\mathcal{C}_{2}\) with the hyperplanes in \(\mathcal{D}_{2}\) in bold. Two regions of \(\mathcal{C}_{2}\) are labeled with their symmetric labeled non-nesting partition.
for all \(i\in[n]\). Hence, the type \(D\) Catalan moves on symmetric sketches (regions of \(\mathcal{C}_{n}\)), which we call \(\mathcal{D}\) moves, are as follows:
1. Swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter.
2. Swapping the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters if they are adjacent, along with the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)-letters.
The first move covers the inequalities corresponding to the hyperplanes \(x_{i}+1=-x_{i}-1\) and \(x_{i}=-x_{i}\) for all \(i\in[n]\) since the only conjugates that are adjacent, by Lemma 4.4, are the \(2n^{th}\) and \((2n+1)^{th}\) letter.
The second move covers the inequalities corresponding to the hyperplanes \(x_{i}=-x_{i}-1\) (equivalently, \(x_{i}+1=-x_{i}\)) for all \(i\in[n]\). This is due to the fact that the only way \(\alpha_{i}^{(0)}\) and \(\alpha_{-i}^{(-1)}\) as well as \(\alpha_{i}^{(1)}\) and \(\alpha_{-i}^{(0)}\) can be adjacent is, by Lemma 4.6, when the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent. Also, by Lemma 4.4, the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent if and only if the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)-letters are adjacent.
**Example 5.1**.: \(A\) _series of \(\mathcal{D}\) moves applied to a symmetric sketch is given below:_
\[\alpha_{-1}^{(-1)}\alpha_{2}^{(0)}\alpha_{-2}^{(-1)}\alpha_{-1}^ {(0)} \mid\alpha_{1}^{(0)}\alpha_{2}^{(1)}\alpha_{-2}^{(0)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {2}^{(0)}\alpha_{-2}^{(-1)}\alpha_{1}^{(0)} \mid\alpha_{-1}^{(0)}\alpha_{2}^{(1)}\alpha_{-2}^{(0)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {-2}^{(-1)}\alpha_{2}^{(0)}\alpha_{1}^{(0)} \mid\alpha_{-1}^{(0)}\alpha_{-2}^{(0)}\alpha_{2}^{(1)}\alpha_{1}^{(1)}\] \[\xrightarrow{\mathcal{D}\;\text{move}}\alpha_{-1}^{(-1)}\alpha_ {-2}^{(-1)}\alpha_{2}^{(0)}\alpha_{-1}^{(0)} \mid\alpha_{1}^{(0)}\alpha_{-2}^{(0)}\alpha_{2}^{(1)}\alpha_{1}^{(1)}\]
To count the regions of \(\mathcal{D}_{n}\), we have to count the number of equivalence classes of symmetric sketches where two sketches are equivalent if one can be obtained from the other via a series of \(\mathcal{D}\) moves. In Figure 7, the two labeled regions of \(\mathcal{C}_{2}\) are adjacent and lie in the same region of \(\mathcal{D}_{2}\). They are related by swapping of the fourth and fifth letters of their sketches, which is a \(\mathcal{D}\) move.
The fact about these moves that will help with the count is that a series of \(\mathcal{D}\) moves does not change the sketch too much. Hence we can list the sketches that are \(\mathcal{D}\) equivalent to a given sketch.
First, consider the case when the \(n^{th}\)\(\alpha\)-letter of the symmetric sketch is not in the \((2n-1)^{th}\) position. In this case, the \(n^{th}\)\(\alpha\)-letter is far enough from the \(2n^{th}\) letter that a \(\mathcal{D}\) move of the first kind (swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter) will not affect the letter after the \(n^{th}\)\(\alpha\)-letter. Hence it does not change whether the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent.
Let \(w\) be a sketch where the \(n^{th}\)\(\alpha\)-letter is not in the \((2n-1)^{th}\) position. The number of sketches \(\mathcal{D}\) equivalent to \(w\) is \(4\) when the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent. They are illustrated below:
\[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\cdots\alpha_{j}^{(s)}\mid\alpha_{-j}^ {(-s)}\cdots\alpha_{-i}^{(0)}\alpha_{i}^{(1)}\cdots\] \[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\cdots\alpha_{-j}^{(-s)} \mid\alpha_{j}^{(s)}\cdots\alpha_{i}^{(0)}\alpha_{i}^{(1)}\cdots\] \[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\cdots\alpha_{j}^{(s)} \mid\alpha_{-j}^{(-s)}\cdots\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\] \[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\cdots\alpha_{-j}^{(-s)} \mid\alpha_{j}^{(s)}\cdots\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\]
The number of sketches \(\mathcal{D}\) equivalent to \(w\) is \(2\) when the \(n^{th}\) and \((n+1)^{th}\) \(\alpha\)-letters are not adjacent. They are illustrated below:
\[\cdots\alpha_{j}^{(s)}\mid\alpha_{-j}^{(-s)}\cdots\quad\cdots\alpha_{-j}^{(-s)} \mid\alpha_{j}^{(s)}\cdots\]
Notice also that the equivalent sketches also satisfy the same properties (\(n^{th}\)\(\alpha\)-letter not being in the \((2n-1)^{th}\) position and whether the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are adjacent).
In case the \(n^{th}\)\(\alpha\)-letter is in the \((2n-1)^{th}\) position of the symmetric sketch, it can be checked that it has exactly 4 equivalent sketches all of which also have the \(n^{th}\)\(\alpha\)-letter in the \((2n-1)^{th}\) position:
\[\cdots\alpha_{i}^{(0)}\alpha_{i}^{(1)}\mid\alpha_{-i}^{(-1)}\alpha_{-i}^{(0)}\cdots\]
\[\cdots\alpha_{i}^{(0)}\alpha_{-i}^{(-1)}\mid\alpha_{i}^{(1)}\alpha_{-i}^{(0)}\cdots\]
\[\cdots\alpha_{-i}^{(-1)}\alpha_{i}^{(0)}\mid\alpha_{-i}^{(0)}\alpha_{i}^{(1)}\cdots\]
\[\cdots\ \alpha_{-i}^{(-1)}\alpha_{-i}^{(0)}\mid\alpha_{i}^{(0)}\alpha_{i}^{(1)}\cdots\]
Figure 7 shows that each region of \(\mathcal{D}_{2}\) contains exactly 2 or 4 regions of \(\mathcal{C}_{2}\), as expected from the above observations.
**Theorem 5.2**.: _The number of \(\mathcal{D}\) equivalence classes on symmetric sketches and hence the number of regions of \(\mathcal{D}_{n}\) is_
\[2^{n-1}\cdot\frac{(2n-2)!}{(n-1)!}\cdot(3n-2).\]
Proof.: By the observations made above, the number of sketches equivalent to a given sketch only depends on its \(\alpha,\beta\)-word (see Proposition 4.8). So, we need to count the number of \(\alpha,\beta\)-words of length \(2n\) with any prefix having at least as many \(\alpha\)-letters as \(\beta\)-letters that are of the following types:
1. The \(n^{th}\) \(\alpha\)-letter is not in the \((2n-1)^{th}\) position and either
 (a) the letter after the \(n^{th}\) \(\alpha\)-letter is an \(\alpha\), or
 (b) the letter after the \(n^{th}\) \(\alpha\)-letter is a \(\beta\).
2. The \(n^{th}\) \(\alpha\)-letter is in the \((2n-1)^{th}\) position.
We first count the second type of \(\alpha,\beta\)-words. If the \(n^{th}\)\(\alpha\)-letter is in the \((2n-1)^{th}\) position, the first \((2n-2)\) letters have \((n-1)\)\(\alpha\)-letters and \((n-1)\)\(\beta\)-letters and hence form a ballot sequence. This means that there is no restriction on the \(2n^{th}\) letter; it can be \(\alpha\) or \(\beta\). So, the total number of such \(\alpha,\beta\)-words is
\[2\cdot\frac{1}{n}\binom{2n-2}{n-1}.\]
The numbers of \(\alpha,\beta\)-words of types 1(a) and 1(b) mentioned above are the same. This is because changing the letter after the \(n^{th}\) \(\alpha\)-letter is an involution on the set of \(\alpha,\beta\)-words of length \(2n\) with any prefix having at least as many \(\alpha\)-letters as \(\beta\)-letters. We have just counted such words that have the \(n^{th}\) \(\alpha\)-letter in the \((2n-1)^{th}\) position. Hence, using Lemma 4.9, we get that the numbers of words of types 1(a) and 1(b) are both equal to
\[\frac{1}{2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{n-1}\right].\]
Combining the observations made above, we get that the number of regions of \(\mathcal{D}_{n}\) is
\[2^{n}n!\cdot\left(\frac{1}{4}\cdot\left[\frac{2}{n}\binom{2n-2}{n-1}+\frac{1} {2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{n-1}\right]\right]+\frac{ 1}{2}\cdot\left[\frac{1}{2}\cdot\left[\binom{2n}{n}-\frac{2}{n}\binom{2n-2}{ n-1}\right]\right]\right)\]
which simplifies to the required formula.
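For small \(n\), this simplification can be double-checked numerically, as in the following minimal Python sketch (function names ours; the integer divisions are exact in the range checked).

```python
from math import comb, factorial

def regions_closed(n):
    """The closed form of Theorem 5.2."""
    return 2 ** (n - 1) * factorial(2 * n - 2) // factorial(n - 1) * (3 * n - 2)

def regions_from_proof(n):
    """The unsimplified count at the end of the proof."""
    A = 2 * comb(2 * n - 2, n - 1) // n   # words of type 2
    B = (comb(2 * n, n) - A) // 2         # words of type 1(a), equally 1(b)
    return 2 ** n * factorial(n) * ((A + B) // 4 + B // 2)

for n in range(2, 10):
    assert regions_closed(n) == regions_from_proof(n)
print([regions_closed(n) for n in (2, 3, 4)])   # [16, 336, 9600]
```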
Just as we did for \(\mathcal{C}_{n}\), we can describe and count which regions of \(\mathcal{D}_{n}\) are bounded.
**Theorem 5.3**.: _The number of bounded regions of \(\mathcal{D}_{n}\) is_
\[2^{n-1}\cdot\frac{(2n-3)!}{(n-2)!}\cdot(3n-4).\]
Proof.: For \(n\geq 2\), both \(\mathcal{C}_{n}\) and \(\mathcal{D}_{n}\) have rank \(n\). Hence, a region of \(\mathcal{D}_{n}\) is bounded exactly when all the regions of \(\mathcal{C}_{n}\) it contains are bounded.
We have already seen in Theorem 4.15 that a region of \(\mathcal{C}_{n}\) is bounded exactly when its corresponding lattice path does not touch the \(x\)-axis except at the origin. Such regions are not closed under \(\mathcal{D}\) moves. However, if we include regions whose corresponding lattice paths touch the \(x\)-axis only at the origin and \((2n,0)\), this set of regions, which we call \(S\), is closed under the action of \(\mathcal{D}\) moves because such lattice paths are closed under the action of changing the \(2n^{th}\) step. Denote by \(S_{\mathcal{D}}\) the set of equivalence classes that \(\mathcal{D}\) moves partition \(S\) into, i.e., \(S_{\mathcal{D}}\) is the set of regions of \(\mathcal{D}_{n}\) that contain regions of \(S\).
Just as in the proof of Theorem 5.2, one can check that the set \(S\) is closed under the action of changing the letter after the \(n^{th}\)\(\alpha\)-letter. Also, note that the lattice paths in \(S\) do not touch the \(x\)-axis at \((2n-2,0)\), and hence the \(n^{th}\)\(\alpha\)-letter cannot be in the \((2n-1)^{th}\) position. Using the above observations and the same method to count regions of \(\mathcal{D}_{n}\) as in the proof of Theorem 5.2, we get the number of regions in \(S_{\mathcal{D}}\) is
\[2^{n}n!\cdot\frac{3}{8}\left(\binom{2n-1}{n}+\frac{1}{n}\binom{2n-2}{n-1}\right).\]
It can also be checked that each unbounded region in \(S\) is \(\mathcal{D}\) equivalent to exactly one other region of \(S\), and this region is bounded. This is because the lattice paths corresponding to these unbounded regions touch the \(x\)-axis at \((2n,0)\). Hence, they cannot have the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters being adjacent and changing the \(2n^{th}\) letter to an \(\alpha\) gives a bounded region. Since the unbounded regions in \(S\) correspond to Dyck paths of length \((2n-2)\) (by deleting the first and last step), we get that the number of unbounded regions in \(S_{\mathcal{D}}\) is
\[2^{n}n!\cdot\frac{1}{n}\binom{2n-2}{n-1}.\]
Combining the above results, we get that the number of bounded regions of \(\mathcal{D}_{n}\) is
\[2^{n}n!\left(\frac{3}{8}\left(\binom{2n-1}{n}+\frac{1}{n}\binom{2n-2}{n-1} \right)-\frac{1}{n}\binom{2n-2}{n-1}\right).\]
This simplifies to give our required result.
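The same kind of numerical double-check applies here; exact rational arithmetic avoids any rounding (again a minimal Python sketch with names of our choosing).

```python
from fractions import Fraction
from math import comb, factorial

def bounded_closed(n):
    """The closed form of Theorem 5.3."""
    return 2 ** (n - 1) * factorial(2 * n - 3) // factorial(n - 2) * (3 * n - 4)

def bounded_from_proof(n):
    """The expression obtained at the end of the proof."""
    cat = Fraction(comb(2 * n - 2, n - 1), n)    # the Catalan number C_{n-1}
    raw = Fraction(3, 8) * (comb(2 * n - 1, n) + cat) - cat
    return 2 ** n * factorial(n) * raw

for n in range(2, 10):
    assert bounded_closed(n) == bounded_from_proof(n)
```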
As mentioned earlier, we can choose a specific sketch from each \(\mathcal{D}\) equivalence class to represent the regions of \(\mathcal{D}_{n}\). It can be checked that symmetric sketches that satisfy the following are in bijection with regions of \(\mathcal{D}_{n}\):
1. The last letter is a \(\beta\)-letter.
2. The \(n^{th}\)\(\alpha\)-letter must have a negative label if the letter following it is an \(\alpha\)-letter or the \(n^{th}\)\(\beta\)-letter.
We will call such sketches type \(D\) sketches. They will be used in Section 6 to interpret the coefficients of \(\chi_{\mathcal{D}_{n}}\). Note that the type \(D\) sketches that correspond to bounded regions of \(\mathcal{D}_{n}\) are those which, when converted to a lattice path, do not touch the \(x\)-axis apart from at the origin.
### Pointed type C Catalan
The type \(B\) and type \(BC\) Catalan arrangements we are going to consider now are not sub-arrangements of the type \(C\) Catalan arrangement. While it is possible to consider these arrangements as sub-arrangements of the type \(C\) \(2\)-Catalan arrangement (see Section 4.1), this would add many extra hyperplanes, making it difficult to define moves and count equivalence classes. Also, we do not have a simple characterization of \(\alpha,\beta\)-words associated to symmetric \(2\)-sketches, as we do for symmetric sketches (see Proposition 4.8).
We instead consider them as sub-arrangements of the arrangement \(\mathcal{P}_{n}\) in \(\mathbb{R}^{n}\) that has hyperplanes
\[x_{i} =-\frac{5}{2},-\frac{3}{2},-1,-\frac{1}{2},0,\frac{1}{2},\frac{3}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). It can be checked that the regions of \(\mathcal{P}_{n}\) are given by valid total orders on
\[\{x_{i}+s\mid i\in[n],\ s\in\{0,1\}\}\cup\{-x_{i}-s\mid i\in[n],\ s\in\{0,1\} \}\cup\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}.\]
**Remark 5.4**.: _The arrangement \(\mathcal{P}_{n}\) is the arrangement \(\mathcal{C}_{n}(\lambda)\) defined in [2, Equation (4)] with \(\lambda_{i}=2\) for all \(i\in[n]\) and \(m=2\)._
We now define sketches that represent such orders. Just as before, we represent \(x_{i}+s\) as \(\alpha_{i}^{(s)}\) and \(-x_{i}-s\) as \(\alpha_{-i}^{(-s)}\) for any \(i\in[n]\) and \(s\in\{0,1\}\). The numbers \(-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\) will be represented as \(\alpha_{-}^{(-1.5)}\), \(\alpha_{-}^{(-0.5)}\), \(\alpha_{+}^{(0.5)}\), \(\alpha_{+}^{(1.5)}\) respectively.
**Example 5.5**.: _The total order_
\[-\frac{3}{2}<x_{2}<-x_{1}-1<-\frac{1}{2}<x_{1}<x_{2}+1<-x_{2}-1<-x_{1}<\frac{ 1}{2}<x_{1}+1<-x_{2}<\frac{3}{2}\]
_is represented as \(\alpha_{-}^{(-1.5)}\ \alpha_{2}^{(0)}\ \alpha_{-1}^{(-1)}\ \alpha_{-}^{(-0.5)}\ \alpha_{1}^{(0)}\ \alpha_{2}^{(1)}\ \alpha_{-2}^{(-1)}\ \alpha_{-1}^{(0)}\ \alpha_{+}^{(0.5)}\ \alpha_{1}^{(1)}\ \alpha_{-2}^{(0)}\ \alpha_{+}^{(1.5)}\)._
Set \(B(n)\) to be the set
\[\{\alpha_{i}^{(s)}\mid i\in[n],\ s\in\{0,1\}\}\cup\{\alpha_{i}^{(s)}\mid-i\in [n],\ s\in\{-1,0\}\}\cup\{\alpha_{-}^{(-1.5)},\alpha_{-}^{(-0.5)},\alpha_{+}^{ (0.5)},\alpha_{+}^{(1.5)}\}.\]
We define _pointed symmetric sketches_ to be the words in \(B(n)\) that correspond to regions of \(\mathcal{P}_{n}\) (this terminology will become clear soon). Denote by \(\overline{\alpha_{x}^{(s)}}\) the letter \(\alpha_{-x}^{(-s)}\) for any \(\alpha_{x}^{(s)}\in B(n)\). We have the following characterization of pointed symmetric sketches:
**Proposition 5.6**.: _A word in the letters \(B(n)\) is a pointed symmetric sketch if and only if the following hold for any \(\alpha_{x}^{(s)},\alpha_{y}^{(t)}\in B(n)\):_
1. _If_ \(\alpha_{x}^{(s)}\) _appears before_ \(\alpha_{y}^{(t)}\) _then_ \(\overline{\alpha_{y}^{(t)}}\) _appears before_ \(\overline{\alpha_{x}^{(s)}}\)_._
2. _If_ \(\alpha_{x}^{(s-1)}\) _appears before_ \(\alpha_{y}^{(t-1)}\) _then_ \(\alpha_{x}^{(s)}\) _appears before_ \(\alpha_{y}^{(t)}\)_._
3. \(\alpha_{x}^{(s-1)}\) _appears before_ \(\alpha_{x}^{(s)}\)_._
4. _Each letter of_ \(B(n)\) _appears exactly once._
Just as was done in the proof of Proposition 4.3, we can inductively construct a point in \(\mathbb{R}^{n}\) satisfying the inequalities specified by a pointed sketch. Also, just as for type \(C\) sketches, it can be shown that these sketches are symmetric about the center. We also represent such
sketches using arc diagrams in a similar manner. Note that in this case we also include an arc between \(\alpha_{-}^{(-0.5)}\) and \(\alpha_{+}^{(0.5)}\).
**Example 5.7**.: _To the pointed sketch given below, we associate the arc diagram in Figure 8._
\[\alpha_{-}^{(-1.5)}\;\alpha_{2}^{(0)}\;\alpha_{-1}^{(-1)}\;\alpha_{-}^{(-0.5)}\; \alpha_{1}^{(0)}\;\alpha_{2}^{(1)}\;|\;\alpha_{-2}^{(-1)}\;\alpha_{-1}^{(0)}\; \alpha_{+}^{(0.5)}\;\alpha_{1}^{(1)}\;\alpha_{-2}^{(0)}\;\alpha_{+}^{(1.5)}\]
To a pointed symmetric sketch, we can associate a pointed \(\alpha,\beta\)-word of length \((2n+2)\) and a signed permutation as follows:
1. For the letters in the first half of the pointed sketch of the form \(\alpha_{i}^{(0)}\), \(\alpha_{-i}^{(-1)}\) or \(\alpha_{-}^{(-1.5)}\), we write \(\alpha\) and for the others we write \(\beta\) (\(\alpha\) corresponds to 'openers' in the arc diagram and \(\beta\) to 'closers'). The \(\beta\) corresponding to \(\alpha_{-}^{(-0.5)}\) is pointed to.
2. The subscripts of the first \(n\)\(\alpha\)-letters other than \(\alpha_{-}^{(-1.5)}\) give us the signed permutation.
**Example 5.8**.: _To the pointed sketch in Example 5.7, we associate the following pair:_
1. _Pointed_ \(\alpha,\beta\)_-word:_ \(\alpha\alpha\alpha\beta\alpha\beta\)_, with the first_ \(\beta\) _pointed to._
2. _Signed permutation:_ \(2\) \(-1\)_._
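For concreteness, the two-step recipe above can be implemented directly. Here is a small Python sketch using the same \((x,s)\) encoding as before (the helper name is ours); it reproduces the pair of Example 5.8 from the pointed sketch of Example 5.7.

```python
def sketch_to_pair(sketch, n):
    """Pointed symmetric sketch -> (pointed alpha,beta-word, signed perm).
    Letters are pairs (x, s); the four constants are encoded with x = +-(n+1).
    The returned pointer is the 0-indexed position of the pointed beta."""
    half = sketch[:len(sketch) // 2]
    word, perm, pointer = [], [], None
    for k, (x, s) in enumerate(half):
        if (x > 0 and s == 0) or s in (-1, -1.5):    # 'opener' -> alpha
            word.append('a')
            if abs(x) <= n and len(perm) < n:        # first n non-constant alphas
                perm.append(x)
        else:                                        # 'closer' -> beta
            word.append('b')
            if (x, s) == (-(n + 1), -0.5):           # the pointed beta
                pointer = k
    return ''.join(word), pointer, perm

# The pointed sketch of Example 5.7 (n = 2), constants with x = +-3:
sketch = [(-3, -1.5), (2, 0), (-1, -1), (-3, -0.5), (1, 0), (2, 1),
          (-2, -1), (-1, 0), (3, 0.5), (1, 1), (-2, 0), (3, 1.5)]
print(sketch_to_pair(sketch, 2))   # ('aaabab', 3, [2, -1]) as in Example 5.8
```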
As was done for symmetric sketches, we can see that the method given above to get a signed permutation does actually give a signed permutation. Also, such a pair has at most one pointed sketch associated to it. We now characterize the pointed \(\alpha,\beta\)-words and signed permutations associated to pointed sketches.
**Proposition 5.9**.: _A pair consisting of_
1. _a pointed_ \(\alpha,\beta\)_-word of length_ \((2n+2)\) _satisfying the property that in any prefix, there are at least as many_ \(\alpha\)_-letters as_ \(\beta\)_-letters and that the number of_ \(\alpha\)_-letters before the pointed_ \(\beta\) _is_ \((n+1)\)_, and_
2. _any signed permutation_
_corresponds to a pointed symmetric sketch and all pointed sketches correspond to such pairs._
Proof.: Most of the proof is the same as that for type \(C\) sketches. The main difference is pointing to the \(\beta\)-letter corresponding to \(\alpha_{-}^{(-0.5)}\). The property we have to take care of is that there is no nesting in the arc joining \(\alpha_{-}^{(-0.5)}\) to \(\alpha_{+}^{(0.5)}\). This is the same as specifying when an arc drawn from a \(\beta\)-letter in the first half to its mirror image in the second half does not cause any nesting.
Figure 8. Arc diagram associated to the pointed symmetric sketch in Example 5.7.
Denote by \(N_{\alpha,b}\) the number of \(\alpha\)-letters before the \(\beta\) under consideration, \(N_{\alpha,a}\) the number of \(\alpha\)-letters in the first half after the \(\beta\) and similarly define \(N_{\beta,b}\) and \(N_{\beta,a}\). The condition that we do not want an arc inside the one joining the \(\beta\) to its mirror is given by
\[N_{\alpha,b}\geq N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
This is because of the symmetry of the arc diagram and the fact that we want any \(\beta\)-letter between the pointed \(\beta\) and its mirror to have its corresponding \(\alpha\) before the pointed \(\beta\). Similarly, the condition that we do not want the arc joining the \(\beta\) to its mirror to be contained in any arc is given by
\[N_{\alpha,b}\leq N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
This is because of the symmetry of the arc diagram and the fact that we want any \(\alpha\)-letter before the pointed \(\beta\) to have its corresponding \(\beta\) before the mirror of the pointed \(\beta\).
Combining the above observations, we get
\[N_{\alpha,b}=N_{\beta,b}+1+N_{\beta,a}+N_{\alpha,a}.\]
But this says that the number of \(\alpha\)-letters before the pointed \(\beta\) should be equal to the number of remaining letters in the first half. Since the total number of letters in the first half is \((2n+2)\), we get that the arc joining a \(\beta\) in the first half to its mirror does not cause nesting problems if and only if the number of \(\alpha\)-letters before it is \((n+1)\).
Just as we used lattice paths for symmetric sketches, we use pointed lattice paths to represent pointed symmetric sketches. The one corresponding to the sketch in Example 5.7 is given in Figure 10.
**Theorem 5.10**.: _The number of pointed symmetric sketches, which is the number of regions of \(\mathcal{P}_{n}\), is_
\[2^{n}n!\binom{2n+2}{n}.\]
Figure 10. Pointed lattice path corresponding to the pointed sketch in Example 5.7.
Figure 9. Arc from \(\beta\) to its mirror image.
Proof.: Since there is no condition on the signed permutations, we just have to count the \(\alpha,\beta\)-words of the form mentioned in Proposition 5.9. We show that these words are in bijection with \(\alpha,\beta\)-words of length \((2n+2)\) that have at least \((n+2)\) \(\alpha\)-letters and in which any prefix has at least as many \(\alpha\)-letters as \(\beta\)-letters. This means that their corresponding lattice paths do not end on the \(x\)-axis. This will prove the required result since the number of such words, using Lemma 4.9 and the fact that Catalan numbers count Dyck paths, is
\[\binom{2n+2}{n+1}-\frac{1}{n+2}\binom{2n+2}{n+1}=\binom{2n+2}{n}.\]
Given a pointed \(\alpha,\beta\)-word, we replace the pointed \(\beta\)-letter with an \(\alpha\)-letter to obtain an \(\alpha,\beta\)-word of the type described above. Starting with an \(\alpha,\beta\)-word with at least \((n+2)\)\(\alpha\)-letters, changing the \((n+2)^{th}\)\(\alpha\)-letter to a \(\beta\) and pointing to it gives a pointed \(\alpha,\beta\)-word. This gives us the required bijection.
**Theorem 5.11**.: _The number of bounded regions of \(\mathcal{P}_{n}\) is_
\[2^{n}n!\binom{2n+1}{n+1}.\]
Proof.: Just as for type \(C\) regions, the region corresponding to a pointed sketch is bounded if and only if its arc diagram is interlinked. Also, the signed permutation does not play a role in determining if a region is bounded. Note that in this case, there is an arc joining a \(\beta\)-letter between the \((n+1)^{th}\) and \((n+2)^{th}\)\(\alpha\)-letter to its mirror image. If the arc diagram obtained by deleting this arc from the pointed \(\beta\)-letter is interlinked, then clearly so was the initial arc diagram. However, even if the arc diagram consists of two interlinked pieces when the arc from the pointed \(\beta\)-letter is removed (one on either side of the reflecting line), the corresponding region would still be bounded. Examining the bijection between arc diagrams and lattice paths, it can be checked that this means that pointed lattice paths corresponding to bounded regions are those that never touch the \(x\)-axis after the origin except maybe at \((2n+2,0)\).
Using the bijection mentioned in the proof of Theorem 5.10, we can see that the pointed \(\alpha,\beta\)-words corresponding to bounded regions are in bijection with \(\alpha,\beta\)-words whose lattice paths never touch the \(x\)-axis after the origin. We have already counted such paths in Theorem 4.15 and their number is
\[\binom{2n+1}{n+1}.\]
This gives the required result.
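Both counts can be confirmed by brute force for small \(n\): enumerate the \(\alpha,\beta\)-words of Proposition 5.9 together with an admissible pointer (a \(\beta\) preceded by exactly \(n+1\) \(\alpha\)-letters), and separately tally those whose lattice path stays strictly above the axis before the endpoint. The factor \(2^{n}n!\) coming from the signed permutation is left out. This is a minimal check of our own, not the authors' code.

```python
from itertools import product
from math import comb

def ballot_words(length):
    """All alpha,beta-words (alpha = +1, beta = -1) whose prefix sums stay >= 0."""
    for w in product((1, -1), repeat=length):
        run, ok = 0, True
        for step in w:
            run += step
            if run < 0:
                ok = False
                break
        if ok:
            yield w

for n in range(1, 6):
    pointed = bounded = 0
    for w in ballot_words(2 * n + 2):
        for pos in range(len(w)):
            # a pointer may sit on a beta preceded by exactly n+1 alphas
            if w[pos] == -1 and w[:pos].count(1) == n + 1:
                pointed += 1
                # bounded: path strictly above the axis except maybe at the end
                if all(sum(w[:k]) > 0 for k in range(1, 2 * n + 2)):
                    bounded += 1
    assert pointed == comb(2 * n + 2, n)        # Theorem 5.10 (per word class)
    assert bounded == comb(2 * n + 1, n + 1)    # Theorem 5.11
    print(n, pointed, bounded)
```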
### Type B Catalan
Fix \(n\geq 1\). The type \(B\) Catalan arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[X_{i} =-1,0,1\] \[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), we get the arrangement \(\mathcal{B}_{n}\) with hyperplanes
\[x_{i} =-\frac{3}{2},-\frac{1}{2},\frac{1}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). We consider \(\mathcal{B}_{n}\) as a sub-arrangement of \(\mathcal{P}_{n}\). The hyperplanes of \(\mathcal{P}_{n}\) missing from \(\mathcal{B}_{n}\) are
\[x_{i}=-\frac{5}{2},-1,0,\frac{3}{2}\]
for all \(i\in[n]\). Hence the moves on pointed sketches corresponding to changing one of the inequalities associated to these hyperplanes are as follows:
1. Corresponding to \(x_{i}=0\), \(x_{i}=-1\): Swapping the \((2n+2)^{th}\) and \((2n+3)^{th}\) letter if they are not \(\alpha_{-}^{(-0.5)}\) and \(\alpha_{+}^{(0.5)}\).
2. Corresponding to \(x_{i}=-\frac{5}{2}\), \(x_{i}=\frac{3}{2}\): Swapping the pointed \(\beta\), that is, \(\alpha_{-}^{(-0.5)}\) and a \(\beta\)-letter immediately before or after it (and making the corresponding change in the second half).
We can see that such moves change the pointed \(\alpha,\beta\)-word associated to a sketch by at most changing the last letter or changing which of the \(\beta\)-letters between the \((n+1)^{th}\) and \((n+2)^{th}\)\(\alpha\)-letter is pointed to. So if we force that the last letter of the associated \(\alpha,\beta\)-word has to be a \(\beta\)-letter and that the \(\beta\)-letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter has to be pointed to, we get a canonical sketch in each equivalence class. We will call such sketches type \(B\) sketches.
**Theorem 5.12**.: _The number of type \(B\) sketches, which is the number of regions of \(\mathcal{B}_{n}\), is_
\[2^{n}n!\binom{2n}{n}.\]
Proof.: Since there is no condition on the signed permutation, we count the \(\alpha,\beta\)-words associated to type \(B\) sketches. From Proposition 5.9, we can see that the \(\alpha,\beta\)-words we need to count are those that satisfy the following properties:
1. Length of the word is \((2n+2)\).
2. In any prefix, there are at least as many \(\alpha\)-letters as \(\beta\)-letters.
3. The letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter is a \(\beta\) (pointed \(\beta\)).
4. The last letter is a \(\beta\).
We exhibit a bijection between these words and \(\alpha,\beta\)-words of length \(2n\) that satisfy property 2. We already know, from Lemma 4.9, that the number of such words is \(\binom{2n}{n}\) and so this will prove the required result.
If the \((n+1)^{th}\)\(\alpha\)-letter is at the \((2n+1)^{th}\) position, deleting the last two letters gives us an \(\alpha,\beta\)-word of length \(2n\) with \(n\)\(\alpha\)-letters that satisfies property 2. If the \((n+1)^{th}\)\(\alpha\)-letter is not at the \((2n+1)^{th}\) position, we delete the \(\beta\)-letter after it as well as the last letter of the word. This gives us an \(\alpha,\beta\)-word of length \(2n\) with more than \(n\)\(\alpha\)-letters that satisfies property 2. The process described gives us the required bijection.
**Theorem 5.13**.: _The number of bounded regions of \(\mathcal{B}_{n}\) is_
\[2^{n}n!\binom{2n-1}{n}.\]
Proof.: Both \(\mathcal{B}_{n}\) and \(\mathcal{P}_{n}\) have rank \(n\). Hence a region of \(\mathcal{B}_{n}\) is bounded if and only if all regions of \(\mathcal{P}_{n}\) that it contains are bounded.
In the proof of Theorem 5.11 we have characterized the pointed \(\alpha,\beta\)-words associated to bounded regions of \(\mathcal{P}_{n}\). These are the pointed lattice paths of length \((2n+2)\) that satisfy the following properties (irrespective of the position of the pointed \(\beta\)):
1. The step after the \((n+1)^{th}\) up-step is a down step (for there to exist a pointed \(\beta\)).
2. The path never touches the \(x\)-axis after the origin except maybe at \((2n+2,0)\).
We noted in Theorem 5.3 that lattice paths satisfying property 2 are closed under the action of changing the letter after the \((n+1)^{th}\) up-step as well as the action of changing the last step. This shows that the regions of \(\mathcal{P}_{n}\) that lie inside a region of \(\mathcal{B}_{n}\) are either all bounded or all unbounded. Hence the number of bounded regions of \(\mathcal{B}_{n}\) is just the number of type \(B\) sketches whose corresponding lattice paths satisfy properties 1 and 2, which is
\[2^{n}n!\cdot\frac{1}{4}\cdot\left(\binom{2n+1}{n+1}+\frac{1}{n+1}\binom{2n}{n} \right).\]
This simplifies to give the required result.
### Type BC Catalan
The type \(BC\) Catalan arrangement in \(\mathbb{R}^{n}\) has hyperplanes
\[X_{i} =-1,0,1\] \[2X_{i} =-1,0,1\] \[X_{i}+X_{j} =-1,0,1\] \[X_{i}-X_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Translating this arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), we get the arrangement \(\mathcal{BC}_{n}\) with hyperplanes
\[x_{i} =-\frac{3}{2},-1,-\frac{1}{2},0,\frac{1}{2}\] \[x_{i}+x_{j} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
for all \(1\leq i<j\leq n\). Again, we consider this arrangement as a sub-arrangement of \(\mathcal{P}_{n}\). To define moves on pointed sketches, note that the hyperplanes of \(\mathcal{P}_{n}\) missing from \(\mathcal{BC}_{n}\) are
\[x_{i}=-\frac{5}{2},\frac{3}{2}\]
for all \(i\in[n]\). Hence, the moves on pointed sketches corresponding to changing the inequalities associated to these hyperplanes are of the following form: Swapping the pointed \(\beta\), that is, \(\alpha_{-}^{(-0.5)}\) and a \(\beta\)-letter immediately before or after it (and making the corresponding change in the second half).
We can see that such moves change the pointed \(\alpha,\beta\)-word associated to a sketch by at most changing which of the \(\beta\)-letters between the \((n+1)^{th}\) and \((n+2)^{th}\)\(\alpha\)-letter is pointed to. So if we force that the \(\beta\)-letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter has to be pointed
to, we get a canonical sketch in each equivalence class. We will call such sketches type \(BC\) sketches.
**Theorem 5.14**.: _The number of type \(BC\) sketches, which is the number of regions of \(\mathcal{BC}_{n}\), is_
\[2^{n-1}n!\binom{2n+2}{n+1}.\]
Proof.: Since there is no condition on the signed permutation for type \(BC\) sketches, we count the number of \(\alpha,\beta\)-words that satisfy the following properties:
1. Length of the word is \((2n+2)\).
2. In any prefix, there are at least as many \(\alpha\)-letters as \(\beta\)-letters.
3. The letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter is a \(\beta\) (pointed \(\beta\)).
Using the involution on the set of words satisfying properties 1 and 2 of changing the letter immediately after the \((n+1)^{th}\)\(\alpha\)-letter and the fact that there are \(\binom{2n+2}{n+1}\) words satisfying properties 1 and 2, we get that the number of words satisfying the required properties is
\[\frac{1}{2}\cdot\binom{2n+2}{n+1}.\]
This gives the required result.
**Theorem 5.15**.: _The number of bounded regions of \(\mathcal{BC}_{n}\) is_
\[2^{n}n!\binom{2n}{n}.\]
Proof.: The proof of this result is very similar to that of Theorem 5.13. Since type \(BC\) sketches don't have the condition that the last letter of the associated \(\alpha,\beta\)-word should be a \(\beta\)-letter, the number of bounded regions of \(\mathcal{BC}_{n}\) is
\[2^{n}n!\cdot\frac{1}{2}\cdot\left(\binom{2n+1}{n+1}+\frac{1}{n+1}\binom{2n}{n} \right).\]
This simplifies to give the required result.
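The word counts behind Theorems 5.12–5.15 can likewise be checked by enumeration for small \(n\). In the sketch below (our own code; the helper names are ours), `B` and `BC` count the canonical words of the two types, and `Bb`, `BCb` count those corresponding to bounded regions.

```python
from itertools import product
from math import comb

def ballot_words(length):
    """alpha,beta-words with every prefix having at least as many a's as b's."""
    for w in product('ab', repeat=length):
        depth = 0
        for c in w:
            depth += 1 if c == 'a' else -1
            if depth < 0:
                break
        else:
            yield w

def pos_after_alpha(w, k):
    """Index of the letter right after the k-th 'a', or None if there is none."""
    seen = 0
    for i, c in enumerate(w):
        if c == 'a':
            seen += 1
            if seen == k:
                return i + 1 if i + 1 < len(w) else None
    return None

for n in range(1, 5):
    B = Bb = BC = BCb = 0
    for w in ballot_words(2 * n + 2):
        j = pos_after_alpha(w, n + 1)
        if j is None or w[j] != 'b':
            continue                               # need a pointed beta available
        BC += 1
        if w[-1] == 'b':
            B += 1
        # bounded: lattice path strictly above the axis before the endpoint
        bounded = all(w[:k].count('a') > w[:k].count('b')
                      for k in range(1, 2 * n + 2))
        BCb += bounded
        Bb += bounded and w[-1] == 'b'
    assert BC == comb(2 * n + 2, n + 1) // 2       # Theorem 5.14
    assert B == comb(2 * n, n)                     # Theorem 5.12
    assert BCb == comb(2 * n, n)                   # Theorem 5.15
    assert Bb == comb(2 * n - 1, n)                # Theorem 5.13
    print(n, B, Bb, BC, BCb)
```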
## 6. Statistics on regions via generating functions
As mentioned in Section 1, the characteristic polynomial of an arrangement \(\mathcal{A}\) in \(\mathbb{R}^{n}\) is of the form
\[\chi_{\mathcal{A}}(t)=\sum_{i=0}^{n}(-1)^{n-i}c_{i}t^{i}\]
where \(c_{i}\) is a non-negative integer for all \(0\leq i\leq n\) and Zaslavsky's theorem tells us that
\[r(\mathcal{A}) =(-1)^{n}\chi_{\mathcal{A}}(-1)\] \[=\sum_{i=0}^{n}c_{i}.\]
In this section, we interpret the coefficients of the characteristic polynomials of the arrangements we have studied. More precisely, for each arrangement we have studied, we first define a statistic on the objects that we have seen correspond to its regions. We then show that the distribution of this statistic is given by the coefficients of the characteristic polynomial.
We do this by giving combinatorial meaning to the exponential generating functions for the characteristic polynomials of the arrangements we have studied. To obtain these generating functions, we use [18, Exercise 5.10], which we state and prove for convenience.
**Definition 6.1**.: _A sequence of arrangements \((\mathcal{A}_{1},\mathcal{A}_{2},\ldots)\) is called a Generalized Exponential Sequence of Arrangements (GESA) if_
* \(\mathcal{A}_{n}\) _is an arrangement in_ \(\mathbb{R}^{n}\) _such that every hyperplane is parallel to one of the form_ \(x_{i}=cx_{j}\) _for some_ \(c\in\mathbb{R}\)_._
* _For any_ \(k\)_-subset_ \(I\) _of_ \([n]\)_, the arrangement_ \[\mathcal{A}_{n}^{I}=\{H\in\mathcal{A}_{n}\mid H\text{ is parallel to }x_{i}=cx_{j}\text{ for some }i,j\in I\text{ and some }c\in\mathbb{R}\}\] _satisfies_ \(\operatorname{L}(\mathcal{A}_{n}^{I})\cong\operatorname{L}(\mathcal{A}_{k})\) _(isomorphic as posets)._
Note that all the arrangements we have studied are GESAs.
**Proposition 6.2**.: _Let \((\mathcal{A}_{1},\mathcal{A}_{2},\ldots)\) be a GESA, and define_
\[F(x) =\sum_{n\geq 0}(-1)^{n}r(\mathcal{A}_{n})\frac{x^{n}}{n!}\] \[G(x) =\sum_{n\geq 0}(-1)^{\operatorname{rank}(\mathcal{A}_{n})}b( \mathcal{A}_{n})\frac{x^{n}}{n!}.\]
_Then, we have_
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\frac{G(x)^{(t+1)/ 2}}{F(x)^{(t-1)/2}}.\]
Proof.: The idea of the proof is the same as that of [18, Theorem 5.17]. By Whitney's Theorem [18, Theorem 2.4], we have for all \(n\),
\[\chi_{\mathcal{A}_{n}}(t)=\sum_{\mathcal{B}\subseteq\mathcal{A}_{n},\;\bigcap \mathcal{B}\neq\phi}(-1)^{\#\mathcal{B}}t^{n-\operatorname{rank}(\mathcal{B})}.\]
To each \(\mathcal{B}\subseteq\mathcal{A}_{n}\), such that \(\bigcap\mathcal{B}\neq\phi\), we associate a graph \(G(\mathcal{B})\) on the vertex set \([n]\) where there is an edge between the vertices \(i\) and \(j\) if there is a hyperplane in \(\mathcal{B}\) parallel to a hyperplane of the form \(x_{i}=cx_{j}\) for some \(c\in\mathbb{R}\).
Using [17, Corollary 5.1.6], we get
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\exp\sum_{n \geq 1}\tilde{\chi}_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}\]
where for any \(n\) we define
\[\tilde{\chi}_{\mathcal{A}_{n}}(t)=\sum_{\begin{subarray}{c}\mathcal{B}\subseteq \mathcal{A}_{n},\;\bigcap\mathcal{B}\neq\phi\\ G(\mathcal{B})\;\text{connected}\end{subarray}}(-1)^{\#\mathcal{B}}t^{n- \operatorname{rank}(\mathcal{B})}.\]
Note that if \(G(\mathcal{B})\) is connected, then any point in \(\bigcap\mathcal{B}\) is determined by any one of its coordinates, say \(x_{1}\). This is because any path from the vertex \(1\) to a vertex \(i\) in \(G(\mathcal{B})\) can be used to determine \(x_{i}\). This shows us that \(\operatorname{rank}(\mathcal{B})\) is either \(n\) or \(n-1\). Hence,
\(\tilde{\chi}_{\mathcal{A}_{n}}(t)=c_{n}t+d_{n}\) for some \(c_{n},d_{n}\in\mathbb{Z}\). Setting
\[\exp\sum_{n\geq 1}c_{n}\frac{x^{n}}{n!} =\sum_{n\geq 0}b_{n}\frac{x^{n}}{n!}\] \[\exp\sum_{n\geq 1}d_{n}\frac{x^{n}}{n!} =\sum_{n\geq 0}a_{n}\frac{x^{n}}{n!}\]
we get
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(t)\frac{x^{n}}{n!}=\left(\sum_{n\geq 0}b_{ n}\frac{x^{n}}{n!}\right)^{t}\left(\sum_{n\geq 0}a_{n}\frac{x^{n}}{n!}\right).\]
Substituting \(t=1\) and \(t=-1\) and using Theorem 1.1, we obtain expressions for the exponential generating functions of \(\{b_{n}\}\) and \(\{c_{n}\}\) and this gives us the required result.
Before looking at the characteristic polynomials of these arrangements, we recall a few results from [17]. Suppose that \(c:\mathbb{N}\to\mathbb{N}\) is a function and for each \(n,j\in\mathbb{N}\), we define
\[c_{j}(n)=\sum_{\{B_{1},\ldots,B_{j}\}\in\Pi_{n}}c(|B_{1}|)\cdots c(|B_{j}|)\]
where \(\Pi_{n}\) is the set of partitions of \([n]\). Define for each \(n\in\mathbb{N}\),
\[h(n)=\sum_{j=0}^{n}c_{j}(n).\]
From [17, Example 5.2.2], we know that in such a situation,
\[\sum_{n,j\geq 0}c_{j}(n)t^{j}\frac{x^{n}}{n!}=\left(\sum_{n\geq 0}h(n)\frac{x^{n} }{n!}\right)^{t}.\]
Informally, we consider \(h(n)\) to be the number of "structures" that can be placed on an \(n\)-set where each structure can be uniquely broken up into a disjoint union of "connected sub-structures". Here \(c(n)\) denotes the number of connected structures on an \(n\)-set and \(c_{j}(n)\) denotes the number of structures on an \(n\)-set with exactly \(j\) connected sub-structures. We will call such structures _exponential structures_.
In fact, in most of the computations below, we will be dealing with generating functions of the form
\[\left(\sum_{n\geq 0}h(n)\frac{x^{n}}{n!}\right)^{\frac{t+1}{2}}. \tag{3}\]
We can interpret such a generating function as follows. Suppose that there are two types of connected structures, say positive and negative connected structures. Also, suppose that the number of positive connected structures on \([n]\) is the same as the number of negative ones, i.e., \(c(n)/2\). Then the coefficient of \(t^{j}\frac{x^{n}}{n!}\) in the generating function given above is the number of structures on \([n]\) that have \(j\) positive connected sub-structures.
Also, note that since the coefficients of the characteristic polynomial alternate in sign, the distribution of any appropriate statistic we define would be
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(-t)\frac{(-x)^{n}}{n!}.\]
### Reflection arrangements
Before defining statistics for the Catalan arrangements, we first do so for the reflection arrangements we studied in Section 3. As we will see, the same statistic we define for sketches (regions of the type \(C\) arrangement) works for the canonical sketches we have chosen for the other arrangements as well.
#### 6.1.1. The type \(C\) arrangement
We have seen that the regions of the type \(C\) arrangement in \(\mathbb{R}^{n}\) correspond to sketches (Section 3.1) of length \(2n\). We use the second half of the sketch to represent the regions, and call them signed permutations on \([n]\).
A statistic on signed permutations whose distribution is given by the coefficients of the characteristic polynomial is given in [9, Section 2]. We define a similar statistic. First break the signed permutation into _compartments_ using right-to-left minima as follows: Ignoring the signs, draw a line before the permutation and then repeatedly draw a line immediately following the least number after the last line drawn. This is repeated until a line is drawn at the end of the permutation. It can be checked that compartments give signed permutations an exponential structure. A _positive compartment_ of a signed permutation is one where the last term is positive.
**Example 6.3**.: _The signed permutation given by_
\[\overset{+}{3}\overset{+}{1}\overset{-}{6}\overset{-}{7}\overset{-}{5}\overset {+}{2}\overset{-}{4}\]
_is split into compartments as_
\[|\overset{+}{3}\overset{+}{1}|\overset{-}{6}\overset{-}{7}\overset{-}{5}\overset {+}{2}|\overset{-}{4}|\]
_and hence has \(3\) compartments, \(2\) of which are positive._
By the combinatorial interpretation of (3), the distribution of the statistic 'number of positive compartments' on signed permutations is given by
\[\left(\frac{1}{1-2x}\right)^{\frac{t+1}{2}}.\]
Note that for the type \(C\) arrangement, in terms of Proposition 6.2, we have
\[F(x) =\left(\frac{1}{1+2x}\right),\] \[G(x) =1.\]
Hence, we get that the distribution of the statistic 'number of positive compartments' on signed permutations is given by the coefficients of the characteristic polynomial.
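This agreement can be tested directly for small \(n\): the characteristic polynomial of the type \(C\) arrangement is \(\chi(t)=(t-1)(t-3)\cdots(t-(2n-1))\), which follows from the generating function above, and a brute-force tally of the statistic over all \(2^{n}n!\) signed permutations reproduces the absolute values of its coefficients. A minimal sketch (our code, not the authors'):

```python
from itertools import permutations, product

def positive_compartments(sp):
    """Split at successive minima of the absolute values (the lines drawn in
    Section 6.1.1) and count compartments whose last entry is positive."""
    count, k = 0, 0
    while k < len(sp):
        m = min(range(k, len(sp)), key=lambda i: abs(sp[i]))
        count += sp[m] > 0
        k = m + 1
    return count

n = 3
dist = [0] * (n + 1)
for perm in permutations(range(1, n + 1)):
    for signs in product((1, -1), repeat=n):
        dist[positive_compartments([s * v for s, v in zip(signs, perm)])] += 1

chi = [1]                     # expand (t-1)(t-3)...(t-(2n-1)), low degree first
for k in range(n):
    r = 2 * k + 1
    chi = [(chi[i - 1] if i > 0 else 0) - r * (chi[i] if i < len(chi) else 0)
           for i in range(len(chi) + 1)]

print(dist)                   # [15, 23, 9, 1] for n = 3
print([abs(c) for c in chi])  # matches the distribution
```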
For the arrangements that follow, we have described canonical sketches and hence signed permutations that correspond to regions in Section 3. For each arrangement, we will show that the distribution of the statistic 'number of positive compartments' on these canonical signed permutations is given by the characteristic polynomial.
#### 6.1.2. The Boolean arrangement
The signed permutations that correspond to the Boolean arrangement in \(\mathbb{R}^{n}\) (Section 3.2) are those that have all compartments of size \(1\), i.e., the underlying permutation is \(1\ 2\ \cdots\ n\). Just as before, it can be seen that the distribution of the statistic 'number of positive compartments' on such signed permutations is given by
\[(e^{2x})^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 using \(F(x)=e^{-2x}\) and \(G(x)=1\).
#### 6.1.3. The type \(D\) arrangement
From Section 3.3, we can see that the regions of the type \(D\) arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where the first sign is positive. Given \(i\in[n]\) and a signed permutation \(\sigma\) of \([n]\setminus\{i\}\), the signed permutation of \([n]\) obtained by appending \(\bar{i}\) to the start of \(\sigma\) has the same number of positive compartments as \(\sigma\). This shows that the distribution of the statistic on signed permutations whose first term is positive is
\[(1-x)\left(\frac{1}{1-2x}\right)^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1+x}{1+2x}\right),\] \[G(x) =1+x.\]
Note that the expression for \(G(x)\) is due to the fact that the type \(D\) arrangement in \(\mathbb{R}^{1}\) is empty.
#### 6.1.4. The braid arrangement
From Section 3.4, we get that the regions of the braid arrangement in \(\mathbb{R}^{n}\) correspond to the signed permutations on \([n]\) where all terms are positive. Hence, the number of positive compartments is just the number of compartments in the underlying permutation. Since compartments give permutations an exponential structure, the distribution of this statistic is
\[\left(\frac{1}{1-x}\right)^{t}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since \(F(x)=\frac{1}{1+x}\) and \(G(x)=1+x\).
We summarize the results of this section as follows. For any reflection arrangement \(\mathcal{A}\), we use \(\mathcal{A}\)-signed permutation to mean those described above to represent the regions of \(\mathcal{A}\).
**Theorem 6.4**.: _For any reflection arrangement \(\mathcal{A}\), the absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{A}}(t)\) is the number of \(\mathcal{A}\)-signed permutations that have \(j\) positive compartments._
### Catalan deformations
We start with defining a statistic for the extended type \(C\) Catalan arrangements. Using Proposition 6.2, we then show that the generating function for the statistic and the characteristic polynomials match.
Fix \(m\geq 1\). We define a statistic on labeled symmetric non-nesting partitions and show that its distribution is given by the characteristic polynomial. To do this, we first recall some definitions and results about the type \(A\) extended Catalan arrangement.
**Definition 6.5**.: _An \(m\)-non-nesting partition of size \(n\) is a partition of \([(m+1)n]\) such that the following hold:_
1. _Each block is of size_ \((m+1)\)_._
2. _If_ \(a,b\) _are in the same block_ \(B\) _and_ \([a,b]\cap B=\{a,b\}\)_, then for any_ \(c,d\) _such that_ \(a<c<d<b\)_,_ \(c\) _and_ \(d\) _are not in the same block._
Just as before, such partitions can be represented using arc diagrams.
**Example 6.6**.: _The arc diagram corresponding to the \(2\)-non-nesting partition of size \(3\)_
\[\{1,2,4\},\{3,5,6\},\{7,8,9\}\]
_is given in Figure 11._
It is known (for example, see [2, Theorem 2.2]) that the number of \(m\)-non-nesting partitions of size \(n\) is
\[\frac{1}{mn+1}\binom{(m+1)n}{n}.\]
These numbers are called the Fuss-Catalan numbers or generalized Catalan numbers. Setting \(m=1\) gives us the usual Catalan numbers. Labeling the \(n\) blocks distinctly using \([n]\) gives us labeled \(m\)-non-nesting partitions. These objects correspond to the regions of the type \(A\)\(m\)-Catalan arrangement in \(\mathbb{R}^{n}\) whose hyperplanes are
\[x_{i}-x_{j}=0,\pm 1,\pm 2,\ldots,\pm m\]
for all \(1\leq i<j\leq n\) (for example, see [6, Section 8.1]).
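For small \(n\) and \(m\), the Fuss–Catalan count can be confirmed by brute force: generate all partitions of \([(m+1)n]\) into blocks of size \(m+1\) and filter by the non-nesting condition of Definition 6.5. A short sketch, with helper names of our choosing:

```python
from itertools import combinations
from math import comb

def equal_block_partitions(elems, size):
    """All partitions of the sorted list elems into blocks of the given size;
    each block is built starting from the smallest remaining element."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for others in combinations(rest, size - 1):
        remaining = [e for e in rest if e not in others]
        for more in equal_block_partitions(remaining, size):
            yield [(first,) + others] + more

def non_nesting(partition):
    for block in partition:
        for a, b in zip(block, block[1:]):       # consecutive within a block
            for other in partition:
                # nesting: another block with two elements strictly inside (a,b)
                if len([e for e in other if a < e < b]) >= 2:
                    return False
    return True

for n, m in [(2, 1), (3, 1), (2, 2), (3, 2)]:
    count = sum(non_nesting(p) for p in
                equal_block_partitions(list(range(1, (m + 1) * n + 1)), m + 1))
    assert count == comb((m + 1) * n, n) // (m * n + 1)   # Fuss-Catalan
    print(n, m, count)
```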
We now define a statistic on labeled non-nesting partitions similar to the one defined in [7, Section 4]. The statistic defined in [7] is for labeled \(m\)-Dyck paths but these objects are in bijection with labeled \(m\)-non-nesting partitions.
A labeled non-nesting partition can be broken up into interlinked pieces, say \(P_{1},P_{2},\ldots,P_{k}\). We group these pieces into _compartments_ as follows. If the label \(1\) is in the \(r^{th}\) interlinked piece, then the interlinked pieces \(P_{1},P_{2},\ldots,P_{r}\) form the first compartment. Let \(j\) be the smallest number in \([n]\setminus A\) where \(A\) is the set of labels in first compartment. If \(j\) is in the \(s^{th}\) interlinked piece then interlinked pieces \(P_{r+1},P_{r+2},\ldots,P_{s}\) form the second compartment. Continuing this way, we break up a labeled non-nesting partition into compartments.
**Example 6.7**.: _The labeled non-nesting partition in Figure 12 has \(3\) interlinked pieces. The first compartment consists of just the first interlinked piece since it contains the label \(1\). The smallest label in the rest of the diagram is \(3\) which is in the last interlinked piece. Hence, this labeled non-nesting partition has \(2\) compartments._
A non-nesting partition labeled with distinct integers (not necessarily of the form \([n]\)) can be broken up into compartments in the same way. Here the first compartment consists of the interlinked pieces up to the one containing the smallest label.
Figure 11. Arc diagram corresponding to the \(2\)-non-nesting partition in Example 6.6.
Figure 12. A labeled non-nesting partition with \(3\) interlinked pieces and \(2\) compartments.
It can be checked that compartments give labeled non-nesting partitions an exponential structure. This is because the order in which they appear can be determined by their labels. A labeled non-nesting partition is said to be _connected_ if it has only one compartment.
We now define a similar statistic for labeled symmetric non-nesting partitions. To a symmetric non-nesting partition we can associate a pair consisting of
1. an interlinked symmetric non-nesting partition, which we call the _bounded part_ and
2. a non-nesting partition, which we call the _unbounded part_.
This is easy to do using arc diagrams, as illustrated in the following example. The terminology becomes clear when one considers the boundedness of the coordinates in the region corresponding to a labeled symmetric non-nesting partition.
**Example 6.8**.: _To the symmetric \(2\)-non-nesting partition in Figure 13 we associate_
1. _the interlinked symmetric_ \(2\)_-non-nesting partition marked_ \(A\) _and_
2. _the_ \(2\)_-non-nesting partition marked_ \(B\)_._
_Here \(A\) is the bounded part and \(B\) is the unbounded part. We can obtain the original arc diagram back from \(A\) and \(B\) by placing a copy of \(B\) on either side of \(A\)._
This is a bijection between symmetric non-nesting partitions and such pairs. Given a labeled symmetric non-nesting partition, we define the statistic using just the unbounded part. Ignoring the signs, we break the unbounded part into compartments just as we did for non-nesting partitions. A _positive compartment_ is one whose last element has a positive label.
**Example 6.9**.: _Suppose the arc diagram in Figure 14 is the unbounded part of some symmetric non-nesting partition. Notice that, ignoring the signs, this arc diagram breaks up into compartments just as Figure 12. But only the first compartment is positive since its last element has label \(6\) which is positive._
We claim that the statistic 'number of positive compartments' meets our requirements. To prove that the distribution of this statistic is given by the characteristic polynomial, we
Figure 14. The unbounded part of a symmetric non-nesting partition that has \(1\) positive compartment.
Figure 13. Break up of a symmetric \(2\)-non-nesting partition.
apply Proposition 6.2 to the sequence of arrangements \(\{\mathcal{C}_{n}^{(m)}\}\). Using the bijection between labeled symmetric \(m\)-non-nesting partitions and regions of \(\mathcal{C}_{n}^{(m)}\), we note that those arc diagrams that are interlinked are the ones that correspond to bounded regions. Hence, using the notation from Proposition 6.2 and [17, Proposition 5.1.1], we have
\[F(-x)=G(-x)\cdot\left(\sum_{n\geq 0}\frac{2^{n}n!}{mn+1}\binom{(m+1)n}{n} \frac{x^{n}}{n!}\right). \tag{4}\]
Note that \(\operatorname{rank}(\mathcal{C}_{n}^{(m)})=n\). This gives us
\[\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(-t)\frac{(-x)^{n}}{n!}=G(-x)\cdot \left(\sum_{n\geq 0}\frac{2^{n}n!}{mn+1}\binom{(m+1)n}{n}\frac{x^{n}}{n!} \right)^{\frac{t+1}{2}}.\]
Using the combinatorial interpretation of (3), we see that the right hand side of the above equation is the generating function for the distribution of the statistic.
We also obtain corresponding statistics on symmetric sketches using the bijection in Section 4.1. This gives us the following result.
**Theorem 6.10**.: _The absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{C}_{n}^{(m)}}(t)\) is the number of symmetric \(m\)-sketches of size \(n\) that have \(j\) positive compartments._
For the arrangements \(\mathcal{D}_{n}\), \(\mathcal{P}_{n}\), \(\mathcal{B}_{n}\), and \(\mathcal{BC}_{n}\) as well, the analogue of (4) holds. That is, for each of these arrangements, using the notation of Proposition 6.2, we have
\[F(-x)=G(-x)\cdot\left(\sum_{n\geq 0}\frac{2^{n}n!}{n+1}\binom{2n}{n}\frac{x^{ n}}{n!}\right).\]
This can be proved using the definitions of type \(D\), pointed, type \(B\), and type \(BC\) sketches and the description of which sketches correspond to bounded regions.
There is a slight difference in the proof for the sequence of arrangements \(\{\mathcal{D}_{n}\}\). The arrangement \(\mathcal{D}_{1}\) is empty and hence
\[G(-x)=1-x+\sum_{n\geq 2}b(\mathcal{D}_{n})\frac{x^{n}}{n!}.\]
However, from the definition of type \(D\) sketches, we see that we must not allow those symmetric non-nesting partitions where the bounded part is empty and the first interlinked piece of the unbounded part is of size \(1\) with negative label. Hence, we still get the required expression for \(F(-x)\).
Just as we did for the extended type \(C\) Catalan arrangements, we define positive compartments for the arc diagrams corresponding to the regions of these arrangements, which gives corresponding statistics on the sketches.
**Example 6.11**.: _The arc diagram in Figure 15 corresponds to a pointed sketch with \(2\) positive compartments._
The following result can be proved just as before.
**Theorem 6.12**.: _The absolute value of the coefficient of \(t^{j}\) in \(\chi_{\mathcal{A}}(t)\) for \(\mathcal{A}=\mathcal{D}_{n}\) (respectively \(\mathcal{P}_{n}\), \(\mathcal{B}_{n}\), \(\mathcal{BC}_{n}\)) is the number of type \(D\) (respectively pointed, type \(B\), type \(BC\)) sketches of size \(n\) that have \(j\) positive compartments._
## 7. Deformations of the threshold arrangement
The threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes \(x_{i}+x_{j}=0\) for \(1\leq i<j\leq n\). These arrangements are of interest because their regions correspond to certain labeled graphs called _threshold graphs_ which have been extensively studied (see [12]). In this section, we study this arrangement and some of its deformations using similar techniques as in previous sections.
### Sketches and moves
We use the sketches and moves idea to study the regions of the threshold arrangement by considering it as a sub-arrangement of the type \(C\) arrangement (Section 3.1). Before doing that, we first study the arrangement obtained by adding the coordinate hyperplanes to the threshold arrangement.
#### 7.1.1. Fubini arrangement
We define the Fubini arrangement in \(\mathbb{R}^{n}\) to be the one with hyperplanes
\[2x_{i} =0\] \[x_{i}+x_{j} =0\]
for all \(1\leq i<j\leq n\). The hyperplanes missing from the type \(C\) arrangement are
\[x_{i}-x_{j} =0\]
for all \(1\leq i<j\leq n\). Hence a Fubini move, which we call an \(F\) move, is swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
**Example 7.1**.: _We can use a series of \(F\) moves on a sketch as follows:_
\[\overset{-}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{+}{3}\longrightarrow\overset{-}{6}\overset{-}{3}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{3}\overset{+}{6}\longrightarrow\overset{-}{6}\overset{-}{3}\overset{-}{2}\overset{+}{4}\overset{+}{1}\overset{-}{5}\overset{+}{5}\overset{-}{1}\overset{-}{4}\overset{+}{2}\overset{+}{3}\overset{+}{6}\]
We define a _block_ to be the set of absolute values in a maximal string of contiguous terms in the second half of a sketch that have the same sign. The blocks of the initial sketch in Example 7.1 are \(\{5\},\{1,4\},\{2,3,6\}\) (these blocks appear in this order with the first one being positive). It can be checked that \(F\) moves do not change the sequence of signs (above the numbers) and that they can only be used to reorder the elements in a block. Hence, each equivalence class has a unique sketch where the numbers in each block appear in ascending order. The last sketch in Example 7.1 is the unique such sketch in its equivalence class.
Figure 15. Arc diagram corresponding to a pointed sketch with \(2\) positive compartments.
The number of such sketches is equal to the number of ways of choosing an ordered partition of \([n]\) (which correspond to the blocks of the sketch in order) and then choosing a sign for the first block. Hence the number of regions of the Fubini arrangement is \(2\cdot a(n)\) where \(a(n)\) is the \(n^{th}\) Fubini number, which is the number of ordered partitions of \([n]\) listed as A000670 in the OEIS [16].
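The Fubini numbers satisfy the recurrence \(a(n)=\sum_{k=1}^{n}\binom{n}{k}\,a(n-k)\), obtained by first choosing the initial block of the ordered partition. A short computation of the resulting region counts \(2\cdot a(n)\) (our code):

```python
from math import comb
from functools import lru_cache

@lru_cache(None)
def fubini(n):
    """Ordered set partitions of [n]: a(n) = sum_k C(n,k) a(n-k), a(0) = 1."""
    if n == 0:
        return 1
    return sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))

for n in range(1, 7):
    print(n, 2 * fubini(n))   # 2, 6, 26, 150, 1082, 9366 (Fubini numbers: A000670)
```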
#### 7.1.2. Threshold arrangement
The threshold arrangement in \(\mathbb{R}^{n}\) has the hyperplanes
\[x_{i}+x_{j}=0\]
for all \(1\leq i<j\leq n\). The hyperplanes missing from the type \(C\) arrangement are
\[2x_{i} =0\] \[x_{i}-x_{j} =0\]
for all \(1\leq i<j\leq n\). Hence the threshold moves, which we call \(T\) moves, are as follows:
(1) (\(D\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{-}{i}\) for any \(i\in[n]\).
(2) (\(F\) move) Swapping adjacent \(\overset{+}{i}\) and \(\overset{+}{j}\) as well as \(\overset{-}{j}\) and \(\overset{-}{i}\) for distinct \(i,j\in[n]\).
For any sketch, there is a \(T\) equivalent sketch whose first block has more than \(1\) element. This is because, if the sketch has a first block of size \(1\), applying a \(D\) move (swapping the \(n^{th}\) and \((n+1)^{th}\) term) results in a sketch where the first block has size greater than \(1\) (first step in Example 7.2).
**Example 7.2**.: _We can use a series of \(T\) moves on a sketch as follows:_ \[\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{-}{3}\overset{+}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\xrightarrow{D\ move}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{2}\overset{+}{6}\overset{+}{3}\overset{-}{3}\overset{-}{6}\overset{-}{2}\overset{+}{1}\overset{+}{4}\overset{-}{5}\xrightarrow{F\ moves}\overset{+}{5}\overset{-}{4}\overset{-}{1}\overset{+}{6}\overset{+}{3}\overset{+}{2}\overset{-}{2}\overset{-}{3}\overset{-}{6}\overset{+}{1}\overset{+}{4}\overset{-}{5}\]
To obtain a canonical sketch for each threshold region, we will need a small lemma.
**Lemma 7.3**.: _Two \(T\) equivalent sketches that have their first block of size greater than 1 have the same blocks which appear in the same order with the same signs._
Proof.: Looking at what the \(T\) moves do to the sequence of signs (above the numbers), we can see that they at most swap the \(n^{th}\) and \((n+1)^{th}\) sign (\(D\) move). Hence, if we require the first blocks to have size greater than \(1\), both the sketches have the same number of blocks and the number of elements in the corresponding blocks are the same. An \(F\) move can only reorder elements in the same block of a sketch. A \(D\) move changes the sign of the first element of the second half. So if there are \(k>1\) elements in the first block of a \(T\) equivalent sketch, then the set of absolute values of the first \(k\) elements of the second half remains the same in all \(T\) equivalent sketches. This gives us the required result.
Using the above lemma, we can see that for any sketch there is a unique \(T\) equivalent sketch where the size of the first block is greater than \(1\) and the elements of each block are in ascending order. The last sketch in Example 7.2 is the unique such sketch in its equivalence class. Similar to the count for Fubini regions, we get that the number of regions of the threshold arrangement is
\[2\cdot(a(n)-n\cdot a(n-1))\]
where, as before, \(a(n)\) is the \(n^{th}\) Fubini number. The number of regions of the threshold arrangement is listed as A005840 in the OEIS [16].
**Remark 7.4**.: _The regions of the threshold arrangement in \(\mathbb{R}^{n}\) are known to be in bijection with labeled threshold graphs on \(n\) vertices (see [18, Exercise 5.25]). Labeled threshold graphs on \(n\) vertices are inductively constructed starting from the empty graph. Vertices labeled \(1,\ldots,n\) are added in a specified order. At each step, the vertex added is either 'dominant' or 'recessive'. A dominant vertex is one that is adjacent to all vertices added before it and a recessive vertex is one that is isolated from all vertices added before it. It is not difficult to see that the canonical sketches described above are in bijection with threshold graphs._
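The bijection of Remark 7.4 suggests an independent check of the region count: build every labeled threshold graph from a vertex order and a dominant/recessive sequence, deduplicate, and compare with \(2\cdot(a(n)-n\cdot a(n-1))\). A brute-force sketch (our code; since the type of the first vertex is immaterial, it is fixed):

```python
from itertools import permutations, product
from math import comb
from functools import lru_cache

@lru_cache(None)
def fubini(n):
    return 1 if n == 0 else sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))

def labeled_threshold_graphs(n):
    graphs = set()
    for order in permutations(range(n)):
        for types in product('dr', repeat=n - 1):    # first vertex: type irrelevant
            edges, placed = set(), []
            for i, v in enumerate(order):
                if i > 0 and types[i - 1] == 'd':    # dominant: join to all so far
                    edges.update(frozenset((v, u)) for u in placed)
                placed.append(v)
            graphs.add(frozenset(edges))             # deduplicate by edge set
    return graphs

for n in range(2, 6):
    count = len(labeled_threshold_graphs(n))
    assert count == 2 * (fubini(n) - n * fubini(n - 1))
    print(n, count)    # 2, 8, 46, 332 -- A005840
```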
### Statistics
The characteristic polynomial of the threshold arrangement, and a statistic on its regions whose distribution is given by the characteristic polynomial, have been studied in [9]. This is done by directly looking at the coefficients of the characteristic polynomial. In fact, even the coefficients of the characteristic polynomial of the Fubini arrangement (Section 7.1.1) have already been combinatorially interpreted in [9, Section 4.1]. This can be used to define an appropriate statistic on the regions of the Fubini arrangement. Here, just as in Section 6, we use Proposition 6.2 to combinatorially interpret the generating functions of the characteristic polynomials for the Fubini and threshold arrangements. Just as before, we will show that the statistic 'number of positive compartments' works for our purposes.
#### 7.2.1. Fubini arrangement
We will use the second half of the canonical sketches described in Section 7.1.1 to represent the regions. We define blocks for signed permutations just as we did for sketches. Hence, the regions of the Fubini arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where each block is increasing.
In this special class of signed permutations as well, compartments give an exponential structure: since the last element of a compartment is necessarily smaller in absolute value than the element following it, there is no condition relating the signs of the last element of a compartment and the first element of the compartment following it. Also, given a signed permutation in which each block is increasing, it can be checked that the signed permutation obtained by changing all the signs also satisfies this property.
Using the above observations and the combinatorial interpretation of (3), we get that
\[\left(\frac{e^{x}}{2-e^{x}}\right)^{\frac{t+1}{2}}\]
is the exponential generating function for signed permutations where each block is increasing, with \(t\) keeping track of the number of positive compartments. This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1}{2e^{x}-1}\right),\] \[G(x) =1.\]
#### 7.2.2. Threshold arrangement
From Section 7.1.2, we can see that the regions of the threshold arrangement in \(\mathbb{R}^{n}\) correspond to signed permutations on \([n]\) where each block is increasing and the first block has size greater than \(1\). If such a permutation starts with \(\bar{1}\), we instead use the signed permutation obtained by changing \(\bar{1}\) to \(1\) to represent the region. Similar to how we obtained the generating function for the statistic for type \(D\) from the
one for type \(C\), we obtain our generating function from the one we have for the Fubini arrangement.
Suppose that we are given \(i\in[n]\) and a signed permutation \(\sigma\) on \([n]\setminus\{i\}\) whose blocks are increasing. If \(i=1\), we construct the signed permutation on \([n]\) obtained by appending \(\bar{1}\) to the front of \(\sigma\). If \(i>1\) and the first element of \(\sigma\) is \(\overset{\pm}{j}\), we construct the signed permutation on \([n]\) obtained by appending \(\overset{\mp}{i}\) to the start of \(\sigma\). In both cases, it can be checked that the number of positive compartments of the new signed permutation constructed is the same as that for \(\sigma\).
This shows that the distribution of the statistic 'number of positive compartments' on the signed permutations that correspond to regions of the threshold arrangement is
\[(1-x)\left(\frac{e^{x}}{2-e^{x}}\right)^{\frac{t+1}{2}}.\]
This agrees with the generating function for the characteristic polynomial we get from Proposition 6.2 since we have
\[F(x) =\left(\frac{1+x}{2e^{x}-1}\right),\] \[G(x) =1+x.\]
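As a sanity check, the series expansion of the claimed distribution can be compared against \(\sum_{n\geq 0}\chi_{\mathcal{A}_{n}}(-t)\frac{(-x)^{n}}{n!}\) computed from these \(F\) and \(G\) via Proposition 6.2. A sketch using sympy (assuming it is available), evaluated at sample integer values of \(t\) to keep the expansion simple:

```python
import sympy as sp

x, t = sp.symbols('x t')
# Distribution of 'number of positive compartments' claimed in the text:
dist = (1 - x) * (sp.exp(x) / (2 - sp.exp(x)))**((t + 1) / 2)
# Generating function of characteristic polynomials from Proposition 6.2:
F = (1 + x) / (2 * sp.exp(x) - 1)
G = 1 + x
chi_gen = G**((t + 1) / 2) / F**((t - 1) / 2)
# The distribution should equal sum_n chi_n(-t) (-x)^n / n!:
flipped = chi_gen.subs(t, -t).subs(x, -x)

for tv in (-3, -1, 0, 1, 2, 5):
    delta = sp.series((dist - flipped).subs(t, tv), x, 0, 6).removeO()
    assert sp.simplify(delta) == 0
print("distribution matches the characteristic polynomials up to order 5")
```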
### Some deformations
Deformations of the threshold arrangement have not been as well-studied as those of the braid arrangement. However, the finite field method has been used to compute the characteristic polynomial for some deformations. In [14, 15], Seo computed the characteristic polynomials of the so called Shi and Catalan threshold arrangements. Expressions for the characteristic polynomials of more general deformations have been computed in [5].
In this section, we use the sketches and moves technique to obtain certain non-nesting partitions that are in bijection with the regions of the Catalan and Shi threshold arrangements. We do this by considering these arrangements as sub-arrangements of the type \(C\) Catalan arrangement (Section 4). Unfortunately, we were not able to directly count the non-nesting partitions we obtained since their description is not as simple as the ones we have seen before.
Fix \(n\geq 2\) throughout this section. Recall that we studied the type \(C\) Catalan arrangement by considering a translation of it called \(\mathcal{C}_{n}\) whose hyperplanes are given by (2) and whose regions correspond to symmetric sketches of size \(n\) (see Definition 4.2). Symmetric sketches can also be viewed as labeled symmetric non-nesting partitions (see Example 4.13).
#### 7.3.1. Catalan threshold
The Catalan threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes
\[X_{i}+X_{j}=-1,0,1\]
for all \(1\leq i<j\leq n\). The translated arrangement by setting \(X_{i}=x_{i}+\frac{1}{2}\), which we call \(\mathcal{CT}_{n}\), has hyperplanes
\[x_{i}+x_{j}=-2,-1,0\]
for all \(1\leq i<j\leq n\). We consider this arrangement as a sub-arrangement of \(\mathcal{C}_{n}\). Using Bernardi's idea of moves, we can define an equivalence on the symmetric sketches such that two sketches are equivalent if they lie in the same region of \(\mathcal{CT}_{n}\).
An \(\alpha_{+}\) letter is an \(\alpha\)-letter whose subscript is positive. We similarly define \(\alpha_{-},\beta_{+}\) and \(\beta_{-}\) letters. The 'mod-value' of a letter \(\alpha_{i}^{(s)}\) is \(|i|\).
The hyperplanes in \(\mathcal{C}_{n}\) that are not in \(\mathcal{CT}_{n}\) are
\[2x_{i} =-2,-1,0\] \[x_{i}-x_{j} =-1,0,1\]
where \(1\leq i<j\leq n\). Changing the inequality corresponding to exactly one of these hyperplanes is given by the following moves on a sketch, which we call \(\mathcal{CT}\) moves.
1. Swapping the \(2n^{th}\) and \((2n+1)^{th}\) letter. This corresponds to changing the inequality corresponding to a hyperplane of the form \(2x_{i}=-2\) or \(2x_{i}=0\).
2. Swapping the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letter if they are consecutive (along with the \(n^{th}\) and \((n+1)^{th}\)\(\beta\)). This corresponds to changing the inequality corresponding to a hyperplane of the form \(2x_{i}=-1\).
3. Swapping consecutive \(\alpha_{+}\) and \(\beta_{+}\) letters (along with their negatives). This corresponds to changing the inequality corresponding to a hyperplane of the form \(x_{i}-x_{j}=1\).
4. Swapping \(\{\alpha_{i}^{(0)},\alpha_{j}^{(0)}\}\) as well as \(\{\alpha_{i}^{(1)},\alpha_{j}^{(1)}\}\) if both pairs are consecutive (as well as their negatives) where \(i,j\in[n]\) are distinct. This corresponds to changing the inequality corresponding to the hyperplane \(x_{i}-x_{j}=0\).
Two sketches are in the same region of \(\mathcal{CT}_{n}\) if and only if they are related by a series of \(\mathcal{CT}\) moves. We call such sketches \(\mathcal{CT}\) equivalent.
Consider the sketches to be ordered in the lexicographic order induced by the following order on the letters.
\[\alpha_{n}^{(0)}\succ\cdots\succ\alpha_{1}^{(0)}\succ\alpha_{-1}^{(-1)}\succ \cdots\succ\alpha_{-n}^{(-1)}\succ\alpha_{n}^{(1)}\succ\cdots\succ\alpha_{1}^ {(1)}\succ\alpha_{-1}^{(0)}\succ\cdots\succ\alpha_{-n}^{(0)}\]
In other words, the \(\alpha\)-letters are greater than the \(\beta\)-letters and for letters of the same type, the order is given by comparing the subscripts.
A sketch is called \(\mathcal{CT}\) maximal if it is greater (in the lexicographic order) than all sketches to which it is \(\mathcal{CT}\) equivalent. Hence the regions of \(\mathcal{CT}_{n}\) are in bijection with the \(\mathcal{CT}\) maximal sketches.
**Theorem 7.5**.: _A symmetric sketch is \(\mathcal{CT}\) maximal if and only if the following hold._
1. _If a \(\beta\)-letter is immediately followed by an \(\alpha\)-letter, they should be of opposite signs and different mod-values._
2. _If two \(\alpha\)-letters and their corresponding \(\beta\)-letters are both consecutive and of the same sign, then the subscript of the first one should be greater: if the subscripts are \(a_{1}\) and \(a_{2}\) in order of appearance and have the same sign, then \(a_{1}>a_{2}\)._
3. _If the_ \(n^{th}\) _and_ \((n+1)^{th}\)__\(\alpha\)_-letters are consecutive, then so are the_ \((n-1)^{th}\) _and_ \(n^{th}\) _with the_ \(n^{th}\)__\(\alpha\)_-letter being positive. In such a situation, if the_ \((n-1)^{th}\)__\(\alpha\)_-letter is negative and the_ \((n-1)^{th}\) _and_ \(n^{th}\)__\(\beta\)_-letters are consecutive, the_ \((n-1)^{th}\)__\(\alpha\)_-letter should have a subscript greater than that of the_ \((n+1)^{th}\)__\(\alpha\)_._
4. _If the \((2n-1)^{th}\) and \((2n+1)^{th}\) letters are both \(\beta\)-letters of the same sign and their corresponding \(\alpha\)-letters are consecutive, the subscript \(X\) of the \((2n-1)^{th}\) letter should be greater than the subscript \(Y\) of the \((2n+1)^{th}\), i.e., \(X>Y\)._
_Hence the regions of \(\mathcal{CT}_{n}\) are in bijection with sketches of the form described above._
**Remark 7.6**.: _The idea of ordering sketches and choosing the maximal sketch in each region of \(\mathcal{CT}_{n}\) to represent it is the same one used by Bernardi [6] to study certain deformations of the braid arrangement. In fact, [6, Lemma 8.13] shows that in this case, any sketch that is locally maximal (greater than any sketch that can be obtained by applying a single move) is maximal. Note that the sketches described in the above theorem are precisely the \(2\)-locally maximal sketches. That is, these are the sketches that can neither be converted into a greater sketch by applying a single \(\mathcal{CT}\) move nor by applying two \(\mathcal{CT}\) moves. It is clear that any \(\mathcal{CT}\) maximal sketch is \(2\)-locally maximal. The theorem states the converse is true as well._
Proof of Theorem 7.5.: We first show that these conditions are required for a sketch to be \(\mathcal{CT}\) maximal.
1. The first condition is necessary since the \(\mathcal{CT}\) moves of type (a) or (c) would result in a greater sketch if it were false.
2. The second condition corresponds to \(\mathcal{CT}\) moves of type (d).
3. The part about the \(n^{th}\)\(\alpha\)-letter being positive if the \(n^{th}\) and \((n+1)^{th}\)\(\alpha\)-letters are consecutive is due to \(\mathcal{CT}\) moves of type (c). Suppose the letter before the \(n^{th}\)\(\alpha\)-letter is a \(\beta\)-letter. Then it can't be positive since we have already seen that condition (1) of the theorem statement must be satisfied. But if it is negative, we can apply \(\mathcal{CT}\) moves to obtain a larger equivalent sketch (move diagram omitted). Hence the letter before the \(n^{th}\)\(\alpha\)-letter has to be an \(\alpha\)-letter. Now, suppose that the \((n-1)^{th}\)\(\alpha\)-letter is negative and the \((n-1)^{th}\) and \(n^{th}\)\(\beta\)-letters are consecutive. Let the subscript of the \((n-1)^{th}\)\(\alpha\)-letter be \(-k\) and that of the \((n+1)^{th}\)\(\alpha\)-letter be \(-i\) for some \(k,i\in[n]\). If \(-k<-i\), we can again apply \(\mathcal{CT}\) moves to obtain a larger equivalent sketch (move diagram omitted). Hence we must have \(-k>-i\) in this case.
4. Suppose the \((2n-1)^{th}\) and \((2n+1)^{th}\) letters are both \(\beta\)-letters of the same sign and their corresponding \(\alpha\)-letters are consecutive but the subscript \(X\) of the \((2n-1)^{th}\) letter is less than the subscript \(Y\) of the \((2n+1)^{th}\) letter. Then we can apply \(\mathcal{CT}\) moves to obtain a larger \(\mathcal{CT}\) equivalent sketch (move diagram omitted).

We now have to prove that these conditions are sufficient for a sketch to be \(\mathcal{CT}\) maximal. Suppose \(w\) is a symmetric sketch that satisfies the four properties mentioned in the statement of the theorem. Suppose there is a sketch \(w^{\prime}\) which is \(\mathcal{CT}\) equivalent to \(w\) but larger in the lexicographic order. This means that if \(w=w_{1}\cdots w_{4n}\) and \(w^{\prime}=w^{\prime}_{1}\cdots w^{\prime}_{4n}\), there is some \(p\in[4n]\) such that \[w_{i}=w^{\prime}_{i}\text{ for }i\in[p-1]\text{ and }w_{p}\prec w^{\prime}_{p}.\] The possible ways in which this can happen are listed below.

1. \(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.

2. \(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.

3. \(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.

4. \(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.

5. \(w_{p}\) and \(w^{\prime}_{p}\) are both \(\alpha_{+}\) letters.

6. \(w_{p}\) and \(w^{\prime}_{p}\) are both \(\alpha_{-}\) letters.

7. \(w_{p}\) is an \(\alpha_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
The case of both \(w_{p}\) and \(w^{\prime}_{p}\) being \(\beta\)-letters is not possible since, by the properties of a sketch, this would mean \(w_{p}=w^{\prime}_{p}\). Since \(\alpha_{-}\prec\alpha_{+}\) we cannot have \(w_{p}\) being an \(\alpha_{+}\) letter
and \(w^{\prime}_{p}\) being an \(\alpha_{-}\) letter. We will now show that each case leads to a contradiction, which will complete the proof of the theorem.
Before going forward, we formulate the meaning of \(w\) and \(w^{\prime}\) being \(\mathcal{CT}\) equivalent in terms of sketches. Since they have to be in the same region of \(\mathcal{CT}_{n}\), the inequalities corresponding to the hyperplanes
\[x_{i}+x_{j}=-2,-1,0\]
for all \(1\leq i<j\leq n\) are the same in both sketches. This means that the relationship between the pairs of the form
\[\{\alpha_{i}^{(1)},\alpha_{-j}^{(-1)}\},\ \{\alpha_{i}^{(1)},\alpha_{-j}^{(0)}\},\ \{\alpha_{i}^{(0)},\alpha_{-j}^{(-1)}\},\text{ and }\{\alpha_{i}^{(0)},\alpha_{-j}^{(0)}\}\]
for any distinct \(i,j\in[n]\) are the same in both \(w\) and \(w^{\prime}\). This can be written as follows:
\[\begin{array}{l}\text{The relationship between letters of opposite sign and}\\ \text{different mod value has to be the same in both $w$ and $w^{\prime}$.}\end{array} \tag{5}\]
**Case 1:**\(w_{p}\) is a \(\beta_{+}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{+}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(1)}\cdots\] \[w^{\prime} =w^{\prime}_{1}\cdots w^{\prime}_{p-1}\alpha_{l}^{(0)}\cdots\]
for some \(k,l\in[n]\). Hence, \(\alpha_{l}^{(0)}\) appears after \(\alpha_{k}^{(1)}\) in \(w\). By (5), every letter between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\) should be positive or one of \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\). If all the letters are positive, since \(\alpha_{k}^{(1)}\) is a \(\beta_{+}\) letter and \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter, there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\) in \(w\), which is a contradiction to property (1).
Now suppose \(\alpha_{-l}^{(0)}\) is between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\). It cannot be immediately before \(\alpha_{l}^{(0)}\) since this would contradict property (1). But if it is not immediately before \(\alpha_{l}^{(0)}\), since \(\alpha_{-l}^{(0)}\) and \(\alpha_{l}^{(0)}\) are negatives of each other, there should be some negative letter between them. But this letter cannot be \(\alpha_{-l}^{(-1)}\) (since this should be before \(\alpha_{-l}^{(0)}\)). This is a contradiction to (5). Hence \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\).
So we must have \(\alpha_{-l}^{(-1)}\) between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(0)}\) in \(w\). Again, \(\alpha_{-l}^{(-1)}\) cannot be immediately before \(\alpha_{l}^{(0)}\) since this would contradict property (3). This means that there is at least one letter between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(0)}\) and all such letters are positive. If one of them is a \(\beta_{+}\) letter, since \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter, there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which is a contradiction to property (1). Hence all the letters between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(0)}\) are \(\alpha_{+}\) letters. But this is impossible by Lemma 4.6.
**Case 2:**\(w_{p}\) is a \(\beta_{-}\) letter and \(w^{\prime}_{p}\) is an \(\alpha_{-}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(0)}\cdots\] \[w^{\prime} =w^{\prime}_{1}\cdots w^{\prime}_{p-1}\alpha_{-l}^{(-1)}\cdots\]
for some \(k,l\in[n]\). Hence, \(\alpha_{-l}^{(-1)}\) appears after \(\alpha_{-k}^{(0)}\) in \(w\). By (5), each letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) in \(w\) has to be negative or one of \(\alpha_{l}^{(0)}\) and \(\alpha_{l}^{(1)}\). Just as before, the letters between
\(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) cannot all be negative. The fact that \(\alpha_{l}^{(1)}\) cannot be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) also has a similar proof as in the last case.
So we must have \(\alpha_{l}^{(0)}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\). All the letters between \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) have to be negative. There are no \(\beta_{-}\) letters between them, otherwise there would be consecutive letters of the form \(\beta_{-}\alpha_{-}\), which contradicts property (1). So if there are letters between \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) they should all be \(\alpha_{-}\) letters, but this cannot happen by Lemma 4.6. So \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive. By property (3), the letter before \(\alpha_{l}^{(0)}\) should be an \(\alpha\)-letter. And by (5), it is an \(\alpha_{-}\) letter. But since \(\alpha_{-k}^{(0)}\) is a \(\beta_{-}\) letter and all letters between \(\alpha_{-k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are negative, there will be a consecutive pair of the form \(\beta_{-}\alpha_{-}\), which is a contradiction to property (1).
**Case 3:**\(w_{p}\) is a \(\beta_{+}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{-}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\]
for some \(k,l\in[n]\). If \(k\neq l\), this will contradict (5) since \(\alpha_{k}^{(1)}\) will be before \(\alpha_{-l}^{(-1)}\) in \(w\) but not in \(w^{\prime}\). So \(\alpha_{-k}^{(-1)}\) appears after \(\alpha_{k}^{(1)}\) in \(w\) and all letters between them are negative by (5) (note that \(\alpha_{k}^{(0)}\) is before \(\alpha_{k}^{(1)}\)). Again, \(\alpha_{-k}^{(-1)}\) cannot be immediately after \(\alpha_{k}^{(1)}\) since this would contradict property (1) and if there were some letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-k}^{(-1)}\), at least one of them would be negative, which contradicts (5).
**Case 4:**\(w_{p}\) is a \(\beta_{-}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{+}\) letter.
A contradiction in this case is obtained using the same method as in the previous case.
**Case 5:**\(w_{p}\) and \(w_{p}^{\prime}\) are both \(\alpha_{+}\) letters.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\]
for some \(1\leq k<l\leq n\). We split this case into two possibilities depending on whether or not \(\alpha_{l}^{(0)}\) is before \(\alpha_{k}^{(1)}\).
**Case 5(a):**\(\alpha_{l}^{(0)}\) is before \(\alpha_{k}^{(1)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\alpha_{l}^{(0)}\cdots \alpha_{k}^{(1)}\cdots\alpha_{l}^{(1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\,.\]
By (5), each letter between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) in \(w\) is positive or one of \(\alpha_{-l}^{(-1)}\) or \(\alpha_{-l}^{(0)}\). Just as in **Case 1**, we can prove that \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\). Hence all the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are positive. In fact, they all have to be \(\alpha\)-letters. Otherwise there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which contradicts property (1).
Each letter between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) is positive or one of \(\alpha_{-k}^{(-1)}\), \(\alpha_{-l}^{(-1)}\), \(\alpha_{-k}^{(0)}\) or \(\alpha_{-l}^{(0)}\). Neither \(\alpha_{-k}^{(0)}\) nor \(\alpha_{-l}^{(0)}\) can be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), since this would mean that \(\alpha_{-k}^{(-1)}\) or \(\alpha_{-l}^{(-1)}\) is between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\), which cannot happen since we have already seen that there are only positive \(\alpha\)-letters between them.
If \(\alpha_{-k}^{(-1)}\) were between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), it could not be immediately after \(\alpha_{k}^{(1)}\) since this would contradict property (1). If there were some letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-k}^{(-1)}\), at least one of them would be a negative letter other than \(\alpha_{-l}^{(-1)}\), which contradicts (5) (since \(\alpha_{l}^{(1)}\) is after \(\alpha_{-k}^{(-1)}\)).
So the only negative letter that can be between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) is \(\alpha_{-l}^{(-1)}\). First, suppose that all letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) are positive. Then all of them would have to be \(\beta_{+}\) letters (otherwise there would be consecutive \(\beta_{+}\alpha_{+}\) which contradicts property (1)). Then we would have that all letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are \(\alpha_{+}\) letters and all letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\) are \(\beta_{+}\) letters and repeated application of property (2) would give \(k>l\), which is a contradiction.
Next, suppose \(\alpha_{-l}^{(-1)}\) is between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\). If \(\alpha_{-l}^{(-1)}\) is not immediately before \(\alpha_{l}^{(1)}\), there will be some negative letter other than \(\alpha_{-l}^{(-1)}\) between \(\alpha_{k}^{(1)}\) and \(\alpha_{l}^{(1)}\), which we have already shown is not possible. So \(\alpha_{-l}^{(-1)}\) is immediately before \(\alpha_{l}^{(1)}\) and all the letters between \(\alpha_{k}^{(1)}\) and \(\alpha_{-l}^{(-1)}\) are positive and they have to all be \(\beta_{+}\) letters (otherwise there would be a consecutive pair of the form \(\beta_{+}\alpha_{+}\)). If \(\alpha_{k^{\prime}}^{(1)}\) is the \(\beta_{+}\) letter before \(\alpha_{-l}^{(-1)}\) (\(k^{\prime}\) could be \(k\)), then \(\alpha_{k^{\prime}}^{(0)}\) is the letter before \(\alpha_{l}^{(0)}\) and hence we get that the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{k^{\prime}}^{(0)}\) are all \(\alpha_{+}\) letters and their corresponding \(\beta\)-letters are consecutive and so by property (2), \(k\geq k^{\prime}\). But property (4) tells us that \(k^{\prime}>l\). So we get \(k>l\), which is a contradiction.
**Case 5(b):**\(\alpha_{l}^{(0)}\) is after \(\alpha_{k}^{(1)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{k}^{(0)}\cdots\alpha_{k}^{(1)}\cdots \alpha_{l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots.\]
By (5), each letter between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) in \(w\) is positive or one of \(\alpha_{-l}^{(-1)}\) or \(\alpha_{-l}^{(0)}\). Just as in **Case 1**, we can prove that \(\alpha_{-l}^{(-1)}\) and \(\alpha_{-l}^{(0)}\) cannot be between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\). Hence all the letters between \(\alpha_{k}^{(0)}\) and \(\alpha_{l}^{(0)}\) are positive. Since \(\alpha_{k}^{(1)}\) is a \(\beta_{+}\) letter and \(\alpha_{l}^{(0)}\) is an \(\alpha_{+}\) letter and all letters in between are positive, there is a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which is a contradiction to property (1).
**Case 6:**\(w_{p}\) and \(w_{p}^{\prime}\) are both \(\alpha_{-}\) letters.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\]
for some \(1\leq l<k\leq n\). We split this case into two possibilities depending on whether or not \(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\).
**Case 6(a):**\(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\alpha_{-l}^{(-1)} \cdots\alpha_{-k}^{(0)}\cdots\alpha_{-l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots\,.\]
By (5), each letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is negative or one of \(\alpha_{l}^{(0)}\) or \(\alpha_{l}^{(1)}\). If \(\alpha_{l}^{(1)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), it should not be immediately before \(\alpha_{-l}^{(-1)}\) since this would contradict property (1). But then there would be some positive letter other than \(\alpha_{l}^{(0)}\) between \(\alpha_{-l}^{(-1)}\) and \(\alpha_{l}^{(1)}\) which would contradict (5).
First, suppose \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). Just as before, using property (1) and Lemma 4.6, we can show that \(\alpha_{l}^{(0)}\) has to be immediately before \(\alpha_{-l}^{(-1)}\). Also, all the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{l}^{(0)}\) have to be negative by (5). By property (3), the letter before \(\alpha_{l}^{(0)}\) has to be an \(\alpha\)-letter and hence here it is an \(\alpha_{-}\) letter. Hence, the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{l}^{(0)}\) have to be \(\alpha_{-}\) letters since otherwise there would be a consecutive pair of the form \(\beta_{-}\alpha_{-}\).
By (5), each letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is negative or one of \(\alpha_{k}^{(0)}\), \(\alpha_{l}^{(0)}\), \(\alpha_{k}^{(1)}\) or \(\alpha_{l}^{(1)}\). Now, \(\alpha_{k}^{(1)}\) cannot be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) since this would mean \(\alpha_{k}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), which we have already shown is not possible. We have already assumed \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) and hence it cannot also be between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\). If \(\alpha_{k}^{(0)}\) were between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), it could not have been immediately after \(\alpha_{-k}^{(0)}\) since this would contradict property (1). But then there would be some positive letter other than \(\alpha_{l}^{(1)}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{k}^{(0)}\) (since \(\alpha_{-l}^{(-1)}\) is before \(\alpha_{-k}^{(0)}\) and hence \(\alpha_{l}^{(1)}\) is after \(\alpha_{k}^{(0)}\)), which is a contradiction to (5). This means that the only positive letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is \(\alpha_{l}^{(1)}\) which is between them since \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). Since \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive, so are \(\alpha_{l}^{(1)}\) and \(\alpha_{-l}^{(0)}\). The letters between \(\alpha_{-k}^{(0)}\) and \(\alpha_{l}^{(1)}\) are all negative and should be \(\beta_{-}\) letters or else it would cause a contradiction to property (1).
Hence, the situation in the case that \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is the following: There is a string of consecutive \(\alpha_{-}\) letters starting with \(\alpha_{-k}^{(-1)}\) ending before \(\alpha_{l}^{(0)}\) which is immediately before \(\alpha_{-l}^{(-1)}\) and the corresponding \(\beta\)-letters for all these \(\alpha\)-letters are consecutive. If \(\alpha_{-k^{\prime}}^{(-1)}\) is the \(\alpha_{-}\) letter immediately before \(\alpha_{l}^{(0)}\) (\(k^{\prime}\) could be \(k\)), then property (3) gives that \(-k^{\prime}>-l\) and property (2) gives that \(-k\geq-k^{\prime}\) and hence we get \(-k>-l\), which is a contradiction.
Next, suppose that all the letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) are negative. All of them should be \(\alpha_{-}\) letters by property (1). It can be shown, just as before, that the only possible positive letter between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\) is \(\alpha_{l}^{(0)}\). If \(\alpha_{l}^{(0)}\) is not between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), property (2) leads to a contradiction just as in **Case 5(a)**. If \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(0)}\), it should be immediately before \(\alpha_{-l}^{(0)}\) and again, following a method similar to **Case 5(a)**, this leads to a contradiction using property (4).
**Case 6(b):**\(\alpha_{-l}^{(-1)}\) is after \(\alpha_{-k}^{(0)}\) in \(w\).
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\alpha_{-k}^{(0)}\cdots \alpha_{-l}^{(-1)}\cdots\alpha_{-l}^{(0)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{-l}^{(-1)}\cdots.\]
By (5), each letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\) is negative or one of \(\alpha_{l}^{(0)}\) or \(\alpha_{l}^{(1)}\). Just as before, \(\alpha_{l}^{(1)}\) cannot be between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) is not between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\), then all the letters between them are negative and there is a \(\beta_{-}\) letter, namely \(\alpha_{-k}^{(0)}\), between them and this would result in a consecutive pair of the form \(\beta_{-}\alpha_{-}\), which contradicts property (1).
So \(\alpha_{l}^{(0)}\) is the only positive letter between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) is before \(\alpha_{-k}^{(0)}\), we would get a consecutive pair of the form \(\beta_{-}\alpha_{-}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) which contradicts property (1). So \(\alpha_{l}^{(0)}\) is between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-l}^{(-1)}\). If \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) were not consecutive, we would get a contradiction to property (1) if there were some \(\beta_{-}\) letter between them and if all were \(\alpha_{-}\) letters, this would contradict Lemma 4.6. So \(\alpha_{l}^{(0)}\) and \(\alpha_{-l}^{(-1)}\) are consecutive, and by property (3), the letter before \(\alpha_{l}^{(0)}\) should be an \(\alpha\)-letter and in this case an \(\alpha_{-}\) letter, say \(\alpha_{-k^{\prime}}^{(-1)}\). But then we would get a consecutive pair of the form \(\beta_{-}\alpha_{-}\) between \(\alpha_{-k}^{(0)}\) and \(\alpha_{-k^{\prime}}^{(-1)}\) which contradicts property (1).
**Case 7:**\(w_{p}\) is an \(\alpha_{-}\) letter and \(w_{p}^{\prime}\) is an \(\alpha_{+}\) letter.
In this case \(w\) and \(w^{\prime}\) are of the form
\[w =w_{1}\cdots w_{p-1}\alpha_{-k}^{(-1)}\cdots\] \[w^{\prime} =w_{1}^{\prime}\cdots w_{p-1}^{\prime}\alpha_{l}^{(0)}\cdots\]
for some \(k,l\in[n]\). If \(k\neq l\), we would get a contradiction to (5) since \(\alpha_{-k}^{(-1)}\) is before \(\alpha_{l}^{(0)}\) in \(w\) but not in \(w^{\prime}\). So \(\alpha_{k}^{(0)}\) appears after \(\alpha_{-k}^{(-1)}\) in \(w\) and each letter between them is positive or \(\alpha_{-k}^{(0)}\). Just as before, \(\alpha_{-k}^{(0)}\) being between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) would either contradict property (1) or (5). So all letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) are positive. If there is some \(\beta_{+}\) letter between them, there will be a consecutive pair of the form \(\beta_{+}\alpha_{+}\), which would contradict property (1). Hence, all letters between \(\alpha_{-k}^{(-1)}\) and \(\alpha_{k}^{(0)}\) are \(\alpha_{+}\) letters. But this contradicts Lemma 4.6.
#### 7.3.2. Shi threshold
The Shi threshold arrangement in \(\mathbb{R}^{n}\) consists of the hyperplanes
\[X_{i}+X_{j}=0,1\]
for all \(1\leq i<j\leq n\). The arrangement obtained via the translation \(X_{i}=x_{i}+\frac{1}{2}\), which we call \(\mathcal{ST}_{n}\), has hyperplanes
\[x_{i}+x_{j}=-1,0\]
for all \(1\leq i<j\leq n\). We use the same method as before to study the regions of this arrangement by considering \(\mathcal{ST}_{n}\) as a sub-arrangement of \(\mathcal{C}_{n}\).
The hyperplanes in \(\mathcal{C}_{n}\) that are not in \(\mathcal{ST}_{n}\) are
\[2x_{i} =-2,-1,0\] \[x_{i}+x_{j} =-2\] \[x_{i}-x_{j} =-1,0,1\]
where \(1\leq i<j\leq n\). The moves that change the inequality corresponding to exactly one of these hyperplanes are the \(\mathcal{CT}\) moves as well as the move corresponding to \(x_{i}+x_{j}=-2\) for distinct \(i,j\in[n]\): swapping consecutive \(\beta_{+}\) and \(\alpha_{-}\) letters (along with their negatives).
Two sketches are in the same region of \(\mathcal{ST}_{n}\) if and only if they are related by a series of such moves and we call such sketches \(\mathcal{ST}\) equivalent. A sketch is called \(\mathcal{ST}\) maximal if it is greater (in the lexicographic order) than all sketches to which it is \(\mathcal{ST}\) equivalent. Hence the regions of \(\mathcal{ST}_{n}\) are in bijection with the \(\mathcal{ST}\) maximal sketches. The following result can be proved just as Theorem 7.5.
**Theorem 7.7**.: _A symmetric sketch is \(\mathcal{ST}\) maximal if and only if the following hold._
1. _If a_ \(\beta\)_-letter is followed by an_ \(\alpha\)_-letter, the_ \(\beta\)_-letter should be negative and the_ \(\alpha\)_-letter should be positive with different mod-values._
2. _If two_ \(\alpha\)_-letters and their corresponding_ \(\beta\)_-letters are both consecutive and of the same sign then the subscript of the first one should be greater._
3. _If the_ \(n^{th}\) _and_ \((n+1)^{th}\)__\(\alpha\)_-letters are consecutive, then so are the_ \((n-1)^{th}\) _and_ \(n^{th}\) _with the_ \(n^{th}\)__\(\alpha\)_-letter being positive. In such a situation, if the_ \((n-1)^{th}\)__\(\alpha\)_-letter is negative and the_ \((n-1)^{th}\) _and_ \(n^{th}\)__\(\beta\)_-letters are consecutive, the_ \((n-1)^{th}\)__\(\alpha\)_-letter should have a subscript greater than that of the_ \((n+1)^{th}\)__\(\alpha\)_-letter._
4. _If the_ \((2n-1)^{th}\) _and_ \((2n+1)^{th}\) _letters are both negative_ \(\beta\)_-letters and their corresponding_ \(\alpha\)_-letters are consecutive, the subscript of the_ \((2n-1)^{th}\) _letter should be greater than that of the_ \((2n+1)^{th}\)_._
_Hence the regions of \(\mathcal{ST}_{n}\) are in bijection with sketches of the form described above._
## 8. Concluding remarks
We end the paper with some open questions. Bernardi [6] has dealt with arbitrary deformations of the braid arrangement. The first (ambitious) problem is to generalize all the results in his paper to arbitrary deformations of all reflection arrangements. This is easier said than done! Bernardi proves that the number of regions is equal to the signed sum of
certain "boxed trees". So the first step is to generalize the notion of boxed trees to certain decorated forests and then prove the counting formula, this is a work in progress. For certain well-behaved arrangements called "transitive deformations" Bernardi establishes an explicit bijection between the regions and the corresponding trees, via sketches. We don't have trees for all deformations of reflection arrangements but, we do have sketches that are in bijection with regions of (extended) Catalan deformations.
The main motivation behind Bernardi's work is an interesting pattern concerning a certain statistic on labeled binary trees. Ira Gessel observed that the multivariate generating function for this statistic specializes to region counts of certain deformations of the braid arrangement. So a new research direction could be to try to define a statistic on non-nesting partitions (of all types) such that the associated generating function specializes to region counts.
Another aspect of Bernardi's work that has not been discussed in the present paper is the coboundary and Tutte polynomials. Using either the finite field method or the method inspired by statistical mechanics, one should get a closed form expression for these polynomials of the deformations we have considered. Moreover, the expression should be in terms of either sketches or non-nesting partitions.
Having a combinatorial model for the coefficients of the characteristic polynomial could be quite useful, especially for deriving various inequalities that they satisfy. For example, denote by \(C(m,n,j)\) the number of symmetric \(m\)-non-nesting partitions of size \(n\) with \(j\) positive compartments. Then the following inequalities are not difficult to prove:
1. \(C(m,n,j)\leq C(m,n+1,j)\)
2. \(C(m,n,j)\leq C(m,n+1,j+1)\)
3. \(C(m,n,j)\geq\sum_{k\geq j+1}{k\choose j}C(m,n,k)\).
A research direction here is to develop a case-free strategy to obtain more such information. For example, we know that the coefficients are unimodal; so, one could identify the peak in each case.
Recall the Raney numbers that are defined by
\[A_{n}(m,r):=\frac{r}{n(m+1)+r}{n(m+1)+r\choose n}\]
for all positive integers \(n,m,r\). The Catalan numbers are a special case of Raney numbers, obtained by setting \(m=r=1\) (a short verification is given after this paragraph). It was shown in [8] that the number of regions of the hyperplane arrangement
\[\{x_{i}=0\mid i\in[n]\}\cup\{x_{i}=2^{k}x_{j}\mid k\in[-m,m],1\leq i<j\leq n\}\]
is equal to \(n!A_{n}(m,2)\). Note that these arrangements define a GESA. A natural problem is to find a family of arrangements which is GESA and whose number of regions is \(n!A_{n}(m,r)\). One can use tuples of labeled Dyck paths to enumerate these regions, so one can try to apply techniques from this paper to find a statistic for these objects.
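As referenced above, setting \(m=r=1\) in the definition of the Raney numbers recovers the Catalan numbers; this is a standard computation (included here for convenience, not a claim from [8]):

\[A_{n}(1,1)=\frac{1}{2n+1}\binom{2n+1}{n}=\frac{(2n)!}{n!\,(n+1)!}=\frac{1}{n+1}\binom{2n}{n}=C_{n}.\]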
## 9. Acknowledgements
The authors are partially supported by a grant from the Infosys Foundation. The computer algebra system SageMath [19] provided valuable assistance in studying examples.
The set of reflecting hyperplanes of a finite Coxeter group is called a reflection arrangement, and such arrangements appear in many subareas of combinatorics and representation theory. We focus on problems concerning the regions of reflection arrangements and their deformations. Drawing on recent work of Bernardi, we show that, using moves and sketches, one can explicitly define bijections between the regions of these arrangements and certain non-nesting partitions. Based on this, we describe statistics on these partitions whose distribution is given by the coefficients of the characteristic polynomial. Finally, we study the threshold arrangement, a sub-arrangement of the type C arrangement, and its Catalan and Shi deformations.
2308.00031 | Reinforcement Learning for Generative AI: State of the Art,
Opportunities and Open Research Challenges | Generative Artificial Intelligence (AI) is one of the most exciting
developments in Computer Science of the last decade. At the same time,
Reinforcement Learning (RL) has emerged as a very successful paradigm for a
variety of machine learning tasks. In this survey, we discuss the state of the
art, opportunities and open research questions in applying RL to generative AI.
In particular, we will discuss three types of applications, namely, RL as an
alternative way for generation without specified objectives; as a way for
generating outputs while concurrently maximizing an objective function; and,
finally, as a way of embedding desired characteristics, which cannot be easily
captured by means of an objective function, into the generative process. We
conclude the survey with an in-depth discussion of the opportunities and
challenges in this fascinating emerging area. | Giorgio Franceschelli, Mirco Musolesi | 2023-07-31T18:00:02 | http://arxiv.org/abs/2308.00031v4 | Reinforcement Learning for Generative AI: State of the Art, Opportunities and Open Research Challenges
###### Abstract
Generative Artificial Intelligence (AI) is one of the most exciting developments in Computer Science of the last decade. At the same time, Reinforcement Learning (RL) has emerged as a very successful paradigm for a variety of machine learning tasks. In this survey, we discuss the state of the art, opportunities and open research questions in applying RL to generative AI. In particular, we will discuss three types of applications, namely, RL as an alternative way for generation without specified objectives; as a way for generating outputs while concurrently maximizing an objective function; and, finally, as a way of embedding desired characteristics, which cannot be easily captured by means of an objective function, into the generative process. We conclude the survey with an in-depth discussion of the opportunities and challenges in this fascinating emerging area.
Footnote 1: www.christies.com/features/a-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx

Footnote 2: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)

Footnote 3: We assume the following definitions: we refer to large language models as language models characterized by large size in terms of number of parameters; they are also usually based on transformer architectures. A foundation model is a large model that is trained on broad data of different types (textual, audio, image, video, etc.) at scale and is adaptable to a wide range of downstream tasks, following Bommasani et al. (2022).
## 1 Introduction
Generative Artificial Intelligence (AI) is gaining increasing attention in academia, industry, and among the general public. This has been apparent since a portrait based on Generative Adversarial Networks (Goodfellow et al., 2014) was sold for more than four hundred thousand dollars1 in 2018. Then, the introduction of transformers (Vaswani et al., 2017) for natural language processing and diffusion models (Sohl-Dickstein et al., 2015) for image generation has led to the development of generative models characterized by unprecedented performance, e.g., GPT-4 (OpenAI, 2023), LaMDA (Thoppilan et al., 2022), Llama 2 (Touvron et al., 2023), DALL-E 2 (Ramesh et al., 2022) and Stable Diffusion (Rombach et al., 2022), just to name a few. In particular, ChatGPT2, a conversational agent based on GPT-3 and GPT-4, is widely considered as a game-changing product; its introduction has indeed accelerated the development of foundation models. One of the characteristics of ChatGPT and other state-of-the-art large language models (LLMs) and foundation models3 is the use of Reinforcement Learning (RL) in order to align their production to human values (Christiano et al., 2017), so as to mitigate biases and to avoid mistakes and potentially malicious uses.
In general, RL offers the opportunity to use non-differentiable functions as rewards (Ranzato et al., 2016). Examples include chemistry (Vanhaelen et al., 2020) and dialogue systems (Young et al., 2013). We believe that RL is a promising solution for designing efficient and effective generative AI systems. In this article, we will explore this research space, which is, after all, largely unexplored. In particular, the contributions of this work can be summarized as follows: we first survey the current state of the art at the interface (and intersection) between generative AI and RL; we then discuss the opportunities and challenges related to the application of RL to generative AI research, outlining a potential research agenda for the coming years.
Several works have already surveyed deep generative learning (e.g., Franceschelli and Musolesi, 2021; Foster, 2023), deep reinforcement learning (e.g., Lazaridis et al., 2020; Sutton and Barto, 2018), its societal impacts (Whittlestone et al., 2021), and applications of RL for specific generative domains (e.g., Fernandes et al., 2023). To the best of our knowledge, this is the first survey on the applications (and implications) of RL applied to generative deep learning.
The remainder of the paper is structured as follows. First, we introduce and review key concepts in generative AI and RL (Section 2). Then, we discuss the different ways in which RL can be used for generative tasks, both considering past works and suggesting future directions (Section 3). Finally, we conclude the survey by discussing open research questions and analyzing future research opportunities (Section 4).
## 2 Preliminaries
### Generative Deep Learning
We will assume the following definition of _generative model_(Foster, 2023): given a dataset of observations \(X\), and assuming that \(X\) has been generated according to an unknown distribution \(P_{data}\), a generative model \(P_{model}\) is a model that can mimic \(P_{data}\). By sampling from \(P_{model}\), observations that appear to have been drawn from \(P_{data}\) can be generated. Generative deep learning consists in the application of deep learning techniques to learn \(P_{model}\).
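To make the definition concrete, consider the following deliberately tiny sketch (our own illustration, not taken from any of the cited works): a one-dimensional Gaussian, fit by maximum likelihood, plays the role of \(P_{model}\), and sampling from it produces observations resembling draws from \(P_{data}\).

```python
import numpy as np

# Toy dataset X, assumed to have been generated by an unknown P_data.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=1.5, size=1000)

# Fit P_model by maximum likelihood; for a Gaussian family this
# reduces to taking the sample mean and standard deviation.
mu_hat, sigma_hat = X.mean(), X.std()

# Sampling from P_model yields observations that appear to have
# been drawn from P_data.
new_observations = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(new_observations)
```

Deep generative models replace the Gaussian with a neural network and maximum likelihood with the training schemes discussed below, but the mimic-and-sample structure is the same.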
Several families of generative deep learning techniques have been proposed in the last decade, e.g., Variational Autoencoders (VAEs) (Kingma and Welling, 2014; Rezende et al., 2014), Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), autoregressive models like Recurrent Neural Networks (RNNs) (Cho et al., 2014; Hochreiter and Schmidhuber, 1997), transformers (Vaswani et al., 2017), and denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020). These models and architectures aim to approximate \(P_{data}\) by means of self-supervised learning, i.e., by minimizing a reconstruction error when trying to reproduce real examples from \(X\). The only exceptions are GANs, which aim to approximate \(P_{data}\) using adversarial learning, i.e., by maximizing the predicted probability that the outputs were generated by \(P_{data}\). We refer the interested reader to Franceschelli and Musolesi (2021) for a deeper analysis of the training and sampling processes at the basis of these solutions. Although highly effective for a variety of tasks, the outputs generated by these models do not always satisfy the desired properties. This happens for a variety of
reasons. In fact, specific objectives cannot always be cast as loss functions; and providing carefully designed datasets is typically expensive. Few-shot learning (Brown et al., 2020), prompt engineering (Strobelt et al., 2023) and fine-tuning (Dodge et al., 2020) are potential solutions to these problems. We will discuss these issues in detail in the following sections.
### Deep Reinforcement Learning
RL is a machine learning paradigm that consists in learning an action based on a current representation of the environment in order to maximize a numerical signal, i.e., the _reward_ over time (Sutton & Barto, 2018). More formally, at each time step \(t\), an _agent_ receives the current _state_ from the _environment_, then it performs an _action_ and observes the reward and the new state. Figure 1 summarizes the process. The learning process aims to teach the agent to act in order to maximize the _cumulative return_, i.e., a discounted sum of future rewards. Deep learning is also used to learn and approximate a _policy_, i.e., the mapping from states to action probabilities, or a _value function_, i.e., the mapping from states (or state-action pairs) to expected cumulative rewards. In this case, we refer to it as deep reinforcement learning. Several algorithms have been proposed to learn a value function from which it is possible to induce a policy (e.g., DQN (Mnih et al., 2013) and its variants (van Hasselt et al., 2016; Schaul et al., 2016; Wang et al., 2016)), or to directly learn a policy (e.g., A3C (Mnih et al., 2016), DDPG (Lillicrap et al., 2016), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017)). We refer the interested readers to Sutton and Barto (2018) for a comprehensive introduction to the topic.
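Schematically, the interaction loop and the cumulative return can be written as follows (a generic sketch; `env` and `policy` are placeholders with a gym-like interface, not a specific library API):

```python
def discounted_return(rewards, gamma=0.99):
    # Cumulative return: the discounted sum of future rewards.
    return sum(gamma ** k * r for k, r in enumerate(rewards))

def run_episode(env, policy, max_steps=100):
    # Canonical loop: the agent receives the current state, performs
    # an action, and observes the reward and the new state.
    state = env.reset()
    rewards = []
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        rewards.append(reward)
        if done:
            break
    return discounted_return(rewards)
```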
Figure 1: The canonical reinforcement learning framework.

The RL community has developed a variety of solutions to address the specific theoretical and practical problems emerging from this simple formulation. For example, if the reward signal is not known, inverse reinforcement learning (IRL) (Ng & Russell, 2000) is used to learn it from observed experience. Intrinsic motivation (Singh et al., 2004; Linke et al., 2020), e.g., curiosity (Pathak et al., 2017) can be used to deal with sparse rewards and encourage the agent to explore more. Imagination-based RL (Ha & Schmidhuber, 2018; Hafner et al., 2020) is a solution that allows to train an agent, reducing at the same time the need for interaction with the environment. Hierarchical RL (Pateria et al., 2021) allows to manage more complex problems by decomposing them into sub-tasks and working at different levels of abstraction. RL is not only used for training a single agent, but also in multi-agent scenarios (Zhang et al., 2021).
## 3 Generative RL
In the following, we will discuss the state of the art in RL for generative learning considering three classes of solutions, which are summarized in Table 1: RL as an alternative solution for output generation without specified objectives; RL as a way for generating output while maximizing an objective function at the same time; and, finally, RL as a way of embedding desired characteristics, which cannot easily be captured by means of an objective function, into the generative process.
| Goal | Reward | Advantages | Limitations |
| --- | --- | --- | --- |
| Mere generation | GAN's discriminative signal; log-likelihood of real or predicted targets; constraint satisfaction | Model domains defined by non-differentiable objectives; adapt GANs to sequential tasks; can implement RL strategies, e.g., hierarchical RL | Learning without supervision is hard; pre-training can prevent an appropriate exploration |
| Objective maximization | Test-time metrics; countable desired or undesired characteristics; quantifiable properties | Optimize a generator from a specific domain towards desirable sub-domains; reduce the gap between training and evaluation | Not every desirable property is quantifiable or easy to get; Goodhart's law |
| Improving not easily quantifiable characteristics | Output of a model trained to reproduce human or AI feedback about non-quantifiable properties (e.g., helpfulness, appropriateness, creativity) | Address the alignment problem; require preferences between candidates instead of defining a mathematical measure of the desired property | Getting user preferences is expensive; users might misbehave, disagree, or be biased |

Table 1: Summary of the three purposes for using RL with generative AI, considering the used rewards, their advantages, and their limitations.
### RL for Mere Generation
#### 3.1.1 Overview
The simplest approach is RL for _mere_ generation. In fact, due to its Markovian nature, RL can be used as a solution to the generative modeling problem in the case of sequential tasks (Bachman & Precup, 2015), e.g., text generation or stroke painting. The generative model plays the role of the agent. The current version of the generated output represents the state. Actions model how the state can be modified, e.g., which token to append or which change to apply to a picture. Finally, the reward is an indicator of the quality of the generated output. Figure 2 summarizes the entire process.
It is possible to identify three fundamental design aspects: the implementation of the generative agent itself, e.g., a diffusion model or a transformer; the definition of the dynamics of the system, i.e., the transition from one state to another; and the choice of the reward structure. The first two depend on the task to be solved, e.g., music generation with an LSTM composing one note after the other, or painting with a CNN superimposing subsequent strokes. The third one is instead responsible for the actual learning. While the reward can be structured so as to represent the classic supervised target, it also provides the designers with the opportunity of using a more diverse and complex set of reward functions, especially non-differentiable ones (which cannot be used in supervised learning due to the impossibility of computing their gradient for backpropagation).
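As a minimal illustration of this setup (our own sketch, not the implementation of any specific paper; `model.initial_state`, `model.update`, and `reward_fn` are placeholder interfaces), REINFORCE can train a sequential generator against an arbitrary, possibly non-differentiable reward computed on the finished output:

```python
import torch

def reinforce_loss(model, reward_fn, max_len=20):
    # State: the partially generated sequence; action: which token
    # to append next.
    state = model.initial_state()
    tokens, log_probs = [], []
    for _ in range(max_len):
        dist = torch.distributions.Categorical(logits=model(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        tokens.append(action.item())
        state = model.update(state, action)
    # reward_fn may be any black-box score of the finished output;
    # REINFORCE never needs a gradient through it.
    reward = reward_fn(tokens)
    return -reward * torch.stack(log_probs).sum()
```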
Figure 2: The reinforcement learning framework for generative modeling.

The first example we consider is SeqGAN (Yu et al., 2017). Typically, GANs cannot be used for sequential tasks because the discriminative signal, i.e., whether the input looks real or not, is only available after the sequence is completed. SeqGAN circumvents this problem by using RL, which allows to learn from rewards received further in the future as well. Indeed, SeqGAN exploits the discriminative signal as the actual reward. The approach itself is based on a very simple policy approximation algorithm, namely REINFORCE (Williams, 1992). A similar approach is also used in MaskGAN (Fedus et al., 2018), where the generator learns with in-filling (i.e., by masking out a certain amount of words and then using the generator to predict them) through actor-critic learning (Sutton, 1984). Notably, hierarchical RL can also be used: for example, LeakGAN (Guo et al., 2018) relies on a generator composed of a manager, which receives _leaked_ information from the discriminator, and a worker, which relies on a goal vector as a conditional input from the manager. Since SeqGAN might produce very sparse rewards, alternative strategies have been proposed. Shi et al. (2018) suggest to replace the discriminator with a reward model learned with IRL on state-action pairs, so that the reward is available at each timestep (together with an entropy regularization term). A more complex state composed of a context embedding can also be used (Li et al., 2019). Instead, Li et al. (2017) rely on a variation of SeqGAN: it uses Monte Carlo methods to get rewards at each timestep. In addition, the authors also suggest to alternate RL with a "teacher", i.e., the classic supervised training. This helps deal with tasks like text generation where the action space (i.e., the set of possible words or sub-words) is too large to be consistently explored using RL alone. Another solution to this problem is NLPO (Ramamurthy et al., 2023), which is a parameterized-masked extension of PPO (Schulman et al., 2017) that restricts the action space via top-\(p\) sampling, i.e., by only considering the smallest possible set of actions whose probabilities have a sum greater than \(p\). TrufLL (Martin et al., 2022) uses top-\(p\) sampling as well; however, it restricts the action space by means of a pre-trained task-agnostic model _before_ applying policy gradient with PPO. Similarly, AEDQN (Zahavy et al., 2018) reduces the number of possible actions through an action elimination network; once the admissible action set is obtained, DQN is then used to learn an agent from such a set.
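The top-\(p\) restriction of the action space used by NLPO and TrufLL can be sketched as a mask over the token logits (a generic illustration of the idea, not the libraries' code; a one-dimensional logits tensor is assumed):

```python
import torch

def top_p_mask(logits, p=0.9):
    # Keep the smallest set of actions whose probabilities sum to
    # more than p; all other actions are masked out.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = probs.sort(descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    # Index of the first position where the cumulative mass exceeds
    # p (that position is kept as well).
    cutoff = int((cumulative > p).float().argmax())
    keep = sorted_idx[: cutoff + 1]
    masked = torch.full_like(logits, float("-inf"))
    masked[keep] = logits[keep]
    return masked
```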
Another reason to use RL is to take advantage of its inherent properties. For example, GOLD (Pang and He, 2021) is an algorithm that substitutes self-supervised learning with off-policy RL and importance sampling. It uses real demonstrations, which are stored in a replay buffer; the reward corresponds to either the sum or the product of the action probabilities over the sampled trajectories, i.e., of each single real token according to the model. While it can be considered close to a self-supervised approach, off-policy RL with importance sampling allows up-weighting actions with high (cumulative) return and actions preferred by the current policy, encouraging the model to focus on in-distribution examples.
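A caricature of this kind of off-policy weighting is sketched below (a simplification of GOLD's objective that omits, e.g., its importance weights; `model.log_prob` is a placeholder returning per-token log-probabilities of a stored demonstration):

```python
import torch

def gold_style_loss(model, demo_tokens):
    # Per-token log-probabilities of a real demonstration under the
    # current policy (demonstrations act as a replay buffer).
    log_probs = model.log_prob(demo_tokens)  # shape: [T]
    probs = log_probs.detach().exp()
    # Return-to-go estimated as the product of the remaining action
    # probabilities (one of the two variants described above).
    reward_to_go = probs.flip(0).cumprod(0).flip(0)
    # Up-weight demonstration tokens with high return; the weights
    # themselves carry no gradient.
    return -(reward_to_go * log_probs).sum()
```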
RL is also an effective solution for learning in domains in which a differentiable objective is difficult or impossible to define. RL-Duet (Jiang et al., 2020) is an algorithm for online accompaniment generation. Learning how to produce musical notes according to a given context is a complex task: RL-Duet first learns a reward model that considers both inter-part (i.e., with counterpart) and intra-part (i.e., on its own) harmonization. Such model is composed by an ensemble of networks trained to predict different portions of music sheets (with or without human counter-part, and with or without machine context). Then, the generative agent is trained to maximize this reward by means of an actor-critic architecture with generalized advantage estimator (GAE) (Schulman et al., 2016). CodeRL (Le et al., 2022) performs code generation through a pre-trained model and RL. In particular, the model is fine-tuned with policy gradient in order to maximize the probability of passing unit tests: it receives a (sparse) reward quantifying if (and how) the generated code has passed the test for the assigned task. In addition, a critic learns a (dense) signal to predict the compiler output. The model is then trained to maximize both signals considering a baseline obtained with a greedy decoding strategy.
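A unit-test-based reward of this kind can be sketched as follows (a generic illustration in the spirit of CodeRL, not its implementation; sandboxing of untrusted code is deliberately omitted):

```python
def unit_test_reward(generated_code: str, tests: list[str]) -> float:
    # Execute the candidate program, then each test assertion.
    # The reward is the fraction of tests passed; any error counts
    # as a failure.
    namespace: dict = {}
    try:
        exec(generated_code, namespace)
    except Exception:
        return 0.0
    passed = 0
    for test in tests:
        try:
            exec(test, namespace)
            passed += 1
        except Exception:
            pass
    return passed / len(tests) if tests else 0.0
```

For instance, `unit_test_reward("def add(a, b):\n    return a + b", ["assert add(1, 2) == 3"])` returns `1.0`.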
Another interesting application area is painting. Xie et al. (2012) suggest to model stroke painting as a Markov Decision Process, where the state is the canvas, and the actions are
the brushstrokes performed by the agent. Rewards calculated considering the location and inclination of the strokes are then used to train the agent. For instance, Doodle-SDQ (Zhou et al., 2018) fine-tunes a pre-trained sketcher with Double DQN (van Hasselt et al., 2016) and a reward that is calculated by evaluating how well a sketch reproduces a target image at pixel, movement, and color levels. Huang et al. (2019) use a discriminator trained to recognize real canvas-target image pairs to derive a corresponding reward. Instead, Singh and Zheng (2021) train a painting policy that operates at two different levels: foreground and background. Each of them uses a discriminator; in addition, they adopt a focus reward measuring the degree of indistinguishability of two object features. Finally, Intelli-Paint (Singh et al., 2022) learns a painting policy with deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) using four different types of rewards: a discriminator signal on canvas-image pairs, two penalties for the color and position of consecutive strokes, and the same semantic guidance proposed by Singh and Zheng (2021).
#### 3.1.2 Discussion
RL can represent an alternative method for deriving generative models, especially if the target loss is non-differentiable. It allows for the adaptation of known generative strategies, e.g., GANs, to tasks for which the traditional techniques are not suitable, e.g., in text generation. In addition, it can be applied to domains in which feasibility and correctness (e.g., running code as above) are essential dimensions to consider. RL can also be used to derive more complex generative strategies (e.g., through hierarchical RL).
It is possible to identify some limitations of the proposed solutions. Learning without supervision is a hard task, especially when the action space is large. For this reason, pre-trained generative models are often used as a starting point. This can cause the agent to initially focus on highly probable tokens, increasing their associated probabilities and, because of that, failing to explore different solutions (i.e., by only moving the probability mass of the already most probable tokens) (Choshen et al., 2020). These problems can be avoided through variance reduction techniques (e.g., incorporating baselines and critics) and exploration strategies (Kiegeland & Kreutzer, 2021).
### RL for Objective Maximization
#### 3.2.1 Overview
Since RL allows us to use any non-differentiable function for modeling the rewards, one might suspect that there may be better solutions than simply replicating the behavior of a self-supervised learning loss. Indeed, there is a clear mismatch between how the models are trained (i.e., on losses) and how they are evaluated (i.e., on metrics) (Ranzato et al., 2016): an emerging line of research is focusing on the use of metrics as reward functions for generative learning.
RL for quantity maximization has been mainly adopted in text generation, especially for dialogue and translation. In addition to exposure bias mitigation, it allows for replacing classic likelihood-based losses with metrics used at inference time. A pioneering work is the one by Ranzato et al. (2016), where RL is adopted to directly maximize BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores. To deal with the size of the action space, the authors introduce MIXER, a variant of REINFORCE algorithm that uses incremental
learning (i.e., an algorithm based on an optimal pre-trained model according to ground truth sequences) and combines reward maximization with classic cross-entropy loss by means of an annealing schedule. In this way, the model starts with preexisting knowledge, which is preserved through the classic loss, while aiming at exploring alternative but still probable solutions, which should increase score at test time. A similar approach is also used by Google's neural machine translation system (Wu et al., 2016). BLEU score is used as the reward, while fine-tuning a pre-trained neural translator with a mixed maximum likelihood and expected reward objective. Bahdanau et al. (2017) consider an actor-critic algorithm for machine translation, with the critic conditioned on the target text, and the pre-trained actor fine-tuned with BLEU as the reward. Paulus et al. (2018) suggest to learn to perform text summarization by using self-critical policy training (Rennie et al., 2017), where the reward associated with the action that would have been chosen at inference time is used as baseline. ROUGE score is considered as the reward, and linearly mixed with teacher forcing (Williams and Zipser, 1989), i.e., classic supervised learning. Scores alternative to ROUGE have been proposed as well, e.g., ROUGESal and Entail both described in Pasunuru and Bansal (2018). The former up-weighs the salient sentences or words detected via a key-phrase classifier. The latter rewards logically-entailed summaries through an entailment classifier. They are then used alternatively in subsequent mini-batches to train a Seq2Seq model (Sutskever et al., 2014) by means of REINFORCE. Finally, Zhou et al. (2017) consider BLEU score to train a dialogue system on top of collected human interactions with offline RL. An additional dialogue-level reward function (measuring the number of proposed API calls) is also used. Recently, the RL4LM library (Ramamurthy et al., 2023) started offering many of these metrics as rewards, thus facilitating their use for LM training or fine-tuning. Different families of solutions are considered, i.e., \(n\)-grams overlapping (ROUGE, BLEU, SacreBLEU (Post, 2018), METEOR (Lavie and Agarwal, 2007)), model-based methods (such as BertScore (Zhang et al., 2020) or BLEURT (Sellam et al., 2020)), task-specific metrics, and perplexity. Notably, RL4LM also allows to balance such metrics with a KL-divergence minimization with respect to a pre-trained model.
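The self-critical baseline mentioned above admits a compact sketch (a generic rendering of the idea of Rennie et al.; `metric` stands for any test-time score such as BLEU or ROUGE, and the decoding helpers are placeholders):

```python
import torch

def self_critical_loss(model, source, metric):
    # Sample a sequence and keep its total log-probability.
    sampled, log_prob = model.sample(source)
    # Greedy decoding yields the baseline: the reward the model
    # would obtain at inference time.
    with torch.no_grad():
        greedy = model.greedy_decode(source)
    advantage = metric(sampled) - metric(greedy)
    # Reinforce samples that beat the inference-time baseline,
    # suppress those that do not.
    return -advantage * log_prob
```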
Test-time metrics are not the only quantities that can be maximized through RL. For example, Lagutin et al. (2021) suggest considering the count of 4-gram repetitions in the generated text, to reduce the likelihood of undesirable results. The combination of these techniques and classic self-supervised learning helps learn both _how to write_ and _how not to write_. Li et al. (2016) train a Seq2Seq model for dialogue by rewarding conversations that are informative (i.e., which avoid repetitions), interactive (i.e., which reduce the probability of answers like "I don't have any idea" that do not encourage further interactions), and coherent (i.e., which are characterized by high mutual information with respect to previous parts of the conversation). Sentence-level cohesion (i.e., compatibility of each pair of consecutive sentences) and paragraph-level coherence (i.e., compatibility among all sentences in a paragraph) can be achieved by maximizing the cosine similarity between the encoded version of the relative text, with the encoders trained so that the entire discriminative models are able to distinguish between real and generated pairs (Cho et al., 2019). A distance-based reward can instead guide a plot generator towards reaching desired goals. Tambwekar et al. (2019) train an agent working at event level (i.e., a tuple with the encoding of a verb, a subject, an object, and a fourth possible noun) with REINFORCE to minimize the distance between the generated verb and the goal verb. Other domain-specific rewards are used by
Yi et al. (2018), where two distinct generative models produce poetry by maximizing fluency (i.e., MLE on a fixed language model), coherence (i.e., mutual information), meaningfulness (i.e., TF-IDF), and overall quality from a learned classifier. In addition, the two models also learn from each other: the worse-performing one can be trained on the output produced by the other, or its distribution can be modified in order to better approximate the other.
Another popular technique is hierarchical RL: for example, Peng et al. (2017) use it to design a dialogue system able to perform composite tasks, i.e., sets of subtasks that need to be performed collectively. A high-level policy, trained to maximize an extrinsic reward directly provided by the user after each interaction, selects the sub-tasks. Then, "primitive" actions to complete the given sub-task are chosen according to a lower-level policy. A global state tracker on cross-subtask constraints is employed in order to provide the RL model with an intrinsic reward measuring how likely a particular subtask is to be completed. Finally, ILQL (Snell et al., 2023) learns a state-action and a state-value function that are used to perturb a fixed LLM, rather than directly fine-tuning the model itself. This allows to preserve the capabilities of the given pre-trained language model, while still maximizing a specific utility function.
While text generation is one of the areas that have attracted most of the attention of researchers and practitioners in the past years, RL with quantity maximization has been applied to other sequential tasks as well. An important line of research (Jaques et al., 2016, 2017, 2017) consists of fine-tuning a pre-trained sequence predictor with imposed reward functions, while preserving the learned properties from data. For instance, a pre-trained note-based RNN can represent the starting point for the Q-network in DQN. A reward given by the probability of the chosen token according to the original model (or based on the inverse of the KL divergence) and one based on music theory rules (e.g., that all notes must belong to the same key) are used to fine-tune the model. Another possibility is to extend SeqGAN to domain-specific reward maximization, as in ORGAN (Guimaraes et al., 2017). ORGAN linearly combines the discriminative signal with desired objectives, also dividing the reward by the number of repetitions made, in order to increase diversity in the result. Music generation can then be performed by considering tonality and ratio of steps as rewards; solubility, synthesizability and drug-likenesses are instead adopted to perform molecule generation as a sequential task, i.e., by considering a string-based representation of molecules (by means of SMILES language (Weininger, 1988a)). While the original work considered RNN-based models, transformer architectures can be used as well (Li et al., 2022).
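The ORGAN-style reward just described can be sketched as a convex combination of the adversarial signal and a domain objective, penalized by repetitions (our own schematic with placeholder scoring functions):

```python
def organ_reward(sample, discriminator, objective, n_repetitions, lam=0.5):
    # Linear combination of the discriminative signal and the
    # domain-specific objective (e.g., tonality or solubility).
    combined = lam * discriminator(sample) + (1 - lam) * objective(sample)
    # Dividing by the repetition count discourages producing the
    # same output over and over, increasing diversity.
    return combined / max(1, n_repetitions)
```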
Molecular generation is indeed one of the most explored tasks in generative RL. While MolGAN (De Cao & Kipf, 2018) adapts ORGAN to graph-based generative models (Li et al., 2018) to directly produce molecular structures, the majority of research focuses on the simplified molecular-input line-entry system (SMILES) textual notation (Weininger, 1988b), so as to leverage the recent advancements in text generation. ReLeaSe (Popova et al., 2018) fine-tunes a pre-trained generator to maximize physical, biological, or chemical properties (learned by a reward model). Olivecrona et al. (2017) propose to fine-tune a pre-trained generator with REINFORCE so as to maximize a linear combination of a prior likelihood (to avoid catastrophic forgetting) and a user-defined scoring function (e.g., to match a provided query structure or to have predicted biological activity). REINVENT (Blaschke et al., 2020) also avoids generating molecules the model already produced (through a
memory that keeps track of the good scaffoldings generated so far). Atance et al. (2022) adopt REINVENT for the graph-based deep generative model GRAPHINVENT (Mercado et al., 2021) in order to directly obtain molecules that maximize desired properties, e.g., pharmacological activity. Instead, GENTRL (Zhavoronkov et al., 2019) generates kinase inhibitors relying on a variational autoencoder to reduce molecules to continuous latent vectors. Then REINFORCE is used to teach the decoder how to maximize three properties learned through self-organizing maps: activity of compounds against kinases; closeness to neurons associated with DDR1 inhibitors within the whole kinase map; and novelty of chemical structures. The average reward for the produced batch is assumed as a baseline to reduce variance. Notably, RL is used here for single-step generation (i.e., by means of a contextual bandit). Gaudin et al. (2019) propose to generate molecules maximizing their partition coefficient without any pre-training by working with a simplified language (Krenn et al., 2020); Thiede et al. (2022) suggest to use intrinsic rewards to better explore its solution space. GCPN (You et al., 2018) trains a graph-CNN to optimize domain-specific rewards and an adversarial loss (from a GAN-like discriminator) through PPO. Other tasks have been investigated as well. Nguyen et al. (2022) merge GAN and actor-critic in order to obtain a generator capable of producing 3D material microstructures with desired properties. Han et al. (2020) use DDPG to train an agent to design buildings (in terms of shape and position) so as to maximize several signals related to the performance and aesthetics of the generated block, e.g., solar exposure, collision, and number of buildings.
Finally, the use of techniques based on objective maximization can also be effective for image generation. Denoising Diffusion Policy Optimization (DDPO) (Black et al., 2023) can train or fine-tune a denoising diffusion model to maximize a given reward. It considers the iterative denoising procedure as a Markov Decision Process of fixed length. The state contains the conditional context, the timestep, and the current image; each action represents a denoising step; and the reward is only available for the termination state, when the final, denoised image is obtained. DDPO has therefore been used to learn how to generate images that are more compressed or uncompressed, minimizing or maximizing JPEG compression; more aesthetically-pleasing, by maximizing LAION score4; or more prompt-aligned, by maximizing the similarity between the embeddings of prompt and generated image description. Improving the aesthetics of the image while preserving the text-image alignment has also been done at the prompt level (Hao et al., 2022). A language model that given human input provides an optimized prompt can be trained with PPO to maximize both an aesthetic score (from an aesthetic predictor) and a relevance score (as CLIP embedding similarity) of the image generated from the given prompt.
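Viewed as an MDP, a policy-gradient update in the spirit of DDPO can be sketched as follows (a strongly simplified schematic, not the paper's implementation; the diffusion interface and reward function are placeholders):

```python
import torch

def ddpo_style_loss(diffusion, prompt, reward_fn, num_steps=50):
    # State: (conditioning context, timestep, current image);
    # action: one denoising step.
    x = diffusion.sample_prior()
    log_probs = []
    for t in reversed(range(num_steps)):
        x, log_p = diffusion.denoise_step(x, t, prompt)
        log_probs.append(log_p)
    # The reward is only available at the termination state, i.e.,
    # on the final, fully denoised image.
    reward = reward_fn(x, prompt)
    return -reward * torch.stack(log_probs).sum()
```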
Footnote 4: [https://laion.ai/blog/laion-aesthetics/](https://laion.ai/blog/laion-aesthetics/)
#### 3.2.2 Discussion
This opens up several new possibilities: generators can be adapted for particular domains or for specific problems; they can be built for tasks difficult to model through differentiable functions; and pre-trained models can be fine-tuned according to given requirements and specifications. Essentially, RL is not used only for mere generation, since it also allows for goal-oriented generative modeling. Any desired and quantifiable property can now be set as reward function, thus in a sense "teaching" a model how to achieve it. While research has
focused its attention on sequential tasks like text or music generation, other domains might be considered as well. As shown by Zhavoronkov et al. (2019), tasks not requiring multiple generative steps can be performed simply by reducing the RL problem to a contextual bandit one. In this way, RL can be considered as a technique for specific sub-domains, in a manner similar to neural style transfer (Gatys et al., 2016) or prompt engineering (Liu and Chilton, 2022).
We can identify possible drawbacks as well. Certain desired properties can be difficult to quantify, or the related measures can be expensive to compute, especially at run-time. This can lead to excessive computational time for training. While offline RL might alleviate this problem, it would require a collection of evaluated examples, thus eliminating the advantage of not needing a dataset and increasing the risk of exposure bias. Finally, a fundamental issue arises from using test-time metrics as objective functions: how should we evaluate the model we derive? In fact, according to the empirical Goodhart's Law (Goodhart, 1975), "when a measure becomes a target, it ceases to be a good measure". New metrics are then required, and a gap between training objective and test score might be inevitable.
### RL for Improving Not Easily Quantifiable Characteristics
#### 3.3.1 Overview
While test-time metrics as objectives reduce the gap between training and evaluation, they do not always correlate with human judgment (Chaganty et al., 2018). In these cases, using such metrics would not help obtain the desired generative model. Moreover, there might be certain qualities that do not have a corresponding metric because they are subjective, difficult to define, or simply not quantifiable. Typically, users only have an implicit understanding of the task objective, and, therefore, a suitable reward function is almost impossible to design: this problem is commonly referred to as the _agent alignment problem_ (Leike et al., 2018).
One of the most promising directions is reward modeling, i.e., learning the reward function from interaction with the user and then optimizing the agent through RL over such a function. In particular, Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017) makes it possible to use human feedback to guide policy gradient methods. A reward model is trained to associate a reward with a trajectory based on human preferences (so that the reward associated with the preferred trajectory is higher than that associated with the others). In parallel, a policy is trained on this signal using a policy gradient method, while the trajectories collected at inference time are used to obtain new human feedback to improve the model. Ziegler et al. (2019) apply RLHF to text continuation, e.g., to write positive continuations of text summaries. A pre-trained language model is used to sample text continuations, which are then evaluated by humans; a reward model is trained over such preferences; and finally, the policy is fine-tuned using KL-PPO (Schulman et al., 2017) in order to maximize the signal provided by the reward model. A KL penalty is used to prevent the policy from moving too far from its original version. Notably, these three steps can be performed once (offline case) or multiple times (online case).
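A minimal sketch of the reward-model training step underlying such pipelines is given below, assuming pairwise preferences and a Bradley-Terry-style objective; the exact architectures and loss details vary across the cited works.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, preferred, rejected):
    """Push the scalar score of the human-preferred trajectory above the score
    of the rejected one; minimizing this loss is maximum likelihood under a
    Bradley-Terry preference model."""
    r_pref = reward_model(preferred)   # shape: (batch,)
    r_rej = reward_model(rejected)     # shape: (batch,)
    return -F.logsigmoid(r_pref - r_rej).mean()
```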
Similarly, Stiennon et al. (2020) use RLHF to perform text summarization. The following three steps are repeated one or more times: human feedback collection, during which different summaries are generated for each sampled Reddit post and human evaluators are asked to rank them; reward model training on such preferences; and policy training with PPO with the goal of maximizing the signal from the reward model (still using a KL penalty). Wu et al. (2021) propose summarizing entire books with RLHF by means of recursive task decomposition, i.e., by first learning to summarize small sections of a book, then summarizing those summaries into higher-level summaries, and so on. In this way, the size of the texts to be summarized is smaller. This is more efficient in terms of generative modeling and human evaluation, since the samples to be judged are shorter. InstructGPT (Ouyang et al., 2022) fine-tunes GPT-3 (Brown et al., 2020) with RLHF so that it can follow written instructions. Compared with Stiennon et al. (2020), demonstrations of desired behavior are first collected from humans and used to fine-tune GPT-3 before actually performing RLHF. Then, a prompt is sampled and multiple model outputs are generated, with a human labeler ranking them. Such rankings are finally used to train the reward model. The latter is then utilized (together with a KL penalty) to train the actual RL model with PPO. In particular, this procedure is adopted in ChatGPT and GPT-4 (OpenAI, 2023), which are fine-tuned in order to be aligned with human judgment.
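The KL penalty mentioned above is commonly folded directly into the reward that PPO optimizes; a schematic version is shown below, where the coefficient value and the per-token placement are illustrative assumptions rather than a specific system's settings.

```python
def kl_shaped_reward(rm_score, logp_policy, logp_ref, beta=0.02):
    """Reward used during KL-penalized PPO fine-tuning: the reward-model score
    minus a penalty proportional to how far the policy's log-probability has
    drifted from the original (reference) model's on the same tokens."""
    return rm_score - beta * (logp_policy - logp_ref)
```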
Although all these methods consider human feedback regarding the "best" output for a given input (with "best" generally meaning appropriate, factual, respectful, or qualitative), more specific or different criteria are also used. Bai et al. (2022a) consider human preferences for helpfulness and harmlessness. Sparrow (Glaese et al., 2022) is trained to be helpful, correct, and harmless, with the three criteria judged separately so that three more efficient rule-conditional reward models are learned. In addition, the model is trained to query the web for evidence supporting provided facts; and again RLHF is used to obtain human feedback about the appropriateness of the collected evidence. Finally, Pardinas et al. (2023) use RLHF to fine-tune GPT-2 to write _haikus_ that maximize relevance to the provided topic, self-consistency, creativity, and form, while avoiding toxic content. In addition to text, RLHF has been used to better align text-to-image generation with human preferences. After collecting user feedback about text-image alignment, a reward model is learned to approximate such feedback, and its output is used to weight the classic loss function of denoising diffusion models (Lee et al., 2023).
While very effective, RLHF is not the only existing approach. When human ratings are available in advance for each piece of text, a reward model can be trained offline and then used to fine-tune an LLM (Bohm et al., 2019). Such a reward model can also be combined with classic MLE to effectively train a language model (Kreutzer et al., 2018) or used to prepend reward tokens to generated text, forming a replay buffer suitable for online, off-policy algorithms to unlearn undesirable properties (Lu et al., 2022). Since human ratings might be inaccurate, Nguyen et al. (2017) suggest simulating them by applying perturbations to automatically generated scores. Alternatively, a provided dataset of scored text allows for batch (i.e., offline) policy gradient methods to train a chatbot (Kandasamy et al., 2017). A very similar approach is also followed by Jaques et al. (2020), where offline RL is used to train a dialogue system on collected conversations (with relative ratings) filtered to avoid learning misbehavior. Other strategies can be implemented as well. RELIS (Gao et al., 2019) relies on a reward model learned from human-provided judgments, like the other systems discussed above; however, such a reward model is used to optimize a policy directly at inference time for the provided text. Instead of training a policy over multiple inputs and then exploiting it at inference time, it trains a different policy for each required input.
Another possibility is to use AI feedback instead of, or in addition to, the human one. Constitutional AI (Bai et al., 2022b) is a method to train a non-evasive and "harmless" AI assistant without any human feedback, relying only on a _constitution_ of principles to follow. In a first supervised stage, a pre-trained LLM is used to generate responses to prompts, and then to iteratively correct them to satisfy a set of principles; once the response is deemed acceptable, it is used to fine-tune the model. Then, RLHF is performed, with the only difference that feedback is provided by the model itself and not by humans. Liu et al. (2022) use RL to fine-tune a Seq2Seq model to generate knowledge useful for a generic QA model. The model is first re-trained on knowledge generated with GPT-3 (which is prompted to provide the knowledge required to answer a certain question). Then, RL is used to fine-tune the model so as to maximize an accuracy score, using knowledge generated by the model itself as a prompt. To avoid catastrophic forgetting, a KL penalty (with respect to the initial model) is introduced. RNES (Wu and Hu, 2018) is instead a method to train an extractive summarizer (i.e., a component that selects which sentences of a given text should be included in its summary) using a reward based on coherence. A model is trained to identify the appropriate next sentence composing a coherent sentence pair; then, such a signal is used to obtain immediate rewards while training the agent (with ROUGE as the reward for the final composition). Finally, Su et al. (2016) propose limiting requests for human feedback to cases in which the learned reward model is uncertain.
#### 3.3.2 Discussion
Reward modeling, i.e., learning the reward function from interaction with users, introduces a great level of flexibility in RL for generative AI. Generative models can be trained to produce content that humans consider appropriate and of sufficient quality, by aligning them with their preferences. This is useful and, in many situations, essential: a quantifiable measure might not exist, or the information needed to derive it might be hard to obtain. This methodology has already shown its intrinsic value in obtaining accurate, helpful, and useful text. In the same way, these techniques can be applied to other domains in which desired qualities are difficult to quantify or hard to express in a mathematical form, e.g., aesthetically pleasant or personalized (multimodal) content. A recap of the covered applications is reported in Table 2.
RLHF has proven to be a highly effective approach. However, getting user feedback can be incredibly expensive. Moreover, the users might misbehave, whether on purpose or not, be biased, or disagree with each other (Fernandes et al., 2023). For these reasons, other techniques for reward modeling might be considered. If human ratings are available in advance, a reward model can be derived from them and used in offline mode. Using AI itself to provide feedback is also an option. In addition, other techniques like IRL or cooperative IRL (Hadfield-Menell et al., 2016) can be applied to induce a reward model from human demonstrations.
It is possible to identify some limitations of the approaches discussed above. Wolf et al. (2023) show that, even if aligned, an LLM can still be prompted in ways that lead to undesired behavior. In particular, "jailbreaks" out of alignment can be obtained via single prompts, especially when asking the model to simulate malicious personas (Deshpande et al., 2023). This is more likely to happen with aligned models than with non-aligned ones because of the so-called _waluigi effect_: by learning to behave in a certain way, the model also learns its exact opposite (Nardo, 2023). More advanced approaches would be required to mitigate this problem and completely prevent certain undesired behaviors.
| Application | Reward Type | Papers |
|---|---|---|
| Building design | Performance and aesthetic metrics | (Han et al., 2020) |
| Chatbot | Discriminator signal at each \(t\) through MC methods | (Li et al., 2017) |
| Chatbot | Discriminator signal at each \(t\) through IRL | (Li et al., 2019) |
| Chatbot | Repetitive or useless answer penalty + mutual information | (Li et al., 2016) |
| Chatbot | Reward from user + likelihood of sub-task completion | (Peng et al., 2017) |
| Chatbot | BLEU + number of proposed API calls | (Zhou et al., 2017) |
| Chatbot | RLHF + KL penalty wrt original model | (Ouyang et al., 2022) |
| Chatbot | RLHF + KL penalty wrt original model | (OpenAI, 2023) |
| Chatbot | RLHF on helpfulness and harmlessness + KL penalty wrt original model | (Bai et al., 2022a) |
| Chatbot | RLHF on helpfulness, harmlessness, correctness + KL penalty wrt original model | (Glaese et al., 2022) |
| Chatbot | Collected human ratings | (Kandasamy et al., 2017) |
| Chatbot | Collected human ratings | (Jaques et al., 2020) |
| Chatbot | Learned reward model of human ratings | (Su et al., 2016) |
| Code generation | Result of unit tests | (Le et al., 2022) |
| Extractive summarization | Reward model from human ratings | (Gao et al., 2019) |
| Extractive summarization | Coherence ratings + ROUGE | (Wu and Hu, 2018) |
| Generic text generation | Discriminator signal | (Fedus et al., 2018) |
| Generic text generation | Discriminator signal at each \(t\) through IRL | (Guo et al., 2018) |
| Generic text generation | Sum or product of log-likelihood of tokens from target text | (Pang and He, 2021) |
| Generic text generation | 4-gram repetition penalty + log-likelihood of target output | (Lagutin et al., 2021) |
| Generic text generation | Discriminator signals on coherence and cohesion | (Cho et al., 2019) |
| Generic text generation | Specific utility function to maximize at inference time | (Snell et al., 2023) |
| Image generation | Compression or aesthetic or prompt alignment | (Black et al., 2023) |
| Knowledge generation | Accuracy score + KL penalty | (Liu et al., 2022) |
| Machine translation | BLEU + log-likelihood of target output | (Ranzato et al., 2016) |
| Machine translation | BLEU + log-likelihood of target output | (Wu et al., 2016) |
| Machine translation | Implicit task-based feedback from users | (Kreutzer et al., 2018) |
| Machine translation | Perturbed predicted human ratings | (Nguyen et al., 2017) |
| Microstructure generation | Adversarial loss + target properties | (Nguyen et al., 2022) |
| Molecule (graph) generation | Discriminator of chemical properties | (De Cao and Kipf, 2018) |
| Molecule (graph) generation | Pharmacological activity + prior likelihood | (Atance et al., 2022) |
| Molecule (graph) generation | Adversarial loss + desired properties | (You et al., 2018) |
| Molecule (graph) generation | Novelty + utility of inhibitors | (Zhavoronkov et al., 2019) |
| Molecule (text) generation | Discriminator + chemical properties | (Guimaraes et al., 2017) |
| Molecule (text) generation | Learned desired properties | (Popova et al., 2018) |
| Molecule (text) generation | Desired property + prior likelihood | (Olivecrona et al., 2017) |
| Molecule (text) generation | As above + penalty for repetitions | (Blaschke et al., 2020) |
| Molecule (text) generation | Partition coefficient | (Gaudin et al., 2019) |
| Molecule (text) generation | Desired property + intrinsic reward | (Thiede et al., 2022) |
| Music accompaniment | Log-likelihood for pre-trained models | (Jiang et al., 2020) |
| Music generation | Discriminator signal | (Yu et al., 2017) |
| Music generation | Music theory rules + log-likelihood for original model | (Jaques et al., 2016, 2017) |
| Music generation | Discriminator signal + tonality + ratio of steps | (Guimaraes et al., 2017) |
| Plot generation | Generated vs target verbs distance | (Tambwekar et al., 2019) |
| Prompt optimization | Aesthetic score + CLIP similarity | (Hao et al., 2022) |
| Poetry generation | Discriminator signal | (Yu et al., 2017) |
| Poetry generation | Fluency + coherence + meaningfulness + quality | (Yi et al., 2018) |
| Poetry generation | RLHF on relevance, consistency, creativity, form, toxicity | (Pardinas et al., 2023) |
| Stroke painting | Location and inclination of strokes | (Xie et al., 2012) |
| Stroke painting | Pixel, movement, color reproduction | (Zhou et al., 2018) |
| Stroke painting | Discriminator on canvas-target pairs | (Huang et al., 2019) |
| Stroke painting | Background vs foreground + focus | (Singh and Zheng, 2021) |
| Stroke painting | Two above + adjacent color/position | (Singh et al., 2022) |
| Text continuation | RLHF + KL penalty wrt original model | (Ziegler et al., 2019) |
| Text summarization | ROUGE + log-likelihood of target output | (Ranzato et al., 2016) |
| Text summarization | ROUGE + log-likelihood of target output | (Paulus et al., 2018) |
| Text summarization | ROUGESal + Entail | (Pasunuru and Bansal, 2018) |
| Text summarization | RLHF + KL penalty wrt original model | (Stiennon et al., 2020) |
| Text summarization | RLHF + KL penalty wrt original model | (Wu et al., 2021) |
| Text summarization | Reward model trained on human ratings | (Bohm et al., 2019) |
| Text-to-image generation | RLHF on text-image alignment | (Lee et al., 2023) |

Table 2: Summary of all the applications covered by past research in RL for generative AI, with the considered rewards and the relative references.
## 4 Conclusion
Reinforcement learning for generative AI has attracted huge attention after the recent breakthroughs in the area of foundation models and, in particular, large-scale language models. In this survey, we have investigated the state of the art, the opportunities, and the open challenges in this fascinating area. First, we have discussed RL for mere generation, where RL simply provides a suitable framework for domains that cannot be modeled by means of a well-defined, differentiable objective, while also reducing exposure bias. Then, we have considered RL for quantity maximization, where RL is used to teach a (commonly pre-trained) model how to maximize a numerical property. This closes the gap between what the model is optimized for and how it is evaluated, and also makes it possible to search for particular characteristics and sub-domains. Finally, we have analyzed RL for not easily quantifiable characteristics, where RL is used to align the model with human requirements and preferences that are not easily expressed in a mathematical form.
Since non-differentiable functions can be used as target objectives, RL allows for a broader adoption of generative modeling, taking into consideration a wide range of objectives, requirements, and constraints. Current and emerging solutions are characterized by the integration of a variety of RL mechanisms, such as IRL, hierarchical RL, or intrinsic motivation, just to name a few. On the other hand, the use of RL for generative AI introduces the problem of balancing exploitation and exploration, especially when dealing with a large action space; this results in the need to use pre-trained models or a mixed objective considering both rewards and classic self-supervision. In addition, the adoption of test-time metrics as reward functions might be problematic per se (see the so-called Goodhart's Law (Goodhart, 1975)), while reward modeling is prone to human biases and adversarial attacks. Many challenging problems are still open, such as the integration of techniques like IRL and multi-agent RL, and the robustness of these models, in particular for preventing "jailbreaks" out of alignment.
| Generative artificial intelligence (AI) is one of the most exciting developments in computer science of the past decade. At the same time, reinforcement learning (RL) has emerged as a very successful paradigm for a variety of machine learning tasks. In this survey, we discuss the state of the art, opportunities, and open research challenges of applying RL to generative AI. In particular, we discuss three types of applications: RL as an alternative way of generating without a specified objective; as a way of generating while maximizing an objective function; and, finally, as a way of embedding desired characteristics that cannot easily be captured by an objective function into the generative process. The survey concludes with an in-depth discussion of the opportunities and challenges in this fascinating emerging field. |
2309.08744 | Personalized Food Image Classification: Benchmark Datasets and New
Baseline | Food image classification is a fundamental step of image-based dietary
assessment, enabling automated nutrient analysis from food images. Many current
methods employ deep neural networks to train on generic food image datasets
that do not reflect the dynamism of real-life food consumption patterns, in
which food images appear sequentially over time, reflecting the progression of
what an individual consumes. Personalized food classification aims to address
this problem by training a deep neural network using food images that reflect
the consumption pattern of each individual. However, this problem is
under-explored and there is a lack of benchmark datasets with individualized
food consumption patterns due to the difficulty in data collection. In this
work, we first introduce two benchmark personalized datasets including the
Food101-Personal, which is created based on surveys of daily dietary patterns
from participants in the real world, and the VFN-Personal, which is developed
based on a dietary study. In addition, we propose a new framework for
personalized food image classification by leveraging self-supervised learning
and temporal image feature information. Our method is evaluated on both
benchmark datasets and shows improved performance compared to existing works.
The dataset has been made available at:
https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html | Xinyue Pan, Jiangpeng He, Fengqing Zhu | 2023-09-15T20:11:07 | http://arxiv.org/abs/2309.08744v1 | # Personalized Food Image Classification: Benchmark Datasets and New Baseline
###### Abstract
Food image classification is a fundamental step of image-based dietary assessment, enabling automated nutrient analysis from food images. Many current methods employ deep neural networks to train on generic food image datasets that do not reflect the dynamism of real-life food consumption patterns, in which food images appear sequentially over time, reflecting the progression of what an individual consumes. Personalized food classification aims to address this problem by training a deep neural network using food images that reflect the consumption pattern of each individual. However, this problem is under-explored and there is a lack of benchmark datasets with individualized food consumption patterns due to the difficulty in data collection. In this work, we first introduce two benchmark personalized datasets including the Food101-Personal, which is created based on surveys of daily dietary patterns from participants in the real world, and the VFN-Personal, which is developed based on a dietary study. In addition, we propose a new framework for personalized food image classification by leveraging self-supervised learning and temporal image feature information. Our method is evaluated on both benchmark datasets and shows improved performance compared to existing works. The dataset has been made available at: [https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html](https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html)
Food image classification, personalized classifier, image-based dietary assessment, self-supervised learning
## I Introduction
Food image classification is crucial for image-based dietary assessment, which aims to provide an accurate profile of foods consumed and their portion sizes based on an individual's habitual dietary intake [1]. Given the widespread use of mobile devices, many individuals now utilize food logging apps to track their daily food intake, aiding in maintaining a healthy diet over time [2, 3].
Although existing works [4, 5, 6, 7, 8, 9, 10] have demonstrated promising results using static food datasets, food image classification is much more challenging in real-world settings where data comes sequentially over time [11, 12, 13, 14, 15, 16]. The most recent work focuses on addressing this issue for each individual by designing a personalized food classifier [17, 18]. In such contexts, individuals capture food images in sequence, thereby documenting their dietary habits chronologically. We refer to this sequential data as a "food consumption pattern". A food consumption pattern typically exhibits unbalanced food distribution, diverse cooking styles, and previously unseen food classes over time [19]. The main objective of personalized food classification is to classify each food image as it appears sequentially over time in a food consumption pattern. This ensures enhanced classification accuracy tailored to a person's unique dietary progression. Fig. 1 shows an illustration of personalized food image classification, which learns the food classes appearing in a food consumption pattern over time. However, there exist two major challenges. The first is a lack of publicly available benchmark personalized food image datasets. This is mainly due to the difficulty in collecting food consumption patterns from different individuals over time. The second is a lack of exploration into learning sequential image data streams containing previously unseen food classes and associated contextual information from the food consumption pattern.
Our work aims to address both aforementioned challenges by creating benchmark datasets encapsulating personalized food consumption patterns and developing a novel personalized classifier to improve upon the performance of existing methods [17, 18, 20]. To address the first challenge, the lack of available datasets, we introduce two benchmark personalized food consumption datasets by leveraging two public food image datasets [5, 21] with food categories, proceeding as follows. For both datasets, we have short-term food consumption patterns from volunteers' input. We then extend and simulate different patterns using a method based on [22].
Existing personalized food classification methods [17, 18] store food image features extracted from pre-trained models and employ the nearest-class-mean classifier and nearest neighbor classifier to adapt to each individual's eating habits. The most recent work [23] further improves the performance by sharing food records across multiple food consumption patterns. Nonetheless, these approaches exhibit several limitations. Firstly, while processing each image in a consumption pattern, the pre-trained feature extractor remains static, unable to learn and update dynamically using new food images. Secondly, existing work only considers the short-term frequency of food occurrence, lacking exploration of temporal contextual information, which captures diet changes over time.
Fig. 1: An illustration of personalized food image classification. The objective is to train a personalized classifier based on food consumption patterns to improve food classification performance.
In this work, we introduce a personalized classifier that addresses all the aforementioned limitations. By enhancing the image feature extraction with self-supervised learning, our model updates dynamically with each new food image. Moreover, we enrich the temporal context by concatenating image features within a sliding window, facilitating a deeper consideration of image feature-based temporal nuances. The main contributions of our work can be summarized as follows:
* We introduce two new benchmark datasets for personalized food image classification including the **Food101-Personal** and the **VFN-Personal**, and we have made it open to the public.
* We propose a novel personalized classifier through feature extraction update using self-supervised learning and a sliding window technique to capture temporal contextual information based on image features.
## II Benchmark Datasets
In this section, we introduce two benchmark personalized datasets, Food101-Personal and VFN-Personal. While existing food image classification methods are trained on public food datasets [21, 5, 24], there is no publicly available personalized food image dataset, due to the challenges in obtaining a food consumption pattern for each individual that reflects their dietary habits over time. Our work addresses this gap by first collecting short-term food consumption patterns through surveys or dietary studies and then simulating long-term personalized food consumption patterns following the method in [22], where a modified Markov chain is used to capture temporal contextual information based on the provided initial short-term food consumption pattern.
**Food101-Personal:** We conducted an online survey using the Food-101 dataset [21], where participants were asked to simulate one week of food consumption patterns by selecting foods from the 101 classes in Food-101. We collected 20 participants' patterns, each with over 20 food records, and simulated long-term patterns using the method described in [22]. To develop a more representative benchmark, we cluster food images within each food class from the Food-101 dataset and employ a Gaussian distribution model as described in [22] to create intra-class dissimilarities within each class in a pattern. Overall, the benchmark includes 20 patterns with 300 images each and an average of 44 food classes per pattern.
**VFN-Personal:** For the VFN dataset [5], we conducted a dietary study from healthy participants aged 18 to 65 using the image-based dietary assessment system [25]. Participants captured images of foods they consumed for three days. We collected data from over 70 participants, retaining 26 short-term patterns which have at least 15 records each. Similar to the Food101-Personal dataset, we employed the method in [22] to simulate long-term food consumption patterns. Overall, the VFN-Personal dataset comprises 26 patterns, each containing 300 images and an average of 29 food classes per pattern.
## III Method
In this section, we introduce a novel method to improve the accuracy of personalized food image classification. Our approach consists of two key components: (1) employing self-supervised learning to update the feature extractor, as described in Section III-A, and (2) using a sliding window to capture multiple-image temporal information within a food consumption pattern, as explained in Section III-B.
### _Feature Extraction Using Self-supervised Learning_
One limitation of existing personalized food classification approaches is the fixed feature extractor, which is unable to update using new images in a food consumption pattern. In this paper, we address this issue by leveraging self-supervised learning [26, 27, 28, 29] to learn image features without ground truth labels. Our method is designed to be compatible with any self-supervised learning backbone.
To accommodate self-supervised learning in our scenario, where a large training batch is not feasible since new images typically arrive sequentially one by one, we apply the following techniques to create representative input batches.
**Group normalization:** In existing self-supervised learning with batch normalization, the error tends to increase rapidly as the batch size decreases. However, utilizing large batch sizes in the early time steps is not feasible in our scenario due to the limited number of food images. To tackle this issue, we replace batch normalization layers with group normalization layers [30], whose error is stable across different batch sizes, making them a more reliable alternative.
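As an illustration of this substitution, the sketch below (assuming a torchvision ResNet-50 backbone; the recursive replacement and the group count are our own choices, with the group count required to divide each layer's channel count) swaps every `BatchNorm2d` layer for a `GroupNorm` layer:

```python
import torch.nn as nn
from torchvision.models import resnet50

def bn_to_gn(module: nn.Module, groups: int = 32) -> nn.Module:
    """Recursively replace BatchNorm2d layers with GroupNorm layers, whose
    normalization statistics do not depend on the batch size (useful when
    images arrive one at a time)."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            bn_to_gn(child, groups)
    return module

backbone = bn_to_gn(resnet50())  # all ResNet-50 channel counts are divisible by 32
```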
**Random image sampling:** We employ a random image sampling technique as described in [31] to select input images for the self-supervised learning algorithm. Let \(t\) denote the current time step. Our objective is to randomly sample images from time steps before \(t\), rather than sampling them in consecutive temporal order. The input set of images can be denoted as \(I_{t}=[f_{a},f_{b},\dots],\ 1\leq a,b,\dots\leq t\), where \(a,b,\dots\) represent the sampling time steps.
**Dual Instance Learning:** To tackle the issues of class imbalance and intra-class variability in food classification, we propose to use a pair of images (\(f_{i,1}\), \(f_{i,2}\)) from each class \(i\), rather than employing two augmentations of the same image as inputs. The motivation is that different images from the same class should exhibit similar feature representations.
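A minimal sketch of how such pairs could be assembled from the images observed so far is given below; the data structure and the source of the class labels are illustrative assumptions.

```python
import random
from collections import defaultdict

def sample_dual_instances(history):
    """For each class with at least two images seen so far, draw two
    *different* images (rather than two augmentations of one image), since
    images of the same class should map to similar feature representations.

    history: list of (image, class_label) pairs observed up to time t.
    """
    by_class = defaultdict(list)
    for image, label in history:
        by_class[label].append(image)
    return [tuple(random.sample(images, 2))
            for images in by_class.values() if len(images) >= 2]
```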
### _Sliding Window_
Existing personalized food classification methods [17, 18] rely on a single image feature to classify images within individual consumption patterns. However, incorporating temporal contextual information based on multiple images within food consumption patterns is also important to help capture the unique diet characteristics of each individual. In this work, we propose to combine the single-image feature and multiple-image temporal information for classification, where the latter is captured by constructing sliding windows of concatenated image features.
Specifically, we first compute the single-image similarity score \(s^{b}_{t,N}\) to find the image that is most similar to the image to be classified. Given a new image at \(t=N\), with \(M\) food classes having appeared so far, we calculate \(s^{b}_{t,N}\) as the cosine similarity between the input image feature \(f_{N}\) and each previous image feature \(f_{t},\ t\in\{1,2,\dots,N-1\}\):
\[s^{b}_{t,N}=\frac{f_{t}^{T}f_{N}}{||f_{t}||_{2}||f_{N}||_{2}},\;1\leq t<N \tag{1}\]
where \(T\) denotes the transpose of a matrix. Since the same food class may appear multiple times before \(t=N\), we first take the maximum similarity among image features of the same class before \(t=N\), denoted as \(s^{b}_{m},\ 1\leq m\leq M\), and then apply a softmax to get \(s^{b^{\prime}}_{m}\), where \(m\) denotes the food class index and \(c_{t}\) denotes the food class at time step \(t\).
Each sliding window \(W\) is built by concatenating image features as follows:
\[W_{i}=([f_{i},f_{i+1},...,f_{i+k-1}],c_{i+k-1}),\;1\leq i\leq N-k+1 \tag{2}\]
where \(i\) denotes the sliding window index, \(k\) is the length of the window, and \(c_{i+k-1}\) represents the class label associated with the window \(W_{i}\). To find the \(W_{i}\) with the highest similarity to \(W_{N-k+1}\), which contains the current image to be classified, we apply the nearest neighbor classifier to calculate the cosine similarity among sliding windows as
\[s^{w}_{i,N-k+1}=\frac{W_{i}^{T}W_{N-k+1}}{||W_{i}||_{2}||W_{N-k+1}||_{2}} \tag{3}\]
For the food class-based similarity score, we take the maximum value among all windows belonging to the same food label, denoted as \(s^{w}_{m},1\leq m\leq M\). We then take the softmax of \(s^{w}_{m}\) and denote it as \(s^{w^{\prime}}_{m}\).
Finally, we combine the similarity scores \(s^{w^{\prime}}\) and \(s^{b^{\prime}}\) to obtain \(R_{m}\), which is computed as \(R_{m}=s^{b^{\prime}}_{m}(s^{w^{\prime}}_{m})^{\alpha},\;0\leq\alpha\leq 1\), where \(\alpha\) denotes the weight associated with \(s^{w^{\prime}}\), controlling the significance of the sliding window method in computing the final similarity score: the higher the \(\alpha\) value, the greater the significance. The final prediction is calculated as \(p_{t}=\arg\max_{m}\{R_{m}\}\), where \(t\) denotes the time step at which the image is to be classified.
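Putting Eqs. (1)-(3) and the combination rule together, a NumPy sketch of the scoring procedure might look as follows; the array layout and the handling of classes that do not yet have a full window are our own assumptions.

```python
import numpy as np

def combined_scores(features, labels, k=5, alpha=0.0025):
    """features: (N, d) array of image features f_1..f_N, with f_N the query.
    labels: class labels c_1..c_{N-1} of the N-1 past images."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    classes = sorted(set(labels))
    # s^b_m: per-class maximum cosine similarity of past features to f_N
    s_b = np.array([max(cos(features[t], features[-1])
                        for t in range(len(labels)) if labels[t] == m)
                    for m in classes])
    # s^w_m: per-class maximum similarity between past k-image windows and the
    # window ending at the query image (each window labeled by its last image)
    q = features[-k:].ravel()
    s_w = np.zeros(len(classes))  # classes without a full window keep score 0
    for i in range(len(features) - k):
        m = classes.index(labels[i + k - 1])
        s_w[m] = max(s_w[m], cos(features[i:i + k].ravel(), q))
    R = softmax(s_b) * softmax(s_w) ** alpha
    return classes[int(np.argmax(R))], R
```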
## IV Experiments
In this section, we evaluate our proposed methods by comparing with existing works on Food101-Personal and VFN-Personal datasets introduced in Section II. We also conduct an ablation study to demonstrate the effectiveness of each component in our proposed framework.
### _Benchmark Protocol_
Different from the general image classification task, which trains a model on training data and evaluates it on test data, there is no train/test split in a personalized dataset. Therefore, we propose the following evaluation protocol. Given a personalized dataset containing multiple food consumption patterns from different individuals, the personalized classifier is evaluated on each pattern one by one by assuming that (1) the data becomes available sequentially, and (2) the model is updated in an online scenario, _i.e._, the number of training epochs is 1. At each time step in a pattern, the model first makes a prediction on the new image as one of the food classes seen so far and then uses it for updating. The performance on each pattern is evaluated by calculating the cumulative mean accuracy at each time step as:
\[C\_accuracy(t)=\frac{1}{t}\sum_{\tau=1}^{t}\mathbb{1}(p_{\tau}=c_{\tau}) \tag{4}\]
where \(\mathbb{1}(\cdot)\) is an indicator function showing whether the current food prediction is correct or not, \(p_{\tau}\) denotes the prediction at time step \(\tau\) for a pattern, and \(c_{\tau}\) represents the class label at time step \(\tau\) for a pattern. The overall performance on the entire dataset is calculated as the mean accuracy over all the personalized food consumption patterns.
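For reference, Eq. (4) can be computed incrementally as in the short sketch below.

```python
def cumulative_accuracy(predictions, labels):
    """Eq. (4): mean accuracy over the first t predictions, for every t."""
    correct, curve = 0, []
    for t, (p, c) in enumerate(zip(predictions, labels), start=1):
        correct += (p == c)
        curve.append(correct / t)
    return curve
```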
### _Experiment Setup_
**Methods for comparison:** We employ SimSiam [28] and Barlow Twins [27] as self-supervised learning backbones, replacing batch normalization layers with group normalization layers [30] to handle small input batch sizes. We compare our method with existing methods including **CNN**[32], which uses a general fixed-class convolutional neural network trained on the ISIA-500 dataset [24]; **1-NN**[20], which is a one-nearest-neighbor method; **SVMIL**[33], a common incremental learning method that utilizes an SVM model and updates it based on a new image feature at every single time step; and **SPC**[18] and **SPC++**[17], which employ nearest-neighbor and nearest-class-mean classifiers with a fixed pre-trained model, incorporating a time-dependent model and weight optimization for classification.
Furthermore, we conduct an ablation study to demonstrate the effectiveness of each proposed component, including **Random Sampling (RS)**, the random sampling method illustrated in Section III-A; **Dual Instance Learning (DIL)**, the dual instance learning approach described in Section III-A; and **Sliding Window (SW)**, which employs the sliding window method to capture multiple-image temporal information, as explained in Section III-B.

Fig. 2: Overview of the sliding window method. We obtain the similarity score for each food class by using both the current image feature and sliding windows, denoted as \(s^{b}\) and \(s^{w}\), respectively. The final prediction is computed based on the combined similarity vector \(R\).
**Implementation detail:** We utilize ResNet-50 [34] pre-trained on the ISIA-500 dataset [24] as the backbone to extract image features in food consumption patterns. The batch size is set to 32, with 1 training epoch in the online scenario. For SimSiam, we use the SGD optimizer with a learning rate of 0.001 and a weight decay of 0.0001. For Barlow Twins, we utilize the LARS optimizer [35] with a learning rate of \(1\times 10^{-6}\) on normalization and bias layers and \(1\times 10^{-5}\) on other layers, along with a weight decay of \(1\times 10^{-6}\). For our sliding window **SW**, we empirically set \(\alpha=0.0025\) with a window size of \(k=5\). The method is applied when \(t\geq 50\).
### _Results and Discussion_
Table I shows the results of personalized food classification at selected time steps for the Food101-Personal and VFN-Personal datasets. The first part of Table I compares the classification performance of our proposed method with existing works. Among the existing works, **CNN**[32] consistently exhibits low accuracy over time, as it does not learn to classify new classes from the consumption patterns. **SVMIL** underperforms compared to **1-NN**, as it has only one new image to learn from at each time step and does not address the mini-batch learning issue. **1-NN**[20] shows inferior performance compared to **SPC**[18] because it does not consider the cold-start problem. **SPC++**[17] outperforms **SPC**[18] by taking into account the short-term frequency of food consumption. Our proposed method outperforms the existing works at most time steps by updating the image features during training on food consumption patterns over time and by exploiting multiple-image temporal information. Our method improves the classification accuracy by \(2.6\%\) and \(1\%\) on the **Food101-Personal** and **VFN-Personal** datasets, respectively.
The second and third parts of Table I show ablation studies of our proposed method with SimSiam and Barlow Twins as backbones, respectively. From the **RS+SPC++** and **DIL+SPC++** methods, it can be observed that both **RS** and **DIL** contribute nearly equally to the improvement of classification accuracy, indicating that they are equally effective at sampling input images for self-supervised learning. Integrating both methods (i.e., **RS+DIL+SPC++**) leads to a further improvement in classification accuracy, since it facilitates learning from a balanced class distribution and considers intra-class dissimilarity within a class and general image features, without memorizing the specific appearance order of images within a pattern. Moreover, integrating one of the sampling techniques with the **SW** method (i.e., **RS+SW** or **DIL+SW**) can further improve the classification performance, emphasizing the significance of identifying multiple-image temporal information in personalized food classification. Finally, integrating all the modules enables the model to achieve the best performance across all methods on both benchmark datasets for most time steps.

Fig. 3: Classification accuracy at each time step on the Food101-Personal dataset.
Fig. 3 shows the trends of classification accuracy at each time step for the different methods on the **Food101-Personal** dataset. In general, all methods in the comparison improve over time, except for **CNN**. Our proposed method shows a faster rate of improvement, especially after \(t=100\), as more multiple-image temporal information from the past becomes available.
## V Conclusion
In this paper, we focus on personalized food image classification. We first introduce two new benchmark datasets, Food101-Personal and VFN-Personal. Next, we propose a personalized food classifier that leverages self-supervised learning to enhance image feature extraction capabilities. We present two sampling methods, random sampling and dual instance learning, to minimize learning biases associated with sequential data, and propose a sliding window method to capture multiple-image temporal information for the final classification. Our method is evaluated on both benchmarks and shows promising improvements compared to existing work.
| Food image classification is a fundamental step of image-based dietary assessment, making automated nutrient analysis from food images possible. Many current methods train on generic food image datasets and do not reflect real-world food consumption patterns, in which food images appear sequentially over time, showing the progression of an individual's consumption. Personalized food classification attempts to solve this problem by training a deep neural network that reflects each individual's consumption pattern. However, due to the difficulty of data collection, this problem lacks benchmark data reflecting individualized dietary patterns. In this work, we first introduce two benchmark personalized datasets, including Food101-Personal, which is based on surveys of daily dietary patterns, and VFN-Personal, which is developed based on a dietary study. Furthermore, self-supervised learning and |
2309.07477 | Self-Supervised Prediction of the Intention to Interact with a Service
Robot | A service robot can provide a smoother interaction experience if it has the
ability to proactively detect whether a nearby user intends to interact, in
order to adapt its behavior e.g. by explicitly showing that it is available to
provide a service. In this work, we propose a learning-based approach to
predict the probability that a human user will interact with a robot before the
interaction actually begins; the approach is self-supervised because after each
encounter with a human, the robot can automatically label it depending on
whether it resulted in an interaction or not. We explore different
classification approaches, using different sets of features considering the
pose and the motion of the user. We validate and deploy the approach in three
scenarios. The first collects $3442$ natural sequences (both interacting and
non-interacting) representing employees in an office break area: a real-world,
challenging setting, where we consider a coffee machine in place of a service
robot. The other two scenarios represent researchers interacting with service
robots ($200$ and $72$ sequences, respectively). Results show that, even in
challenging real-world settings, our approach can learn without external
supervision, and can achieve accurate classification (i.e. AUROC greater than
$0.9$) of the user's intention to interact with an advance of more than $3$s
before the interaction actually occurs. | Gabriele Abbate, Alessandro Giusti, Viktor Schmuck, Oya Celiktutan, Antonio Paolillo | 2023-09-14T07:34:12 | http://arxiv.org/abs/2309.07477v1 | # Self-Supervised Prediction of the Intention to Interact with a Service Robot
###### Abstract
A service robot can provide a smoother interaction experience if it has the ability to _proactively_ detect whether a nearby user intends to interact, in order to adapt its behavior e.g. by explicitly showing that it is available to provide a service. In this work, we propose a learning-based approach to predict the probability that a human user will interact with a robot before the interaction actually begins; the approach is _self-supervised_ because after each encounter with a human, the robot can automatically label it depending on whether it resulted in an interaction or not. We explore different classification approaches, using different sets of features considering the pose and the motion of the user. We validate and deploy the approach in three scenarios. The first collects 3442 natural sequences (both interacting and non-interacting) representing employees in an office break area: a real-world, challenging setting, where we consider a coffee machine in place of a service robot. The other two scenarios represent researchers interacting with service robots (200 and 72 sequences, respectively). Results show that, even in challenging real-world settings, our approach can learn without external supervision, and can achieve accurate classification (i.e. AUROC greater than 0.9) of the user's intention to interact with an advance of more than 3 s before the interaction actually occurs.
keywords: Self-supervised learning, human-robot interaction, social robotics
## 1 Introduction
Many emerging applications of robots have the potential to assist humans in everyday-life tasks or to automate jobs in the future [1]. Examples include social robots offering assistance at receptions [2], in hospitality sectors [3], or at home [4]; navigation guidance in public spaces [5] or personal care [6]; and object delivery [7]. In such situations, robots should automatically understand the human intention to interact well before the interaction starts, so that they can be more proactive and offer relevant services.
The very initial phase of these interactions plays an important role in establishing an effective Human-Robot Interaction (HRI), in which the user first sees the robot and decides to approach and engage with it. When users are unfamiliar with the situation, e.g., because they enter a new environment and are unsure about the right action to take, the robot's behavior is crucial to determine if this approach phase yields a successful interaction and a good user experience [8].
Consider the everyday-life scenario of a skilled human receptionist operating in a bustling lobby. They can anticipate the arrival of a client by detecting cues in the client's movement and body language well before the client reaches the reception desk. In these circumstances, the receptionist welcomes the client without being distracted by other nearby people who are not interested in interacting. This behavior makes it clear to the client that the receptionist is indeed available for interaction and is the right person to approach for assistance. Although it might appear to be a pleasant but superfluous detail for a client who already knows what to do, this subtle behavior of the receptionist can reassure novel users who might be intimidated or confused in an unfamiliar situation.
To enable the widespread deployment and social acceptance of robots in everyday-life scenarios (see Fig. 1), they must develop similar skills, namely, anticipating and adapting to human intentions. Indeed, an effective service robot should have the following skills: \((i)\) keeping track of nearby people; \((ii)\) predicting when an approaching person intends to interact with it; and \((iii)\) reacting accordingly. In this paper, we use off-the-shelf tools to solve the first point and focus our contribution on the second skill, which is general and mostly independent of the type of robot. Once the intention of the user has been detected, reaction strategies can be designed according to the specific robot hardware and sensory equipment.
Our primary contribution is a learning-based method that enables the
robot to classify whether each tracked person intends to interact with it or not. As input, we use body motion cues that are provided by off-the-shelf video or RGB-D sensing subsystems. The probability that the person will interact is updated in real-time and can trigger a reaction of the robot when exceeding a threshold. Eventually, each tracked person either interacts with the robot, or leaves without interacting; the corresponding data sequences, considered with hindsight, provide additional training data that the robot collects without the need for manual labeling of data, or any other form of external supervision. Variations of this concept have been known and applied in various fields of robotics research since the mid-2000s [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], denoted with the term _self-supervised learning_, which highlights that the robot autonomously generates labeled data for the task of interest.

The remainder of the paper is organized as follows. After reviewing related work (Sec. 2), we describe our approach (Sec. 3) and its implementation (Sec. 4); experimental results are presented in Sec. 5. We finally derive our conclusions in Sec. 6, discussing future work directions.
The remainder of the paper is organized as follows. After reviewing related work (Sec. 2), we describe our approach (Sec. 3) and its implementation (Sec. 4); experimental results are presented in Sec. 5. We finally derive our conclusions in Sec. 6, discussing future work directions.
## 2 Related work
Nonverbal communication cues [21], such as body motion and language, play a central role in HRI, from both the users' and the robots' perspective [22; 23]. However, the perception of social nonverbal behaviors is a challenging task to solve in HRI [24], especially in the first phases of the interactions [8]. Nonetheless, it is important to be able to predict the intention to interact with the robot so that an effective reaction strategy can be well accommodated to the users' needs. For example, human navigation intention is inferred using motion features [25]. In the context of collaborative tasks, the human intention is estimated from gaze and motion features in virtual reality [26] or by analyzing the motion performed in front of a humanoid robot [27]. In these cited works, the intention of the human refers to the next action to take in the context of an ongoing activity. Similarly, other systems based on body motion cues are used to classify the social behavior of humans standing in the robot's proximity [28; 29]. In our work, instead, we aim at predicting the human intent well before the interaction actually starts. The intention to interact based on gaze and body motion has also been proposed as a tool to evaluate the engagement of a user standing in front of a system at a fixed distance [30]. Our work focuses on a more general scenario, since we want to predict the intention of any user free to move in the social space. It is worth mentioning that a significant body of work bases intention recognition only on gaze cues, as can be found in a recent review [31]. However, our work aims at predicting human intention from far distances, where the performance of gaze trackers is expected to degrade. Our work is more similar to approaches using multi-modal features, including body motion, to train a binary classifier predicting users' intention to interact [32; 33; 34] or to assess the intensity of human engagement intention [35]. These works rely on hand-labeled datasets collected in controlled environments, which are expensive, and sometimes unfeasible, to acquire for
each deployment scenario. In contrast, our approach is self-supervised, as discussed below.
In a standard supervised paradigm, one would need to collect large training datasets composed of a large number of tracks representing a given human in the robot's vicinity, and manually provide labels assigning a class to every track depending on whether the human interacts with the robot or not. In contrast, our work relies on the robot's ability to reconsider its experience in hindsight, and automatically assign a label to each recorded track, depending on whether it eventually resulted in an interaction with the robot or not. This is a form of _self-supervised robot learning_, that derives labels from data available to the robot only _after_ the sample was observed; robots capable of self-supervised learning rely on data collected in previous experiences by their own sensors in order to self-generate meaningful supervision, a paradigm initially adopted in robotics for segmentation of traversable terrain [9; 10; 11], then applied to other tasks such as grasping [12; 13; 14] and long-range sensing for navigation [15; 16; 17; 18; 19; 20]. It is worth noting that in the recent deep learning literature, the term "self-supervised" has a different meaning: it denotes the practice of using pretext tasks [36; 37; 38] for learning useful data representations [39] from large amounts of unlabeled data.
One of the advantages of Self-Supervised Learning (SSL) approaches is that they allow the system to continuously update its models with new training data acquired on the spot. This is especially valuable in our scenario, as the robot can use these data to learn human behavior cues that are specific to its deployment environment. A related but different field of research is _continual learning_[40], which provides methods to efficiently adapt models as new training data becomes available, without having to store the entire training dataset and avoiding the problem of catastrophic forgetting. In our work, we adopt a simpler approach: we store the entire dataset and retrain the model from scratch, without resorting to continual learning techniques.
## 3 Approach
### Problem formulation
We consider a robot standing in an environment shared with humans, some of whom might approach the robot in order to engage with it. The robot is equipped with sensors capable of detecting and tracking people at least within a distance of 4 m, i.e., the robot's _social space_ [24; 41], but possibly beyond. During normal operation, people routinely pass near the robot, entering and exiting the robot's social space; occasionally, some users engage with the robot.
We define \(\mathcal{F}_{r}\) as a fixed frame centered on the robot. For each tracked person, the robot is capable of estimating the pose of their torso (\(\mathcal{F}_{t}\)) and head (\(\mathcal{F}_{h}\)) frames. In particular, we denote as \(\mathbf{p}_{t}\in\mathbb{R}^{2}\) and \(\theta_{t}\) the planar position and orientation of \(\mathcal{F}_{t}\) w.r.t. \(\mathcal{F}_{r}\), respectively. The distance of the person from the robot is \(d=\|\mathbf{p}_{t}\|\). Similarly, \(\theta_{h}\) indicates the orientation of \(\mathcal{F}_{h}\) in \(\mathcal{F}_{r}\) around the vertical axis. Finally, the variable \(\mathbf{v}_{t}\in\mathbb{R}^{2}\) indicates the person's linear velocity. Note that the position and orientation of a person's torso w.r.t. \(\mathcal{F}_{r}\), together with their velocity, are informative of their proxemics [23] and are also useful to determine which proxemic zone [24; 41] they occupy. The head orientation is also indicative of the user's gaze and is expected to be informative of their intention.
We tackle the problem of predicting the intention of a person to interact with the robot, as early as possible before the interaction begins. To this end, we make use of information captured about potentially interacting people, processed by different classifier architectures, as described in the following.
### Sensing and features
In our study, we make use of proxemics, i.e. the analysis of the motion cues of interacting users. More specifically, proxemics analyses the way a user uses or occupies the social space in order to infer useful information for the interaction [23]. Proxemics concepts are particularly suited to our scope as \((i)\) they are very representative of the intention to interact and \((ii)\) the quantities that define them can be conveniently measured with state-of-the-art robotic sensors. In particular, the RGB-D sensor used for data collection is the Azure Kinect [42]. The SDK of this sensor provides detection and tracking of the human skeletons appearing in its field of view. In more detail, each detected skeleton is given an ID and defined as a tree of frames along the kinematic structure of the user. From the spatial information of the skeleton, the motion of the user can be easily extracted and used by our intention prediction module. Such data is saved in an anonymous way, i.e., no RGB-D images are stored: only the metric information required by the classifier is logged.
For our analysis, we take into account different sets of features. First of all, we consider the distance or the orientation of the person's torso, i.e.,
\[\mathbf{f}_{1}=d,\quad\text{and}\quad\mathbf{f}_{2}=\theta_{t}. \tag{1}\]
The third set that we consider contains the torso position:
\[\mathbf{f}_{3}=\mathbf{p_{t}}\in\mathbb{R}^{2}. \tag{2}\]
The fourth set of features gathers torso position and orientation together:
\[\mathbf{f}_{4}=\left(\mathbf{p_{t}}^{\top},\sin\theta_{t},\cos\theta_{t}\right)^{\top} \in\mathbb{R}^{4} \tag{3}\]
where, following machine learning best practices [43], we encode the torso orientation through its \(\sin\) and \(\cos\), to account for the fact that the feature is cyclical, so that the representation of the angle \(0^{\circ}\) is close to that of \(359^{\circ}\). In the fifth set, we also include the orientation of the head:
\[\mathbf{f}_{5}=\left(\mathbf{p_{t}}^{\top},\sin\theta_{t},\cos\theta_{t},\sin\theta_{ h},\cos\theta_{h}\right)^{\top}\in\mathbb{R}^{6} \tag{4}\]
and in the last one, we add the velocity of the torso as well:
\[\mathbf{f}_{6}=\left(\mathbf{p_{t}}^{\top},\sin\theta_{t},\cos\theta_{t},\sin\theta_{ h},\cos\theta_{h},\mathbf{v}_{t}^{\top}\right)^{\top}\in\mathbb{R}^{8}. \tag{5}\]
The feature sets incorporate the notions of proxemics at different levels. Comparing them allows us to analyze the contribution of the different proxemic elements to the prediction of the interaction; a concrete construction of the six sets is sketched below.
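To make the feature construction concrete, the following minimal Python sketch builds the six feature sets from a single tracked sample (variable names are illustrative, not taken from our codebase):

```python
import numpy as np

def feature_sets(p_t, theta_t, theta_h, v_t):
    """Build the six feature sets of Sec. 3.2 from one tracked sample.

    p_t: (2,) torso position w.r.t. the robot frame F_r [m]
    theta_t, theta_h: torso / head orientation around the vertical axis [rad]
    v_t: (2,) torso linear velocity [m/s]
    """
    d = np.linalg.norm(p_t)                      # distance from the robot
    f1 = np.array([d])
    f2 = np.array([theta_t])
    f3 = np.asarray(p_t, dtype=float)
    # Cyclical encoding: sin/cos keeps 0 deg and 359 deg close together.
    f4 = np.concatenate([f3, [np.sin(theta_t), np.cos(theta_t)]])
    f5 = np.concatenate([f4, [np.sin(theta_h), np.cos(theta_h)]])
    f6 = np.concatenate([f5, np.asarray(v_t, dtype=float)])
    return f1, f2, f3, f4, f5, f6
```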
### Classification approach
To solve the problem, we train a binary classifier that takes as input a feature vector describing a tracked person at a given time and outputs the probability that the person will interact with the robot.
The classifier is trained on a dataset \(\mathcal{D}\) composed of several _sequences_. A sequence represents a person tracked by the robot over time and is composed of multiple _samples_ (one per timestep). The sequence begins when the person enters the robot's social space and is first seen by the sensor; it ends when the person either begins their interaction with the robot or exits the social space of the robot without interacting. The dataset is denoted as
\[\mathcal{D}=\left\{\mathbf{f}_{i,j},\ y_{i,j}\right\}_{i=1,j=1}^{N_{j},S} \tag{6}\]
where \(\mathbf{f}\) is the feature vector and \(y\) the label; the subscripts \(i\) and \(j\) index the \(i\)-th sample of the \(j\)-th sequence; \(S\) is the number of
sequences, whereas \(N_{j}\) is the number of samples contained in the \(j\)-th sequence. For a given sequence \(j\), all labels \(y_{i,j}\) are the same: 0 if the person did not interact with the robot; 1 if they did.
Assuming that the robot has the ability to detect that the user has engaged in an interaction, the true label of a sequence becomes available as soon as the sequence ends. This enables the robot to grow its training dataset without external supervision and to iteratively improve the classifier performance in a self-supervised way.
From an implementation point of view, we investigate different classifiers: Logistic Classifier (LC), Random Forest (RF), and Multilayer Perceptron (MLP), using the implementations provided by the scikit-learn library [44]; and Long Short-Term Memory (LSTM), implemented using PyTorch [45]. The MLP is composed of 2 hidden layers with 30 neurons each, using sigmoid activations; the LSTM is composed of 2 long short-term memory [46] cells with a 10-dimensional hidden state each; both models have approximately 1500 trainable parameters.
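As an illustration, the models can be instantiated roughly as follows. This is a sketch that assumes default hyperparameters wherever the text does not specify them; `IntentLSTM` and the 6-dimensional input size (matching \(\mathbf{f}_{5}\)) are our illustrative choices:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
import torch
import torch.nn as nn

# scikit-learn models, configured as described in the text.
lc = LogisticRegression()
rf = RandomForestClassifier()
mlp = MLPClassifier(hidden_layer_sizes=(30, 30), activation="logistic")

class IntentLSTM(nn.Module):
    """Two stacked LSTM cells (10-dim hidden state) plus a linear head."""
    def __init__(self, n_features=6):            # 6 matches feature set f5
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size=10, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(10, 1)

    def forward(self, x):                         # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # P(interaction)
```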
It is worth noting that the input of the classifier is limited to the information related to a single subject, i.e., the one whose intention to interact is being classified, whereas it does not include information about other people. However, during both training and inference, the presence of multiple people is easily handled by our approach since each person is tracked and processed independently. Since the classifier is computationally light (i.e. on a standard laptop, it runs at 30 FPS, which is the sensor's maximum frame rate) and can be instantiated in parallel for several users, we actually handle multi-user prediction by tracking and classifying all the people appearing in the field of view of the sensor (see Sec. 5.2).
## 4 Experimental scenarios
We test our approach in different scenarios, presented in Tab. 1 and described in detail in the following.
### Real-world interactions at a coffee break area
We collect a real-world, challenging dataset of human-machine interactions in which humans behave naturally. In particular, we consider a coffee machine placed in a break area neighboring a corridor of an office building (Fig. 2). During the day, many people pass through the corridor, some of them stop in the break area, and some others approach the machine to have
a coffee. This scenario is interesting and convenient for our analysis as we can observe the spontaneous behavior of the users who plan to interact with the coffee machine, in a natural context with many challenging complications and distractors: other users hanging around chatting; users approaching the general area to reach a nearby tap or fridge; users queuing up to use the machine. In this scenario, we have collected 3422 unique sequences of tracked skeletons, accounting for more than 12 hours of recorded data. Recorded users come from a heterogeneous sample of people, mainly employees and guests who have access to the break area. The users are informed about the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Scenario** & **Setting** & **User** & **Agent** & **Self-labeling** & **Sequences** & **Mode** & **Agent behavior** \\ \hline Coffee break & In-the-wild & Unaware & Coffee machine & Distance based & High (3422) & Train \& Test & Passive \\ Waiter robot & Controlled & Actor & Robomaster & Vision based & Medium (200) & Train \& Test & Reactive \\ Info robot & Controlled & Actor & HSR-B robot & Touch based & Low (72) & Test & Passive \\ \hline \hline \end{tabular}
\end{table}
Table 1: Scenarios considered in our analysis.
Figure 2: To collect data, the motion of people walking in a break area is monitored to predict their intention to interact with a coffee machine.
presence of the sensor above the coffee machine. However, they are unaware of the scope of the data collection. In this way, we ensure that their behavior is as natural as possible. Non-sensitive data (i.e. only the skeletons of the users) are recorded.
In this specific scenario, sequences should ideally be labeled by considering when a user operates the machine, e.g. by pressing a button on it; similarly, one might expect a service robot to easily determine when a user engages with it. However, in our case, we do not have access to the machine firmware and we cannot read its internal state. Therefore, we rely on the sensor used for data collection to automatically generate labels. To do so, we use the following distance-based heuristic: an interaction is detected when a person stays very close (i.e. within a distance of 1 m) to the coffee machine for an uninterrupted period of 5 seconds; we assume that the interaction takes place at the end of this period; all samples, coming from the same sequence, in the preceding 10 seconds are labeled \(y=1\). We empirically verified that such a criterion is very effective as a proxy to detect actual interactions, and we use it to automatically generate labels in this scenario.
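A minimal sketch of this labeling heuristic follows; the function names and the 30 FPS frame rate (the sensor's maximum rate) are illustrative assumptions:

```python
import numpy as np

def detect_interaction(distances, fps=30, d_max=1.0, dwell_s=5.0):
    """Return the frame index at which an interaction is detected (the end
    of an uninterrupted 5 s stay within d_max meters), or None."""
    run = 0
    for i, d in enumerate(distances):
        run = run + 1 if d < d_max else 0
        if run >= int(dwell_s * fps):
            return i
    return None

def label_sequence(distances, fps=30, lookback_s=10.0):
    """y = 1 for the samples in the 10 s preceding a detected interaction;
    y = 0 for every sample of a non-interacting track."""
    i = detect_interaction(distances, fps=fps)
    labels = np.zeros(len(distances), dtype=int)
    if i is not None:
        labels[max(0, i - int(lookback_s * fps)):i + 1] = 1
    return labels
```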
### Chocolate handover by a waiter robot
In the second scenario, we use a wheeled omnidirectional robot (DJI Robomaster EP [47]), placed on a table in the vicinity of the Azure Kinect [42] sensor (Fig. 1, bottom). The robot behaves as a waiter who serves chocolate treats to people passing by. During the data collection, the robot does not perform any motion. Data can be self-labeled using a simple vision-based approach: image-based detections taken with the robot's onboard camera allow us to automatically detect whether users take the chocolate or not. The recorded data consist of 200 sequences of a single user, evenly split between interacting and non-interacting actions.
In the deployment phase, instead, we provide the robot with a reactive behavior. If an interaction is predicted, the robot enacts a reaction by turning its LEDs on and orienting itself toward the user yielding the highest probability. At the same time, the robot extends its arm, handing out a chocolate treat to the user: this acknowledges that the robot has seen the user and is available to interact. When no interaction is predicted, the robot returns to its initial orientation, turns the LEDs off, and retracts its arm. Such behavior has been tested with users aware of the interactions in a controlled environment. The users of these tests were informed about the purpose of the experiments and gave their consent to participate in the data collection.
### Information service robot
Finally, we propose a controlled evaluation setup with the Toyota Human Support Robot series B (HSR-B) [48] robot placed in a U-shaped corridor (see Figure 1, top). The robot is equipped with the Azure Kinect sensor on its head, oriented horizontally w.r.t. the floor, at a height of about 1.3 m. We have collected evaluation data from the behavior of 12 participants. Participants walking through the corridor initially cannot see the robot; after the first curve they notice it and adjust their behavior according to their intention to interact. In this specific data collection setup, participants act as actors, i.e., they are informed of the presence of the robot. Furthermore, in half of the cases, they are told to pretend that they do not wish to or do not have time to interact with the robot. In this way, they provide non-interaction sequences for our dataset. In the other half of the cases, participants are instructed to walk to the robot when they see it and touch its head, which we consider the interaction trigger for this evaluation scenario. Each test participant produced 3 sequences of not interacting with the robot and 3 that recorded an interaction, resulting in a total of 72 sequences. The data collection protocol was approved by the Ethical Committee of King's College London, United Kingdom (Review reference: LRS/DP-22/23-35586).
## 5 Results
We report the experimental analysis carried out in each scenario described in Sec. 4. First, we perform offline experiments on the large and challenging coffee break dataset presented in Sec. 4.1, comparing different feature sets and classification approaches. Based on these experimental results, we then select the most promising combination of features and classifier for the experimental validation in the other two scenarios, which involve actual robots. The presented results can be further qualitatively evaluated in the video accompanying the paper.
### Offline experiment in the coffee break scenario
#### 5.1.1 Sample-level performance
We compare different feature sets (Sec. 3.2) and classification approaches (Sec. 3.3) using the dataset collected in the coffee break scenario. We partition the set of the recorded sequences into 5 evenly-sized non-overlapping groups. Then, for each combination of feature set and classifier, we use a 5-fold cross-validation approach to compute predictions for all the samples
in all the sequences. In particular, the samples in all the sequences of a given group are classified by a model trained on all sequences belonging to the 4 remaining groups.
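For the non-recurrent scikit-learn models, the grouped cross-validation can be sketched as follows (a sketch assuming `X`, `y` and `seq_ids` are NumPy arrays; the LSTM follows the same split but is trained on sequence batches):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def cross_validated_scores(model, X, y, seq_ids):
    """Out-of-fold scores: every sample is predicted by a model that never
    saw any sample of the same sequence (group) during training."""
    scores = np.empty(len(y), dtype=float)
    for train, test in GroupKFold(n_splits=5).split(X, y, groups=seq_ids):
        model.fit(X[train], y[train])
        scores[test] = model.predict_proba(X[test])[:, 1]
    return scores
```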
We then consider all samples from all sequences to compute performance metrics. In particular, we report the Area Under the ROC Curve (AUROC): a robust binary classification metric that does not depend on a choice of threshold, and ranges between 0.5 (for a non-informative classifier, e.g. one always reporting the majority class) and 1.0 (an ideal classifier). It can be interpreted as the probability that, taking a random sample from a person who did not interact, and a random sample from a person who eventually interacted, the classifier assigns to the former a lower score than the latter. When computed on all testing samples pooled together, all models score very high when using feature sets that include the distance-based information (see Tab. 2). The reason is that the person's distance from the device is a very strong cue of whether the person ends up interacting with it.
However, we aim to evaluate the ability of our approach to classify a person's intention to interact _independently_ of their distance from the device. A more informative metric in our context is therefore the AUROC computed among samples that all lie approximately at the same distance; within such a group of samples, the distance feature alone loses its discriminative ability. We thus partition all our testing samples into seven distance bins, determined in such a way as to have an approximately uniform number of samples per bin: \(d<0.75\) m, \(d\in[0.75,1)\) m, \(d\in[1,1.25)\) m, \(d\in[1.25,2)\) m, \(d\in[2,2.5)\) m, \(d\in[2.5,3)\) m, and \(d\geq 3\) m; this yields 7 AUROC values for each model, each representing its performance on people in a given distance bin; we then average these values together to get an overall metric describing how good a model is at determining the user's intention, independently of their distance from the machine (a code sketch of this computation is given after the observation list below). Fig. 3 reports this metric, separately for each
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(\mathbf{f}_{1}\) & \(\mathbf{f}_{2}\) & \(\mathbf{f}_{3}\) & \(\mathbf{f}_{4}\) & \(\mathbf{f}_{5}\) & \(\mathbf{f}_{6}\) \\ \hline
**LC** & 0.909 & 0.663 & 0.897 & 0.901 & 0.901 & 0.906 \\
**RF** & 0.838 & 0.559 & 0.872 & 0.896 & 0.905 & 0.931 \\
**MLP** & 0.908 & 0.666 & 0.914 & 0.925 & 0.921 & 0.940 \\
**LSTM** & 0.919 & 0.659 & 0.894 & 0.906 & 0.895 & 0.913 \\ \hline \hline \end{tabular}
\end{table}
Table 2: AUROC for different classifiers (rows) and feature sets (columns) tested on all samples pooled together (i.e., without distance-based binning).
distance range, and averaged over all the distances. We observe that:
* As expected, in non-recurrent models (LC, RF, MLP), \(\mathbf{f}_{1}\) alone is not informative according to the chosen metric (the AUROC is always close to 0.5).
* Consistently over all the models and distances, richer features yield better results.
* The LSTM model does not benefit when provided with explicit velocity information, since this can be already captured by the model itself, which operates on sequential data. For the same reason, the LSTM model performs significantly better than chance, even when given only the distance feature as input, since it can capture and exploit distance variations over time.
* When the models are provided with rich features, prediction performance at short distances is worse (lower AUROC) than at long dis
Figure 3: Coffee break scenario: performance of the classifiers according to the AUROC metric for the different models (from top to bottom: LC, RF, MLP, and LSTM); tested in different ranges of social distance (from left to right, ranging from below 0.75 m to above 3.5 m) and on average over all the distance ranges (last column); and using different sets of features (from \(\mathbf{f}_{1}\) to \(\mathbf{f}_{6}\) for each column of the histograms from left to right). The horizontal dotted line denotes the performance of a noninformative classifier (AUROC = 0.5).
tances. This can be explained considering the characteristics of our dataset: people in the vicinity of the device often mingle around it for a long time, chatting with others or being busy with other tasks, even if they do not end up interacting with the machine; people that are approaching from afar, in contrast, exhibit clearer intention in their body language and gaze; this also explains why, for people that lie far from the device, providing orientation and velocity information is very beneficial to performance, whereas the same does not hold for people nearby.
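The distance-binned metric described above can be sketched as follows (a sketch assuming every bin contains samples of both classes):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

EDGES = [0.75, 1.0, 1.25, 2.0, 2.5, 3.0]        # interior bin edges, meters

def distance_binned_auroc(y_true, scores, distances):
    """Average AUROC over the seven distance bins of Sec. 5.1.1."""
    bins = np.digitize(distances, EDGES)         # bin index 0..6 per sample
    aurocs = [roc_auc_score(y_true[bins == b], scores[bins == b])
              for b in range(len(EDGES) + 1)]
    return float(np.mean(aurocs))
```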
#### 5.1.2 Sequence-level performance
While sample-level performance is a relevant metric to robustly compare different classification approaches, in a real deployment we care about the ability of the approach to correctly classify the intent of a nearby person as early as possible after the person is first detected. Therefore, we now limit our analysis to the LSTM approach using the \(\mathbf{f}_{5}\) set, which shows the most promising performance at larger distances with no need to explicitly encode velocity features. We report sequence-level metrics, computed as follows.
We consider each sequence in the testing set separately; we evaluate every sample in the sequence and simulate taking an irreversible decision (e.g. to
Figure 4: Coffee break scenario: ROC curve for sequence-level performance (left); Precision, Recall, and Advance detection time w.r.t the threshold of the classifier (right).
acknowledge the person's presence and demonstrate availability to interact) as soon as the probability returned by the classifier exceeds a given threshold. A sequence for which such probability never exceeds the threshold is a _true negative_ if the user does not interact with the robot, or a _false negative_ if it eventually does. A sequence for which such probability exceeds the threshold for at least one sample is a _true positive_ if the person eventually interacts, or a _false positive_ otherwise. Then, we can compute the _true positive rate_ (i.e., the _recall_), _false positive rate_, and _precision_. For true positives, we also track the _advance detection time_: the period (in seconds) between the first time the probability exceeds the threshold and the moment in which the interaction actually occurs.
Figure 4 reports how these metrics change as a function of the threshold. We observe that the resulting AUROC is well above 0.5, indicating a good ability of the approach to discriminate sequences that eventually interact from those that do not; as the threshold increases, the advance detection time (averaged over true positives) decreases, as the system takes a decision later in the sequence, i.e., when the person is closer to the robot.
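A sketch of the simulated decision rule (names are illustrative):

```python
def sequence_decision(scores, threshold, fps=30):
    """Simulate an irreversible decision on one sequence: trigger at the
    first sample whose score exceeds the threshold.  Returns the trigger
    index and the advance detection time, i.e. the seconds left until the
    end of the sequence (where the interaction, if any, takes place);
    returns None if the threshold is never exceeded."""
    for i, s in enumerate(scores):
        if s > threshold:
            return i, (len(scores) - 1 - i) / fps
    return None
```

Combining these per-sequence outcomes with the ground-truth labels yields the true/false positive rates, precision, and average advance detection time reported in Fig. 4.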
#### 5.1.3 Self-supervised learning
We test the ability of the system to improve its performance as new training data is collected in a self-supervised way [14; 40]. In particular,
Figure 5: Coffee break scenario: AUROC of the model on each day of the self-supervised learning experiment (see text). Boxplots report statistics over 20 runs of the experiment.
we split the collected sequences into 10 disjoint, equally-sized, temporally-contiguous groups. Each group contains about 340 sequences and we refer to it in the following as a "day" of data, assuming that the robot is placed in an area with limited visitors (a crowded hall might see the same number of sequences in one hour or less). We then consider a setting in which the robot is deployed with no training at day 0: the robot collects data for one day, then trains a new model using all data collected up to that day, which is used and evaluated on the following day; the process is repeated for a total of 10 days.
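The incremental protocol can be sketched as follows; `make_model` is an illustrative factory returning a fresh, untrained classifier, and `model.score` stands in for the distance-binned AUROC actually reported:

```python
import numpy as np

def self_supervised_protocol(make_model, days):
    """days: chronological list of per-day datasets [(X_1, y_1), ...].
    After each day k, a model is retrained from scratch on all data
    collected so far and evaluated on the next day's data."""
    results = []
    for k in range(1, len(days)):
        X_tr = np.concatenate([X for X, _ in days[:k]])
        y_tr = np.concatenate([y for _, y in days[:k]])
        model = make_model()
        model.fit(X_tr, y_tr)       # retrain from scratch (see Sec. 2)
        X_te, y_te = days[k]
        results.append(model.score(X_te, y_te))
    return results
```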
Figure 5 reports the improvement in performance measured over the considered period; statistics are reported over 20 runs of the experiment, obtained by randomly shuffling the order of the days. For each run, we take the average AUROC computed over each distance bin independently, as explained in Sec. 5.1.1. We observe that median performance steeply increases in the first 4 days (about 1500 sequences); additional training further improves AUROC, with diminishing returns. Note that, although we did not test this in our current experiment, this approach would be able to automatically adapt to domain shift over time, e.g. caused by changing user demographics or a changed spatial layout of the environment.
Figure 6: Waiter robot scenario: deployment of the classifier. If no interaction is predicted (snapshots on the left) the robot does not react at all. If a user is classified as intending to interact (center), the robot orients its body towards them, turns its LEDs on, and extends its arm to hand out a chocolate treat. When multiple users are detected (right) the robot orients itself towards the closest person that is predicted as intending to interact.
### Robot validation experiments
#### 5.2.1 Self-supervised learning on the waiter robot
We leverage the self-supervised nature of the dataset collected as described in Sec. 4.2 to implement the behavior of the waiter robot. Similarly to the coffee break scenario, we split the available data into 3 "days" and assume that each of them is incrementally added to the dataset as time goes by. At day 0 the robot starts without any actual training and passively collects data. Each day, new data is recorded and a new model is trained using \(k\)-fold cross-validation, where \(k\) equals the number of available days of data. We then compute the AUROC for each model and observe the performance incrementally increasing from 0.500 on day 0, to 0.871 on day 1 and 0.927 on day 2. On day 3, the robot becomes very confident about its predictions, as the model yields an AUROC of 0.944. At this point, the robot can start enacting the reaction strategy described in Sec. 4.2.
The qualitative performance of the model can be observed in Fig. 6 and in the supplementary videos. Our perception module correctly detects whether someone is approaching the robot to take the chocolate or is simply passing nearby. This behavior would not be achievable if the model used distance-based features only. Furthermore, the model works with multiple users at the same time. In this case, the robot shows availability to interact with the closest user whose intention to interact is predicted.
#### 5.2.2 Performance in the information robot scenario
Finally, we consider the classifier learned in the waiter robot scenario, and test it in a new setting with different users. In particular, we compute the performance of such a classifier on the dataset collected in the information robot scenario. Figure 7 reports solid sequence-based metrics: AUROC is approximately equal to 0.99; also, for a threshold of about 0.86 we get a recall of 1, and a precision of about 0.90, maintaining an average advance detection time of more than 3 s, which is a reasonable prediction time for the considered scenario.
A qualitative evaluation of the performance is shown in Fig. 8 and supplementary videos.
### Discussion
The results obtained in the three scenarios illustrate that the proposed approach works well to detect users' intention to interact before the interaction actually happens. The robot validation experiments exhibit better
performance than the experiments in the coffee break scenario: in fact, the former take place in controlled environments with users that were specifically tasked to interact with the robot, while the latter relies on a more challenging dataset collected in the wild. Nevertheless, the AUROC value computed for the sequence-based analysis in the coffee break scenario confirms the reliability of the approach.
The advance detection time, which is always \(>3\) s, demonstrates that the approach works well in practice. In fact, considering an average human walking speed of 1.35 m/s [49], we can argue that we are able to predict the intention of an approaching person and proactively anticipate them so as to successfully start an interaction. This intuition is extensively verified in the attached video, where the classifier evaluated in Sec. 5.2.1 and Sec. 5.2.2 is successfully deployed. In the video, we try to challenge the classifier, recording difficult sequences in which people approach the robot just to pass by it without interacting. In these circumstances, the use of distance-based features alone would not be sufficient to correctly classify the user's intention to interact. Indeed, in the recorded sequences, our classifier, which makes use of a rich feature set (\(\mathbf{f}_{5}\) in the presented experiments), manages to successfully detect the users' intentions.
Furthermore, the SSL experiments advocate for the practical use of the
Figure 7: Information robot scenario: ROC curve for sequence-level performance (left); Precision, Recall, and Advance detection time w.r.t the threshold of the classifier (right).
Figure 8: Information robot scenario: snapshots taken from the robot’s sensor during two sequences. The bounding boxes that are superimposed on the acquired image report the output of the classifier: red boxes mean a low probability of interaction, whereas green boxes indicate a higher probability of interaction. On the left: a user walks through the corridor without interacting with the robot; the system does not predict any interaction. On the right: another user approaches the robot and the system correctly predicts the intention to interact in advance.
proposed approach. Starting from the realistic assumption that a robot can easily determine whether a person has interacted with it (e.g. by pressing a button, or starting a conversation), the results shown in Sec. 5.1.3 and 5.2.1 demonstrate how a robot could be deployed in an unknown environment. Most importantly, during the deployment, the robot can autonomously collect new data, improve its predictions, and start to proactively engage people in interactions.
Moreover, the choice of using only spatial features extracted from the skeleton of the user, avoiding RGB-D data, proved ideal for making the approach more robust and general. Indeed, using skeleton-derived data as features allows us to be independent of the users' appearance and more robust w.r.t. the scene background. In fact, we have obtained strong performance with the classifier trained in the waiter robot scenario even when deployed in a new scenario, such as the information robot one, without retraining. Both the video and Fig. 8 showcase this important aspect, displaying sequences from the information robot scenario and the predictions returned by the waiter robot classifier. Also, avoiding image-based information makes data collection and processing easier, both in terms of computation and privacy concerns.
Finally, both Fig. 6 and the video show how the approach can be deployed to handle multiple users. The input to the classifier is limited to the information related to one single subject (the one whose intention is actually classified) and does not include information related to other neighboring persons that might be influencing the subject's behavior. However, both during training and inference, the presence of multiple people is easily handled as each person is tracked and processed independently. Indeed, in the waiter robot scenario, we demonstrate that we can handle multiple users and interact with the closest person who is predicted to interact. The video qualitatively shows that the robot can proactively behave even when multiple users are present at the same time.
## 6 Conclusions
We have presented a self-supervised learning approach to predict the user's intention to interact with a robot. To this end, we have collected three datasets in different interaction contexts and settings, with different sizes, containing hundreds of body-tracked users interacting with agents, even within real everyday-life scenarios. We have tested the system with various
classification approaches to assess the relevance of the features containing information on the user's pose and motion. We have also simulated the deployment of our strategy in a self-supervised learning fashion and tested it at both the sample and sequence levels. Furthermore, we have validated our approach in real human-robot interaction experiments involving two different robot platforms. Finally, we have also shown a strategy to proactively react to the user's intention. The presented results are also reported in the supplementary video.
In the future, we will investigate different robot reaction strategies and the way they affect the interaction from the users' perspective. Similarly, we will analyze how the presence of multiple people influences the user's intention to interact. To this end, we plan to augment the feature set of the classifier with information about the people neighboring the tracked user. Furthermore, we will also consider the role played by the robot's appearance and test our framework with different robot platforms. Finally, we plan to conduct an extensive data collection session in public environments and in different social contexts.
## Acknowledgment
This work was supported by the European Union through the project SERMAS, and by the Swiss National Science Foundation grant n. 213074.
If a service robot can proactively detect whether a nearby user intends to interact with it, it can offer a smoother interaction experience; for instance, by signaling the availability of its services, the robot can encourage the interaction. In this work, we propose a learning-based approach to predict the probability that a human user will interact with a robot. The approach is self-supervised, as labels can be assigned automatically for each human encounter. We use different classification approaches that consider combinations of the user's pose and motion. We validate and deploy the approach in three scenarios. In the first scenario, we collect 3422 natural sequences (both interacting and non-interacting) of employees in an office break area; this is a challenging real-world setting in which a coffee machine stands in for a service robot. In the other two scenarios,
2309.16591 | Preferential attachment with choice based edge-step | We study the asymptotic behavior of the maximum degree in the preferential
attachment model with a choice-based edge-step. We add vertex type to the model
and prove, among others types of behavior, the effect of condensation on
multiple vertices with different types. | Yury Malyshkin | 2023-09-28T16:53:59 | http://arxiv.org/abs/2309.16591v3 | # Preferential attachment with choice based edge-step
###### Abstract.
We study the asymptotic behavior of the maximum degree in the preferential attachment model with a choice-based edge-step. We add vertex type to the model and prove the effect of condensation on multiple vertices with different types.
Key words and phrases: random graphs, preferential attachment, power of choice, fitness. 2010 Mathematics Subject Classification: 05C80.
## 1. Introduction
In the present work, we study the addition of vertex fitness to the linear preferential attachment model (see, e.g. [12, 13]) with a choice-based edge step. The addition of vertices of different types is a natural way to model people's preferences, for example in electoral models (see, e.g., models with fitness [1] or geometrical models [1]). The motivation for the additional parameter is to model certain specifics of people's preferences; for example, people who have pets are more likely to visit pet-related forums and groups. The standard preferential attachment graph model was introduced in [1]. Such a graph is constructed in the following way. First, we start with some initial graph \(G_{1}\); usually, for simplicity, it consists of two vertices and an edge between them. Then at each step, we add a new vertex and draw an edge from it to an already existing vertex, chosen with probability proportional to its degree. Usually, one considers the rule where we choose a vertex with probability proportional to its degree plus some parameter \(\beta>-1\). Such a model has been widely studied (see, e.g., [13], Section 8) and different modifications have been introduced.
One of the modifications we consider is the introduction of choice into the model (see, e.g., [1, 1, 12, 13]). In this modification, we consider a sample of \(d\) independently chosen vertices and then choose the one with the largest degree. This modification often results in the effect of condensation, in which a single vertex has a degree that is linear in the total number of edges (see, e.g., [1, 12, 13]). The other modification is the introduction of an edge step into the model (see, e.g., [1]). In this modification, we have two types of steps. The vertex step is the classical preferential attachment step, in which we add a new vertex and then draw edges from it to already existing vertices. The other type of step is the edge step. In this step, we draw edges between already existing vertices, chosen with probabilities proportional to their degrees.
Let us introduce our model. Fix \(k,m,d,T\in\mathbb{N}\), \(d>1\), \(\beta>-1\) and \(p_{i}\in(0,1)\), \(i=1,...,T\), \(\sum_{i=1}^{T}p_{i}=1\), which are parameters of our model. We consider a sequence of graphs \(G_{n}\), with \(G_{n}\) containing vertices \(v_{1},...,v_{n}\). We consider i.i.d. random
variables \(X_{1},X_{2},...\) with distribution \(P(X_{1}=i)=p_{i}\), such that \(X_{i}\) corresponds to \(v_{i}\) and represents the type (fitness) of the vertex. We start with the initial graph \(G_{1}\), which consists of a single vertex \(v_{1}\). To build the graph \(G_{n+1}\) from \(G_{n}\), we add a vertex \(v_{n+1}\) and draw edges in two steps.
Vertex step: We draw \(m\) edges independently from \(v_{n+1}\) to one of the vertices \(v_{1},v_{2},...,v_{n}\), for each edge the endpoint is chosen with conditional probability (given \(G_{n}\))
\[\frac{\deg_{G_{n}}v_{i}+\beta}{\beta n+\sum_{i=1}^{n}\deg_{G_{n}}v_{i}}. \tag{1}\]
Edge step: We independently choose \(k\) pairs \((v,u)\) of vertices of \(G_{n}\cup v_{n+1}\), such that vertex \(v\) is chosen uniformly from all vertices and vertex \(u\) is chosen by the following rule. We consider a sample \(y_{1},...,y_{d}\) of size \(d\) of vertices of \(G_{n}\cup v_{n+1}-v\) with the same type \(t\) as \(v\), chosen with conditional (given \(G_{n}\)) probabilities
\[\Pr(y_{k}=v_{j}|\mathcal{F}_{n})=\mathbf{1}\{X_{j}=t\}\frac{\deg_{G_{n}}v_{j}+ \beta}{\sum_{i=1}^{n}\mathbf{1}\{X_{i}=t\}\left(\deg_{G_{n}}v_{i}+\beta\right)}. \tag{2}\]
Here we use \(\mathcal{F}_{n}\) to denote the \(\sigma\)-algebra generated by all random variables used in constructing \(G_{n}\). Then we draw an edge from \(v\) to the vertex of the sample with the highest degree in \(G_{n}\) (in case of a tie we choose randomly; this does not affect the degree distribution).
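For intuition, the following short Python sketch simulates one possible reading of this process; it assumes \(\beta>0\) so that the very first vertex step is well defined, uses current degrees as sampling weights, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps, m=1, k=2, d=3, beta=1.0, p=(0.5, 0.5)):
    deg = [0]                                   # v_1 starts with degree 0
    typ = [rng.choice(len(p), p=p)]             # i.i.d. vertex types
    for _ in range(n_steps):
        deg.append(0)
        typ.append(rng.choice(len(p), p=p))
        new = len(deg) - 1
        # Vertex step: m edges from v_{n+1}, endpoints ~ (degree + beta).
        w = np.asarray(deg[:new], dtype=float) + beta
        for tgt in rng.choice(new, size=m, p=w / w.sum()):
            deg[new] += 1
            deg[tgt] += 1
        # Edge step: k choice-based edges among same-type vertices.
        for _ in range(k):
            v = rng.integers(new + 1)           # uniform initial vertex
            same = [j for j in range(new + 1) if typ[j] == typ[v] and j != v]
            if not same:
                continue                        # no same-type partner yet
            wt = np.asarray([deg[j] for j in same], dtype=float) + beta
            sample = rng.choice(same, size=d, p=wt / wt.sum())
            u = max(sample, key=lambda j: deg[j])   # power of d choices
            deg[v] += 1
            deg[u] += 1
    return np.asarray(deg), np.asarray(typ)
```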
Let \(D_{n}^{i}=\sum_{j=1}^{n}\left(\deg_{G_{n}}v_{j}+\beta\right)\mathbf{1}\{X_{j}=i\}\) be the total weight of all vertices of type \(i\) and \(D_{n}=\sum_{i=1}^{T}D_{n}^{i}\). Note that \(D_{n}\) is the sum of the weights of all vertices of the graph, so \(D_{n}=(2m+2k+\beta)n\). At step \(n+1\), \(D_{n}^{i}\) can increase in the following ways. We can draw one of the \(m\) edges to a vertex of \(G_{n}\) of type \(i\) during the vertex step, with the expected number of such edges equal to \(m\frac{D^{i}(n)}{D_{n}}\). Moreover, each of the \(k\) edge-step edges adds 2 to the weight of the type of its initial vertex, which is of type \(i\) with probability close to \(p_{i}\); and the new vertex can be of type \(i\) (which happens with probability \(p_{i}\)), adding \(m+\beta\) to \(D^{i}(n)\). In total, this results in an expected increase of approximately \((m+2k+\beta)p_{i}\). Therefore we get the representation

\[\mathbb{E}\left(D^{i}(n+1)-D^{i}(n)|\mathcal{F}_{n}\right)=m\frac{D^{i}(n)}{D_{n}}+(m+2k+\beta)p_{i}.\]
Hence, due to the law of the iterated logarithm for stochastic approximation (see, e.g., [1]),
\[D^{i}(n)=(2m+2k+\beta)p_{i}n+o(n^{1/2}\ln n)\text{ almost surely.} \tag{3}\]
Let us formulate our main result. Let \(M_{1}(n),...,M_{T}(n)\) be the highest degrees of vertices of types \(1,...,T\) in \(G_{n}\).
**Theorem 1**.: _If \(k(d-2)>m\), then_
\[\lim_{n\to\infty}\frac{M_{i}(n)}{n}=p_{i}x_{*}\]
_almost surely for each \(i=1,...,T\) where \(x_{*}\in(0,2m+2k)\) is the root of the equation_
\[\frac{m+2k}{2m+2k}x=k\left(1-\left(1-\frac{x}{2m+2k+\beta}\right)^{d}\right).\]
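As a numerical illustration (a sketch; for \(\beta\geq 0\) and \(k(d-2)>m\), the function \(g\) below is positive near \(0\) and negative at \(2m+2k\), so bisection applies):

```python
def x_star(m, k, d, beta, tol=1e-12):
    """Root of (m+2k)/(2m+2k) * x = k * (1 - (1 - x/(2m+2k+beta))**d)
    in (0, 2m+2k); e.g. x_star(1, 2, 3, 0.0) is approximately 1.0627."""
    g = lambda x: (k * (1 - (1 - x / (2*m + 2*k + beta)) ** d)
                   - (m + 2*k) / (2*m + 2*k) * x)
    lo, hi = 1e-9, 2*m + 2*k - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```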
This theorem shows the existence of a condensation effect on multiple vertices. To prove the above theorem, we need the following auxiliary result from stochastic approximation processes (see Corollary 2.7 in [12] and [13] for more details).
**Lemma 2**.: _Let \(\mathcal{F}_{n}\)-measurable process \(Z(n)\) satisfy the following conditions:_
* \(|Z(n+1)-Z(n)|<C\) _almost surely for some constant_ \(C\)_._
* \(\mathbb{E}(Z(n+1)-Z(n)|\mathcal{F}_{n})=f\left(\frac{Z(n)}{n}\right)+O\left( \frac{1}{n}\right)\) _for some concave function_ \(f(x)\)_._
* \(f(0)=0\)_,_ \(f^{\prime}(0)>1\)_,_ \(f(c)=0\) _for some_ \(c>0\)_._
_Then almost surely_
\[\lim_{n\to\infty}\frac{Z(n)}{n}=c.\]
Since the types of vertices are i.i.d. random variables with a finite number of values, the number \(N_{i}(n)\) of vertices of type \(i\) in \(G_{n}\) satisfies the law of the iterated logarithm. Hence, due to (3), for any \(\epsilon>0\)
\[\Pr\left(\mathcal{A}_{\epsilon}^{i}(n_{0})\right)\to 0\quad\text{ as }\quad n_{0}\to\infty\]
where
\[\mathcal{A}_{\epsilon}^{i}(n_{0})=\left\{\exists n\geq n_{0}\;D_{n}^{i}>(2m+2 k+\beta+\epsilon)p_{i}n\text{ or }|N_{i}(n)-p_{i}n|>\epsilon n\right\}. \tag{4}\]
Let us consider the evolution of \(M_{1}(n)\) (for the other types the argument is the same). At step \(n+1\) it can increase in two ways. First, we may draw an edge from \(v_{n+1}\) to a vertex with the highest degree. The probability (conditioned on the graph \(G_{n}\)) of doing so (for each of the \(m\) possible edges; the same procedure is independently repeated \(m\) times) is at least (exactly, if there is a single vertex with the highest degree) \(\frac{M_{1}(n)+\beta}{D_{n}}\). Second, we may draw edges to it during an edge step (the same procedure is independently repeated \(k\) times), either when it is chosen from the sample or when it is the initial vertex of a pair, which happens with probability \(1/n\). In the former case, we first need the initial vertex of a pair to be a type \(1\) vertex, which happens with conditional probability \(\frac{N_{1}(n)}{n}\). Then, we need a vertex with the highest degree to appear in the sample. The probability of that vertex landing in an exact position of the sample is at least \(\frac{M_{1}(n)+\beta}{D_{n}^{1}}\), so the probability for a vertex of maximal degree to be in the sample is at least \(1-\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{d}.\) As a result we get the estimate
\[\mathbb{E}(M_{1}(n+1)-M_{1}(n)|\mathcal{F}_{n})\geq m\frac{M_{1}(n)+\beta}{D_{n}}+k\left(\frac{1}{n}+\frac{N_{1}(n)}{n}\left(1-\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{d}\right)\right). \tag{5}\]
Hence, we have the following estimate:

\[\mathbb{E}(M_{1}(n+1)-M_{1}(n)|\mathcal{F}_{n})\geq m\frac{M_{1}(n)}{n}\left(\frac{1}{\frac{D_{n}}{n}}\right)+k\frac{N_{1}(n)}{n}\left(1-\left(1-\frac{M_{1}(n)}{n}\frac{1}{\frac{D_{n}^{1}}{n}}\right)^{d}\right)+O\left(\frac{1}{n}\right).\]
As a result, for \(n\geq n_{0}\), writing \(\overline{\mathcal{A}^{1}_{\epsilon}(n_{0})}\) for the complement of the event \(\mathcal{A}^{1}_{\epsilon}(n_{0})\), we get

\[\mathbf{1}\{\overline{\mathcal{A}^{1}_{\epsilon}(n_{0})}\}\mathbb{E}(M_{1}(n+1)-M_{1}(n)|\mathcal{F}_{n})\geq\mathbf{1}\{\overline{\mathcal{A}^{1}_{\epsilon}(n_{0})}\}\left(\frac{M_{1}(n)}{n}\left(\frac{m}{2m+2k+\beta+\epsilon}\right)+\right.\]

\[\left.+(p_{1}-\epsilon)k\left(1-\left(1-\frac{M_{1}(n)}{n}\frac{1}{p_{1}(2m+2k+\beta+\epsilon)}\right)^{d}\right)+O\left(\frac{1}{n}\right)\right)=\]

\[=\mathbf{1}\{\overline{\mathcal{A}^{1}_{\epsilon}(n_{0})}\}\left(f_{\epsilon,1}\left(\frac{M_{1}(n)}{n}\right)+O\left(\frac{1}{n}\right)\right),\]

where \(f_{\epsilon,1}(x)=\frac{m}{2m+2k+\beta+\epsilon}x+(p_{1}-\epsilon)k\left(1-\left(1-\frac{x}{p_{1}(2m+2k+\beta+\epsilon)}\right)^{d}\right)\). The function \(f_{\epsilon,1}(x)\) is concave, \(f_{\epsilon,1}(0)=0\), and \(f_{\epsilon,1}^{\prime}(0)=\frac{m+dk\left(1-\frac{\epsilon}{p_{1}}\right)}{2m+2k+\beta+\epsilon}>1\) for small enough \(\epsilon\). Also, \(M_{1}(n+1)-M_{1}(n)\leq m+k\). Therefore, due to Lemma 2,
\[\liminf_{n\to\infty}\frac{M_{1}(n)}{n}\geq x_{*}^{\epsilon,1},\]
where \(x_{*}^{\epsilon,1}\) is the root of \(f_{\epsilon,1}(x)\) in \((0,p_{1}(2m+2k+\beta+\epsilon))\).
As \(\epsilon\to 0\), the polynomial \(f_{\epsilon,1}(x)\) converges uniformly to \(p_{1}f\left(\frac{x}{p_{1}}\right)\). Therefore \(x_{*}^{\epsilon,1}\to p_{1}x_{*}\) as \(\epsilon\to 0\).
Note that the inequality in equation (5) is due to the possibility of having multiple vertices of type 1 with the highest degree.
We will use a persistent hub argument, similar to the one used in [1], to show that the highest degree is achieved on a single vertex for large enough \(n\). In [1] it was proven that if we consider a two-dimensional random walk \((A_{n},B_{n})_{n\geq n_{0}}\) on \(\mathbb{N}^{2}\), which can only move right or up, with
\[\begin{split}\Pr\left(A_{n+1}-A_{n}=1,B_{n+1}-B_{n}=0|A_{n},B_{n }\right)\geq\frac{A_{n}+\beta}{A_{n}+B_{n}+2\beta},\\ \Pr\left(A_{n+1}-A_{n}=1,B_{n+1}-B_{n}=0|A_{n},B_{n}\right)+\\ +\Pr\left(A_{n+1}-A_{n}=0,B_{n+1}-B_{n}=1|A_{n},B_{n}\right)=1 \end{split} \tag{6}\]
then
\[\Pr\left(\exists n:A_{n}=B_{n}|A_{n_{0}}=a,B_{n_{0}}=1\right)\leq\frac{Q(a)}{2 ^{a}} \tag{7}\]
for some polynomial \(Q(a)\) (Corollary 8), and the number of times \(n\) with \(A_{n}=B_{n}\) is finite almost surely for any starting point \((A_{n_{0}},B_{n_{0}})\) (Proposition 9).
We apply these results to a pair \((M_{1}(n),\deg_{G_{n}}v_{i})\), where \(v_{i}\), \(i\leq n\), is a given vertex of type 1. The estimate (7) then implies that almost surely only a finite number of vertices can ever become the highest degree vertices among vertices of their type. The second statement results in only a finite number of changes of leadership, and, hence, after some (random) time, the highest degree is achieved on a single vertex (i.e. the probability that there is no change of leadership after time \(n\) tends to 1 as \(n\to\infty\)).
Let \(u\) be a vertex of type 1 that does not have the maximal degree. We check the first inequality from (6) by proving such an inequality for both steps of adding an edge. In case we draw an edge from a new vertex to either \(u\) or a vertex with the highest degree, the probabilities of doing so would be exactly \(\frac{\deg_{G_{n}}u+\beta}{D_{n}}\) for \(u\) and at least \(\frac{M_{1}(n)+\beta}{D_{n}}\) for a vertex with the highest degree, so the first condition holds. During an edge step, we could increase the degree of \(u\) by choosing it as the first vertex, with probability \(\frac{1}{n}\), or by choosing it from the sample. To choose \(u\) from a sample, we need, at the least, that no vertex with the highest degree is in the sample. Let us assume that in this case, if \(u\) is present in the sample, we choose \(u\); such an assumption only increases the probability of choosing \(u\). Hence, the conditional expected increase of \(B_{n}:=\deg_{G_{n}}u\) from adding an edge during an edge step would be equal to
\[\frac{1}{n}+\frac{D_{n}^{1}}{D_{n}}\left(\left(1-\frac{M_{1}(n)+\beta}{D_{n}^ {1}}\right)^{d}-\left(1-\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^{1}}\right)^{d} \right).\]
For \(M_{1}(n)\), the same increase would be equal to

\[\frac{1}{n}+\frac{D_{n}^{1}}{D_{n}}\left(1-\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{d}\right).\]
Let us divide the first increase by the second and prove that the ratio is at most \(\frac{B_{n}+\beta}{M_{1}(n)+\beta}\), which results in the existence of the persistent hub. We get
\[\frac{\frac{1}{n}+\frac{B_{n}+\beta}{D_{n}^{1}}\sum_{i=0}^{d-1}\left(1-\frac{ M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}\left(1-\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^ {1}}\right)^{d-i-1}}{\frac{1}{n}+\frac{M_{1}(n)+\beta}{D_{n}^{1}}\sum_{i=0}^{ d-1}\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}}=\]
\[=\frac{\frac{B_{n}+\beta}{D_{n}^{1}}\sum_{i=0}^{d-1}\left(1-\frac{M_{1}(n)+ \beta}{D_{n}^{1}}\right)^{i}}{\frac{1}{n}+\frac{M_{1}(n)+\beta}{D_{n}^{1}} \sum_{i=0}^{d-1}\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}}+\]
\[+\frac{\frac{1}{n}-\frac{B_{n}+\beta}{D_{n}^{1}}\sum_{i=0}^{d-1}\left(1-\frac {M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}\left(1-\left(1-\frac{M_{1}(n)+B_{n}+2 \beta}{D_{n}^{1}}\right)^{d-i-1}\right)}{\frac{1}{n}+\frac{M_{1}(n)+\beta}{D_{ n}^{1}}\sum_{i=0}^{d-1}\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}}\leq\]
\[\leq\frac{B_{n}+\beta}{M_{1}(n)+\beta}+\]
\[+\frac{\frac{1}{n}-\frac{B_{n}+\beta}{D_{n}^{1}}\frac{M_{1}(n)+B_{n}+2\beta}{ D_{n}^{1}}\sum_{i=0}^{d-2}\left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i} \sum_{j=0}^{d-i-2}\left(1-\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^{1}}\right)^{j}} {\frac{1}{n}+\frac{M_{1}(n)+\beta}{D_{n}^{1}}\sum_{i=0}^{d-1}\left(1-\frac{M_ {1}(n)+\beta}{D_{n}^{1}}\right)^{i}}.\]
Note that \(\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^{1}}\) is separated from \(1\). Hence there is a constant \(c>0\), such that almost surely
\[\frac{1}{D_{n}^{1}}\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^{1}}\sum_{i=0}^{d-2} \left(1-\frac{M_{1}(n)+\beta}{D_{n}^{1}}\right)^{i}\sum_{j=0}^{d-i-2}\!\!\left( 1-\frac{M_{1}(n)+B_{n}+2\beta}{D_{n}^{1}}\right)^{j}\geq\frac{c}{n}.\]
Therefore, if \(\deg_{G_{n}}u+\beta>\frac{1}{c}\), the last term is negative and we get the needed estimate. The condition \(\deg_{G_{n}}u+\beta>\frac{1}{c}\) is not significant for the random walk estimates, since it only affects the starting point by a constant (we start from \((A,1/c)\) instead of \((A,1)\)).
Hence, the number of vertices that could achieve the maximum degree and the number of changes of degree leadership between them are almost surely finite. Therefore, there exists a random variable \(N_{1}\) such that \(M_{1}(n)\) is achieved on the same single vertex for all \(n>N_{1}\). As a result, we can replace the inequality in (5) with equality for \(n>N_{1}\), which does not affect the convergence (we can replace \(\overline{\mathcal{A}_{\epsilon}(n_{0})}\) with \(\overline{\mathcal{A}_{\epsilon}(n_{0})}\cap\{n>N_{1}\}\) in the argument). Hence, we get that
\[\liminf_{n\rightarrow\infty}\frac{M_{1}(n)}{n}=x_{*}^{\epsilon,1}.\] | We study the asymptotic behavior of the maximum degree in the preferential attachment model. Using a choice-based edge-step and adding vertex types, we prove a condensation effect on multiple vertices.
2302.00081 | SonoUno web: an innovative user centred web interface | Sonification as a complement to visualization has been under research for
decades as a new way of deploying data. ICAD conferences gather together
specialists from different disciplines to discuss sonification. Different
tools such as sonoUno, starSound and Web Sandbox are attempts to provide a tool that opens
astronomical data sets and sonifies them in conjunction with visualization. In this
contribution, the sonoUno web version is presented; this version allows users to
explore data sets without any installation. The data can be uploaded or a
pre-loaded file can be opened; the sonification and the visual characteristics
of the plot can be customized in the same window. The plot, sound and marks can
be saved. The web interface was tested with the most commonly used screen readers in
order to confirm its good performance. | Gonzalo De La Vega, Leonardo Martin Exequiel Dominguez, Johanna Casado, Beatriz García | 2023-01-31T20:23:00 | http://arxiv.org/abs/2302.00081v1 | # SonoUno web: an innovative user centred web interface+
###### Abstract
Sonification as a complement to visualization has been under research for decades as a new way of deploying data. ICAD conferences gather together specialists from different disciplines to discuss sonification. Different tools such as sonoUno, starSound and Web Sandbox are attempts to provide a tool that opens astronomical data sets and sonifies them in conjunction with visualization. In this contribution, the sonoUno web version is presented; this version allows users to explore data sets without any installation. The data can be uploaded or a pre-loaded file can be opened; the sonification and the visual characteristics of the plot can be customized in the same window. The plot, sound and marks can be saved. The web interface was tested with the most commonly used screen readers in order to confirm its good performance.
Keywords: Sonification, Graphic User Interface, Human centred design.
## 1 Introduction
The need to explore data sets beyond the visual field has led the community to study new ways to represent them; this is the case of sonification. In this sense, the ICAD conferences [1] have existed since 1992, bringing together scientists from different fields to discuss sonification, how people perceive it and how it can be used. Related to sonification, Phillips and Cabrera [2] present a sonification workstation; and related to astronomy, Shafer et al. [3] and Garcia Riber [4] develop specific projects to sonify solar harmonics and light curves.
During the past years, several sonification programs were created as tools to make possible the multimodal exploration of visual and audio graphs; this is
the case of xSonify[5], Sonification Sandbox[6], Sonipy[7, 8], StarSound[9] and SonoUno[10]. All of them are standalone software packages that require the user to download and install them. Regarding the possibility of analyzing data with sonification, Diaz-Merced [11] concluded in her thesis, using the standalone sonification software xSonify, that sonification as a complement to the visual display augments the detection of features in the data sets under analysis.
Given the complexity of using the available standalone software, and to avoid errors and problems during software installation, the idea of sonification software working through the web began to make sense. TwoTone[12], TimeWorkers[13], Sonification Blocks[14] and Web Sandbox[15] are different attempts to make it real, but none of them allow the end user to explore the data, make choices about the configuration, and decide how they want to display the data and functionalities. In this sense, we present in this contribution a graphic user interface available on the web that offers the same user-centred framework and almost the same functionalities as the sonoUno [16] desktop software.
The sonoUno software, in its web and desktop versions, is a public tool to display, sonify and apply mathematical functions to any data set. The original application of the software is to astronomical data, but it can be used with any type of data presented in files with two or more columns (csv or txt). SonoUno has followed a user-centred approach from the beginning: first with a theoretical framework, then with focus group sessions, and finally with a community of people who kindly test the software and send feedback to the developers [17].
The sonoUno web interface was tested on different operating systems and with different screen readers. This work was partially financed by the REINFORCE project (GA 872859), with the support of the EC Research Innovation Action under the H2020 Programme SwafS-2019-1 (www.reinforceeu.eu).
## 2 Methodology
Bearing in mind that the end user must be able to choose, configure and decide how they want to explore their data sets, this project requires the use of HTML, JavaScript, CSS and ARIA (Accessible Rich Internet Applications) tools and protocols to make it possible. It is a novel approach, because it is not common for web interfaces to allow users to make decisions and configure the display during the interaction. For this reason, collapsible panels were used, maintaining the principal framework with few functionalities and giving users the power to decide what they want to display and use.
In consideration of how people with visual impairments handle digital interfaces, and how screen readers read the graphic user interface, the sonoUno web design uses the ARIA standard. Not only were the ARIA labels specified, but a specific order was also defined to generate a good workflow through the functionalities, ensuring that the screen reader describes things just as the visual display presents them. Moreover, unnecessary elements of the visual display are not read
by the screen reader; for example, the plot is not read as a plot; instead, the play button allows the user to sonify the plotted data.
Another big challenge for this development was to ensure synchronization between the audio and the visual graph, bearing in mind the asynchronous nature of JavaScript. Timed events were used to guarantee the correct relationship during reproduction of the data set. Furthermore, during the latest tests using large data sets a new problem arose: in the web version, with all these functionalities, plotting and sonifying large data sets is very difficult, takes a lot of time and produces errors in some cases. To solve this issue a decimating filter is being tested.
### Graphic User Interface design
In order to keep the web display as similar as possible to the desktop deployment, a menu was placed at the top containing: input (allows opening csv/txt data sets, sounds and marks; marks can be placed in a data set to point to parts of interest); output (allows saving the sound, the png plot and the marks); sample data (contains pre-loaded data sets that can be displayed in the tool); help (opens the complete manual of the tool); and quickstart (opens a summary of what to expect and how to use the principal functions).
The reproduction buttons are always displayed under the plot; these buttons are: play/pause, stop, mark point, delete mark, reset plot, the command text entry, and two sliders to set the x position and the tempo. Mathematical functionalities and the configurations are located in collapsible panels; this keeps the display organized, with few elements to be read by the screen reader (which helps reduce memory overload).
The sound and graphic display can be customized by the end user as they wish. For the sound, the maximum and minimum frequencies can be set, along with the volume, the sound type (sine, flute, piano and celesta), the choice between continuous and logarithmic scale, and the envelope of the sound. Secondly, the plot configuration allows setting the titles, the grid, the line and the markers, and flipping the x and y axes.
## 3 Results
A screenshot of the interface is shown in Figure 1. This web tool allows users to see and hear data sets opened from csv or txt files; end users can also load data sets from the 'Data Sample' menu item. For example, the gravitational wave glitch shown in Figure 1 was selected from that menu item. At the bottom, the text entry box allows typing the functionalities available in the interface (this feature allows using the web interface from there, avoiding the use of the mouse).
The plot section allows zooming directly on the plot with the mouse. The abscissa position slider (see Figure 2 at the top) allows moving the cursor
through the data set and beginning the reproduction from there. The tempo slider allows speeding the reproduction up and down. Figure 2 also shows the math function panel opened, where at the moment there are four functions ready to use: peak finder (in this case a new window allows selecting the percentage of sensitivity and whether to clean or mark the peaks); logarithmic; quadratic; and smooth. At the bottom of Figure 2 the collapsed configuration panels are shown. Figure 3 shows the configuration panels opened with all their functions.
The SonoUno web interface was tested in different platforms with different screen readers (NVDA on Windows, Voice Over on MAC and Orca on Ubuntu). All the elements are enunciated by the screen reader, the elements on the panels are only recognizable when the panel is opened (this is very important to maintain the relation between the visual and auditory display).
## 4 Conclusion
A web interface for the sonoUno software was developed, keeping the original distribution of functionalities as similar as possible and continuing the user-centred design adopted from the beginning of the tool. This web interface allows users to explore data sets, making decisions about what they want to display and how.
Concerning the use of screen readers, the elements of the interface have descriptions, and the order of the audible display was carefully designed to ensure an adequate correlation between the visual and audible deployments. The principal
Figure 1: A sonoUno web interface screenshot; it includes the menu, plot, reproduction buttons and the command line text box. The plot shows a gravitational wave glitch, detected by EGO[18] and part of the open data provided by the REINFORCE project.
Figure 3: A screenshot of the sound and plot configuration panels opened.
Figure 2: A screenshot with the x position and tempo sliders at the top, the math function panel opened, and the sound and plot configurations collapsed.
free screen reader of each operating system was tested, and the results show good performance.
This innovative approach seeks to continue growing, removing barriers and offering more accessible tools to analyse data sets. Since the beginning of the year, this web interface has been used by a professor in Spain with visually impaired students. This experience will let us know which features to enhance and whether the sonoUno web interface can be used by students to better understand math and science.
As future work, the web interface is being adapted for use from any mobile device, and the axis limits of the plot will be settable by indicating the specific number at which to cut. New user tests and focus groups will be performed to maintain and ensure the user-centred design philosophy of sonoUno and all the associated tools.
| Sonification has been investigated for decades as a new way of deploying data, complementing visualization. The ICAD conferences have become a venue where specialists from a wide range of disciplines discuss sonification. Tools such as SonoUno, starSound and Web Sandbox seek to open up astronomical data sets and realize sonification together with visualization. In this paper, the web version of sonoUno is presented; this version allows exploring data sets without any installation. Data can be uploaded or pre-loaded files can be opened, and the sonification and the visual settings of the plot can be adjusted in the same window. The plot, the sound and the marks can be saved. The web interface was tested with the principal screen readers in use to confirm its performance.
|
2309.15098 | Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of
Language Models | We investigate the internal behavior of Transformer-based Large Language
Models (LLMs) when they generate factually incorrect text. We propose modeling
factual queries as constraint satisfaction problems and use this framework to
investigate how the LLM interacts internally with factual constraints. We find
a strong positive relationship between the LLM's attention to constraint tokens
and the factual accuracy of generations. We curate a suite of 10 datasets
containing over 40,000 prompts to study the task of predicting factual errors
with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe,
a method probing attention patterns, that can predict factual errors and
fine-grained constraint satisfaction, and allow early error identification. The
approach and findings take another step towards using the mechanistic
understanding of LLMs to enhance their reliability. | Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi | 2023-09-26T17:48:55 | http://arxiv.org/abs/2309.15098v2 | # Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
###### Abstract
We investigate the internal behavior of Transformer-based Large Language Models (LLMs) when they generate factually incorrect text. We propose modeling factual queries as Constraint Satisfaction Problems and use this framework to investigate how the model interacts internally with factual constraints. Specifically, we discover a strong positive relation between the model's attention to constraint tokens and the factual accuracy of its responses. In our curated suite of 11 datasets with over 40,000 prompts, we study the task of predicting factual errors with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe, a method probing self-attention patterns, that can predict constraint satisfaction and factual errors, and allows early error identification. The approach and findings demonstrate how using the mechanistic understanding of factuality in LLMs can enhance reliability.1
Footnote 1: Our datasets, evaluation protocol, and methods will be released at [https://github.com/microsoft/mechanistic-error-probe](https://github.com/microsoft/mechanistic-error-probe).
## 1 Introduction
Large language models (LLMs) encode substantial knowledge (Petroni et al., 2019; Srivastava et al., 2022), yet they are prone to generating factually incorrect text. For instance, LLMs can generate confident-appearing completions with _hallucinations_(Zhang et al., 2023; Ji et al., 2023), fabricating entities or factual claims. As LLMs reach wider audiences and are used for safety-critical applications, understanding factuality becomes of paramount importance.
However, our understanding of how LLMs process factual queries and produce errors remains nascent. Existing approaches to interpret how models produce outputs fall into two categories; they either i) treat the LLM as a black box and ask it questions about generated factual claims, or ii) use white-box internals to study how LLMs process factual queries mechanistically. Though promising and exploratory, each approach has limitations.
Black-box approaches investigate the consistency of the claims of an LLM using follow-up questions with other LLMs (Cohen et al., 2023) or have the LLM judge its own response (Zhang et al., 2023; Manakul et al., 2023). However, explanations from LLMs have been shown to be unreliable (Turpin et al., 2023) or to convey contradictory signals, e.g. LLMs can produce an answer and then acknowledge that it is wrong (Zhang et al., 2023; Mundler et al., 2023). Further, these approaches use multiple generations from LLMs, which may be prohibitively expensive to use in practice.
Mechanistic white-box approaches investigate the internal mechanisms of LLMs to dissect factual recall. For instance, Meng et al. (2022); Geva et al. (2023) focus on facts with the (subject, relation, object) structure (e.g. Paris, capital of, France) and propose insightful mechanisms of how an LLM recalls a fact. They suggest that the Multi-Layer Perceptron (MLP) layers store facts, and attention layers transfer factual information from the subject tokens. However, these works focus on when the model can produce factually correct responses. Mechanics of factual errors are yet to be explored.
**Our Contributions:** Here, we investigate the internal mechanisms of LLMs when they produce factual errors. We propose to view factual queries as Constraint Satisfaction Problems (CSPs), where queries comprise constraints that completions should satisfy to be factually correct (SS3); e.g. in Figure 1 the _director name_ or the _award name_ are constraints on the model's response to a search query for a movie. We explore how properties of constraints, such as popularity, relate to the LLM's correctness and explore mechanisms of constraint satisfaction (SS4). We find that attention to constraint tokens correlates with LLM's factual correctness, where less attention indicates inaccurate responses.
Building on our insights, we propose SAT Probe, a method that predicts constraint satisfaction and factual errors using a simple probe on the LLM's attention to constraints (SS5). To test SAT Probe, we curate a suite of \(11\) datasets of single- and multi-constraint queries that in total comprise >\(40,000\) prompts. We find that SAT Probe performs comparably to the LLM's confidence. Further, SAT Probe can predict factual errors halfway through the forward pass to stop the computation partway and save costs. Our findings contribute to the mechanistic understanding of LLMs and demonstrate the potential of model internals to understand and mitigate factual errors.
## 2 Background: Language Models and Factual Recall
We first describe the transformer architecture (Vaswani et al., 2017). Our presentation largely follows that of Meng et al. (2022); Geva et al. (2023); Elhage et al. (2021), and similar to these works we omit the details around layer normalization for brevity. Let us have an input sequence of \(T\) tokens \(t_{1},...,t_{T}\) and \(t_{i}\in\mathcal{V}\) for a fixed vocabulary \(\mathcal{V}\). A token \(t_{i}\) is initially represented with a \(d\)-dimensional vector \(\mathbf{x}_{i}^{0}\in\mathbb{R}^{d}\) using an embedding matrix \(E\in\mathbb{R}^{|\mathcal{V}|\times d}\). We use \(\mathcal{V}^{+}\) to denote a sequence of tokens.
The architecture consists of \(L\) layers that transform the input token embeddings to a sequence of hidden states \(\mathbf{x}_{1}^{\ell},\dots,\mathbf{x}_{T}^{\ell}\) at each layer \(\ell\) where \(\mathbf{x}_{i}^{\ell}\) denotes the state of token \(i\). Often, each hidden state vector has the same number of dimensions, i.e., \(\forall\,i,\ell\ \mathbf{x}_{i}^{\ell}\in\mathbb{R}^{d}\). The states are obtained by:
\[\mathbf{x}_{i}^{\ell}=\mathbf{x}_{i}^{\ell-1}+\mathbf{a}_{i}^{\ell}+\mathbf{ m}_{i}^{\ell}, \tag{1}\]
where we call \(\mathbf{m}_{i}^{\ell}\) the _MLP contribution_ and \(\mathbf{a}_{i}^{\ell}\) the _attention contribution_ to a token \(i\) at layer \(\ell\). The LLM produces a predicted probability distribution for the next token by \(\hat{\mathbb{P}}(t_{T+1}|t_{1:T})=\text{Softmax}\big{(}W_{U}\mathbf{x}_{T}^{L} +\mathbf{b}_{U}\big{)}\), where \(W_{U}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the unembedding matrix, \(\mathbf{b}_{U}\in\mathbb{R}^{|\mathcal{V}|}\). In this work, we study the interactions between tokens. Unlike attention which is a function of the states of all
Figure 1: **Tracking attention to predict constraint satisfaction and factual errors. We view factual queries as Constraint Satisfaction Problems. That is, factual queries impose a set of constraints that the LLM’s responses must satisfy. To predict constraint satisfaction (i.e., factual correctness), we track the attention to the constraint tokens in an LLM (here, Llama-2 13B). We find that attention to the constraint tokens highly correlates with constraint satisfaction and factual correctness. The red text indicates factually incorrect completions, whereas the blue indicates factually correct completions.**
tokens, MLP contribution is a function of the state of the _same_ token. Thus, we do not focus on the MLP contribution; see Appendix A for a description.
The **attention** operation updates each token's state using the previous states at all positions, i.e., the representation for a token is updated by 'attending' to all the tokens that come before it. Formally, the operation involves four projection matrices \(W_{Q}^{\ell},W_{K}^{\ell},W_{V}^{\ell},W_{O}^{\ell}\in\mathbb{R}^{d\times d}\) that correspond to the 'query', 'key', 'value', and 'output' projections. Each of these matrices is split into multiple heads, where \(W_{Q}^{\ell,h},W_{K}^{\ell,h},W_{V}^{\ell,h}\in\mathbb{R}^{d\times d_{h}}\) and \(W_{O}^{\ell,h}\in\mathbb{R}^{d_{h}\times d}\) denote the matrices for head \(h\), \(d_{h}\) is the dimensionality for each head, and \(h\in[H]\). In practice, the embeddings are split into equal parts such that \(d_{h}=\frac{d}{H}\)(Elhage et al., 2021; Dar et al., 2022; Touvron et al., 2023). The _attention contribution_ from token \(j\) to token \(i\), \(\mathbf{a}_{i,j}^{\ell}\), is defined as
\[\mathbf{a}_{i,j}^{\ell}=\sum_{h=1}^{H}A_{i,j}^{\ell,h}\big{(}x_{j}^{\ell-1}W_{V}^{\ell,h}\big{)}W_{O}^{\ell,h} \tag{2}\] \[A^{\ell,h}=\text{Softmax}\Bigg{(}\frac{\Big{(}X^{\ell-1}W_{Q}^{\ell,h}\Big{)}\Big{(}X^{\ell-1}W_{K}^{\ell,h}\Big{)}^{T}}{\sqrt{d/H}}\Bigg{)}, \tag{3}\]
where \(\mathbf{a}_{i}^{\ell}=\sum_{j\in[T]}\mathbf{a}_{i,j}^{\ell}\) and Softmax is taken row-wise. \(A^{\ell,h}\in\mathbb{R}^{T\times T}\) are the _attention weights_ computed by the \(h\)-th attention head at layer \(\ell\), and \(A_{i,j}^{\ell,h}\) is the entry in the \(i\)-th row and \(j\)-th column of the matrix. For autoregressive LLMs, \(A^{\ell,h}\) is lower triangular since each token can only attend to the representation of the previous tokens. For brevity, we use \([H]\) to denote the sequence of integers from \(1\) to \(H\), and superscript \([H]\) indicates stacking items for all \(h\in[H]\), i.e., \(A_{i,j}^{\ell,[H]}=\{A_{i,j}^{\ell,h}\}_{h=1}^{H}\in\mathbb{R}^{H}\).
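To make the notation concrete, the following is a minimal NumPy sketch (ours, not code released with the paper) of the per-layer attention contribution \(\mathbf{a}_{i,j}^{\ell}\) in Eq. (2), and of max-pooling attention weights over a set of constraint tokens, which is how constraints are featurized later:

```python
import numpy as np

def attention_contribution(A, X_prev, W_V, W_O, i, j):
    """a_{i,j}^l of Eq. (2): contribution of token j to token i, summed over heads.

    A      : (H, T, T)   attention weights A^{l,h} of a single layer
    X_prev : (T, d)      hidden states x^{l-1}
    W_V    : (H, d, d_h) per-head value projections
    W_O    : (H, d_h, d) per-head output projections
    """
    H = A.shape[0]
    return sum(A[h, i, j] * (X_prev[j] @ W_V[h]) @ W_O[h] for h in range(H))

def constraint_attention_weights(A_all, constraint_tokens, T_last):
    """Max attention weight over the constraint token set C, per layer and head:
    the generating position T_last attending to each c in C."""
    # A_all: (L_layers, H, T, T) stacked attention weights across all layers
    return A_all[:, :, T_last, constraint_tokens].max(axis=-1)  # (L_layers, H)
```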
**Mechanics of Factual Recall in Language Models:** Recent work investigates the internal activations of language models to understand the mechanics of factual recall. Meng et al. (2022); Geva et al. (2021) suggest that MLP layers store factual associations, by studying factual queries of the form (subject, relation, object). Further, Geva et al. (2023); Meng et al. (2022); Elhage et al. (2021) suggest that attention layers transfer factual knowledge to where it will be used. Specifically, when the LLM is given the prompt _LeBron James professionally plays_, the information _LeBron James professionally plays basketball_ is extracted by the MLP contribution to the tokens for _LeBron James_ (subject). Next, the attention layers transfer the information from the tokens of the subject to the last token for the model to generate _basketball_ (object). However, these works study the internal mechanisms when the LLM's completions are factually correct, _not when the LLM produces factually incorrect text_.
## 3 Factual Queries as Constraint Satisfaction Problems
Choosing the right framework to study factual errors is challenging. One can naively categorize completions as factually correct or incorrect, yet this binary view can fall short, e.g., queries that are easy for the LLM and ones that it barely gets right are indistinguishable since both are labeled as 'correct'. Further, it prevents us from building a model around why some queries are more difficult or which parts of the queries drive the LLM to failure.
To systematically study factual queries and LLMs' internal behavior, we propose the CSP view:
**Definition 3.1** (Factual Query as a CSP).: A factual query is specified by a set of constraints \(\mathcal{C}=\{(C_{1},V_{1}),\ldots,(C_{K},V_{K})\}\) where \(C_{k}\in\mathcal{V}^{+}\) indicates the sequence of tokens for the constraining entity \(k\)2, and \(V_{k}:\mathcal{V}^{+}\rightarrow\{0,1\}\) is a _verifier_ that takes a set of generation tokens as the input and returns whether the constraint indexed by \(k\) is satisfied. Under this view, we call a completion \(Y\) a _factual error_ if \(\exists\,k\in[K]:V_{k}(Y)=0\), that is, if there is a constraint in the factual query that the response does not satisfy3. Otherwise, we call the response _factually correct_.
Footnote 3: While it may be nontrivial to generally isolate tokens for the constraining entity for arbitrary queries, in our evaluations we investigate settings in which we assume we have access to this set.
A large set of factual queries can be seen as a set of constraints that responses must satisfy to be correct, e.g., see Figure 1. This structure is comprehensive; for example, an important subset of
queries made by users to search engines has historically been conjunctions of constraints (Spink et al., 2001). Structured and multi-constraint queries are also inherent to faceted search and information retrieval (Tunkelang, 2009; Hahn et al., 2010). Further, under this definition, prior (subject, relation, object) queries (Meng et al., 2022) can be seen to have a single-constraint structure. Similarly, instructions to LLMs are also constraints for controlling the output (Ouyang et al., 2022).
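As a minimal illustration of Definition 3.1 (our own sketch; the verifiers below are toy string checks standing in for the Exact Match and WikiData lookups used in Section 5), a factual query can be represented as a list of constraints, each pairing the constraining tokens \(C_{k}\) with a verifier \(V_{k}\):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constraint:
    tokens: List[int]                # token ids of the constraining entity C_k
    verifier: Callable[[str], bool]  # V_k: completion -> {0, 1}

def is_factually_correct(completion: str, constraints: List["Constraint"]) -> bool:
    # A completion is a factual error iff some V_k returns 0 (Definition 3.1).
    return all(c.verifier(completion) for c in constraints)

# Toy two-constraint query in the style of the Words dataset:
word_query = [
    Constraint(tokens=[], verifier=lambda y: y.strip().lower().startswith("e")),
    Constraint(tokens=[], verifier=lambda y: y.strip().lower().endswith("t")),
]
assert is_factually_correct("eggplant", word_query)     # satisfies both
assert not is_factually_correct("exam", word_query)     # violates 'ends with t'
```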
Focusing on the constraints of a CSP can help us reason about the difficulty of a query. We start with two factors that can describe difficulty for factual queries: i) the popularity of the constraining entity, and ii) the constrainedness of the query.
**Popularity of the Entity vs LLM Performance:** Recent work documented the correlation between training data frequency and memorization in LLMs (Carlini et al. (2022); Biderman et al. (2023); _inter alia_). However, even for many open-source LLMs, we cannot compute the frequency of facts since we do not have the training data or a trivial way to search for complex facts. As an accessible proxy for entities from WikiData, we use the number of site links on the page as the _popularity_ metric and we hypothesize that it strongly correlates with the training data frequency or popularity. See Tables 3,4 for examples of popularity statistics across basketball players and football teams.
For Figure 2 left, we produce queries of the form _Tell me the year the basketball player [name] was born in_, and evaluate the LLM's performance via accuracy in this task. Then, we compare the correctness of the LLM for entities (players) of varying popularity. We observe that i) LLM performance is better for entities with higher popularity, and ii) larger LLMs have better performance for entities that are less popular. Similar relationships with popular/typical input are documented in concurrent work (Mallen et al., 2022; Kandpal et al., 2023; Yuksekgonul et al., 2023).
**Constrainedness of the CSP vs LLM Performance:** A well-explored complexity metric for CSPs is constrainedness (Gent et al., 1996). Here, we define constrainedness as the number of potential solutions to the given problem in the domain of the output. For instance, for a query of the form _Tell me a word that starts with the letter e and ends with the letter t_, we quantify constrainedness by the number of such words4 in the English language that satisfy these constraints.
Footnote 4: We use nltk.corpus.words to compute the number of such words.
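Following footnote 4, a short sketch (ours) of how such a constrainedness count can be computed with nltk.corpus.words:

```python
import nltk
nltk.download("words", quiet=True)  # one-time corpus download
from nltk.corpus import words

def constrainedness(prefix: str, suffix: str) -> int:
    """Number of English words satisfying both constraints; a smaller
    count means a more constrained query."""
    return sum(
        1 for w in words.words()
        if w.lower().startswith(prefix) and w.lower().endswith(suffix)
    )

print(constrainedness("e", "t"))  # words that start with 'e' and end with 't'
```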
In Figure 2 right, we show how constrainedness relates to correctness. We observe that i) as the problem becomes more constrained, the LLM performance drops, and ii) larger models generally perform better across all constrainedness levels.
**Summary:** We argue that the CSP lens can provide a useful vocabulary to capture the difficulty of factual queries, and can let us understand what parts of the query are more difficult. Our goal is to build a framework to discuss how LLMs process factual queries and produce factual errors. Next, we describe how we leverage LLMs' internal mechanisms to characterize and predict factual errors.
## 4 Understanding Factual Errors Using Attention to Constraints
Here, we explore how an LLM processes constraints when the model produces factually incorrect text. Geva et al. (2023); Meng et al. (2022) suggest that attention layers transfer the factual information
Figure 2: **Difficulty of the factual query vs LLM performance. Left: Popularity vs Correctness** We observe that the more popular the entity in the factual query is, the more correct the LLMs are. **Right: Constrainedness vs Correctness** We observe that the more constrained the problem is (i.e. has a smaller set of potential solutions), the less correct the LLMs are.
from the source entity (e.g., _Bad Romance_) to the last token for generation (to generate _Lady Gaga_, Figure 3) when the LLM correctly addresses a query. However, these works do not explore the mechanisms when the model produces factually incorrect responses. Intuitively, we want to quantify how the LLM interacts with constraints to understand constraint satisfaction and thus factual errors.
To study how the LLM processes a constraint, we focus on the attention _to the constraint tokens_, i.e.,
\[\mathbf{a}_{c,T}^{\ell,h}=A_{c,T}^{\ell,h}\big{(}x_{c}^{\ell-1}W_{V}^{\ell,h} \big{)}W_{O}^{\ell,h}, \tag{4}\]
where \(\mathbf{a}_{c,T}^{\ell,h}\in\mathbb{R}^{d}\) indicates the attention contribution from a constraint token \(c\) through a head \(h\) to the final token \(T\) (where the \(T+1\)-th token will be generated). The total attention contribution to \(T\) is then \(\mathbf{a}_{c,T}^{\ell}=\sum_{h}\mathbf{a}_{c,T}^{\ell,h}\). When the constraint comprises multiple tokens denoted by the set \(C\), we take the maximum value across all constraint tokens, i.e., \(A_{C,T}^{\ell,h}=\max_{c\in C}A_{c,T}^{\ell,h}\) or \(\mathbf{a}_{C,T}^{\ell,h}=\max_{c\in C}||\mathbf{a}_{c,T}^{\ell,h}||\)5. An example is shown in Figure 1; we track the regions that are marked by \(C_{1}\) and \(C_{2}\), which in this case represent the constraint that the movies were directed by the specified directors, and also won the specified awards.
Footnote 5: While there is earlier work that suggests the last constraint token could be the most important, we observed that in practice there are subtleties. See Appendix C.2 for a short discussion.
To understand whether attention to constraints can help explain factual errors, we study three factors. First, we explore the relationship between attention and popularity of the constraining entity, as we find that LLM's correctness correlates with popularity (Fig 2). Next, we explore the relation of attention to the LLM's confidence \(\hat{\mathbb{P}}(Y|X)\), which estimates the probability of a completion \(Y\) given the prompt \(X\). Finally, we explore how attention patterns behave when we scale the LLMs.
**Attention predicts popularity:** In Figure 9, we show the results for predicting the popularity of the constraining entity in the prompt (the basketball player) only from the attention weights \((A_{C,T}^{[L],[H]})\) using linear regression. In all LLMs (Llama-2 7B, 13B, 70B), the predicted popularities using attention values significantly correlate with the ground truth popularity values (over a held-out set, with Spearman's Correlation \(\rho\geq 0.65\) and p-value \(p\approx 0\) for all LLMs). We give further details of the protocol in Appendix C.1.
This is a curious case: _Why should we have expected that LLMs have more attention to popular entities_? This finding aligns with the recent theoretical work that identifies a _frequency bias_ of self-attention layers, gradually putting more attention on tokens that co-occur a lot with the query token during training (Tian et al., 2023). However, our main goal is to characterize and predict factual errors. While popularity seems predictive, we may not always have access to a clean popularity measure or training data frequency of constraints.
**Attention correlates with confidence and LLM's correctness:** In Figure 4 (left four panels), each row represents the attention flow across layers for a single sample and we sort the points by the
Figure 3: **Tracking attention to predict factual errors in single-constraint settings. We track the attention contribution from the constraint tokens during generation. We observe a small-norm contribution (\(||\mathbf{a}_{c,T}^{\ell}||\)) when the LLM makes a factual error, in contrast, we observe a larger-norm attention contribution when the LLM is factually correct. The red text indicates factually incorrect completions, whereas the blue text indicates factually correct completions.**
confidence of the LLM. The leftmost panels show the attention for the \(25\) most confident predictions and the middle panels show the \(25\) least confident predictions; where the x-axis shows the layers, and colors indicate the norm of the attention contribution from the constraints \(\left(||\mathbf{a}_{C,T}^{\ell,[H]}||\right)\). The core observation is that _when the LLM is accurate, there is more attention to constraint tokens_ (first column) in sharp contrast to cases where the LLM fails and the attention is weak (second column).
In Figure 4's rightmost plots, queries are sorted and grouped by the LLM's total attention contribution from the constraints across all layers (\(\sum_{\ell}||\mathbf{a}_{C,T}^{\ell}||\)), and LLM's accuracy is computed for each group. Similar to the left panels, we observe that _the magnitude of attention to constraints correlates with accuracy_. This observation is not only interesting in hindsight; aforethought could have suggested either outcome (e.g., more attention correlating with hallucination). While the phenomenon deserves further explanatory investigation, this is a positive observation indicating that attention to constraints can be used to predict the LLM's success.
**Language models grow larger, pay more attention, and succeed more:** In Figure 5, each panel compares the attention to constraints for the basketball player queries between two different LLMs, where the x-axis indicates the smaller LLM, the y-axis indicates the larger LLM, and the coloring indicates the success of the pair of LLMs. We group prompts by the attention contribution, and color the cells by the most frequent category. We find that relatively more attention in both LLMs generally indicates success for both, and less attention in both LLMs indicates failure for both. For cases on the top left, the larger LLM does pay more attention, and only the larger LLM succeeds. Overall, we note a consistent pattern between attention and correctness across model scales; and performance improvements in larger LLMs relate to increased attention to constraint tokens.
Figure 4: **Attention contribution correlates with correctness. The first two columns of panels** give the \(25\) samples for which the LLM makes the most and the least confident predictions, respectively. The color indicates the norm of the attention contribution from the constraint, where each column in the panel captures a layer in the LLM and each row is a specific sample. **The last column of panels** relates the total attention to constraints and accuracy, where the x-axis is the attention contribution percentile in the dataset and the y-axis is the accuracy in the bin. The results are for the year of birth queries for basketball players (see Figure 13).
Figure 5: **Attention contribution and model scaling.** Here, the x-axis and y-axis show the attention to the constraints \(\left(||\mathbf{a}_{C,T}^{\ell,[H]}||\right)\) for the smaller LLM and the larger LLM, respectively, and normalized via dividing by the maximum value. Coloring is determined by which of the two LLMs succeeds in factual queries. We group the factual queries by their x-axis value and y-axis values and color the cell with the most frequent category in the cell. Appendix Figure 11 presents the complete scatter plot.
**Summary:** In this section, we explored the interaction between attention, constraints, and factual correctness. Our findings indicate that attention can help us reason about and predict factual errors. In the next section, we pull this thread and conduct extensive experiments to start tapping into the potential of the LLMs' attention patterns for factual error prediction.
## 5 Predicting Factual Errors Using Attention to Constraints
Here, we show how our mechanistic understanding can be used to predict the failures of LLMs. Let \(X\) denote a prompt, a sequence of tokens that specifies a factual query with a set of constraints \(\mathcal{C}=\{(C_{1},V_{1}),\ldots(C_{K},V_{K})\}\). Let \(\hat{Y}\) be the response tokens obtained from the LLM after feeding \(X\). Broadly, we want to design a function \(f\) to estimate the probability that a constraint \(k\) is satisfied:
\[\hat{\mathbb{P}}(V_{k}(\hat{Y})=1)=f(X,\hat{Y},C_{k},\mathcal{M}),\]
using the LLM \(\mathcal{M}\), the prompt, the completion, and the constraints. For single-constraint factual queries where there is a single factually correct completion \(Y\), this can be reduced to the correctness, i.e. \(\hat{\mathbb{P}}(Y=\hat{Y})=f(X,\hat{Y},\mathcal{M})\). Note how this formalism closely matches that of selective classification (Geifman and El-Yaniv, 2017), where the goal is to abstain when the model would otherwise fail.
**Datasets:** For our evaluations, we curate a benchmark with \(11\) datasets that are listed in Table 1 containing \(>\)\(40,000\) queries. For single-constraint queries, we curate 4 datasets using WikiData and 3 datasets using the existing CounterFact dataset (Meng et al., 2022). We further designed four \(2\)-constraint datasets, using WikiData (Books and Movies), Opendatasoft (2023) (Nobel Winners), or hand-curation (Words). Further details about all data curation can be found in Appendix D.
**Constraint Verification \((V_{k})\):** We use Exact Match6 for single-constraint queries with a single solution. We probe WikiData to verify constraints when queries have multiple potential solutions (e.g., we check WikiData for whether the movie name generated by the model is directed by the director in the constraint). Appendix D.3 contains a complete description of the methodology.
Footnote 6: We acknowledge that exact match is a strict criterion that could introduce noise to our evaluations, and it constitutes a limitation where we use this verification. Evaluating factual correctness is still an evolving research topic (Min et al., 2023) and we do our best to find queries and prompt structures that suffer the least from this.
**Models:** We use the 7B, 13B, and 70B parameter variants of Llama-2 (Touvron et al., 2023) released through the HuggingFace's Transformers (Wolf et al., 2019). We perform our experiments on a single NVIDIA A100-PCIE-80GB GPU. 80GB memory can only fit the Llama-2 70B in 8-bit precision (Dettmers et al. (2023) report marginal-to-no performance drop). See Appendix A for further details on models.
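A minimal loading sketch consistent with this setup (assuming access to the gated meta-llama checkpoints and an installed bitsandbytes; the concrete prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-70b-hf"  # gated repository; access must be granted
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    load_in_8bit=True,   # 8-bit quantization so the 70B model fits in 80GB
    device_map="auto",
)

prompt = "Tell me the year the basketball player Michael Jordan was born in."
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model(**inputs, output_attentions=True)
# out.attentions is a tuple of L tensors, each (batch, H, T, T): the A^{l,h} above.
```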
**Evaluation Metrics**: We give the AUROC for the binary task of predicting failure or success as it does not require setting a threshold for the classifier. We also report \(\text{Risk}_{\text{Top 20\%}}\) (the fraction of mistakes for the samples with top 20% of the scores by the predictor \(f\)), \(\text{Risk}_{\text{Bottom 20\%}}\) (the fraction of mistakes for the samples with the bottom 20% of the scores by the predictor \(f\)). These metrics measure how well the model performs on the most and least reliable completions according to the predictor \(f\). For a good failure predictor, we want the actual error to be low among high-confidence examples and have a large fraction of failures among low-confidence examples.
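A sketch of these metrics (AUROC via scikit-learn; the helper name risk_at_quantile is ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def risk_at_quantile(scores, correct, top=True, frac=0.2):
    """Fraction of factual errors among the top (or bottom) `frac` of
    completions, ranked by the predictor f's score."""
    order = np.argsort(scores)             # ascending in f's score
    n = max(1, int(frac * len(scores)))
    idx = order[-n:] if top else order[:n]
    return 1.0 - float(np.mean(correct[idx]))  # risk = error rate in the bin

# scores: f's satisfaction probabilities; correct: 0/1 numpy array of labels
# auroc         = roc_auc_score(correct, scores)
# risk_top20    = risk_at_quantile(scores, correct, top=True)
# risk_bottom20 = risk_at_quantile(scores, correct, top=False)
```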
| Dataset Name | Constraint Type(s) | \(N\) | Constraint Source | Verifier | Example Prompt |
| --- | --- | --- | --- | --- | --- |
| Basketball Players | _born in the year_ | 13631 | WikiData | Exact Match | Figure 13 |
| Football Teams | _founded in the year_ | 8825 | WikiData | Exact Match | Figure 14 |
| Movies | _directed by_ | 12197 | WikiData | Exact Match | Figure 15 |
| Songs | _performed by_ | 2813 | WikiData | Exact Match | Figure 16 |
| CounterFact | _mother tongue_ | 919 | CounterFact | Exact Match | Figure 17 |
| CounterFact | _citizenship_ | 958 | CounterFact | Exact Match | Figure 18 |
| CounterFact | _headquarter location_ | 756 | CounterFact | Exact Match | Figure 19 |
| Books | _author, published year_ | 1492 | WikiData | WikiData Search | Figure 20 |
| Movies | _directed by, won award_ | 1066 | WikiData | WikiData Search | Figure 21 |
| Nobel Winner | _won Nobel, born in city_ | 1290 | Opendatasoft (2023) | WikiData Search | Figure 22 |
| Words | _starts with, ends with_ | 1352 | WikiData | Character Match | Figure 23 |

Table 1: **Overview of Datasets.** The columns denote the dataset name, constraint type, number of prompts, and the data sources used to collect entities and verify LLM responses, respectively.
### Predicting Factual Correctness
**Predictors (\(f\)):** We propose the constraint satisfaction probe, SAT Probe, that predicts whether an individual constraint is satisfied by only looking at self-attention layers. To demonstrate the simplicity, we define \(f\) to be a linear function of the attention weights to or contributions from constraints:
\[\hat{\mathbb{P}}(V_{k}(\hat{Y})=1;A_{C_{k},T})=\sigma(w^{T}A_{C_{k},T}+b),\]
where \(A_{C_{k},T},w^{T}\in\mathbb{R}^{L\times H},b\in\mathbb{R}\) and \(A_{C_{k}}=\{\forall\ell\in[L],h\in[H]:A_{C_{k},T}^{\ell,h}\}\). That is, we linearly probe the attention weights across all layers and attention heads, and we estimate the parameters \(w\) and \(b\) using Logistic Regression. In the multi-constraint setting, using SAT Probe, we simply combine the predictions for multiple constraints:
\[\hat{\mathbb{P}}(\prod_{k\in[K]}\mathbf{1}_{\{V_{k}(\hat{Y})=1\}};A_{C_{k},T} )=\prod_{k\in[K]}\hat{\mathbb{P}}(V_{k}(\hat{Y})=1;A_{C_{k},T}).\]
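A minimal sketch of SAT Probe as defined above, using scikit-learn's logistic regression on the flattened attention features and the product rule for multi-constraint queries (variable names are ours):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sat_probe(feats, sat):
    """feats: (n_samples, L*H) flattened attention weights A_{C_k,T}^{[L],[H]};
    sat: (n_samples,) binary labels V_k(Y_hat)."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(feats, sat)
    return probe

def predict_factual_correctness(probes, feats_per_constraint):
    """Multiply per-constraint satisfaction probabilities, i.e. the
    multi-constraint combination used by SAT Probe."""
    p = np.ones(feats_per_constraint[0].shape[0])
    for probe, feats in zip(probes, feats_per_constraint):
        p *= probe.predict_proba(feats)[:, 1]
    return p
```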
**Baselines:** We compare SAT Probe to the Confidence of the model, \(\hat{\mathbb{P}}(\hat{Y}|X)\), which concurrent work reports as a good hallucination detector (Varshney et al., 2023); and a Constant predictor that predicts the majority class (either 0 or 1) as baselines. Note that while Confidence is a strong baseline, it only provides an overall estimate for the whole generation, and cannot predict the failure for individual constraints. We also use the Popularity baseline only in the single-constraint WikiData datasets that we curated - as in other datasets, it is not accessible (CounterFact) or unclear (multi-constraint) how to compute. We do not need to map these scalar scores to a probability measure, as all of our evaluation metrics quantify whether classes are well-separated (e.g., AUROC). In the Appendix, we also give results with featurization using the attention contribution, e.g., \(\mathbf{a}_{C_{k},T}=\{\forall\ell\in[L],h\in[H]:||\mathbf{a}_{C_{k},T}^{\ell,h }||\}\), denoted by SAT Probe(a).
**Results:** In Figure 5(a), we present the overall AUROC of predicting factual correctness for multi-constraint queries, and Table 6 contains all metrics. In this task, we find that SAT Probe mostly performs comparably to and sometimes better than the model's Confidence in the correctness prediction task, in addition to being able to provide fine-grained feedback (i.e. which constraint is not satisfied, see SS5.2). In Figure 5(b), we present the AUROC results for the single-constraint setting, and in Table 5 we give the results in the tabular format. In the single-constraint setting, SAT Probe is comparable to Confidence. Further, we find that the approaches are comparably good in isolating highly reliable vs unreliable points (Table 5,6 show \(\text{Risk}_{\text{Top 20\%}}\) and \(\text{Risk}_{\text{Bottom 20\%}}\) metrics).
Overall, these results demonstrate how the attention weights alone can predict failures well. It is significantly better than the Constant baseline which suggests that it contains a nontrivial amount of information, and sometimes better than the Confidence. Surprisingly, even though LLMs are optimized by maximizing the next token probability, simply probing attention patterns exclusively on the constraints can match or sometimes exceed this performance (without using other states or non-constraint tokens). However, attention alone does not explain all failures (we observe some attention on constraints where the model still fails), and there is an opportunity for further investigation. Our findings demonstrate the value in studying the procedure by which a model produces an output, rather than only the output itself.
### Extensions
We study 3 extensions to explore the potential of SAT Probe and propose avenues for future work.
**Predicting partial constraint satisfaction:** SAT Probe gives access to failure predictions for individual constraints. We report the partial constraint satisfaction results in Table 7 where we report the failure prediction metrics for individual constraints and find comparable results to the single-constraint prediction task. While SAT Probe lets us test whether each constraint is satisfied, using the raw Confidence does not since it only outputs a single value for all constraints. We believe producing fine-grained reliability statements, such as reporting partial constraint satisfaction, can prove useful for debugging (e.g., failing to follow specific instructions).
**Early stopping:** Using SAT Probe, we can predict failures partway through the computation and save costs. In Appendix Figure 7, we show that we can predict failures earlier in the inference with an experiment across all single-constraint datasets. Specifically, we use only attention weights up to an intermediate layer and try to predict failures ahead of time. For Llama-2 7B and 13B, we observe that we can stop the inference early without degradation in the average performance and save \(50\%\) of wall-clock time on failures for most datasets. For Llama-2 70B, early stopping of the inference results in a slight drop in performance. Especially for use cases where we have a high Risk\({}_{\text{Bottom 20\%}}\), we can isolate these most unreliable predictions and abstain from making a prediction. See Appendix B.4 for details on the ablation.
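A sketch of this ablation, reusing fit_sat_probe from the sketch above and assuming the feature vector stacks layers contiguously (so truncating at layer \(\ell\) keeps the first \(\ell H\) entries); the threshold tau is illustrative:

```python
def truncated_features(feats, n_layers, H):
    # feats: (n_samples, L_layers * H), layer-major; keep the first n_layers layers
    return feats[:, : n_layers * H]

# probe_early = fit_sat_probe(truncated_features(train_feats, 20, H), train_sat)
# p_early = probe_early.predict_proba(truncated_features(test_feats, 20, H))[:, 1]
# halt = p_early < tau  # predict failure early and stop the forward pass
```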
**Generalized predictors:** We explore using a single failure predictor across all constraint types. For this purpose, we train a failure predictor on a mixture of single constraint datasets and report the performance over individual datasets in Appendix B.5 and Figure 8. We observe the performance is competitive with training individual predictors for each constraint and still better than Popularity. This suggests the potential of general factual error detectors, as a future work avenue.
## 6 Related Works
Carlini et al. (2021, 2022); Biderman et al. (2023) related the training data frequency of a string to memorization in LLMs. In recent concurrent work, Mallen et al. (2022); Kandpal et al. (2023); Sun et al. (2023) document the relation between the success/difficulty of factual queries and a measure/proxy for training data frequency. Several recent works investigated the mechanics of factual recall. There are numerous works Elhage et al. (2021); Devlin et al. (2018); Olsson et al. (2022); Clark et al. (2019); Tian et al. (2023); Hutt et al. (2019); Voita et al. (2019); Burns et al. (2022); Gurnee et al. (2022) that discuss how specific attention heads exhibit certain functionalities, such as heads that encode syntax or induction heads that copy tokens. Further, Meng et al. (2022); Geva et al. (2023) discuss the role of attention in specifically transferring factual information, and Hernandez et al. (2023) studies how specific relations can be decoded with a linear transformation from the subject tokens. However, _none of these works_ investigate the mechanisms when factually incorrect information is generated. Halawi et al. (2022); Belrose et al. (2023) study how LLMs internally deal with safety-critical input, such as false demonstrations for in-context learning or prompt injections. These share a similar insight as ours: analyzing latent information _across layers_ and not in isolation could be more useful for failure prediction. Varshney et al. (2023) detects and mitigates hallucinations using the model's logits, which is closest to our Confidence baseline. Mundler et al. (2023);
Figure 6: **Factual Error Prediction.** (a) Predicting the failure probability for individual constraints using SAT Probe and combining them performs comparably, sometimes better than Confidence. (b) Predicting failure for single-constraint queries. SAT Probe is comparable to Confidence and better than Popularity. We average the performance across all relations for CounterFact datasets. For both figures, error bars show the standard error across 10 random seeds where the randomness is over rerunning the experiments with different train/test splits. Tables 5,6 contains the results in tabular form with multiple metrics.
Manakul et al. (2023); Zhang et al. (2023) interact with the LLMs in a black box fashion and aim to determine factual errors through inconsistencies, but doing so requires several forward passes and conveys conflicting signals such as refuting an initial claim, which can diminish user trust (Liao and Vaughan, 2023; Huang et al., 2020).
## 7 Conclusion and Future Work
While this work provides initial insights and a lens into leveraging LLM's internals to understand factual errors, it raises several exciting questions for future work. First, we studied only conjunctive factual queries, but the class of potential constraints is much broader (e.g. instructions (Ouyang et al., 2022), disjunctive queries). Studying those would improve the utility of the framework and our understanding of how models perform and represent compositions of constraints. Second, the content of the information in attention patterns remains opaque and warrants further investigation. Similarly, the reasons behind the correlation between attention and constraint popularity/correctness found here are still unknown. Here we offered a fairly simple framework to probe the information in attention, and we believe there are further opportunities for improvement. Overall, this work takes another step towards improving our understanding of safety-critical mechanisms in LLMs and operationalizing these insights.
## Acknowledgment
We would like to thank Duygu Yilmaz, Marah Abdin, Rahee Ghosh Peshawaria, Federico Bianchi, Kyle Swanson, Shirley Wu, James Zou, Eric Horvitz, Zhi Huang, Marco Tulio Ribeiro, Scott Lundberg for their support and comments throughout the project.
| We investigate the internal behavior of Transformer-based Large Language Models (LLMs) when they generate factually incorrect text. We model factual queries as constraint satisfaction problems and use this framework to investigate how the LLM internally interacts with factual constraints. We find a strong positive relationship between the LLM's attention to constraint tokens and the factual accuracy of its generations. We curate a suite of 10 datasets containing over 40,000 prompts to study the task of predicting factual errors with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe, a method probing attention patterns, that can predict factual errors and fine-grained constraint satisfaction, and allows early error identification. The approach and findings take another step towards using the mechanistic understanding of LLMs to enhance their reliability. |
2310.00290 | Universality of periodic points in bounded discrete time series | We consider arbitrary bounded discrete time series originating from a dynamical
system. Without any use of the Fourier transform, we find periodic points which
suitably characterize (i.e. independently of the Lyapunov exponent) the
corresponding time series. In particular, a bounded discrete time series
generated by the autoregressive model (without the white noise) is equivalent
to a quasi-periodic function. | Chikara Nakayama, Tsuyoshi Yoneda | 2023-09-30T07:46:47 | http://arxiv.org/abs/2310.00290v6 | Mathematical structure of perfect predictive reservoir computing for autoregressive type of time series data
###### Abstract.
Reservoir Computing (RC) is a type of recursive neural network (RNN), and there can be no doubt that the RC will be more and more widely used for building future prediction models for time-series data, with low training cost, high speed and high computational power. However, research into the mathematical structure of RC neural networks has only recently begun. Bollt (2021) clarified the necessity of the autoregressive (AR) model for gaining insight into the mathematical structure of RC neural networks, and indicated that the Wold decomposition theorem is the milestone for understanding these. Keeping this celebrated result in mind, in this paper, we clarify hidden structures of input and recurrent weight matrices in RC neural networks, and show that such structures attain perfect prediction for the AR type of time series data.
Key words and phrases:Reservoir computing, Autoregressive model, universal approximation theorem, almost periodic functions, transcendental numbers. 2020 Mathematics Subject Classification: Primary 68T27; Secondary 11B50, Tertiary 42A16
## 1. Introduction
Reservoir Computing (RC) is a type of recursive neural network (RNN). Gilpin [4] evaluated 24 statistical forecasting models across 135 dynamical systems, including RC, autoregressive moving averages (ARIMA), deep neural networks such as the transformer model, long-short-term-memory networks (LSTM), vanilla recurrent neural networks (RNN), temporal convolutional neural networks and neural basis expansion/neural hierarchical interpolation (NBEATS/NHiTS). The best-performing machine learning models require very long training times, in contrast, the RC exhibits competitive performance with two orders of magnitude less training time. Thus there can be no doubt that the RC will be more and more widely used for building future prediction models for time-series data, with low training cost, high speed and high computational power.
On the other hand, research into the mathematical structure of RC neural networks has only recently begun. Bollt [1] clarified the necessity of the autoregressive (AR) model for gaining insight into the mathematical structure of RC neural networks, and indicated that the Wold decomposition theorem [10] is the milestone for understanding these. More precisely, in the stochastic framework, a zero mean covariance stationary vector process admits a vector AR representation (see [1, Section V]). Furthermore, Gauthier et al. [3] proposed a next generation RC with quadratic reservoir vectors, which focuses not only on the mathematical understanding of the RC, but also on fundamentally improving it. In contrast to these celebrated results, we stick to the deterministic framework, and in this paper,
we clarify hidden structures of input and recurrent weight matrices, and show that these structures attain perfect prediction for the AR type of time series data.
## 2. AR model and almost periodic functions
Before going into the structures of the input and recurrent weight matrices, we first construct both the training and reference data. Let us start from the following condition on smooth functions \(\phi\in C^{\infty}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), which naturally expresses a "_recurring pattern_":
(1) \(\begin{cases}\text{From any sequence of the form }\{\phi(t+h_{n})\}_{n}\text{ where }h_{n}\text{ are real numbers,}\\ \text{one can extract a subsequence converging uniformly on the real line.}\end{cases}\)
Due to Corduneanu [2, Theorems 1.10 and 1.11], "_recurring pattern_" is nothing more than the almost periodicity (necessary and sufficient conditions), expressed as follows:
\[\phi(t)=\sum_{\lambda\in\Lambda}a_{\lambda}\sin\left(\lambda(t-b_{\lambda}) \right),\quad\{a_{\lambda}\}_{\lambda},\{b_{\lambda}\}_{\lambda}\subset\mathbb{ R},\quad\Lambda(\subset\mathbb{R})\text{ is countable.} \tag{2}\]
**Remark 1**.: We see that almost periodic functions possess quasi-periodic orbits, so, these are integrable systems (see the well-known Arnold-Liouville theorem). We now explain it briefly. Let \(L\in\mathbb{Z}_{\geq 1}\), \(\{\lambda_{j}\}_{j=1}^{L}\subset\mathbb{R}\) and let \(\mathcal{M}\) be a torus such that
\[\mathcal{M}=\prod_{j=1}^{L}(\mathbb{R}/(2\pi\mathbb{Z})).\]
Also let \(x_{t}\) be a shift operator (diffeomorphism) such that
\[x_{t}=x_{0}+\tau t:\mathcal{M}\to\mathcal{M}\quad(x_{0}\mapsto x_{t}),\quad t \in\mathbb{R},\quad\tau=\{\lambda_{j}\}_{j=1}^{L}\in T_{x_{t}}\mathcal{M}\cong \mathbb{R}^{L}.\]
Then there exists a \(g:\mathcal{M}\to\mathbb{R}\) such that
\[\phi(t)=\sum_{j=1}^{L}a_{\lambda_{j}}\sin\left(\lambda_{j}(t-b_{\lambda_{j}}) \right)=g\circ x_{t}.\]
More specifically, we set \(g\) as follows:
\[g(t_{1},t_{2},\cdots,t_{L})=\sum_{j=1}^{L}a_{\lambda_{j}}\sin t_{j}.\]
This expression exhibits nothing more than a quasi-periodic orbit. Kobayashi et al. [8] investigated the RC from a dynamical system perspective, such as unstable fixed points, periodic orbits, chaotic saddles, Lyapunov exponents and manifold structures (see also [6, 9]). We see that their _unstable periodic orbit_ must be related to our _quasi-periodic orbit_, since the definition of _"chaos"_ needs a property of the following sort (see [7]):
* Let \(f\) be a map, which takes an interval \(I\) to itself. Then periodic points of \(f\) are dense in \(I\).
We emphasize that almost periodic functions are indispensable for mathematically analyzing the AR model (in the deterministic framework). For \(\{p_{\ell}\}_{\ell=1}^{L}\subset\mathbb{R}\), the AR model is described as follows:
(3) \(y(t)=p_{1}y(t-1)+p_{2}y(t-2)+\cdots+p_{L-1}y(t-L+1)+p_{L}y(t-L)\quad(t\geq 0)\)
with prescribed initial data \(\{y(-\ell)\}_{\ell=1}^{L}\). We now explain that this AR model crucially includes the structure of almost periodic functions (2). We plug the following initial data (looking into the characteristic equation)
\[y(-\ell)=\mu^{L-\ell},\quad(\mu\in\mathbb{R},\ \ell=0,1,\cdots,L)\]
into (3). Throughout this paper, we choose \(L>0\) to be even, and we look into eigenfunctions of the characteristic equation whose eigenvalues have modulus exactly \(1\). In this context, the following equality is crucial:
\[(\mu-e^{i\lambda})(\mu-e^{-i\lambda})=\mu^{2}-2\mu\cos\lambda+1\quad\text{for} \quad\mu,\lambda\in\mathbb{R}.\]
We multiply second order polynomials of this type \(L/2\) times, and then we obtain the following equality, which clarifies the relation between almost periodicity and the AR model (factorization of the \(L\)-th degree polynomial):
\[0=-\sum_{\ell=1}^{L}p_{\ell}\mu^{L-\ell}+\mu^{L}=\prod_{j=1}^{L/2}(\mu^{2}-2 \cos\lambda_{j}\mu+1),\]
namely, if \(\{p_{\ell}\}_{\ell}\) satisfies the above equality (at least we can easily figure out that \(p_{L}=-1\)), \(\{y(t)\}_{t\geq 0}\) for the AR model (3) can be expressed as follows:
\[y(t)=\sum_{j=1}^{L/2}a_{j}\sin\left(\lambda_{j}(t-b_{j})\right),\quad t=0,1, \cdots,\]
where \(a_{j},b_{j}\in\mathbb{R}\) are uniquely determined by the initial data (a numerical check is sketched below). Since almost periodic functions naturally possess a _recurring pattern_, in the next section we employ this AR model data (almost periodic functions) as both the training and reference data, more precisely,
* \(\{y(t)\}_{t\in\mathbb{Z}_{<-L}}\) as the training data,
* \(\{y(t)\}_{t\in\mathbb{Z}_{\geq-L}}\) as the reference data.
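The factorization above can be checked numerically; the following sketch (ours, with illustrative frequencies) builds \(\{p_{\ell}\}_{\ell=1}^{L}\) by multiplying the quadratic factors and then iterates the AR recurrence (3), producing a bounded, almost periodic orbit:

```python
import numpy as np

def ar_coefficients(lams):
    """p_1,...,p_L from frequencies {lambda_j}: expand the characteristic
    polynomial prod_j (mu^2 - 2 cos(lambda_j) mu + 1)."""
    poly = np.array([1.0])
    for lam in lams:
        poly = np.polymul(poly, [1.0, -2.0 * np.cos(lam), 1.0])
    # poly = [1, c_1, ..., c_L] with c_l the coefficient of mu^(L-l); p_l = -c_l
    return -poly[1:]

lams = [0.7, np.sqrt(2.0)]     # L = 4, incommensurate frequencies
p = ar_coefficients(lams)      # note p[-1] == -1.0, i.e. p_L = -1
L = len(p)

y = list(np.sin(lams[0] * np.arange(-L, 0)))     # arbitrary initial data
for t in range(500):
    y.append(float(np.dot(p, y[-1:-L - 1:-1])))  # y(t) = sum_l p_l y(t-l)
assert max(abs(v) for v in y) < 10.0  # bounded: all roots lie on |mu| = 1
```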
## 3. Mathematical structure of reservoir computing (main result)
Throughout this paper we set \(N\) as the number of reservoir nodes (we will determine this \(N\) later). First we formulate the RC and then we state the main theorem. For sufficiently small \(\varepsilon>0\), let \(h:\mathbb{R}\to[-1,1]\) be an activation function (which is allowed to be odd symmetric) as follows:
\[|h(t)-\tanh t|<\varepsilon\quad\text{for}\quad t\in\mathbb{R}. \tag{4}\]
However, we expect that this condition (4) is not needed, that is, the setting \(h(t)=\tanh t\) may work just as well. To give a simpler proof of the mathematical main theorem, we decided to employ (4). Now let us discretize the range \([-1,1]\) as follows: For \(K\in\mathbb{Z}_{\geq 1}\), we choose \(\{a_{k}^{K}\}_{k=0}^{2K}\) such that (we employ transcendental numbers)
* \(\{a_{1}^{K},a_{2}^{K},\cdots,a_{K-1}^{K},a_{K+1}^{K},\cdots,a_{2K-1}^{K}\}\subset \{\pm e^{-\frac{n}{m}}\ ;m,n\in\mathbb{Z}_{\geq 1}\}\subset\mathbb{R}\setminus \mathbb{Q}\),
* \(-1=a_{0}^{K}<a_{1}^{K}<a_{2}^{K}<\cdots<a_{K}^{K}=0<a_{K+1}^{K}<\cdots<a_{2K-1 }^{K}<a_{2K}^{K}=1\),
* \(\lim\limits_{K\to\infty}\sup\limits_{1\leq k\leq 2K}|a_{k-1}^{K}-a_{k}^{K}|=0\).
By the Lindemann-Weierstrass theorem, we see that
\[\frac{a_{k^{\prime}}^{K}}{a_{k}^{K}}\in(\mathbb{R}\setminus\mathbb{Q})\cup\{ 0\}\cup\{-1\}\quad(k\neq k^{\prime},\ k\neq K,\ k,k^{\prime}\geq 1), \tag{5}\]
\[\sum_{\ell=1}^{L}\frac{a_{k_{\ell}^{\prime}}^{K}}{a_{k_{\ell}}^{K}}\in(\mathbb{R} \setminus\mathbb{Q})\cup\{-L,-L+1,\cdots,-1,0,1,\cdots,L-1\}\quad(k_{\ell}\neq K,\;k_{\ell},k_{\ell}^{\prime}\geq 1), \tag{6}\]
except for the \(k_{1}=k_{1}^{\prime},\;k_{2}=k_{2}^{\prime},\cdots,k_{L}=k_{L}^{\prime}\) case.
**Remark 2**.: \[\sum_{\ell=1}^{L}\frac{a_{k_{\ell}^{\prime}}^{K}}{a_{k_{\ell}}^{K}}=L\]
if and only if \(k_{1}=k_{1}^{\prime},\;k_{2}=k_{2}^{\prime},\cdots,k_{L}=k_{L}^{\prime}\).
In what follows, we employ the AR model data (almost periodic functions) as both the training and reference data:
\[y(t)=\sum_{\ell=1}^{L}a_{\ell}\sin(\lambda_{\ell}(t-b_{\ell}))=\sum_{\ell=1}^{ L}p_{\ell}y(t-\ell),\quad t\in\mathbb{Z}, \tag{7}\]
for some suitably prescribed \(\{p_{\ell}\}_{\ell=1}^{L}\), \(\{\lambda_{\ell}\}_{\ell=1}^{L}\), \(\{a_{\ell}\}_{\ell=1}^{L}\) and \(\{b_{\ell}\}_{\ell=1}^{L}\), with the normalization \(y(t)\in[-1,1]\) (\(t\in\mathbb{Z}\)). We now discretize this \(y(t)\). There exists a unique \(k_{t}\in\{1,\cdots,2K-1\}\) such that
\[\frac{a_{k_{t}-1}^{K}+a_{k_{t}}^{K}}{2}< y(t)\leq\frac{a_{k_{t}}^{K}+a_{k_{t}+1}^{K}}{2}\quad(k_{t}=2,3, \cdots,2K-2)\quad\text{or}\] \[a_{0}^{K}< y(t)\leq\frac{a_{1}^{K}+a_{2}^{K}}{2}\quad(k_{t}=1)\quad\text{or}\] \[\frac{a_{2K-2}^{K}+a_{2K-1}^{K}}{2}< y(t)\leq a_{2K}^{K}\quad(k_{t}=2K-1), \tag{8}\]
thus we can appropriately define the discretized \(\bar{y}\) as follows:
\[\bar{y}(t):=a_{k_{t}}^{K}. \tag{9}\]
Note that, we can simplify this discretization as follows:
\[\bar{y}(t)=\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}|y(t)-(a-0 )|,\]
where \(a-0:=a-\varepsilon\) for any sufficiently small \(\varepsilon>0\). Now we determine the training time steps \(T>0\). To determine it, we just apply (1), namely, there exists a sufficiently large \(T>0\) such that
\[\sup_{t}|y(t-T)-y(t)|\ll 1/K. \tag{10}\]
This means that the sequence pattern
\[\bar{y}(-L),\bar{y}(-L+1)\cdots,\bar{y}(-1)\]
is almost the same as
\[\bar{y}(-L-T),\bar{y}(-L+1-T),\cdots,\bar{y}(-1-T).\]
Rigorously, it may still have an error \(\ll 1/K\), but for simplicity, here, we identify these two sequences. Now we set up the RC as follows:
* Training phase
From time-series (training) data \(\{\bar{y}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\), we create reservoir state vectors (column vectors) \(\{\bar{r}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\subset\mathbb{R}^{N}\) by using the following RC: For each fixed \(t\), first, we determine the following tentative reservoir state vectors \(\widetilde{r}_{t}(t-\ell+1)\) (\(\ell=L,L-1,\cdots,2,1\) in this order) inductively:
\[\begin{split}\widetilde{r}_{t}(t-L+1)&=h(W^{in} \bar{y}(t-L)),\\ \widetilde{r}_{t}(t-\ell+1)&=h(W\widetilde{r}_{t}(t -\ell)+W^{in}\bar{y}(t-\ell))\quad(\ell=L-1,L-2,\cdots,2,1),\end{split} \tag{11}\]
where \(W^{in}\in\mathbb{R}^{N\times 1}\) is a column vector (degenerated input weight matrix), \(W\in\mathbb{R}^{N\times N}\) is a square matrix (recurrent weight matrix). These \(W^{in}\) and \(W\) are prescribed vector and matrix, and we will explain concrete \(W^{in}\) and \(W\) in the next section. Then we set
\[\bar{r}(t):=\widetilde{r}_{t}(t).\]
Note that this \(L\) should correspond to the _transient time interval_ in the usual RC.
**Remark 3**.: In this paper, we neglect the term \(\widetilde{r}_{t}(t-L)\) (in the first equation in (11)) which exists in the usual RC. Even if we take \(\widetilde{r}_{t}(t-L)\) into account, the contribution of this term is relatively small if the recurrent weight matrix \(W\) satisfies the _echo state property_ (see Jaeger [5]); however, it remains an open question whether or not the \(W\) in Theorem 1 really satisfies it when \(K\) and \(L\) are relatively large.
From reservoir state vectors, we determine a row vector \(W^{out}\in\mathbb{R}^{1\times N}\) (degenerated output weight matrix) by using the mean-square error. More precisely, we find \(W^{out}\) such that
\[W^{out}:=\operatorname*{arg\,min}_{\widetilde{W}^{out}}\sum_{t=-T}^{-L-1} \left|y(t)-\widetilde{W}^{out}\bar{r}(t)\right|^{2}. \tag{12}\]
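A minimal sketch of this training phase (our own, with \(W\) and \(W^{in}\) treated as given arrays; the structured choices are constructed in Section 4) that rebuilds each state from the last \(L\) discretized inputs, as in (11), and fits \(W^{out}\) by least squares, as in (12):

```python
import numpy as np

def train_reservoir(y_bar, y, W, W_in, L, h=np.tanh):
    """y_bar: discretized training data, y: raw targets,
    W: (N, N) recurrent weights, W_in: (N,) input weights."""
    R, targets = [], []
    for t in range(L, len(y_bar)):
        r = h(W_in * y_bar[t - L])                  # tilde r_t(t-L+1)
        for ell in range(L - 1, 0, -1):             # ell = L-1, ..., 1
            r = h(W @ r + W_in * y_bar[t - ell])    # tilde r_t(t-ell+1)
        R.append(r)                                 # bar r(t) := tilde r_t(t)
        targets.append(y[t])
    R = np.stack(R)                                 # (n_samples, N)
    W_out, *_ = np.linalg.lstsq(R, np.array(targets), rcond=None)
    return W_out                                    # minimizer of (12)

# rng = np.random.default_rng(0); N = 100
# W = rng.normal(size=(N, N)) / np.sqrt(N); W_in = rng.normal(size=N)
# W_out = train_reservoir(y_bar, y, W, W_in, L=4)
```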
* Inference phase
We plug \(W^{in}\), \(W\) and \(W^{out}\) into the following RC, and create a series of future prediction \(\{\bar{u}(t)\}_{t\geq 0}\) from initial reference data \(\{\bar{y}(-\ell)\}_{\ell=1}^{L}\):
\[\begin{cases}\bar{r}(-L+1)&=h(W^{in}\bar{y}(-L)),\\ \bar{r}(-\ell+1)&=h(W\bar{r}(-\ell)+W^{in}\bar{y}(-\ell)),\quad(\ell=L-1,L-2, \cdots 2,1)\\ \bar{u}(0)&=W^{out}\bar{r}(0)-\bar{\delta}_{n_{0}},\\ \end{cases} \tag{13}\]
\[\begin{cases}\bar{r}(t)&=h(W\bar{r}(t-1)+W^{in}\bar{u}(t-1)-W^{in}\bar{u}(t-L -1)),\\ \bar{u}(t)&=W^{out}\bar{r}(t)-\bar{\delta}_{n_{t}}.\end{cases}\quad(t=1,2, \cdots).\]
where \(\bar{\delta}_{n}\) is defined in (16) as averages of the errors \(\{\delta(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\) in Remark 6, and the index \(n_{t}\) (\(t=0,1,\cdots\)) is uniquely determined. See Remark 7.
**Remark 4**.: Since we do not know so far whether or not this \(W\) possesses the _echo state property_, we need to subtract \(W^{in}\bar{u}(t-L-1)\) to eliminate the past contribution.
Then we can state the main theorem as follows:
**Theorem 1**.: _(Perfect prediction for \(K\to\infty\).) For each \(K\in\mathbb{Z}_{\geq 1}\), there exist \(h\) with (4), \(W\) and \(W^{in}\) such that_
\[|\bar{u}(t)-y(t)|\lesssim_{L}\frac{2^{t}}{K}\quad(t\geq 0). \tag{14}\]
**Remark 5**.: In the Fourier analysis, _existence_ of the perfect prediction is rather obvious due to (1). The point is that we found an _explicit representation (i.e. pattern memory)_ of it.
## 4. Proof of main theorem
The crucial point of the proof is to construct suitable \(W\) and \(W^{in}\). In order to do so, we need to define row vectors which represent the \(L\)-consecutive time series data:
\[V_{\ell}:=(V_{\ell,1},V_{\ell,2},\cdots,V_{\ell,N})\quad\text{for}\quad\ell=1, 2,\cdots,L.\]
Let \(a_{k}:=a_{k}^{K}\). First let \(\sigma_{j}\) (\(j=1,2,\cdots,N\)) be a permutation operator, namely,
\[\sigma_{j}:\{1,2,\cdots,L\}\to\{a_{1},a_{2},\cdots,a_{K-1},0,a_{K+1},a_{K+2}, \cdots,a_{2K-1}\}\]
(\(\ell\mapsto\sigma_{j}(\ell)\)) and \(\sigma_{j}\neq\sigma_{j^{\prime}}\) (\(j\neq j^{\prime}\)). We exclude the case when
\[\sigma_{j}(\ell)=0\quad\text{for}\quad\ell=1,2,\cdots,L\quad\text{(identically zero case)},\]
and we impose the following two conditions for uniquely determining \(N\):
* For any \(t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}\), there is \(j\in\{1,\cdots,N\}\) such that \(\sigma_{j}(\ell)=\bar{y}(t-\ell)\) for \(\ell=1,2,\cdots,L\),
* For any \(j\in\{1,\cdots,N\}\) there is \(t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}\) such that \(\sigma_{j}(\ell)=\bar{y}(t-\ell)\) for \(\ell=1,2,\cdots,L\).
Note that \(N\leq(2K-1)^{L}-1\), since there are at most \((2K-1)^{L}\) sequences with repetition and the identically zero one is excluded. Then we can define the representation of \(L\)-consecutive time series data as follows:
\[V_{\ell,j}:=\sigma_{j}(\ell).\]
This definition covers all patterns of \(L\)-consecutive time series data in \(\{\bar{y}(t)\}_{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}}\), in other words, in the training phase, there exists a column vector
\[e:=\underbrace{(0,0,\cdots,0,1,0\cdots,0)}_{N}^{T}\]
such that
\[V_{\ell}e=\bar{y}(t-\ell)\quad(\ell=1,\cdots,L).\]
In particular, due to (10), even for the initial reference data \(\{\bar{y}(-\ell)\}_{\ell=1}^{L}\), there exists a column vector
\[e:=\underbrace{(0,0,\cdots,0,1,0\cdots,0)}_{N}^{T}\]
such that
\[V_{\ell}e=\bar{y}(-\ell)\quad(\ell=1,\cdots,L).\]
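In code, the pattern matrix can be obtained by enumerating the distinct \(L\)-consecutive windows of the quantized training data. The sketch below uses \(0\)-based array indices in place of the paper's negative time indices; all names are illustrative.

```python
import numpy as np

def build_pattern_matrix(y_bar, L):
    """Columns are the distinct windows (y_bar(t-1), ..., y_bar(t-L))
    occurring in the training data, excluding the identically zero one."""
    seen, windows = set(), []
    for t in range(L, len(y_bar)):
        w = tuple(y_bar[t - ell] for ell in range(1, L + 1))
        if any(w) and w not in seen:
            seen.add(w)
            windows.append(w)
    return np.array(windows).T   # shape (L, N), with V[ell-1, j] = sigma_j(ell)
```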
In order to obtain the future prediction data \(\{\bar{u}(t)\}_{t=0}^{\infty}\) sequentially, we need to classify the times belonging to each pattern as follows:
\[\mathcal{T}_{n}:=\left\{t\in\mathbb{Z}_{<-L}\cap\mathbb{Z}_{\geq-T}:\sigma_{n }(\ell)=\bar{y}(t-\ell)\quad\text{for}\quad\ell=1,2,\cdots,L\right\}. \tag{15}\]
The crucial point is that, for \(t\in\mathcal{T}_{n}\),
\[\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}\left|\sum_{\ell=1}^ {L}p_{\ell}y(t-\ell)-(a-0)\right|\]
may depend on \(t\), and hence need not be uniquely determined. In this case we simply choose one arbitrary \(t^{*}\in\mathcal{T}_{n}\) and set
\[v_{n}^{*}:=\operatorname*{arg\,min}_{a\in\{a_{k}^{K}\}_{k=1}^{2K-1}}\left|\sum_{ \ell=1}^{L}p_{\ell}y(t^{*}-\ell)-(a-0)\right|\]
and we define the row vector \(V^{*}\in\mathbb{R}^{N}\) as follows:
\[V^{*}:=(v_{1}^{*},v_{2}^{*},\cdots,v_{N}^{*}).\]
We need this \(V^{*}\) in the inference phase, to obtain the future prediction data \(\{\bar{u}(t)\}_{t=0}^{\infty}\) sequentially.
**Remark 6**.: We observe the controllable errors:
\[v_{n}^{*}-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=:\delta^{*}(t )\quad\text{for}\quad t\in\mathcal{T}_{n},\] \[|\delta^{*}(t)|\lesssim\left|\sum_{\ell=1}^{L}p_{\ell}y(t^{*}- \ell)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)\right|+\frac{1}{K}\lesssim_{L} \frac{1}{K},\] \[y(t)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=\sum_{\ell=1}^{L}p _{\ell}y(t-\ell)-\sum_{\ell=1}^{L}p_{\ell}\bar{y}(t-\ell)=:\delta(t),\] \[|\delta(t)|\lesssim_{L}\frac{1}{K}.\]
Let \(\bar{\delta}=(\bar{\delta}_{1},\bar{\delta}_{2},\cdots,\bar{\delta}_{N})\) be the corresponding means:
\[\bar{\delta}_{n}:= \frac{1}{|\mathcal{T}_{n}|}\sum_{t\in\mathcal{T}_{n}}(\delta(t)- \delta^{*}(t)),\quad\text{in other words},\] \[\bar{\delta}_{n}:= \operatorname*{arg\,min}_{\delta}\frac{1}{|\mathcal{T}_{n}|}\sum_ {t\in\mathcal{T}_{n}}|\delta(t)-\delta^{*}(t)-\delta|^{2}. \tag{16}\]
Clearly, \(|\bar{\delta}_{n}|\lesssim_{L}1/K\).
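These quantities can be computed directly from the training data. The following sketch (illustrative names, \(0\)-based indices) returns \(v_{n}^{*}\) and \(\bar{\delta}_{n}\) for each pattern; since \(\bar{y}(t-\ell)=\sigma_{n}(\ell)\) on \(\mathcal{T}_{n}\), the difference \(\delta(t)-\delta^{*}(t)\) simplifies to \(y(t)-v_{n}^{*}\).

```python
import numpy as np

def pattern_biases(y, y_bar, V, p, a_grid):
    """v_n^* (via a representative t^* and the argmin over the grid)
    and delta_bar_n of (16).

    y, y_bar : training series and its quantization (0-based arrays)
    V        : (L, N) pattern matrix; p : AR coefficients (p_1, ..., p_L)
    a_grid   : quantization levels {a_k}
    """
    L, N = V.shape
    v_star, delta_bar = np.empty(N), np.empty(N)
    for n in range(N):
        # T_n from (15): times whose trailing window matches pattern n.
        T_n = [t for t in range(L, len(y))
               if all(y_bar[t - ell] == V[ell - 1, n] for ell in range(1, L + 1))]
        t_star = T_n[0]                       # any representative works
        s = sum(p[ell - 1] * y[t_star - ell] for ell in range(1, L + 1))
        v_star[n] = a_grid[np.argmin(np.abs(np.asarray(a_grid) - s))]
        # delta(t) - delta*(t) reduces to y(t) - v_n^* on T_n.
        delta_bar[n] = np.mean([y[t] - v_star[n] for t in T_n])
    return v_star, delta_bar
```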
**Remark 7**.: In the inference phase, for each \(t\geq 0\), there exists a unique column vector
\[e_{n_{t}}:=(\underbrace{\overbrace{0,0,\cdots,0,1}^{n_{t}},0\cdots,0}_{N})^{T}\]
such that
\[\bar{u}(t-\ell)=V_{\ell}e_{n_{t}}\quad\text{for}\quad\ell=1,2,\cdots,L. \tag{17}\]
We have used this index \(n_{t}\) in (13).
Next we define column vectors (conjugate-type vectors)
\[W_{\ell}^{in}:=(W_{\ell,1}^{in},W_{\ell,2}^{in},\cdots,W_{\ell,N}^{in})^{T} \quad\text{for}\quad\ell=1,2,\cdots,L,\]
as follows: First let \(\sigma_{i}^{*}\) (\(i=1,2,\cdots,N\)) be an adjoint-type permutation operator, namely, let
\[\begin{cases}\sigma_{j}^{*}(\ell):=\frac{1}{\sigma_{j}(\ell)}&\text{if}\quad \sigma_{j}(\ell)\neq 0,\\ \sigma_{j}^{*}(\ell):=0&\text{if}\quad\sigma_{j}(\ell)=0\end{cases}\]
for \(\ell\in\{1,\cdots,L\}\) and \(j\in\{1,\cdots N\}\). Then we can define the conjugate type of representation of \(L\)-consecutive time series as follows:
\[W^{in}_{\ell,i}:=\sigma^{*}_{i}(\ell)\times\frac{1}{\#\{\sigma^{*}_{i}(\ell)\neq 0 ;\ell=1,2,\cdots,L\}}. \tag{18}\]
For notational convenience, we set \(W^{in}_{0}:=W^{in}_{L}\), and the input weight vector is taken to be \(W^{in}:=W^{in}_{L}\). With these \(\{W^{in}_{\ell}\}_{\ell}\) we can construct a suitable matrix \(W\). More precisely, our main task now is to ensure the existence of the inverse of the matrix \(X\):
\[X:=h(W^{in}_{0}V_{1}+W^{in}_{1}V_{2}+\cdots+W^{in}_{L-1}V_{L}).\]
Note that, by using this expression, the RC (11) can be rewritten as follows:
\[WX=W^{in}_{1}V_{1}+W^{in}_{2}V_{2}+\cdots+W^{in}_{L}V_{L}=:Y. \tag{19}\]
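Definitions (18) and (19) translate directly into code. The sketch below (illustrative names) builds the input weight columns and then \(W=YX^{-1}\); its validity rests on the invertibility of \(X\) established in Lemma 2 below, which requires an activation \(h\) satisfying the arithmetic conditions constructed in the proof.

```python
import numpy as np

def build_input_weights(V):
    """Columns W_in_ell from (18): entrywise reciprocals of V, each
    pattern normalized by its number of nonzero entries."""
    L, N = V.shape
    Wr = np.zeros_like(V, dtype=float)
    mask = V != 0
    Wr[mask] = 1.0 / V[mask]
    Wr = Wr / np.count_nonzero(V, axis=0)[None, :]
    cols = [Wr[ell].reshape(-1, 1) for ell in range(L)]   # W_in_1, ..., W_in_L
    return [cols[-1]] + cols                              # prepend W_in_0 := W_in_L

def build_recurrent_weights(V, W_in_cols, h):
    """Form X and Y as in (19) and set W = Y X^{-1} (Lemma 2).

    W_in_cols = [W_in_0, W_in_1, ..., W_in_L], each of shape (N, 1);
    h is applied entrywise (e.g. a numpy ufunc).
    """
    L, N = V.shape
    X = h(sum(W_in_cols[ell] @ V[ell:ell + 1, :] for ell in range(L)))
    Y = sum(W_in_cols[ell + 1] @ V[ell:ell + 1, :] for ell in range(L))
    return Y @ np.linalg.inv(X)
```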
**Lemma 2**.: \(X\) _is a regular (invertible) matrix. Consequently, we may take \(W=YX^{-1}\)._
**Remark 8**.: By using this \(W\), we have
\[Wh(W^{in}\bar{y}(t-L)) =W^{in}_{1}\bar{y}(t-L),\] \[W(h(W^{in}\bar{y}(t-L+1)+W^{in}_{1}\bar{y}(t-L))) =W^{in}_{1}\bar{y}(t-L+1)+W^{in}_{2}\bar{y}(t-L),\] \[\vdots\] \[W\left(h\left(\sum_{\ell=1}^{L}W^{in}_{\ell-1}\bar{y}(t-\ell) \right)\right) =\sum_{\ell=1}^{L}W^{in}_{\ell}\bar{y}(t-\ell).\]
Note that, if \(\bar{y}(t-L)=0\), then we just skip the first step and start from the second step. If \(\bar{y}(t-L)=\bar{y}(t-L+1)=0\), then we skip the first and second steps and start from the third step, and so on.
Proof.: The key ingredient of the proof is the following:
* Let \(f\) be a non-zero polynomial in \(N\) variables. Then the complement of the zero point set, that is, \(\{x\in\mathbb{R}^{N};f(x)\neq 0\}\) is dense in \(\mathbb{R}^{N}\).
In the following proof, we give a concrete representation of such density. By (5) and (18) (see also Remark 2), we see
\[W^{in}_{\ell-1,i}V_{\ell,j} \in\left\{1,2,\cdots,\#\{\sigma_{i}(\ell);\ell=1,2,\cdots,L\} \right\}\times\frac{1}{\#\{\sigma_{i}(\ell);\ell=1,2,\cdots,L\}}\] \[\text{or}\quad\in(\mathbb{R}\setminus\mathbb{Q})\cup\{0\}.\]
By (6), Remark 2 and Lindemann-Weierstrass theorem, we see
\[\sum_{\ell=1}^{L}W^{in}_{\ell-1,j}V_{\ell,j}=1\quad\text{and}\quad\sum_{\ell= 1}^{L}W^{in}_{\ell-1,i}V_{\ell,j}\neq 1\]
for \(i\neq j\). In order to construct an appropriate \(h\), we use a finite set \(G\) as follows:
\[G:=\left\{\sum_{\ell=1}^{L}W^{in}_{\ell-1,i}V_{\ell,j};i,j\in\{1,2,\cdots,N\} \right\}\subset\mathbb{R}.\]
Note that \(1\in G\). Now we take a smooth function \(h:\mathbb{R}\to[-1,1]\) satisfying the following:
\[h(1) \in\mathbb{Q}\setminus\{0\},\] \[h(\gamma) \in\left(\{\pm e^{-\frac{n}{m}}\ ;m,n\in\mathbb{Z}_{\geq 1}\}\cup\{0 \}\right)\subset(\mathbb{R}\setminus\mathbb{Q})\cup\{0\},\quad\text{for} \quad\gamma\in G\setminus\{1\}.\]
Then we can easily check that (applying the Lindemann-Weierstrass theorem)
* \(h(\gamma_{1})h(\gamma_{2})\cdots h(\gamma_{N})\in(\mathbb{R}\setminus\mathbb{Q}) \cup\{0\}\quad\) for \(\quad\{\gamma_{n}\}_{n=1}^{N}\in G^{N}\setminus\{\underbrace{1,1,\cdots,1}_{N}\}\),
* for any \(\{\tau_{n^{\prime}}\}_{n^{\prime}=1}^{N!-1}\in\{-1,1\}^{N!-1}\,\)and \(\{\{\gamma_{n,n^{\prime}}\}_{n=1}^{N}\}_{n^{\prime}=1}^{N!-1}\subset G^{N} \setminus\{\underbrace{1,1,\cdots,1}_{N}\}\),
\[\sum_{n^{\prime}=1}^{N!-1}\tau_{n^{\prime}}h(\gamma_{1,n^{\prime}})h(\gamma_{ 2,n^{\prime}})\cdots h(\gamma_{N,n^{\prime}})\in(\mathbb{R}\setminus\mathbb{Q} )\cup\{0\}.\]
By applying the above properties, then we see that the determinant of the matrix \(X\) is nonzero, since it is expressed as
\[|X|=\eta_{1}+\eta_{2},\quad\eta_{1}\in\mathbb{Q}\setminus\{0\},\quad\eta_{2} \in(\mathbb{R}\setminus\mathbb{Q})\cup\{0\}.\]
Now we resume the proof of the main theorem. First we solve the following:
\[\bar{\delta}+V^{*}=W^{out}X.\]
Since the inverse matrix \(X^{-1}\) exists by Lemma 2, we obtain
\[\left(\bar{\delta}+V^{*}\right)X^{-1}=W^{out}. \tag{20}\]
We now check that this is the desired \(W^{out}\). By Remark 6 and (16), we have that
\[\sum_{t}\left|y(t)-W^{out}\bar{r}(t)\right|^{2}\] \[= \sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}\left|\delta(t)-\delta^{*}(t)+V^{*}e_{n}-W^{out}\bar{r}(t)\right|^{2}\] \[= \sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}|\delta(t)-\delta^{*}(t) -\bar{\delta}_{n}|^{2}\] \[\quad+2\sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}(\delta(t)-\delta^ {*}(t)-\bar{\delta}_{n})\left(\bar{\delta}_{n}+V^{*}e_{n}-W^{out}Xe_{n}\right)\] \[\quad+\sum_{n=1}^{N}\sum_{t\in\mathcal{T}_{n}}\left|\bar{\delta} _{n}+V^{*}e_{n}-W^{out}Xe_{n}\right|^{2},\]
where we used \(y(t)=\delta(t)-\delta^{*}(t)+V^{*}e_{n}\) and \(\bar{r}(t)=Xe_{n}\) for \(t\in\mathcal{T}_{n}\), and \(e_{n}\) is the column vector
\[e_{n}:=(\underbrace{\overbrace{0,0,\cdots,0,1}^{n},0\cdots,0}_{N})^{T}.\]
By (16), the cross term vanishes for any choice of \(W^{out}\); hence the minimum of (12) is attained when the last term vanishes, which is exactly the case for the \(W^{out}\) in (20). In the inference phase (13), we show (14). First we estimate the case \(t=0\). Since
\[\bar{u}(0)=v_{n_{0}}^{*}=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(-\ell)+\delta^{*}(0), \quad\text{and}\quad y(0)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(-\ell)+\delta(0),\]
we have
\[|\bar{u}(0)-y(0)|\lesssim|\delta(0)|+|\delta^{*}(0)|\lesssim_{L}\frac{2}{K}.\]
Next we estimate the case \(t=1\). Since
\[\bar{u}(1)=v_{n_{1}}^{*} =p_{1}\bar{u}(0)+\sum_{\ell=2}^{L}p_{\ell}\bar{y}(-\ell+1)+\delta^{ *}(1)\] \[=p_{1}(y(0)+\delta^{*}(0)-\delta(0))+\sum_{\ell=2}^{L}p_{\ell}\bar{ y}(-\ell+1)+\delta^{*}(1)\]
and
\[y(1)=\sum_{\ell=1}^{L}p_{\ell}y(1-\ell)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(1-\ell)+ \delta(1),\]
we have
\[|\bar{u}(1)-y(1)|\lesssim_{L}\frac{4}{K}.\]
Also we estimate the case \(t=2\). Since
\[\bar{u}(2)=v_{n_{2}}^{*} =p_{1}\bar{u}(1)+p_{2}\bar{u}(0)+\sum_{\ell=3}^{L}p_{\ell}\bar{y}( -\ell+2)+\delta^{*}(2)\] \[=p_{1}(y(1)+\delta^{*}(1)-\delta(1)+\delta^{*}(0)-\delta(0))+p_{2 }(y(0)+\delta^{*}(0)-\delta(0))\] \[\quad+\sum_{\ell=3}^{L}p_{\ell}\bar{y}(-\ell+2)+\delta^{*}(2)\]
and
\[y(2)=\sum_{\ell=1}^{L}p_{\ell}y(2-\ell)=\sum_{\ell=1}^{L}p_{\ell}\bar{y}(2-\ell)+ \delta(2),\]
we have
\[|\bar{u}(2)-y(2)|\lesssim_{L}\frac{8}{K}.\]
Repeating this argument, we have
\[|\bar{u}(t)-y(t)|\lesssim_{L}\frac{2^{t}}{K}.\]
This is the desired estimate.
### Acknowledgments
The author is grateful to Professors Chikara Nakayama and Yoshitaka Saiki for valuable comments. Research of TY was partly supported by the JSPS Grants-in-Aid for Scientific Research 20H01819.
### Conflict of Interest
The authors have no conflicts to disclose.
| arbitrary bounded discrete time series; dynamical system; Fourier transform; periodic points; suitably characterizes; Lyapunov exponent; bounded discrete time series; autoregressive model; white noise; quasi periodic function
|
2309.04195 | Towards Mitigating Architecture Overfitting in Dataset Distillation | Dataset distillation methods have demonstrated remarkable performance for
neural networks trained with very limited training data. However, a significant
challenge arises in the form of architecture overfitting: the distilled
training data synthesized by a specific network architecture (i.e., training
network) generates poor performance when trained by other network architectures
(i.e., test networks). This paper addresses this issue and proposes a series of
approaches in both architecture designs and training schemes which can be
adopted together to boost the generalization performance across different
network architectures on the distilled training data. We conduct extensive
experiments to demonstrate the effectiveness and generality of our methods.
Particularly, across various scenarios involving different sizes of distilled
data, our approaches achieve comparable or superior performance to existing
methods when training on the distilled data using networks with larger
capacities. | Xuyang Zhong, Chen Liu | 2023-09-08T08:12:29 | http://arxiv.org/abs/2309.04195v1 | # Towards Mitigating Architecture Overfitting in Dataset Distillation
###### Abstract
Dataset distillation methods have demonstrated remarkable performance for neural networks trained with very limited training data. However, a significant challenge arises in the form of _architecture overfitting_: the distilled training data synthesized by a specific network architecture (i.e., training network) generates poor performance when trained by other network architectures (i.e., test networks). This paper addresses this issue and proposes a series of approaches in both architecture designs and training schemes which can be adopted together to boost the generalization performance across different network architectures on the distilled training data. We conduct extensive experiments to demonstrate the effectiveness and generality of our methods. Particularly, across various scenarios involving different sizes of distilled data, our approaches achieve comparable or superior performance to existing methods when training on the distilled data using networks with larger capacities.
## 1 Introduction
Deep learning has achieved tremendous success in various applications [1, 2], but training a powerful deep neural network requires massive training data [3, 4]. To accelerate training, one possible way is to construct a new but smaller training set that preserves most of the information of the original large set. In this regard, we can use _coreset_[5, 6] to sample a subset of the original training set or _dataset distillation_[7, 8] to synthesize a small training set. Compared to coreset, dataset distillation achieves much better performance when the amount of data is extremely small [6, 9]. Furthermore, dataset distillation is shown to benefit various applications, such as continual learning [8, 9, 10, 11], neural architecture search [8, 11], and privacy preservation [12, 13]. Therefore, in this work, we focus on dataset distillation to compress the training set.
In the dataset distillation framework, the small training set, which is also called the _distilled dataset_, is learned by using a neural network (i.e., training network) to extract the most important information from the original training set. Existing data distillation methods are based on various techniques, including meta-learning [7, 14, 15, 16, 17] and data matching [8, 9, 18, 19, 20]. These methods are then evaluated by the test accuracy of another neural network (i.e., test network) trained on the distilled dataset. Despite their efficiency, dataset distillation methods generally suffer from _architecture overfitting_[9, 16, 17, 11, 18]. That is, the performance of the test network on the distilled dataset degrades significantly when it has a different network architecture from the training network. Moreover, the performance deteriorates further when there is a larger difference between the training and test networks in terms of depth and topological structure. Due to high computational complexity and optimization challenges in dataset distillation, the training networks are usually shallow networks, such as 3-layer convolutional neural networks (CNN) [17, 18]. However, such shallow networks lack representation power in practical applications. In addition, deep networks have shown stronger representation power in many tasks [3, 21]. Therefore, we believe a deeper network has the potential for better performance when trained on distilled datasets.
Note that our analysis indicates that the performance gap between different network architectures is larger when training on the distilled dataset than when training on a subset of the original training set. In addition, compared with methods that compress the training set by subset selection, dataset distillation achieves better performance when using the same amount
of training instances and is thus more popular in downstream applications [9, 11, 13]. Therefore, we focus on dataset distillation, in which the effectiveness of the proposed method can be better revealed.
In this work, we demonstrate that the architecture overfitting issue in dataset distillation can be mitigated by a better architecture design and training scheme of test networks on the distilled dataset. We propose a series of approaches to mitigate architecture overfitting in dataset distillation. Specifically, these approaches can be categorized into four types: **a) architecture:** DropPath with three-phase keep rate and improved shortcut connection; **b) objective function:** knowledge distillation from a smaller teacher network; **c) optimization:** periodical learning rates and a better optimizer; **d) data:** a stronger augmentation scheme. Our proposed methods are generic: we conduct comprehensive experiments on different network architectures, different numbers of instances per class (IPC), different dataset distillation methods and different datasets to demonstrate the effectiveness of our methods. Figure 1 below demonstrates the performance of our proposed methods in various scenarios. It is clear that our methods can greatly mitigate architecture overfitting and make large networks achieve better performance in most cases. In addition to dataset distillation, our methods can also improve the performance of training on a small real dataset, including those constructed by coresets. Moreover, compared with the existing methods, our proposed methods introduce negligible overhead and are thus computationally efficient.
We summarize the contributions of this paper as follows:
1. We propose a series of approaches to mitigate architecture overfitting in dataset distillation. They are generic and applicable to different model architectures and training schemes.
2. We conduct extensive experiments to demonstrate that our method significantly mitigates architecture overfitting for different network architectures, different dataset distillation approaches, different IPCs, and different datasets.
3. Moreover, our method generally improves the performance of deep networks trained on limited real data. As a result, deep networks outperform shallow networks on different fractions of training data, even when there are only 100 training samples.
## 2 Related Works
**Dataset Distillation:** The goal of dataset distillation is to learn a smaller set of training samples (i.e. distilled dataset) that preserves essential information of the original large dataset so that the model trained on this small dataset performs similarly to that trained on the original large dataset. Existing dataset distillation approaches are based on either meta-learning or data matching [22]. The former category includes backpropagation through time (BPTT) approach [7, 14, 15] and kernel ridge regression (KRR) approach [16, 17]; the latter category includes gradient matching [11, 23], trajectory matching [18, 19, 24], and distribution matching [9, 20]. However, these methods suffer from severe architecture overfitting, which means significant performance degradation when the architecture of the training network and the test network are different. Recently, some factorization methods [25, 26, 27, 28], which learn synthetic datasets by optimizing their factorized features and corresponding decoders, greatly improve the cross-architecture transferability. However, the instance per class (IPC), which indicates the number of instances in the distilled dataset, used in these methods is at least 5 times larger than that of meta-learning and data matching approaches, which greatly cancels out the advantages of dataset distillation. To better fit the motivation of dataset distillation, we only consider small IPCs (1, 10 and 50) in this work, so the factorization methods are not included for comparison.
Figure 1: Effectiveness of our method on different architectures, different dataset distillation methods, and different images per class (IPCs) on CIFAR10. We use a 3-layer CNN as the training network, so it performs the best among various architectures under baselines (dashed lines). Our methods (solid lines) can significantly narrow down the performance gap between the 3-layer CNN and other architectures. Under our method, the performance of test networks in most cases is better than that of the 3-layer CNN.
**Model Ensemble:** Model ensemble aims to integrate multiple models to improve the generalization performance. Popular ensemble methods for classification models include bagging [29], AdaBoost [30], random forest [31], random subspace [32], and gradient boosting [33]. However, these methods require training several models and thus are computationally expensive. By contrast, DropOut [34] trains the model only once but stochastically masks its intermediate feature maps during training. At each training iteration with DropOut, only part of the model parameters are updated, which forms a sub-network of the model. In this regard, DropOut enables implicit model ensembles of different sub-networks to improve the generalization performance. Similar to DropOut, DropPath [35] also implicitly ensembles sub-networks but it blocks a whole layer rather than masking some feature maps. Therefore, it is applicable to network architectures with multiple branches, such as ResNet [21], otherwise, the model output will be zero if a layer of a single branch network is dropped. By contrast, we propose a DropPath variant in this work which is generic, applicable to single-branch networks and effective to mitigate architecture overfitting.
**Knowledge Distillation:** Knowledge distillation [36] aims to compress a well-trained large model (i.e., teacher model) into a smaller and more efficient model (i.e., student model) with comparable performance. The standard knowledge distillation [36] is also known as offline distillation since the teacher model is fixed when training the student model. Online distillation [37, 38] is proposed to further improve the performance of the student model, especially when a large-capacity high-performance teacher model is not available. In online distillation, both the teacher model and the student model are updated simultaneously. In most cases, knowledge distillation methods use large models as the teachers and small models as the students, which is based on the fact that larger models typically have better performance. However, in the context of dataset distillation, a smaller test network with the same architecture as the training network can achieve a better performance than a larger one on the distilled dataset, so we use the small model as the teacher and the large model as the student in this work.
We show in the following sections that combining DropPath and knowledge distillation, architecture overfitting in dataset distillation can be almost overcome.
## 3 Methods
In this section, we introduce the approaches that are effective in mitigating architecture overfitting in dataset distillation. Our methods are motivated by traditional wisdom to mitigate overfitting, including ensemble learning [34, 39], regularization [40, 41] and data augmentation [42, 43]. First, we propose a DropPath variant, which implicitly ensembles subsets of models and is different from vanilla DropPath [35] in that it is also applicable to single-branch architectures. Correspondingly, we optimize the shortcut connections of ResNet-like architecture to better accommodate DropPath. Second, we use knowledge distillation [36] as a form of regularization to improve the performance to a large extent, even though the teacher model is actually smaller than the student model in our cases. Finally, we adopt a periodical learning rate scheduler, a gradient sign-based optimizer [44], and a stronger data augmentation scheme when training models on the distilled dataset, to further improve the performance.
### DropPath
Similar to DropOut [34], DropPath [35], a.k.a., stochastic depth, was proposed to improve generalization. While DropOut masks some entries of feature maps, DropPath randomly prunes the entire branch in a multi-branch architecture. To obtain a deterministic model for evaluation, DropPath is deactivated during inference. To keep the expectation of the feature maps consistent between training and inference, we scale the feature maps after DropPath during training.
Mathematically, DropPath works as follows:
\[\texttt{DropPath}(\mathbf{x})=\frac{m}{p}\cdot\mathbf{x},\quad m=\texttt{ Bernoulli}(p), \tag{1}\]
where \(p\in[0,1]\) denotes the keep rate, \(m=\texttt{Bernoulli}(p)\in\{0,1\}\) outputs \(1\) with probability \(p\) and \(0\) with probability \(1-p\). The scaling factor \(1/p\) is used to ensure the expectation of the feature maps remains unchanged after DropPath. Figure 2 (a) illustrates how DropPath is integrated into networks. It effectively decreases the model complexity during training and can force the model to learn more generalizable representations using fewer layers. Same as DropOut, any network trained with DropPath can be regarded as an ensemble of its subnetworks [45]. Ensembling has been proven to improve generalization [29, 30, 31, 32, 33]. As a result, we can also expect DropPath to mitigate the architecture overfitting issue in dataset distillation. Note that, DropOut masks part of the feature maps and effectively decreases the network width; by contrast, DropPath removes a branch and thus decreases the network depth. Architecture overfitting arises from deeper test networks, so we use DropPath instead of DropOut in this context.
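A minimal PyTorch sketch of Eq. (1) is given below. Eq. (1) draws a single Bernoulli mask per forward pass; the sketch draws one mask per sample in the batch, which is a common implementation choice and an assumption here.

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Randomly skip a branch during training and rescale by 1/p, cf. Eq. (1)."""

    def __init__(self, keep_rate: float = 1.0):
        super().__init__()
        self.keep_rate = keep_rate   # p in Eq. (1); updated by the scheduler

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.keep_rate >= 1.0:
            return x                 # deactivated at inference
        # One Bernoulli draw per sample, broadcast over all feature dims.
        shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        m = torch.bernoulli(torch.full(shape, self.keep_rate, device=x.device))
        return x * m / self.keep_rate
```

In a residual block this wraps the main path only, e.g. `out = shortcut(x) + drop_path(main(x))`, so the shortcut is never blocked (Figure 2 (a)).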
**Three-Phase Keep Rate:** The keep rate \(p\) is the key parameter that controls the architecture variance and the exploration-exploitation trade-off. Since the architecture factor \(m=\texttt{Bernoulli}(p)\), the variance gets larger as \(p\) increases. In the early phase of training, large architecture variance brings optimization challenges for training, thereby causing training divergence, so we turn off DropPath by setting the keep rate \(p=1\) in the first few epochs to make sure that the network learns meaningful representations. We then gradually decrease \(p\) to increase architecture variance and thus to encourage exploration until it reaches the predefined minimum value after several epochs. In the final phase of training, we expect to decrease the architecture variance to ensure training convergence. In this regard, we increase the keep rate \(p\) to a typically large value. In experiments, we shrink the keep rate every few epochs. The pseudo-code is shown in Algorithm 1 of Appendix A.1. Figure 6 of Appendix A.1 illustrates the scheduler of the keep rate.
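The exact schedule is given in Algorithm 1 of the appendix, which is not reproduced here; the function below is only one plausible reading of the three phases, with all numerical choices (warmup length, shrinking interval, step size, final value) assumed for illustration.

```python
def keep_rate(epoch: int, total_epochs: int,
              warmup: int = 10, p_min: float = 0.5, p_final: float = 0.9) -> float:
    """Hypothetical three-phase keep-rate schedule for DropPath."""
    if epoch < warmup:                       # phase 1: DropPath disabled
        return 1.0
    if epoch >= int(0.9 * total_epochs):     # phase 3: large p for convergence
        return p_final
    steps = (epoch - warmup) // 5            # phase 2: shrink every 5 epochs
    return max(p_min, 1.0 - 0.1 * steps)
```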
**Generalize to Single-Branch Networks:** Since DropPath prunes the entire branch, it is not applicable to single-branch networks, such as VGG [46]. This is because we need to ensure the input and the output of the network are always connected, otherwise, we will obtain a trivial constant model. In the case of ResNet, we prune the main path of a residual block stochastically, while the shortcut connections are always reserved.
To improve the performance of single-branch networks, we propose a variant of DropPath. As illustrated in Figure 2(b), we add a virtual shortcut connection between two layers, such as two consecutive convolutional layers in VGG, to form a block. This structure is similar to a residual block; however, since we are training a single-branch architecture instead of a real ResNet, the virtual shortcut connection is only used when the main path is pruned by DropPath during training. That is to say, when the main path is not pruned, the virtual shortcut connection is removed so that we are still training a single-branch network. Correspondingly, the virtual shortcut connection is discarded during inference.
Figure 2: **(a)** The DropPath used for multi-branch residual blocks during training; it does not block the shortcut path. **(b)** The DropPath used for single-branch networks during training. Here, \(m=\texttt{Bernoulli}(p)\in\{0,1\}\), \(p\in[0,1]\) denotes the keep rate. When the main path is pruned (\(m=0\)), the virtual shortcut is activated; when the main path is not pruned (\(m=1\)), the virtual shortcut is removed. DropPath is always deactivated, i.e., \(p=1\), during inference. **(c)** The original architecture of a shortcut connection to downsample feature maps, which consists of a \(1\times 1\) convolution layer with the stride of \(2\) and a normalization layer. **(d)** The improved architecture of a shortcut connection to downsample feature maps, which is a sequence of a \(2\times 2\) max pooling layer, a \(1\times 1\) convolution layer with the stride of \(1\) and a normalization layer.
**Improved Shortcut Connection:** In the original ResNet [21], if one residual block's input shape is the same as its output shape, the shortcut connection as in Figure 2(c) is just an identity function, otherwise a \(1\times 1\) convolution layer of a stride larger than one, which may be followed by a normalization layer, is adopted in the shortcut connection to transform the input's shape to match the output's. In the latter case, the resolution of the feature maps is divided by the stride. For example, if the stride is 2, the top left entry in each \(2\times 2\) area of the input feature map is sampled, whereas the remaining 3 entries of the same area are directly dropped.
This naive subsampling strategy will cause dramatic information loss when we use DropPath. Specifically, if DropPath prunes the main path as in Figure 2 (a), the shortcut connection will dominate the output of the residual block. In this regard, the naive subsampling strategy may corrupt or degrade the quality of the features, since it always picks a fixed entry of a grid. To tackle this issue, we replace the original shortcut connection with a \(2\times 2\) max pooling followed by a \(1\times 1\) convolutional layer with the stride of 1. This improved structure will preserve the most important information after pooling instead of the one from a fixed entry. Figure 2 (c) and (d) show the comparison between the original and improved shortcut connections when the shapes of input and output are different.
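The downsampling shortcut of Figure 2(d) then reads, as a sketch (the normalization layer type is an assumption):

```python
import torch.nn as nn

def improved_shortcut(c_in: int, c_out: int) -> nn.Sequential:
    """Max-pool first, so the most salient entry of each 2x2 window is kept,
    then a stride-1 1x1 convolution and a normalization layer."""
    return nn.Sequential(
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(c_in, c_out, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(c_out),
    )
```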
### Knowledge Distillation
Given sufficient training data, large models usually perform better than small models due to their larger representation capability. Knowledge distillation aims to compress a well-trained large model (i.e., teacher model) into a smaller model (i.e., student model) without compromising too much performance. The basic idea behind knowledge distillation is to distill the knowledge from a teacher model into a student model by forcing the student's predictions (or internal activations) to match those of the teacher [47]. Specifically, we can use Kullback-Leibler (KL) divergence with temperature \(\mathcal{L}_{KL}\)[36] to match the predictions of student and teacher models. We then add the KL divergence as a regularization term to the classification loss. Mathematically, the overall loss is:
\[\mathcal{L}(\mathbf{y}_{s},\mathbf{y}_{t},y)=\mathcal{L}_{KL}(\mathbf{y}_{s}, \mathbf{y}_{t})\cdot\alpha\cdot\tau^{2}+\mathcal{L}_{CE}(\mathbf{y}_{s},y) \cdot(1-\alpha), \tag{2}\]
where \(\tau\) denotes the temperature factor, and \(\alpha\in(0,1)\) denotes the weight factor to balance the KL divergence \(\mathcal{L}_{KL}\) and cross-entropy \(\mathcal{L}_{CE}\). The output logits of the student model and teacher model are denoted by \(\mathbf{y}_{s}\) and \(\mathbf{y}_{t}\), respectively. \(y\) denotes the target.
In our context, small models can perform better than large ones, since small models are used to construct distilled dataset. As a result, we adopt the small training network as the teacher model and the large test network as the student model. The computational overhead in knowledge distillation mainly arises from calculating \(\mathbf{y}_{t}\). In this case, the computational overhead is negligible because evaluating on the small teacher network is more efficient than on the larger student network.
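Eq. (2) corresponds to the standard temperature-scaled distillation objective; a sketch is shown below, with default \(\alpha\) and \(\tau\) picked from the robust ranges reported in Section 4.3 (the exact values used in the paper are not assumed).

```python
import torch.nn.functional as F

def kd_loss(y_s, y_t, target, alpha: float = 0.6, tau: float = 4.0):
    """Eq. (2): temperature-scaled KL term plus cross-entropy.

    y_s : student logits; y_t : teacher logits (computed without gradients)
    """
    kl = F.kl_div(F.log_softmax(y_s / tau, dim=1),
                  F.softmax(y_t.detach() / tau, dim=1),
                  reduction="batchmean")
    ce = F.cross_entropy(y_s, target)
    return kl * alpha * tau ** 2 + ce * (1.0 - alpha)
```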
### Training and Data Augmentation
Besides aforementioned methods, we use the following methods to further improve the performance.
**Periodical Learning Rate:** Because of the three-phase stepwise scheduler for the keep rate \(p\), we expect the network to jump out of the current local minimum and search for a better one when \(p\) changes. Inspired by [48], we use a cosine annealing curve with warmup to adjust the learning rate, and we periodically reset it when \(p\) changes. Formally, the learning rate is adjusted as shown in Eq. 3 of Appendix A.1.
**Better Optimizer:** Lion [44] is a gradient sign-based optimizer. It has a faster convergence speed and is capable of finding better local minima for ResNets. Thus, Lion is used as the default optimizer in our experiments.
**Stronger Augmentation:** The data augmentation strategy used in MTT [18] samples a single augmentation operation from a pool to augment the input image. However, we observe that sampling more operations will better diversify the model's inputs and thus improve the performance, especially when IPC is small. For convenience, when sampling \(k\) operations, we call this strategy
\(k\)-fold augmentation. Empirically, we use 2-fold augmentation when IPC is 10 or 50 and 4-fold augmentation when IPC is 1. For the experiments about the impact of different augmentations, please refer to Appendix B.3.
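A sketch of the \(k\)-fold strategy follows; whether the operations are drawn with or without replacement is not specified in the text, so sampling without replacement is assumed here.

```python
import random

def k_fold_augment(image, op_pool, k: int = 2):
    """Apply k operations sampled from the pool; k = 2 is used for
    IPC 10/50 and k = 4 for IPC 1 (k = 1 recovers the MTT scheme)."""
    for op in random.sample(op_pool, k):
        image = op(image)
    return image
```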
## 4 Experiments
In this section, we evaluate our method on different dataset distillation algorithms, different numbers of instances per class (IPCs), different datasets and different network architectures. Our methods are shown to be effective in mitigating architecture overfitting and generic enough to improve the performance on limited real data. In addition, we conduct extensive ablation studies for analysis. Implementation details are given in Appendix C.
### Mitigate Architecture Overfitting in Dataset Distillation
We first evaluate our method on two representative dataset distillation (DD) algorithms, i.e., neural Feature Regression with Pooling (FRePo) [17] and Matching Training Trajectories (MTT) [18]. FRePo proposes a neural feature kernel to solve a kernel ridge regression problem, and MTT focuses on matching the training trajectories on real data. Both of them show competitive performance. Furthermore, we test several ablations of our methods, and the settings of each ablation are elaborated in Table 1.
We comprehensively evaluate the performance of these methods under various settings, including different numbers of instances per class (IPC), different datasets and different architectures of the test networks. Table 2 demonstrates the results on CIFAR10, and the results on CIFAR100 and Tiny-ImageNet are reported in Appendix B.1. Note that, DropPath and knowledge distillation are not applicable when we use the same architecture for training and test networks, i.e., 3-layer CNN, because 1) it is too shallow for DropPath; 2) we will converge to the teacher model if we use the same architecture for the teacher and the student models. We can observe from these results that architecture overfitting is more severe in the case of small IPC and large architecture discrepancy, but both DropPath and knowledge distillation is capable of mitigating it. In addition, combining them can further improve the performance and overcome architecture overfitting in many cases. For instance, when evaluating our method on distilled images of MTT (CIFAR10, IPC=10), it contributes performance gains of 18.5% and 35.7% for ResNet18 and ResNet50, respectively. We are also interested in how much performance gap between training and test networks we can close. Surprisingly, when IPC=10 and 50, the test accuracies of most network architectures surpass that of the architecture identical to the training network. Along with it, the gaps between different test networks, such as ResNet18 and ResNet50, are also narrowed down in most cases. Additionally, we observe that the kernel-based method (i.e., FRePo) showed better cross-architecture generalization than the matching-based method (i.e., MTT).
DropPath enables an implicit ensemble of the shallow subnetworks and thus mitigates architecture overfitting. However, each of these sub-networks may have sub-optimal performance. Knowledge distillation can address this issue by encouraging similar outputs between the teacher model and the sub-networks and thus further improves the performance. By contrast, the contribution of knowledge distillation could be marginal without DropPath due to the big difference in architecture [50]. Empirically, combining DropPath with knowledge distillation not only achieves the best performance, but also greatly decreases the performance difference among different test network architectures.
Due to space limits, we report the standard deviations of performance in Table 7 of Appendix B.2. The results show that although the standard deviations increase when decreasing IPC, we can still see significant improvement by our methods.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & DP & KD & Misc. \\ \hline Baseline & ✘ & ✘ & ✘ \\ w/o DP \& KD & ✘ & ✘ & ✔ \\ w/o DP & ✘ & ✔ & ✔ \\ w/o KD & ✔ & ✘ & ✔ \\ Full & ✔ & ✔ & ✔ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental settings. _DP_ denotes DropPath with three-phase keep rate, _KD_ denotes knowledge distillation, and the miscellaneous (Misc.) column covers the methods in Section 3.3.
### Improve the Performance of Training on Limited Real Data
We discuss the performance of our methods when training on a limited amount of real data and compare it with the case of the distilled dataset. Our methods have proven effective in mitigating architecture overfitting on the distilled dataset, and we expect them to improve the performance on limited real training data as well. In this case, smaller models also tend to perform better than larger models because both can fit the training set perfectly but the latter suffers more from overfitting.
As illustrated in Figure 3, we train models on different fractions of the CIFAR10 training set, which are randomly sampled. The 3-layer CNN still serves as the teacher model when we use knowledge distillation. Since ResNet18 and ResNet50 exhibit the largest performance differences from the 3-layer CNN in the previous experiments, we only show the results of ResNet18 and ResNet50 here. ResNet18 and ResNet50 significantly outperform 3-layer CNN with enough training data, but they show worse generalization performance than CNN when the fraction is lower than 0.02, i.e., 1000 training instances. Under our methods, the performances of both ResNet18 and ResNet50 surpass that of 3-layer CNN even when the fraction is as small as 0.002, i.e., 100 training instances. However, the performance gain saturates when the fraction of training data reaches 0.05, which can be attributed to the unsatisfactory performance of the teacher model (blue line). Therefore, we do not obtain results for larger fractions. Nevertheless, Figure 7 (b) in Appendix B.4 shows that when the current teacher does not contribute to performance gain anymore, a stronger teacher can further improve the performance. More results are discussed there.
Furthermore, we observe that the performance gap of training on limited real data is much smaller than that of training on distilled images. For instance, when the fraction of training data is 0.002, which is equivalent to IPC=10, the performance gap between 3-layer CNN and ResNet50 is 4.9% when they are trained on real images. However, when we train them on distilled images
\begin{table}
\begin{tabular}{c|c|c|c|c c c c} \hline \hline \multirow{2}{*}{DD} & \multirow{2}{*}{IPC} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{3-layer} & \multirow{2}{*}{ResNet18} & \multirow{2}{*}{AlexNet} & \multirow{2}{*}{VGG11} & \multirow{2}{*}{ResNet50} \\ & & & & CNN & & & & \\ \hline \multirow{8}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{1} & Baseline & 44.3 & 34.4 (-9.9) & 41.8 (-2.5) & 44.0 (-0.3) & 25.9 (-18.4) \\ & & w/o DP \& KD & **44.8** (+0.5) & 35.6 (-8.7) & 47.4 (-3.1) & 41.5 (-2.8) & 30.3 (-14.0) \\ & & w/o DP & - & 47.2 (-2.9) & 49.7 (+5.4) & 48.7 (+4.4) & 39.3 (-5.0) \\ & & w/o KD & - & 37.0 (-7.3) & 46.0 (+1.7) & 41.1 (-3.2) & 32.5 (-11.8) \\ & & **Full** & - & **49.3** (\(\pm\)5.0) & **50.7** (\(\pm\)6.4) & **48.8** (\(\pm\)4.5) & **41.5** (-2.8) \\ \hline \multirow{8}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{10} & Baseline & 63.0 & 55.6 (-7.4) & 59.3 (-3.6) & 61.3 (-1.7) & 44.4 (-18.6) \\ & & w/o DP \& KD & **64.7** (+1.7) & 61.0 (-2.0) & 62.3 (-0.7) & 62.4 (-0.6) & 54.7 (-8.3) \\ & & w/o DP & - & 64.0 (+1.0) & 63.3 (-0.3) & 63.6 (+0.6) & 57.7 (-5.3) \\ & & w/o KD & - & 63.9 (+0.9) & 63.8 (+0.8) & 62.2 (-0.8) & 54.0 (-9.0) \\ & & **Full** & & **66.2** (\(\pm\)3.2) & **64.8** (\(\pm\)1.8) & **65.4** (\(\pm\)2.4) & **62.4** (\(\pm\)0.6) \\ \hline \multirow{8}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{50} & Baseline & 70.5 & 66.7 (-3.8) & 66.8 (-3.7) & 68.3 (-2.2) & 60.5 (-10.0) \\ & & w/o DP \& KD & **72.4** (+1.9) & 73.0 (-2.5) & 71.0 (-0.5) & 70.9 (-0.4) & 71.2 (-0.7) \\ & & w/o DP & - & 73.9 (+3.4) & 72.1 (+1.6) & 72.0 (+1.5) & 72.9 (+2.4) \\ & & w/o KD & - & 74.5 (+4.0) & 71.5 (+1.0) & 70.1 (-0.4) & 70.6 (+0.1) \\ \cline{2-6} & & **Full** & - & **74.5** (\(\pm\)4.0) & **73.2** (\(\pm\)2.7) & **72.8** (\(\pm\)2.3) & **73.2** (\(\pm\)2.7) \\ \hline \multirow{8}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{1} & Baseline & **48.3** & 37.2 (-11.1) & 40.5 (-7.8) & 39.3 (-9.0) & 22.4 (-25.9) \\ & & w/o DP \& KD & 46.8 (-1.5) & 36.9 (-11.4) & 43.2 (-5.1) & 36.7 (-11.6) & 24.7 (-23.6) \\ & & w/o DP & - & 41.6 (-6.7) & 46.7 (-1.6) & 38.6 (-9.7) & 32.4 (-15.9) \\ & & w/o KD & - & 35.5 (-12.8) & 41.1 (-7.2) & 34.4 (-13.9) & 28.5 (-19.8) \\ \cline{2-6} & & **Full** & - & **47.2** (\(\pm\)1.1) & **47.3** (\(\pm\)1.0) & **44.1** (\(\pm\)4.2) & **43.0** (\(\pm\)5.3) \\ \hline \multirow{8}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{10} & Baseline & 63.6 & 48.9 (-14.7) & 56.9 (-6.7) & 52.6 (-11.0) & 28.1 (-35.5) \\ & & w/o DP \& KD & **65.0** (+1.4) & 51.3 (-12.3) & 60.7 (-2.9) & 56.0 (-7.6) & 39.8 (-23.8) \\ \cline{1-1} & & w/o DP & - & 61.4 (-2.2) & 52.7 (-10.9) & 48.8 (-14.8) & 49.9 (-13.7) \\ \cline{1-1} & & w/o KD & - & 60.7 (-2.9) & 59.2 (-4.4) & 57.6 (-6.0) & 47.5 (-16.1) \\ \cline{1-1} & & **Full** & & **67.4** (\(\pm\)3.8) & **68.3** (\(\pm\)4.7) & **67.1** (\(\pm\)3.5) & **63.8** (\(\pm\)0.2) \\ \hline \multirow{8}{*}{
\begin{tabular}{c} \end{tabular} } & \multirow{8}{*}{50} & Baseline & 70.2 & 62.3 (-7.9) & 67.5 (-2.7) & 63.0 (-7.2) & 53.1 (-17.1) \\ \cline{1-1} & & w/o DP \& KD & **70.5** (+0.3) & 68.1 (-2.1) & 69.5 (-0.7) & 67.6 (-2.6) & 66.5 (-3.7) \\ \cline{1-1} & & w/o DP & - & 66.9 (-3.3) & 63.8 (-6.4) & 61.2 (-9.0) & 66.8 (-3.4) \\ \cline{1-1} & & w/o KD & - & 69.8 (-0.4) & 67.2 (-3.0) & 69.0 (-1.2) & 65.0 (-5.2) \\ \cline{1-1} \cline{2-6} & & **Full** & - & **71.0** (\(\pm\)0.8) & **72.0** (\(\pm\)1.8) & **69.5** (-12.2) & **70.0** (\(\pm\)0.2) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test accuracies of models trained on the distilled data of **CIFAR10**[49] with different IPCs. 3-layer CNN is the architecture used for data distillation and is the teacher model of knowledge distillation. The results in the bracket indicate the gaps from the baseline performance of 3-layer CNN. The results in bold are the best results among different settings. Note that DP and KD are not applicable for 3-layer CNN, so we do not have the test accuracy of 3-layer CNN in these settings.
of FRePo, the performance gap increases to 18.6%. As for the distilled images generated by MTT, the gap is even larger, which reaches 35.5%. Meanwhile, training on a distilled dataset results in much better performance than training on real data of the same size, which makes it popular in downstream applications. Therefore, we focus on applying our method in the context of dataset distillation, in which the effectiveness of our method can be better revealed.
### Ablation Studies
We conduct extensive ablation studies here to validate the effectiveness of each component in our methods. In this subsection, we focus on the case of using 3-layer CNN as the training network, ResNet18 as the test network, setting IPC to 10 and generating the distilled dataset by FRePo. Note that the baseline performance of 3-layer CNN trained on the distilled data is 63.0%; its performance improves to 64.7% with better optimization and data augmentation.
**DropPath:** We first try different minimum keep rates in the three-phase scheduler introduced in Section 3.1. As illustrated in Figure 4 (a), we find that a large minimum keep rate induces poor performance, but a smaller one makes the training longer (as indicated by lines 1-2 in Algorithm 1). Therefore, we set the minimum keep rate to 0.5, which balances performance and efficiency. Moreover, we verify the effectiveness of the final high keep rate (KR), which is the third phase in the three-phase scheduler, and the improved shortcut connection (SC) introduced in Section 3.1: the results shown in Table 3 indicate that both of them contribute to the performance.
**Knowledge Distillation:** We also test different hyperparameters of knowledge distillation (KD). As illustrated in Figure 4 (b) and (c), when weight \(\alpha\) and temperature \(\tau\) are in the range of [0.5, 0.8] and [1, 10], respectively, the performance does not vary significantly. It indicates that our method is quite robust to different hyperparameter choices.
**Optimization and Data Augmentation:** In Table 4, we replace each of the optimization and data augmentation approaches with a baseline. The results indicate that each of these approaches improves performance. Among them, the Lion optimizer contributes a performance improvement of 2.9%. Compared with adaptive optimizers, such as AdamW [52], Lion tends to converge to flatter minima, which results in better generalization performance [53, 54]. Since Lion can be seen as a gradient sign-based SGD with momentum and converges faster than SGD [44], we adopt it in our method. Figure 8 of Appendix B.5 further demonstrates, by numerical methods, that Lion finds flatter minima than AdamW from both quantitative and qualitative perspectives.
\begin{table}
\begin{tabular}{c c c|c} \hline \hline Periodical LR & Lion & stronger Aug. & Test Acc. \\ \hline ✗ & ✗ & ✗ & 61.6 \\ ✔ & ✗ & ✗ & 61.9 \\ ✔ & ✔ & ✗ & 64.8 \\ ✔ & ✔ & ✔ & **66.6** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies about optimization and data augmentation. If the periodical learning rate (LR), Lion optimizer and stronger augmentation (Aug.) are not adopted, we replace them with the cosine annealing learning rate [51], AdamW [52] and 1-fold augmentation, respectively.
Figure 3: Test accuracies obtained from training on different fractions of CIFAR10; the shadow indicates the standard deviation. We compare the test accuracies **(a)** between ResNet18 (RN18) and 3-layer CNN (CNN), and **(b)** between ResNet50 (RN50) and CNN, respectively. The x-axis denotes the fraction of training data; _DP+KD_ denotes that the network is trained with DropPath and knowledge distillation. Note that we run the experiments three times with different random seeds, but the teacher network is always trained on the same data as the student network.
Note that the results of IPC=1 in Table 2 are obtained with 4-fold augmentation. For comparison, we also provide the results with 2-fold augmentation (see Table 8 of Appendix B.3).
## 5 Conclusion
This paper studies architecture overfitting when we train models on distilled datasets. We propose a series of approaches in both architecture designs and training schemes which can be adopted together to mitigate this issue. Our methods are efficient, generic and can improve the performance when training on a small real dataset directly. We believe that our work can help extend dataset distillation for applications in more real-world scenarios. Recognizing the existing disparity in performance between training on distilled data and the original training set, our future work will focus on exploring methods to further enhance performance.
| Dataset distillation methods have demonstrated remarkable performance for neural networks trained with very limited training data. However, a significant challenge arises in the form of architecture overfitting: distilled training data synthesized by a specific network architecture (i.e., the training network) yields poor performance when used to train other network architectures (i.e., test networks). This paper addresses this challenge and proposes a combination of architecture designs and training schemes that can be adopted together to obtain strong generalization performance across different network architectures on the distilled training data. Through experiments covering various sizes of distilled data, we show that our methods are effective and general; in particular, when training on the distilled data with larger-capacity networks, our approaches achieve comparable or superior performance to existing methods. |
2309.15926 | Magnetic flux plays an important role during a BHXRB outburst in
radiative 2T GRMHD simulations | Black hole (BH) X-ray binaries cycle through different spectral states of
accretion over the course of months to years. Although fluctuations in the BH
mass accretion rate are generally recognized as the most important component of
state transitions, it is becoming increasingly evident that magnetic fields
play a similarly important role. In this article, we present the first
radiative two-temperature (2T) general relativistic magnetohydrodynamics
(GRMHD) simulations in which an accretion disk transitions from a quiescent
state at an accretion rate of $\dot{M} \sim 10^{-10} \dot{M}_{\rm Edd}$ to a
hard-intermediate state at an accretion rate of $\dot{M} \sim 10^{-2}
\dot{M}_{\rm Edd}$. This huge parameter space in mass accretion rate is bridged
by artificially rescaling the gas density scale of the simulations. We present
two jetted BH models with varying degrees of magnetic flux saturation. We
demonstrate that in `Standard and Normal Evolution' models, which are
unsaturated with magnetic flux, the hot torus collapses into a thin and cold
accretion disk when $\dot{M} \gtrsim 5\times 10^{-3} \dot{M}_{\rm Edd}$. On the
other hand, in `Magnetically Arrested Disk' models, which are fully saturated
with vertical magnetic flux, the plasma remains mostly hot with substructures
that condense into cold clumps of gas when $\dot{M} \gtrsim 1 \times 10^{-2}
\dot{M}_{\rm Edd}$. This suggests that the spectral signatures observed during
state transitions are closely tied to the level of magnetic flux saturation. | M. T. P. Liska, N. Kaaz, K. Chatterjee, Razieh Emami, Gibwa Musoke | 2023-09-27T18:05:02 | http://arxiv.org/abs/2309.15926v2 | # Magnetic flux plays an important role during a BHXRB outburst in radiative 2T GRMHD simulations
###### Abstract
Black hole (BH) X-ray binaries cycle through different spectral states of accretion over the course of months to years. Although fluctuations in the BH mass accretion rate are generally recognized as the most important component of state transitions, it is becoming increasingly evident that magnetic fields play a similarly important role. In this article, we present the first radiative two-temperature (2T) general relativistic magnetohydrodynamics (GRMHD) simulations in which an accretion disk transitions from a quiescent state at an accretion rate of \(\dot{M}\sim 10^{-10}\dot{M}_{\rm Edd}\) to a hard-intermediate state at an accretion rate of \(\dot{M}\sim 10^{-2}\dot{M}_{\rm Edd}\). This huge parameter space in mass accretion rate is bridged by artificially rescaling the gas density scale of the simulations. We present two jetted BH models with varying degrees of magnetic flux saturation. We demonstrate that in 'Standard and Normal Evolution' models, which are unsaturated with magnetic flux, the hot torus collapses into a thin and cold accretion disk when \(\dot{M}\gtrsim 5\times 10^{-3}\dot{M}_{\rm Edd}\). On the other hand, in 'Magnetically Arrested Disk' models, which are fully saturated with vertical magnetic flux, the plasma remains mostly hot with substructures that condense into cold clumps of gas when \(\dot{M}\gtrsim 1\times 10^{-2}\dot{M}_{\rm Edd}\). This suggests that the spectral signatures observed during state transitions are closely tied to the level of magnetic flux saturation.
## 1 Introduction
Most general relativistic magnetohydrodynamic (GRMHD) simulations to date address accretion in the quiescent state. While BHXRBs indeed spend most of their time in the quiescent state, they accrete most of their gas (and hence grow most rapidly) in the hard-intermediate and high-soft states (e.g. Fabian, 2012). However, simulating accretion disks in these luminous states is numerically challenging due to the presence of dynamically important radiation fields and thermal decoupling between ions and electrons. Presently, only a handful of GRMHD codes are able to model radiation (e.g. Sadowski et al., 2013; McKinney et al., 2013; Fragile et al., 2014; Ryan et al., 2017; White et al., 2023). In addition, since radiative cooling makes such accretion disks thinner, one needs a much higher resolution to resolve them. For example, resolving a disk that is two times thinner on a spherical grid without static or adaptive mesh refinement requires a factor of 32 more computational time. These factors make such simulations extremely expensive and complex. Due to recent algorithmic and computational advances, radiative GRMHD simulations of accretion disks accreting above a few percent of the Eddington limit (i.e., very thin disks) have come within the realm of possibility (e.g. Ohsuga and Mineshige, 2011; Mishra et al., 2016; Morales Teixeira et al., 2018; Fragile et al., 2018; Lancova et al., 2019; Mishra et al., 2020, 2022; Liska et al., 2022, 2023). These recent advances supplement earlier work that attempted to tackle the physics driving accretion in the luminous states using an ad hoc cooling function in place of first-principles radiation (e.g. Noble et al., 2009; Avara et al., 2016; Hogg and Reynolds, 2017, 2018; Scepi et al., 2023; Nemmen et al., 2023; Bollimpalli et al., 2023).
Recently, first-of-their-kind radiative GRMHD simulations of accretion disks accreting at \(L\sim 0.35L_{\rm Edd}\) demonstrated that in systems where no vertical magnetic flux is present, a thin and cold accretion disk forms, possibly explaining the high-soft state (Liska et al., 2022). However, in the presence of dynamically important large scale vertical magnetic flux (e.g. 'MADs'; Narayan et al., 2003; Tchekhovskoy et al., 2011), the accretion disk is truncated and decouples within \(r\sim 20r_{g}\) into a two-phase plasma of cold and dense clumps surrounded by hot and dilute gas (Liska et al., 2022). The presence of cold plasma down to the innermost stable circular orbit (ISCO) provides an interesting explanation for the observed relativistically broadened iron-reflection lines in the hard state (e.g. Reis et al., 2010), which is thought to only feature hot gas unable to produce such lines. In between these two extreme regimes, where vertical magnetic flux is present but does not saturate the disk, Lancova et al. (2019) demonstrated that a hot plasma with both inflowing and outflowing components sandwiches a thin accretion disk. Such puffy disk models can potentially describe BHXRBs in the intermediate spectral state, which launch relativistic jets but show no clear evidence of significant disk truncation (e.g. Kara et al., 2019).
However, none of this work addresses how and at which accretion rates these high-luminosity accretion states form and what role magnetic fields play in that process. In this work we present the first radiative two-temperature GRMHD simulations spanning 8 orders of magnitude in mass accretion rate. These simulations demonstrate a transition from a hot torus in the quiescent state to either a magnetically truncated (e.g. Liska et al., 2022) or puffy accretion disk (e.g. Lancova et al., 2019) in the (hard-) intermediate state depending on the amount of magnetic flux saturation. In Sections 2 and 3 we describe our radiative GRMHD code and numerical and physical setup, before presenting our results in Section 4 and concluding in Section 5.
## 2 Numerical Setup
To model the rise from quiescence to the hard-intermediate state we use the GPU-accelerated GRMHD code H-AMR (Liska et al., 2018, 2022). H-AMR evolves the radiative two-temperature GRMHD equations (e.g. Sadowski et al., 2013; Sadowski et al., 2017) on a spherical grid. Similar to Liska et al. 2022, we model the radiation field as a second fluid using the M1 approximation and, in addition, also evolve the photon number density to get a better estimate for the radiation temperature (Sadowski and Narayan, 2015). Radiative processes such as Bremsstrahlung, synchrotron, bound-free, iron line emission, and scattering (including Comptonization) are included assuming an \(M_{\rm Bh}=10M_{\odot}\) black hole with solar abundances (\(X=0.70\), \(Y=0.28\), \(Z=0.02\)). The associated energy-averaged grey opacities are provided in McKinney et al. (2017) (equations C16, D7 and E1).
At each timestep, the dissipation rate is calculated by subtracting the internal energy provided by the entropy equation from the internal energy provided by the energy equation (e.g. Ressler et al., 2015). Subsequently, the total energy dissipation is divided as a source term between the electrons and ions based on a reconnection heating model (Rowan et al., 2017). This deposits a fraction \(\delta_{e}\lesssim 0.5\) of the dissipation into the electrons, which varies from \(\delta_{e}\sim 0.2\) in less magnetized regions to \(\delta_{e}\sim 0.5\) in highly magnetized regions. Coulomb collisions (Stepney, 1983) are taken into account through an implicit source term (e.g. Sadowski et al., 2017). To avoid the jet funnel becoming devoid of gas and to keep the GRMHD scheme stable, we floor the density in the drift frame of the jet (Ressler et al., 2015) such that the ratio of the magnetic pressure to the density satisfies \(\frac{p_{\rm B}}{\rho}\lesssim 12.5\).
We use a spherical grid in the Kerr-Schild foliation with coordinates \(x^{1}=\log(r)\), \(x^{2}=\theta\), and \(x^{3}=\varphi\) with a resolution of \(N_{r}\times N_{\theta}\times N_{\varphi}=420\times 192\times 192\) for our SANE model and \(N_{r}\times N_{\theta}\times N_{\varphi}=560\times 192\times 192\) for our MAD model. This adequately resolves the fastest growing MRI-wavelength by
\(\gtrsim 16\) cells in all 3 dimensions. We place the outer boundary at \(R_{\rm out}=10^{3}r_{g}\) for our SANE model, and at \(R_{\rm out}=10^{4}r_{g}\) for our MAD model. We also maintain at least 5 cells within the event horizon such that the inner boundary is causally disconnected from the rest of the computational domain. We speed up the simulations approximately 3-fold by introducing 4 levels of local adaptive timestepping (Liska et al., 2022). To prevent cell squeezing around the polar axis from slowing down our simulations (e.g. Courant et al., 1928) we use 4 levels of static mesh derefinement (Liska et al., 2018, 2022) to reduce the \(\varphi\)-resolution to \(N_{\varphi}=[96,48,24,12]\) within \(\theta\lesssim[30^{\circ},15^{\circ},7.5^{\circ},3.75^{\circ}]\) from each pole. This maintains a cell aspect ratio of roughly \(|\Delta r|:|\Delta\theta|:|\Delta\varphi|\sim 1:1:2\) throughout the grid, which is sufficient to capture the 3-dimensional nature of the turbulence.
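As a concrete illustration of the static mesh derefinement described above, the short sketch below maps a polar distance to the local \(\varphi\)-resolution; the tier boundaries and cell counts are taken from the text, while everything else is an assumption of this toy example.

```python
# Static mesh derefinement: phi-resolution tiers near each pole
TIERS = [(30.0, 96), (15.0, 48), (7.5, 24), (3.75, 12)]  # (deg from pole, N_phi)

def n_phi(theta_from_pole_deg, base=192):
    """Local phi-resolution for a cell at the given polar distance (degrees)."""
    n = base
    for edge, n_tier in TIERS:
        if theta_from_pole_deg < edge:
            n = n_tier
    return n

for th in (45.0, 20.0, 10.0, 5.0, 2.0):
    print(th, n_phi(th))  # -> 192, 96, 48, 24, 12
```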
## 3 Physical Setup
To understand the effects of magnetic flux saturation on the transition from the quiescent to the hard-intermediate state, we include two models in the SANE ('Standard and Normal Evolution', Narayan and Yi, 1994) and MAD ('Magnetically Arrested Disk', Narayan et al., 2003) regimes. We assume a rapidly spinning black hole with spin parameter \(a=0.9375\). Our SANE model (XRB SANE) features a standard Fishbone and Moncrief torus (Fishbone and Moncrief, 1976) with inner radius \(r_{\rm in}=6\,r_{g}\), radius of maximum pressure at \(r_{\rm max}=12\,r_{g}\), and outer radius \(r_{\rm out}\sim 50r_{g}\). Our MAD model (XRB MAD), on the other hand, features a much larger torus with \(r_{\rm in}=20\,r_{g}\) and \(r_{\rm max}=41r_{g}\) whose outer edge lies at \(r_{\rm out}\sim 800\,r_{g}\). These tori are standard choices in the GRMHD community (e.g. Porth et al., 2019; Chatterjee et al., 2023). We thread the SANE model with magnetic vector potential \(A_{\Phi}\propto\rho-0.2\) and the MAD model with magnetic vector potential \(A_{\Phi}\propto\rho r^{3}\sin^{3}(\theta)\exp\left(-\frac{r}{400r_{g}}\right)-0.2\). Here \(\rho\) is the gas density. In both cases this produces a field loop that is approximately the size of the torus. Because the field loop in our MAD torus is much larger than in our SANE torus, only the MAD torus contains enough magnetic flux to get saturated and become MAD. In both cases, we normalize the resulting magnetic field such that \(\beta^{\rm max}=p_{\rm gas}^{\rm max}/p_{b}^{\rm max}=100\), where \(p_{\rm gas}^{\rm max}\) and \(p_{b}^{\rm max}\) are the maximum gas and magnetic pressure in the torus. For the purpose of calculating the initial torus solution we set the adiabatic index \(\gamma=5/3\) for our SANE model and \(\gamma=13/9\) for our MAD model. We subsequently distribute, according to our heating prescription involving a magnetic reconnection model (Rowan et al., 2017), the total pressure between the ions and electrons before we self-consistently evolve their entropy and adiabatic indices (e.g. Sadowski et al., 2017).
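The field normalization step admits a compact sketch. Here the torus data are randomly generated placeholders; only the normalization rule \(\beta^{\rm max}=p_{\rm gas}^{\rm max}/p_{b}^{\rm max}=100\) is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
p_gas = rng.uniform(0.5, 1.0, 1000)   # toy gas pressure inside the torus
B     = rng.uniform(0.0, 0.1, 1000)   # toy field strength from curl(A_phi)
p_b   = 0.5 * B**2                    # magnetic pressure in code units

beta_target = 100.0
# Rescale B so that beta_max = p_gas^max / p_b^max equals the target;
# p_b scales as the square of the field, hence the square root.
s = np.sqrt(p_gas.max() / (beta_target * p_b.max()))
B *= s
print(p_gas.max() / (0.5 * B**2).max())  # -> 100.0
```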
To make GRMHD simulations of BHXRB outbursts feasible, we artificially shorten the relevant timescales by introducing a rescaling method that, as a function of time, sets the accretion rate to a predetermined value. However, before we apply this method, we first run the simulation for \(t=10^{4}r_{g}/c\) in two-temperature non-radiative GRMHD to get the accretion disk into a quasi-steady state. We subsequently restart the simulation in radiative two-temperature GRMHD and re-normalize the density (\(\rho\)) every full timestep with a factor \(\zeta\) such that the running average of the black hole mass accretion rate, \(\langle\dot{M}\rangle\),
\[\langle\dot{M}\rangle=\left\langle-\int\sqrt{-g}\,\rho u^{r}\,d\theta d\varphi\right\rangle\Big|^{t}_{t-10^{4}r_{g}/c}, \tag{1}\]
is scaled to a time-dependent 'target' mass accretion rate,
\[\dot{M}_{\rm Target}=10^{-10}\times 2^{\,t/(10^{4}r_{g}/c)}\,\dot{M}_{\rm Edd}, \tag{2}\]
via the rescaling factor,
\[\zeta=\dot{M}_{\rm Target}/\langle\dot{M}\rangle|_{r=5r_{g}}. \tag{3}\]
Here \(\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{\eta_{\rm NT}c^{2}}\) is the Eddington accretion rate and \(\eta_{\rm NT}=0.178\) the Novikov and Thorne (1973) radiative efficiency. We also rescale the internal energy density (\(u_{g}\)), radiation energy density (\(E_{\rm rad}\)) and magnetic energy density (\(b^{2}\)) with the same prefactor \(\zeta\) as the density. This leads to a doubling of the black hole mass accretion rate every \(t=10^{4}r_{g}/c\). Note that this approach automatically increases the total amount of event horizon magnetic flux (\(\Phi=\frac{\sqrt{4\pi}}{2}\int|B^{r}|\,d\theta d\varphi\)) by a factor \(\sqrt{\zeta}\), such that the normalized magnetic flux (\(\phi=\Phi/\sqrt{\langle\dot{M}\rangle}|_{r=5r_{g}}\)) remains constant. When \(\phi\sim 40-50\) we expect that the
Figure 1: **Panel A:** The event horizon mass accretion rate \(\dot{M}\) closely follows the target mass accretion rate \(\dot{M}_{\rm target}\) (red) for both the SANE (black) and MAD (blue) models. **Panel B:** The normalized magnetic flux \(\phi\) maintains its saturation value in the MAD model and stays a factor \(\gtrsim 2.0\) below saturation in the SANE model.
disk turns MAD and flux bundles get ejected from the black hole (e.g. Tchekhovskoy et al., 2011; McKinney et al., 2012). We achieve inflow-outflow equilibrium over the mass accretion rate doubling time (\(\Delta t=10^{4}r_{g}/c\)) up to approximately \(r\sim 15r_{g}\) for our SANE model and \(r\sim 30r_{g}\) for our MAD model.
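To make the rescaling procedure of Equations (1)-(3) concrete, here is a minimal sketch of one rescaling step; the variable names, the snapshot layout, and the toy numbers are assumptions of this illustration, not H-AMR internals.

```python
import numpy as np

MDOT_EDD = 1.0  # Eddington rate in code units (assumption of this sketch)

def mdot_target(t):
    """Eq. (2): target rate doubles every 1e4 r_g/c (t in units of r_g/c)."""
    return 1e-10 * 2.0**(t / 1e4) * MDOT_EDD

def rescale_snapshot(state, mdot_avg, t):
    """Eq. (3): zeta = Mdot_target/<Mdot>; rescale rho, u_g, E_rad by zeta
    and B by sqrt(zeta), so b^2 scales by zeta and phi = Phi/sqrt(<Mdot>)
    stays constant."""
    zeta = mdot_target(t) / mdot_avg
    for key in ('rho', 'u_g', 'E_rad'):
        state[key] *= zeta
    state['B'] *= np.sqrt(zeta)
    return state

# Toy usage: one rescaling step at t = 2e4 r_g/c
state = {'rho': np.ones(8), 'u_g': np.ones(8),
         'E_rad': np.ones(8), 'B': np.ones(8)}
state = rescale_snapshot(state, mdot_avg=3e-10, t=2e4)
print(state['rho'][0], state['B'][0])  # zeta ~ 1.33 and sqrt(zeta) ~ 1.15
```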
## 4 Results
We evolve both models for \(t\sim 260,000-280,000\,r_{g}/c\), during which the targeted mass accretion rate increases by 8 orders of magnitude. As illustrated in Figure 1a, the black hole mass accretion rate closely follows the targeted mass accretion rate for both models. However, as illustrated in Figure 1b, while the normalized magnetic flux threading the BH event horizon stays constant in our MAD model, it increases by a factor of \(\sim 3\) in our SANE model. This is because a significant fraction of the initial gas reservoir accretes or gets ejected in the form of winds, leading to a relatively larger increase in the dimensionless magnetic flux compared to the mass accretion rate. The rapid variability of the magnetic flux observed in our MAD model is a well-known characteristic of MADs (e.g. Tchekhovskoy et al., 2011; McKinney et al., 2012) caused by flux bundles being ejected from the black hole event horizon through magnetic reconnection (e.g. Ripperda et al., 2021).
The contour plots of density and electron temperature in Figure 2 illustrate 3 different stages of the artificially-induced state transition. An accompanying animation, which also illustrates the ion temperature, is included in the supplementary materials and on our YouTube playlist. In the first stage (\(\dot{M}\lesssim 10^{-6}\dot{M}_{\rm Edd}\)), the radiative efficiency is low and radiative cooling plays a negligible role. The ions in the plasma are significantly hotter than the electrons because our heating prescription typically injects only a fraction \(\delta_{e}\sim 0.2-0.4\) of the dissipative heating into the electrons (and a fraction \(\delta_{i}\sim 0.6-0.8\) into the ions). In the second stage (\(\dot{M}\gtrsim 10^{-6}\dot{M}_{\rm Edd}\)), radiative cooling of the electrons becomes efficient, leading to a drop in electron temperature but no other structural change (see also Chatterjee et al., 2023). In the third stage (\(\dot{M}\gtrsim 10^{-2}\dot{M}_{\rm Edd}\)), Coulomb collisions become efficient. This allows the ions
Figure 2: The SANE (upper panels) and MAD (lower panels) models at 3 different accretion rates. The left hemisphere illustrates the electron temperature (\(T_{e}\)) while the right hemisphere illustrates the density (\(\rho\)). The disk-jet boundary (\(b^{2}/\rho=1\)) is demarcated by a white line and the last scattering surface (\(\tau_{eq}=1\)) is demarcated by a magenta line. The inset in the left hemisphere gives the mass accretion rate and luminosity in Eddington units. See our YouTube playlist for a full animation of both figures that also includes the ion temperature. At very low accretion rates (left panels), the electron temperature is determined by the heating rate and adiabatic evolution of the electrons. At intermediate accretion rates (middle panels), the electrons cool efficiently but there is no noticeable change in the disk structure. At accretion rates of \(\dot{M}\gtrsim 10^{-2}\dot{M}_{\rm Edd}\) the torus collapses into a thin accretion disk (SANE) sandwiched by a hot plasma or forms a magnetically truncated accretion disk (MAD).
to cool by transferring their energy to the radiation-emitting electrons and, eventually, leads to a rapid collapse of the hot torus.
In our SANE model this collapse results in a geometrically thin accretion disk surrounded by hot, magnetic-pressure-supported gas outside of \(r\gtrsim 3\,r_{\rm g}\). Thus, the disk is only truncated very close to the black hole. The production of hot nonthermal electrons within the ISCO was predicted by Hankla et al. (2022). Interestingly, this hot coronal gas rather than the thin accretion disk seems to be responsible for the majority of \(\dot{M}\) (Fig. 3). This is similar to the puffy accretion disks first presented in other radiative GRMHD simulations (Lancova et al., 2019) and in pure MHD simulations of weakly to moderately magnetized disks (Jacquemin-Ide et al., 2021). On the other hand, in our MAD model the hot torus transitions into a two-phase medium with cold optically thick patches of gas surrounded by hot, optically thin, plasma. These cold patches of gas are visible for \(20\,r_{g}\lesssim r\lesssim 100\,r_{g}\) and do not reach the event horizon. Since this work was performed at a rather low resolution, we were forced to stop these simulations after the cold plasma became under-resolved. Follow-up simulations featuring a much higher resolution will be necessary to resolve how this cold and slender plasma evolves as we keep increasing the accretion rate.
Nevertheless, these findings diverge from the magnetically truncated accretion disk models detailed in Liska et al. (2022), where a slender disk was truncated at \(r\sim 20r_{g}\) and cold patches of gas reached the event horizon. The absence of this thin disk in our simulations may be attributed to the considerably higher saturation of magnetic flux within our torus, distinguishing it from the disk in Liska et al. 2022. This discrepancy could feasibly result in a significantly larger truncation radius. Consequently, if the truncation radius in our MAD model lies much further out, it is plausible that our simulation's duration is insufficient to capture the formation of a thin accretion disk. A larger truncation radius (assuming the cold clumps of gas were sufficiently resolved, which might not be true) might consequently also explain why no cold plasma reaches the event horizon. Namely, as proposed in Liska et al. (2022), magnetic reconnection can potentially evaporate the cold clumps of gas before they reach the event horizon. This is less likely to happen if the magnetic truncation radius moves further in and, hence, the cold clumps have less time to evaporate.
In Figure 4 we plot the time evolution of the bolometric luminosity (panels a and b), density scale height (panels c and d), and outflow efficiencies (panels e and f) as a function of the mass accretion rate. While the luminosity increases from \(L=10^{-15}L_{\rm Edd}\) to \(L=10^{-2}L_{\rm Edd}\), the radiative efficiency increases by \(3-5\) orders of magnitude. Similar to results presented in the radiative GRMHD simulations of Ryan et al. (2017) and Dexter et al. (2021), the MAD model is significantly more radiatively efficient, especially at low accretion rates. This is caused by more efficient Synchrotron cooling in the highly magnetized gas of a MAD. Around \(\dot{M}=5\times 10^{-3}\dot{M}_{\rm Edd}\) the SANE model collapses into a thin accretion disk, and we observe a rapid order-of-magnitude rise in the radiative efficiency to the NT73 (Novikov and Thorne, 1973) limit of \(\eta_{rad}\sim\eta_{NT}\sim 0.18\). Here \(\eta_{rad}=\langle\int\sqrt{-g}\,R^{r}_{t}\,d\theta d\varphi\rangle|_{r=5r_{g}}/\langle\dot{M}\rangle|_{r=5r_{g}}\) with \(R^{\mu}_{\nu}\) being the radiation stress-energy tensor. This collapse manifests itself as a rapid decrease in the density scale height of the disk (\(\frac{h}{r}=\langle|\theta-\pi/2|\rangle_{\rho}\) with \(\langle\cdot\rangle_{\rho}\) denoting a density-weighted average). On the other hand, in our MAD model the radiative efficiency asymptotes to \(\eta_{rad}\sim 1.2\eta_{NT}\). This has been observed in other radiative MADs (e.g. Curd and Narayan, 2023) and could, pending future analysis, potentially be explained by the presence of a dynamically important magnetic field that injects energy into the accreting gas, which is not accounted for in Novikov and Thorne (1973). In addition, there is only a marginal factor \(\sim 2\) decrease in the disk scale height after the formation of cold plasma, because magnetic pressure is able to stabilize the accretion disk against runaway thermal collapse (e.g. Sadowski, 2016; Jiang et al., 2019). The total wind-driven (\(\eta_{wind}=[\langle\int\sqrt{-g}\,T^{r}_{t}\,d\theta d\varphi\rangle|^{b^{2}/\rho<1}_{r=5r_{g}}-\langle\dot{M}\rangle|_{r=5r_{g}}]/\langle\dot{M}\rangle|_{r=5r_{g}}\), with \(T^{\mu}_{\nu}\) the stress-energy tensor and \(b^{2}/\rho=1\) the wind-jet boundary) and jet-driven (\(\eta_{jet}=\langle\int\sqrt{-g}\,T^{r}_{t}\,d\theta d\varphi\rangle|^{b^{2}/\rho>1}_{r=5r_{g}}/\langle\dot{M}\rangle|_{r=5r_{g}}\)) outflow efficiencies remain relatively constant throughout the evolution in our MAD model. However, in our SANE model, the increase
Figure 3: A contour plot of density with velocity streamlines in black and the last scattering surface in magenta for model XRB SANE. Similar to the puffy accretion disk models presented in Lančová et al. (2019), the majority of gas accretion seems to be driven by inflows outside of the disk’s midplane.
in the normalized magnetic flux causes the jet to become significantly more efficient over time (e.g. \(\eta_{jet}\propto\phi^{2}\)).
To better understand when, during an outburst, certain physical processes become important, we plot in Figure 5(c,d) the radiative cooling timescale (\(t_{Em}=\int\sqrt{-g}\,u_{e}\,d\theta d\varphi/\int\sqrt{-g}\,\Lambda_{Em}\,d\theta d\varphi\), with \(\Lambda_{Em}\) the radiative emission rate and \(u_{i,e}\) the ion/electron internal energy), the Compton timescale (\(t_{Compt}=\int\sqrt{-g}\,u_{e}\,d\theta d\varphi/\int\sqrt{-g}\,\Lambda_{Compt}\,d\theta d\varphi\), with \(\Lambda_{Compt}\) the Compton scattering emission rate), the Coulomb coupling timescale (\(t_{Coul}=\int\sqrt{-g}\,(u_{i}+u_{e})\,d\theta d\varphi/\int\sqrt{-g}\,\Lambda_{Coul}\,d\theta d\varphi\), with \(\Lambda_{Coul}\) the Coulomb coupling rate), and the accretion timescale (\(t_{Acc}=\int\sqrt{-g}\,\rho\,d\theta d\varphi/\int\sqrt{-g}\,\rho|u^{r}|\,d\theta d\varphi\)). \(\Lambda_{\rm Em}\), \(\Lambda_{\rm Compt}\), and \(\Lambda_{\rm Coul}\) are derived from the opacities given in McKinney et al. (2017) and the Coulomb coupling rate given in Sadowski et al. (2017). Evidently, the radiative and Compton cooling timescales become similar to the accretion timescale around \(\dot{M}\gtrsim 10^{-6.5}\dot{M}_{\rm Edd}\) and \(\dot{M}\gtrsim 10^{-4}\dot{M}_{\rm Edd}\), respectively. This manifests itself in Figure 5(a,b) as a decrease in the density-weighted electron temperature (\(T_{e}=\int\sqrt{-g}\,\rho T_{e}\,d\theta d\varphi/\int\sqrt{-g}\,\rho\,d\theta d\varphi\)). The ion temperature (\(T_{i}\), defined analogously) only drops when the Coulomb coupling timescale becomes comparable to the accretion timescale around \(\dot{M}\sim 5\times 10^{-3}\dot{M}_{\rm Edd}\). Meanwhile, the plasma in Figure 5(e,f) transitions from a quasi-relativistic adiabatic index \(\gamma\sim 1.5\) to a non-relativistic \(\gamma\sim 5/3\). Future work will need to test whether, even at accretion rates that are typically associated with radiatively inefficient accretion (\(\dot{M}\lesssim 10^{-7}\dot{M}_{\rm Edd}\)), electron cooling (see also Dibi et al., 2012; Yoon et al., 2020) and/or a self-consistent adiabatic index can change the spectral signatures compared to equivalent non-radiative single-temperature GRMHD models (e.g. Moscibrodzka et al., 2014).
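The shell-integrated timescales reconstructed above reduce to simple weighted ratios; the sketch below evaluates two of them on a toy \((\theta,\varphi)\) shell. All arrays and rate normalizations are placeholders, assumed only for illustration.

```python
import numpy as np

def shell_ratio(num_q, den_q, sqrtg, dA):
    # integral(sqrt(-g) * num) / integral(sqrt(-g) * den) over one shell
    return (sqrtg * num_q * dA).sum() / (sqrtg * den_q * dA).sum()

rng = np.random.default_rng(0)
shape = (192, 192)                               # (theta, phi) cells
sqrtg = np.ones(shape)                           # flat-metric placeholder
dA    = np.full(shape, (np.pi / 192) * (2 * np.pi / 192))
u_e = rng.uniform(0.5, 1.0, shape)               # electron internal energy
u_i = rng.uniform(1.0, 2.0, shape)               # ion internal energy
lam_em   = 0.01 * u_e                            # toy emission rate
lam_coul = 0.001 * (u_i + u_e)                   # toy Coulomb coupling rate

t_em   = shell_ratio(u_e, lam_em, sqrtg, dA)          # ~1e2 time units
t_coul = shell_ratio(u_i + u_e, lam_coul, sqrtg, dA)  # ~1e3 time units
print(t_em, t_coul)
```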
## 5 Discussion and Conclusion
In this article we addressed for the first time the transition from the quiescent to the hard intermediate state using radiative two-temperature GRMHD simulations. By rescaling the black hole mass accretion rate across 8 orders of magnitude, these simulations demonstrated that radiative cooling and Coulomb coupling become increasingly important and eventually lead to a transition to a two-phase medium. While the hot torus in SANE models transitions to a thin accretion disk surrounded by a sandwich-like corona, reminiscent
Figure 4: **Panels (a,b):** As the mass accretion rate rises, the luminosity increases from \(L\sim 10^{-15}-10^{-13}L_{EDD}\) to \(L\sim 10^{-2}L_{EDD}\). This is driven by both an increase in the mass accretion rate and radiative efficiency. **Panels (c,d):** The density scale height of the disk stays relatively constant until the accretion rate exceeds \(\dot{M}\sim 10^{-3}\dot{M}_{EDD}\), at which point the disk collapses. At this point Coulomb collisions are efficient and the ions can cool through the radiation-emitting electrons. **Panels (e,f):** The jet (blue), wind (black), radiative (purple) and NT73 (dashed green) efficiency as a function of \(\dot{M}\). Interestingly, the MAD model maintains a significantly higher radiative efficiency, presumably due to more efficient Synchrotron emission.
Figure 5: **Panels (a,b):** In both models the electron temperature \(T_{e}\) starts cooling for \(\dot{M}\gtrsim 10^{-6}\dot{M}_{EDD}\) due to more efficient radiative cooling. The ion temperature \(T_{i}\) is governed by the ability of the ions to transfer their energy to the radiation-emitting electrons and does not start cooling until \(\dot{M}\sim 10^{-3}\dot{M}_{EDD}\). **Panels (c,d)**: We compare the radiative emission (\(t_{Em}\)), Comptonization (\(t_{Compt}\)), and Coulomb collision (\(t_{Coul}\)) timescales against the accretion (\(t_{acc}\)) timescale to illustrate the importance of these processes at different accretion rates. **Panels (e,f)**: The adiabatic index for the electrons \(\gamma_{e}\) is relativistic while the adiabatic index for the ions \(\gamma_{i}\) is non-relativistic. This leads to a semi-relativistic gas with adiabatic index \(\gamma_{g}\sim 1.55\) at \(\dot{M}\lesssim 10^{-6}\dot{M}_{\rm Edd}\).
of a puffy (Lancova et al., 2019) or magnetically elevated (Begelman and Silk, 2017) disk, the MAD torus transitions to a magnetically truncated disk and forms cold clumps of gas embedded in a hot corona (see also Bambic et al., 2023). This is similar to previous work (Liska et al., 2019, 2022), but in this case an outer thin accretion disk is absent and the cold plasma does not reach the event horizon. While a thin accretion disk forms around \(\dot{M}\sim 5\times 10^{-3}\dot{M}_{\rm Edd}\) for our SANE model, which agrees well with Dexter et al. (2021), no cold plasma is observed in our MAD model before \(\dot{M}\gtrsim 1\times 10^{-2}\dot{M}_{\rm Edd}\). As described in the appendix, we find remarkably similar conclusions for two analogous models applicable to a \(M=6.5\times 10^{9}M_{\odot}\) AGN. Pending future ray-tracing analysis, we expect the MAD and SANE models to have vastly different spectral and time-variability signatures. We also plan to address the structure and dynamics of the thin and truncated disks as we keep increasing the density scale with a dedicated simulation campaign performed at a much higher resolution.
The goal of this article was to study the transition from the quiescent state to the hard intermediate state, both of which feature radio jets. Thus, we have not considered models that do not produce any jets, such as models with a purely toroidal field (e.g. Liska et al., 2022), and instead only considered a jetted SANE model with \(\phi\sim 5-15\) and a jetted MAD model with \(\phi\sim 40-55\). By rescaling both the gas density and magnetic energy density proportionally to the target mass accretion rate, this work implicitly assumes that the accretion-rate normalized magnetic flux (\(\phi\)) remains constant within a factor \(\sim 2\). Conventional thinking might suggest that since the hard-intermediate state is associated with the most powerful jets, and recent polarimetric Event Horizon Telescope observations (Event Horizon Telescope Collaboration et al., 2021) strongly imply that AGN accretion disks in the quiescent state are MAD, the hard-intermediate state would be MAD as well. However, the jet power is set by the total amount of magnetic flux threading the black hole (\(P_{\rm jet}\propto\Phi^{2}\)), and thus a SANE jet at a much higher accretion rate can easily outperform a MAD jet at a lower accretion rate. Thus, an interesting possibility to be explored in future work would be a model where the magnetic flux does not increase proportionally to \(\Phi\propto\sqrt{\dot{M}}\) but is truncated at a maximum value, \(\Phi\propto\min(\sqrt{\zeta}\Phi_{0},\Phi_{\max})\). This would cause the disk to transition from a MAD disk in the quiescent state to a SANE disk in the hard-intermediate state where, at least initially, the magnetic pressure is still dynamically important (e.g. Begelman and Silk, 2017; Dexter and Begelman, 2019; Lancova et al., 2019). In upcoming work, we will employ ray-tracing calculations to compare both our existing models and future models featuring a truncated magnetic flux against multi-wavelength observations, which offer constraints on the truncation radius and the size/geometry of coronal structures in actual astrophysical systems (e.g. Ingram and Done, 2011; Plant et al., 2014; Fabian et al., 2014; Garcia et al., 2015; Kara et al., 2019).
There are several theoretical and observational arguments that support this 'truncated flux' scenario. First, for systems to remain MAD during a 2-4 orders of magnitude increase in \(\dot{M}\), they would need to advect \(1-2\) orders of magnitude additional magnetic flux onto the BH (e.g. \(\Phi\propto\sqrt{\dot{M}}\) in a MAD). Especially when the outer disk becomes geometrically thin it is unclear if this is physically possible, since theoretical arguments suggest thin disks might not be able to advect magnetic flux loops (e.g. Lubow et al., 1994). Second, observations suggest that the disk truncation radius in the hard intermediate state appears (e.g. Reis et al., 2010; Kara et al., 2019) to be rather small (\(r_{t}\lesssim 5r_{g}\)). This is inconsistent with recent GRMHD simulations which demonstrated that even when the disk only contained a factor \(\sim 1.5\) of excess magnetic flux (above the MAD limit), this led to a truncation radius \(r_{t}\sim 20r_{g}\) (e.g. Liska et al., 2019, 2022). Third, low-frequency quasi-periodic oscillations, which are ubiquitous in the hard-intermediate state (e.g. Ingram and Motta, 2019), are most likely seeded by a precessing disk which tears off from a larger non-precessing disk (e.g. Stella and Vietri, 1998; Ingram et al., 2009, 2016; Musoke et al., 2022). This has been observed in radiative and non-radiative GRMHD simulations where a tilted thin accretion disk is threaded by a purely toroidal magnetic field (e.g. Liska et al., 2022; Musoke et al., 2022; Liska et al., 2023) and in similar GRMHD simulations where the accretion disk is threaded by a below-saturation-level vertical magnetic field (e.g. Liska et al., 2021). However, there are no numerical simulations that have shown any disk tearing or precession where the disk is saturated by vertical magnetic flux (e.g. Fragile et al., 2023). The main problem is that for a disk to tear (and precess), the warping of space-time needs to substantially exceed the viscous torques holding the disk together (e.g. Nixon and King, 2012; Nealon et al., 2015; Dogan et al., 2018; Dogan and Nixon, 2020; Raj et al., 2021). However, the viscous torques stemming from equipartition-strength magnetic fields within the truncation radius might be too strong for a disk to tear.
While our simulations incorporate the effects of radiation and thermal decoupling between ions and electrons, they still rely on a rather simplistic heating prescription for electrons extracted from particle-in-cell models (Rowan et al., 2017). Since, absent any Coulomb collisions, the cooling rate in a given magnetic field will be determined by the temperature and density of the radiation-emitting electrons, the radiative efficiency at lower accretion rates can become sensitive to the used heating prescription (e.g. Chael et al., 2018). For example, in our models roughly a fraction \(\delta_{e}\sim 20-40\%\) of the dissipation ends up in the electrons. If this electron heating fraction were smaller/larger, we would expect the radiative
efficiency to drop/rise and the collapse to a two-phase medium to occur later/earlier. Similarly, other microphysical effects typically not captured by the ideal MHD approximation, such as thermal conduction between the corona and disk (e.g. Meyer and Meyer-Hofmeister, 1994; Liu et al., 1999; Meyer-Hofmeister and Meyer, 2011; Cho and Narayan, 2022) or a non-unity magnetic Prandtl number (e.g. Balbus and Henri, 2008), could alter the transition rate to a two-phase medium.
In addition, it was recently demonstrated that the physics driving accretion in luminous systems (e.g. with \(L\gtrsim 0.01L_{\rm Edd}\)) whose disks are misaligned with the black hole spin axis is fundamentally different. Namely, dissipation of orbital energy is driven by nozzle shocks induced by strong warping (Kaaz et al., 2022; Liska et al., 2023) instead of magneto-rotational instability (MRI) driven turbulence (e.g. Balbus and Hawley, 1991, 1998). These nozzle shocks form perpendicular to the line of nodes, where the disk’s midplane intersects the equatorial plane of the black hole, and increase the radial speed of the gas by \(2-3\) orders of magnitude in luminous systems that are substantially misaligned. This could, at a given accretion rate, lead to a decrease in the disk’s density, potentially delaying the formation of a thin accretion disk. We expect to address outbursts of warped accretion disks in the coming years.
Numerically, this article has also introduced a method to study outbursts by artificially rescaling the density as a function of time. This solves the issue that the physical processes in the outer disk that drive such drastic fluctuations in the mass accretion rate occur over timescales that are too long to simulate (real outbursts typically take weeks to months, while our simulations last for \(t\sim 10-15s\)). Future applications of this method might include (i) ultra-luminous accretion disks, which decay from super-Eddington to sub-Eddington accretion rates; (ii) the transition from the hard-intermediate state to the high-soft state, where the magnetic flux threading the black hole drops while the accretion rate remains constant; and (iii) the transition from the high-soft state to the quiescent state, characterised by a gradual drop in the mass accretion rate.
## 6 Acknowledgements
We thank Sera Markoff, Sasha Tchekhovskoy, and Ramesh Narayan for insightful discussions. An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs under awards PHY129 and AST178. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. ML was supported by the John Harvard, ITC and NASA Hubble Fellowship Program fellowships. NK was supported by an NSF Graduate Research Fellowship. RE was supported by the NASA ATP grant numbers 21-ATP21-0077, NSF AST-1816420, and HST-GO-16173.001-A. KC was supported by the Black Hole Initiative (BHI) fellowship program. GM was supported by a Netherlands Research School for Astronomy (NOVA) Virtual Institute of Accretion (VIA) and the Canadian Institute for Theoretical Astrophysics (CITA) postdoctoral fellowships.
## Appendix
We present two radiative two-temperature GRMHD models (M87 SANE and M87 MAD) where we change the black hole mass from a typical BHXRB of \(M_{\rm BH}=10M_{\odot}\) to a large AGN such as M87 with \(M_{\rm BH}=6.5\times 10^{9}M_{\odot}\). Figures 6, 7, 8, 9, and 10 in the Appendix correspond to Figures 1, 2, 3, 4, and 5 in the main article. Interestingly, the evolution of our AGN models looks very similar to our BHXRB models. The most striking difference between BHXRBs and AGN is a slightly lower radiative efficiency at lower accretion rates, which can be explained by a weaker Synchrotron emission opacity coefficient (e.g. McKinney et al., 2017) and is reflected in a longer emission time \(t_{Em}\). In addition, after the plasma condenses into a two-phase medium, the temperature of the cold-phase gas in AGN (\(T_{e}\sim 10^{5}K\)) is much lower than in BHXRBs (\(T_{e}\sim 10^{7}K\)). This is a well-known fact in the analytic theory of radiatively efficient AGN accretion disks (e.g. Shakura and Sunyaev, 1973; Novikov and Thorne, 1973), which are less dense and hence more radiation pressure dominated than their BHXRB analogues.
Figure 6: Same as figure 1, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 7: Same as figure 2, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 8: Same as figure 3, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.
Figure 9: Same as figure 4, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN.

Figure 10: Same as figure 5, but for a \(M=6.5\times 10^{9}M_{\odot}\) AGN. | Black hole (BH) X-ray binaries cycle through different accretion luminosity states over the course of months to years. While variations in the BH mass accretion rate are generally recognized as the most important driver of state transitions, it is becoming increasingly evident that the role of magnetic fields is equally important. In this article we present, for the first time, radiative two-temperature (2T) general relativistic magnetohydrodynamics (GRMHD) simulations that follow a BH accretion disk from quiescence, at an accretion rate of $\dot{M} \sim 10^{-10} \dot{M}_{\rm Edd}$, up to $\dot{M} \sim 10^{-2}\dot{M}_{\rm Edd}$. This enormous parameter space in mass accretion rate is covered by artificially rescaling the gas density scale of the simulations. The two
2309.12091 | Scotogenic model from an extended electroweak symmetry | We argue that the higher weak isospin $SU(3)_L$ manifestly unifies dark
matter and normal matter in its isomultiplets for which dark matter carries a
conserved dark charge while normal matter does not. The resultant gauge
symmetry is given by $SU(3)_C\otimes SU(3)_L \otimes U(1)_X\otimes U(1)_G$,
where the first factor is the color group, while the rest defines a theory of
scotoelectroweak in which $X$ and $G$ determine electric charge
$Q=T_3-1/\sqrt{3}T_8+X$ and dark charge $D=-2/\sqrt{3}T_8+G$. This setup
provides both appropriate scotogenic neutrino masses and dark matter stability
as preserved by a residual dark parity $P_D=(-1)^D$. Interpretation of the dark
charge is further discussed, given that $SU(3)_L$ is broken at very high energy
scale. | Phung Van Dong, Duong Van Loi | 2023-09-21T14:03:04 | http://arxiv.org/abs/2309.12091v2 | # Scotoelectroweak theory
###### Abstract
We argue that the higher weak isospin \(SU(3)_{L}\) manifestly unifies dark matter and normal matter in its isomultiplets for which dark matter carries a conserved dark charge while normal matter does not. The resultant gauge symmetry is given by \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\), where the first factor is the color group, while the rest defines a theory of scotoelectroweak in which \(X\) and \(G\) determine electric charge \(Q=T_{3}-1/\sqrt{3}T_{8}+X\) and dark charge \(D=-2/\sqrt{3}T_{8}+G\). This setup provides both appropriate scotogenic neutrino masses and dark matter stability as preserved by a residual dark parity \(P_{D}=(-1)^{D}\). Interpretation of the dark charge is further discussed, given that \(SU(3)_{L}\) is broken at a very high energy scale.
## I Introduction
Neutrino mass [1; 2] and dark matter [3; 4] are among the most important questions in science that require new physics beyond the standard model. Additionally, the standard model cannot address the quantization of electric charge and the existence of just three fermion families, as observed in nature.
Among attempts to solve these issues, the model based on \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\) (called 3-3-1) gauge symmetry is well-motivated as it predicts the family number to be that of colors by anomaly cancellation [5; 6; 7; 8; 9]. Further, the charge quantization naturally arises in the 3-3-1 model for typical fermion contents [10; 11; 12; 13; 14]. The 3-3-1 model may supply small neutrino masses by implementing radiative and/or seesaw mechanisms [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] and dark matter stability by interpreting global/discrete symmetries [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. Recently, the 3-3-1 model may give a suitable solution to the \(W\)-mass anomaly [40].
In the 3-3-1 model, the baryon minus lepton number \(B-L\) generically neither commutes nor closes algebraically with \(SU(3)_{L}\). This enlarges the 3-3-1 group to a complete gauge symmetry \(SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{N}\) (called 3-3-1-1) in which the last factor \(N\) relates to \(B-L\) via a \(SU(3)_{L}\) charge and this setup reveals matter parity as a residual gauge symmetry [41; 42]. This matter parity stabilizes various dark matter candidates besides related phenomena as studied in [43; 44; 45; 46]. The 3-3-1-1 model typically supplies neutrino masses via canonical seesaw, as suppressed by heavy right-handed neutrinos that exist due to anomaly cancellation and gain large Majorana masses from \(N\)-charge breaking. However, it may alternatively generate neutrino masses via scotogenic mechanism due to the existence of matter parity [47; 48; 49; 50; 51]. The cosmological inflation, asymmetric matter production, new abelian \(N\)-charge breaking, and effect of kinetic mixing between two \(U(1)\) groups are extensively investigated in [52; 53; 54; 55; 56; 57] too.
The 3-3-1 symmetry has a property that unifies dark matter and normal matter in \(SU(3)_{L}\) multiplets and normally couples dark matter in pairs in interactions [41]. Above, \(B-L\) is realized in such a way that dark matter carries a wrong \(B-L\) number, opposite to that defined in the standard model for normal matter. Hence, dark matter is odd, governed by the matter parity. Since both dark matter and normal matter have \(B-L\) charge, this setup implies a strict coupling between the two kinds of matter through the \(B-L\) gauge portal. This work does not further examine such interacting effects of dark matter, especially under experimental detection [43; 44; 45; 46]. Instead, we propose a dark charge for dark matter, while normal matter has no dark charge, which has a nature completely different from \(B-L\) and relaxes such interactions. This interpretation of dark charge naturally supplies scotogenic neutrino mass and dark matter [58], because the mentioned canonical seesaw including its right-handed neutrinos manifestly disappears.
A global version for dark charge under consideration was first discussed in [32] in attempt to find a mechanism for dark matter stability in 3-3-1 model and further promoted in [41]. As electric charge \(Q\) is unified with weak isospin \(T_{i}\) (\(i=1,2,3\)) in electroweak theory \(SU(2)_{L}\otimes U(1)_{Y}\) for which \(Q=T_{3}+Y\), the present proposal combines both electric charge \(Q\) and dark charge \(D\) in a higher weak isospin \(T_{n}\) (\(n=1,2,3,\cdots,8\)) yielding \(SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\) for which \(Q=T_{3}+\beta T_{8}+X\) and \(D=\beta^{\prime}T_{8}+G\). Here the coefficients \(\beta,\beta^{\prime}\) determine the electric charge and dark charge of dark fields, respectively. This theory indeed unifies dark force and electroweak force in the same manner the electroweak theory does so for electromagnetic
force and weak force, thus it is called scotoelectroweak, where "scoto" means darkness.
The rest of this work is organized as follows. In Sec. II we propose the scotoelectroweak model. In Sec. III we examine scalar and gauge boson mass spectra. In Sec. IV we obtain the scheme of neutrino mass generation. In Sec. V we investigate dark matter observables. In Sec. VI we constrain the model and deliver a numerical investigation. In Sec. VII we give a realization of dark charge that the model refers to. Finally, we summarize our results and conclude this work in Sec. VIII.
## II Scotoelectroweak setup
In the standard model, the weak isospin \(SU(2)_{L}\) arranges left-handed fermions in isodoublets \((\nu_{aL},e_{aL})\sim 2\) and \((u_{aL},d_{aL})\sim 2\), while putting relevant right-handed fermions in isosinglets \(e_{aR}\sim 1\), \(u_{aR}\sim 1\), and \(d_{aR}\sim 1\), where \(a=1,2,3\) is a family index.
The standard model cannot explain nonzero neutrino masses and flavor mixing required by oscillation experiments. Additionally, it cannot explain the existence of dark matter which makes up most of the mass of galaxies and galaxy clusters.
We argue that both questions may be solved by the existence of dark fields, a new kind of particle, which are assumed to possess a conserved dark charge (\(D\)), normalized to unity for brevity, i.e. \(D=\pm 1\). The content of dark fields and the relevant dark symmetry are determined by enlarging the weak isospin \(SU(2)_{L}\) to a higher symmetry, \(SU(3)_{L}\).
The fundamental representations of \(SU(3)_{L}\) are decomposed as \(3=2\oplus 1\) and \(3^{*}=2^{*}\oplus 1\) under \(SU(2)_{L}\). Hence, enlarging known fermion isodoublets (\(2/2^{*}\)) implies dark fermion isosinglets (1's) lying at the bottom of \(3/3^{*}\), such as
\[\psi_{aL}=\begin{pmatrix}\nu_{aL}\\ e_{aL}\\ N_{aL}\end{pmatrix}\sim 3,\ \ \ \ Q_{\alpha L}=\begin{pmatrix}d_{\alpha L}\\ -u_{\alpha L}\\ D_{\alpha L}\end{pmatrix}\sim 3^{*},\ \ \ \ Q_{3L}=\begin{pmatrix}u_{3L}\\ d_{3L}\\ U_{3L}\end{pmatrix}\sim 3, \tag{1}\]
where \(\alpha=1,2\) is a family index as \(a=1,2,3\) is. Furthermore, the relevant right-handed partners transform as \(SU(3)_{L}\) singlets,
\[e_{aR}\sim 1,\ \ \ \ N_{aR}\sim 1,\ \ \ \ u_{aR}\sim 1,\ \ \ \ d_{aR}\sim 1,\ \ \ \ D_{\alpha R}\sim 1,\ \ \ \ U_{3R}\sim 1. \tag{2}\]
Above, the \([SU(3)_{L}]^{3}\) anomaly cancellation requires the third quark family (as well as those of leptons) transforming differently from the first two quark families [59; 60; 61; 62]. This
condition demands that the number of fermion families matches that of color. As stated, \(N_{a}\) and \(U_{3}\) have a dark charge \(D=1\), while \(D_{\alpha}\) possesses a dark charge \(D=-1\), as all collected in Tab. 1. It is noted that all normal fields carry no dark charge, i.e. \(D=0\).1 We further assume \(N_{a}\), \(D_{\alpha}\), and \(U_{3}\) possessing an electric charge \(Q=0\), \(-1/3\), and \(2/3\) respectively like those of the 3-3-1 model with right-handed neutrinos.2
Footnote 1: As in the standard model, the hypothetical right-handed neutrinos \(\nu_{aR}\) are gauge singlets having neither electric charge nor dark charge and are thus not imposed; whereas the other right-handed fermions must be present, as already included.
Footnote 2: Additionally, these dark leptons and quarks have the same \(B,L\) numbers as usual leptons and quarks, hence \(B\) and \(L\) are global charges commuting with \(SU(3)_{L}\) like those in the standard model, opposite to the original 3-3-1-1 model.
It is clear that \(Q=\text{diag}(0,-1,0)\) and \(D=\text{diag}(0,0,1)\) for lepton triplet \(\psi_{L}\) which both neither commute nor close algebraically with \(SU(3)_{L}\) charges. By symmetry principles, we obtain two new abelian charges \(X\) and \(G\) which complete the gauge symmetry,
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}, \tag{3}\]
called 3-3-1-1, where \(SU(3)_{C}\) is the color group, \(SU(3)_{L}\) is previously given, while \(X,G\) determine electric and dark charges, respectively,
\[Q=T_{3}-\frac{1}{\sqrt{3}}T_{8}+X,\hskip 14.226378ptD=-\frac{2}{\sqrt{3}}T_{8}+G, \tag{4}\]
where \(T_{n}\) (\(n=1,2,3,\cdots,8\)) is \(SU(3)_{L}\) charge.
The fermion representation content under the 3-3-1-1 symmetry is given by
\[\psi_{aL} \sim(1,3,-1/3,1/3),\hskip 14.226378ptQ_{\alpha L}\sim(3,3^{*},0,-1 /3),\hskip 14.226378ptQ_{3L}\sim(3,3,1/3,1/3), \tag{5}\] \[e_{aR} \sim(1,1,-1,0),\hskip 14.226378ptN_{aR}\sim(1,1,0,1),\hskip 14.226378ptu _{aR}\sim(3,1,2/3,0),\] (6) \[d_{aR} \sim(3,1,-1/3,0),\hskip 14.226378ptD_{\alpha R}\sim(3,1,-1/3,-1), \hskip 14.226378ptU_{3R}\sim(3,1,2/3,1). \tag{7}\]
[Table 1: Electric charge \(Q\), dark charge \(D\), and dark parity \(P_{D}\) of the model's particles.]
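As a quick cross-check of the charge embedding in Eq. (4), the sketch below evaluates \(Q\) and \(D\) on the lepton triplet with exact rational arithmetic; it is an illustrative verification only, with the generator normalizations rewritten to keep all entries rational.

```python
from fractions import Fraction as F

# Diagonal generators on a triplet, kept rational: T3 = diag(1,-1,0)/2 and
# sqrt(3)*T8 = diag(1,1,-2)/2, so (1/sqrt(3))*T8 = (1/3)*(sqrt(3)*T8).
T3   = [F(1, 2), F(-1, 2), F(0)]
T8s3 = [F(1, 2), F(1, 2), F(-1)]  # = sqrt(3) * T8

def Q(X):  # Q = T3 - (1/sqrt(3)) T8 + X, Eq. (4)
    return [t3 - F(1, 3) * t8 + X for t3, t8 in zip(T3, T8s3)]

def D(G):  # D = -(2/sqrt(3)) T8 + G, Eq. (4)
    return [-F(2, 3) * t8 + G for t8 in T8s3]

# Lepton triplet psi_L ~ (1, 3, -1/3, 1/3):
print(Q(F(-1, 3)))  # [0, -1, 0] -> electric charges of (nu, e, N)
print(D(F(1, 3)))   # [0, 0, 1]  -> only N carries dark charge
```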
All the anomalies vanish. Indeed, since the 3-3-1 model is well established, it is sufficient to verify those associated with \(U(1)_{G}\).
\[[SU(3)_{C}]^{2}U(1)_{G} \sim \sum_{\rm quarks}(G_{q_{L}}-G_{q_{R}}) \tag{8}\] \[= 2.3.(-1/3)+3.(1/3)-2.(-1)-1=0,\]
\[[SU(3)_{L}]^{2}U(1)_{G} \sim \sum_{\rm(anti)triplets}G_{F_{L}} \tag{9}\] \[= 3.(1/3)+2.3.(-1/3)+3.(1/3)=0,\]
\[[{\rm Gravity}]^{2}U(1)_{G} \sim \sum_{\rm fermions}(G_{f_{L}}-G_{f_{R}}) \tag{10}\] \[= 3.3.(1/3)+2.3.3.(-1/3)+3.3.(1/3)\] \[-3.1-2.3.(-1)-3.1=0,\]
\[[U(1)_{X}]^{2}U(1)_{G} = \sum_{\rm fermions}(X_{f_{L}}^{2}G_{f_{L}}-X_{f_{R}}^{2}G_{f_{R}}) \tag{11}\] \[= 3.3.(-1/3)^{2}.(1/3)+3.3.(1/3)^{2}(1/3)\] \[-2.3.(-1/3)^{2}.(-1)-3.(2/3)^{2}.(1)=0,\]
\[U(1)_{X}[U(1)_{G}]^{2} = \sum_{\rm fermions}(X_{f_{L}}G_{f_{L}}^{2}-X_{f_{R}}G_{f_{R}}^{2}) \tag{12}\] \[= 3.3.(-1/3).(1/3)^{2}+3.3.(1/3)(1/3)^{2}\] \[-2.3.(-1/3).(-1)^{2}-3.(2/3).(1)^{2}=0,\]
\[[U(1)_{G}]^{3} = \sum_{\rm fermions}(G_{f_{L}}^{3}-G_{f_{R}}^{3}) \tag{13}\] \[= 3.3.(1/3)^{3}+2.3.3.(-1/3)^{3}+3.3.(1/3)^{3}\] \[-3.(1)^{3}-2.3.(-1)^{3}-3.(1)^{3}=0.\]
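These sums are easy to automate. Below is a minimal sketch that recomputes several of the \(U(1)_{G}\) anomaly coefficients from the multiplet content of Eqs. (5)-(7), again with exact fractions; the data layout is an assumption of this example.

```python
from fractions import Fraction as F

# Left-handed multiplets: (color dim, SU(3)_L dim, X, G, number of families)
L = [(1, 3, F(-1, 3), F(1, 3), 3),    # psi_aL
     (3, 3, F(0),     F(-1, 3), 2),   # Q_alphaL
     (3, 3, F(1, 3),  F(1, 3), 1)]    # Q_3L
# Right-handed singlets: (color dim, X, G, count)
R = [(1, F(-1), F(0), 3), (1, F(0), F(1), 3),        # e_aR, N_aR
     (3, F(2, 3), F(0), 3), (3, F(-1, 3), F(0), 3),  # u_aR, d_aR
     (3, F(-1, 3), F(-1), 2), (3, F(2, 3), F(1), 1)] # D_aR, U_3R

def anomaly(px, pg):
    """Sum of X^px * G^pg over left-handed minus right-handed fermions."""
    return (sum(c * l * n * X**px * G**pg for c, l, X, G, n in L)
            - sum(c * n * X**px * G**pg for c, X, G, n in R))

# [Gravity]^2 U(1)_G, [U(1)_X]^2 U(1)_G, U(1)_X [U(1)_G]^2, [U(1)_G]^3
print(anomaly(0, 1), anomaly(2, 1), anomaly(1, 2), anomaly(0, 3))  # all 0
```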
The 3-3-1-1 symmetry breaking and mass generation are appropriately induced by
\[\eta = \begin{pmatrix}\eta_{1}^{0}\\ \eta_{2}^{-}\\ \eta_{3}^{0}\end{pmatrix}\sim(1,3,-1/3,1/3), \tag{14}\] \[\rho = \begin{pmatrix}\rho_{1}^{+}\\ \rho_{2}^{0}\\ \rho_{3}^{+}\end{pmatrix}\sim(1,3,2/3,1/3),\] (15) \[\chi = \begin{pmatrix}\chi_{1}^{0}\\ \chi_{2}^{-}\\ \chi_{3}^{0}\end{pmatrix}\sim(1,3,-1/3,-2/3),\] (16) \[\phi \sim (1,1,0,-2),\hskip 14.226378pt\xi\sim(1,1,0,1). \tag{17}\]
Here \(\phi\) couples to \(N_{R}N_{R}\), breaks \(U(1)_{G}\), and defines a dark parity. The fields \(\eta\), \(\rho\), and \(\chi\) couple a fermion (anti)triplet to the right-handed partners of its first, second, and third components, respectively, and break the 3-3-1 symmetry. The scalar \(\xi\), analogous to a field in [50], couples to \(\eta^{\dagger}\chi\) and \(\phi\), inducing neutrino mass. The dark charge of the scalars is included in Tab. 1 too. Note that dark scalars include \(\eta_{3}\), \(\rho_{3}\), \(\chi_{1,2}\), \(\xi\), and \(\phi\), which have \(D\neq 0\), whereas the rest of the fields, \(\eta_{1,2}\), \(\rho_{1,2}\), and \(\chi_{3}\), are normal scalars possessing \(D=0\).
Scalar fields develop vacuum expectation values (VEVs), such as
\[\langle\eta\rangle = \begin{pmatrix}\frac{u}{\sqrt{2}}\\ 0\\ 0\end{pmatrix},\quad\langle\rho\rangle=\begin{pmatrix}0\\ \frac{v}{\sqrt{2}}\\ 0\end{pmatrix},\quad\langle\chi\rangle=\begin{pmatrix}0\\ 0\\ \frac{w}{\sqrt{2}}\end{pmatrix},\quad\langle\phi\rangle=\frac{\Lambda}{\sqrt{2}},\quad\langle\xi\rangle=0. \tag{18}\]
The scheme of symmetry breaking is given by
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\] \[\downarrow\Lambda,w\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes P_{D}\] \[\downarrow u,v\] \[SU(3)_{C}\otimes U(1)_{Q}\otimes P_{D}\]
Here we assume \(\Lambda,w\gg u,v\) for consistency with the standard model. Besides the residual electric and color charges, the model conserves a residual dark parity,
\[P_{D}=(-1)^{D}=(-1)^{-\frac{2}{\sqrt{3}}T_{8}+G}. \tag{19}\]
Indeed, a residual charge resulting from \(SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\) breaking must take the form \(R=x_{n}T_{n}+yX+zG\). \(R\) must annihilate the vacua \(\langle\eta,\rho,\chi\rangle\), i.e. \(R\langle\eta,\rho,\chi\rangle=0\), leading to \(x_{1}=x_{2}=x_{4}=x_{5}=x_{6}=x_{7}=0\), \(x_{3}=y\), and \(x_{8}=-\frac{1}{\sqrt{3}}(y+2z)\). Substituting these \(x\)'s we get \(R=yQ+zD\), where \(Q,D\) are given as in (4). Obviously, \(Q\) and \(D\) commute, i.e. \([Q,D]=0\), implying that they are separated as two abelian subgroups. Additionally, \(Q\) annihilates the vacuum \(\langle\phi\rangle\), i.e. \(Q\langle\phi\rangle=0\), implying that \(Q\) is a final residual charge, conserved after breaking. For the remainder, \(D\) is broken by \(\langle\phi\rangle\), since \(D\langle\phi\rangle=-2\Lambda/\sqrt{2}\neq 0\). However, a residual symmetry of it, i.e. \(P_{D}=e^{i\omega D}\), may survive, i.e. \(P_{D}\langle\phi\rangle=\langle\phi\rangle\), or \(e^{i\omega(-2)}=1\), where \(\omega\) is a transformation parameter. This leads to \(\omega=k\pi\), for \(k\) integer. Hence, \(P_{D}=e^{ik\pi D}=(-1)^{kD}=\{1,(-1)^{D}\}\cong Z_{2}\), for which we redefine \(P_{D}=(-1)^{D}\) to be the dark parity as in (19). The dark parity (odd/even) of the particles is collected in Tab. 1 too. It is stressed that \(\eta_{3}^{0}\), \(\chi_{1}^{0}\), and \(\xi\) do not have a nonzero VEV due to dark parity conservation.
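The little algebra behind this residual-charge argument can be checked symbolically. The sketch below solves \(R\langle\eta,\rho,\chi\rangle=0\) on the diagonal generators and recovers \(x_{3}=y\), \(x_{8}=-(y+2z)/\sqrt{3}\), i.e. \(R=yQ+zD\); the code layout is just one convenient encoding of the charges listed in Eqs. (14)-(16).

```python
import sympy as sp

x3, x8, y, z = sp.symbols('x3 x8 y z')
T3 = sp.diag(1, -1, 0) / 2
T8 = sp.diag(1, 1, -2) / (2 * sp.sqrt(3))
# (X, G, index of the component that carries the VEV) for eta, rho, chi
scalars = {'eta': (sp.Rational(-1, 3), sp.Rational(1, 3), 0),
           'rho': (sp.Rational(2, 3),  sp.Rational(1, 3), 1),
           'chi': (sp.Rational(-1, 3), sp.Rational(-2, 3), 2)}

# Require R<S> = 0 on the VEV component of each multiplet
eqs = [x3 * T3[i, i] + x8 * T8[i, i] + y * X + z * G
       for (X, G, i) in scalars.values()]
print(sp.solve(eqs, (x3, x8)))  # {x3: y, x8: -sqrt(3)*(y + 2*z)/3}
```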
We now write the total Lagrangian of the model,
\[{\cal L}={\cal L}_{\rm kin}+{\cal L}_{\rm Yuk}-V. \tag{20}\]
The kinetic part takes the form,
\[{\cal L}_{\rm kin} = \sum_{F}\bar{F}i\gamma^{\mu}D_{\mu}F+\sum_{S}(D^{\mu}S)^{\dagger}( D_{\mu}S)-\frac{1}{4}\sum_{A}A_{\mu\nu}A^{\mu\nu}, \tag{21}\]
where \(F\), \(S\), and \(A\) denote fermion, scalar, and gauge-boson multiplets respectively, the covariant derivative \(D_{\mu}\) and field strength tensors \(A_{\mu\nu}\) are explicitly given by
\[D_{\mu} = \partial_{\mu}+ig_{s}t_{n}G_{n\mu}+igT_{n}A_{n\mu}+ig_{X}XB_{\mu} +ig_{G}GC_{\mu}, \tag{22}\] \[G_{n\mu\nu} = \partial_{\mu}G_{n\nu}-\partial_{\nu}G_{n\mu}-g_{s}f_{nmp}G_{m\mu }G_{p\nu},\] (23) \[A_{n\mu\nu} = \partial_{\mu}A_{n\nu}-\partial_{\nu}A_{n\mu}-gf_{nmp}A_{m\mu}A_{ \mu\nu},\] (24) \[B_{\mu\nu} = \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu},\ \ \ \ C_{\mu\nu}= \partial_{\mu}C_{\nu}-\partial_{\nu}C_{\mu}, \tag{25}\]
where \((g_{s},\ g,\ g_{X},\ g_{G})\), \((G_{n\mu},A_{n\mu},B_{\mu},C_{\mu})\), and \((t_{n},\ T_{n},\ X,\ G)\) indicate coupling constants, gauge bosons, and charges according to the 3-3-1-1 subgroups, respectively. Notice that all gauge bosons have \(D=0\), behaving as normal fields, except for \(X^{0},Y^{-}\), coupled to \(T_{4,5,6,7}\), which have \(D=-1\) and act as dark vectors; these are all listed in Tab. 1 too.
The Yukawa Lagrangian is easily obtained,
\[{\cal L}_{\rm Yuk} = h^{e}_{ab}\bar{\psi}_{aL}\rho e_{bR}+h^{N}_{ab}\bar{\psi}_{aL} \chi N_{bR}+\frac{1}{2}h^{\prime N}_{ab}\bar{N}^{c}_{aR}N_{bR}\phi \tag{26}\] \[+h^{d}_{\alpha a}\bar{Q}_{\alpha L}\eta^{*}d_{aR}+h^{u}_{\alpha a} \bar{Q}_{\alpha L}\rho^{*}u_{aR}+h^{D}_{\alpha\beta}\bar{Q}_{\alpha L}\chi^{*}D _{\beta R}\] \[+h^{u}_{3a}\bar{Q}_{3L}\eta u_{aR}+h^{d}_{3a}\bar{Q}_{3L}\rho d_{aR }+h^{U}_{33}\bar{Q}_{3L}\chi U_{3R}+H.c..\]
The scalar potential can be decomposed,
\[V=V(\rho,\chi,\eta,\phi)+V(\xi), \tag{27}\]
where the first part relates to a potential that induces breaking,
\[V(\rho,\chi,\eta,\phi) = \mu^{2}_{1}\rho^{\dagger}\rho+\mu^{2}_{2}\chi^{\dagger}\chi+\mu^{ 2}_{3}\eta^{\dagger}\eta+\lambda_{1}(\rho^{\dagger}\rho)^{2}+\lambda_{2}(\chi^ {\dagger}\chi)^{2}+\lambda_{3}(\eta^{\dagger}\eta)^{2} \tag{28}\] \[+\lambda_{4}(\rho^{\dagger}\rho)(\chi^{\dagger}\chi)+\lambda_{5}( \rho^{\dagger}\rho)(\eta^{\dagger}\eta)+\lambda_{6}(\chi^{\dagger}\chi)(\eta^{ \dagger}\eta)\] \[+\lambda_{7}(\rho^{\dagger}\chi)(\chi^{\dagger}\rho)+\lambda_{8}( \rho^{\dagger}\eta)(\eta^{\dagger}\rho)+\lambda_{9}(\chi^{\dagger}\eta)(\eta^{ \dagger}\chi)+(f\epsilon^{ijk}\eta_{i}\rho_{j}\chi_{k}+H.c.)\] \[+\mu^{2}\phi^{\dagger}\phi+\lambda(\phi^{\dagger}\phi)^{2}+ \lambda_{10}(\phi^{\dagger}\phi)(\rho^{\dagger}\rho)+\lambda_{11}(\phi^{ \dagger}\phi)(\chi^{\dagger}\chi)+\lambda_{12}(\phi^{\dagger}\phi)(\eta^{ \dagger}\eta),\]
while the last part relates to a dark sector that induces neutrino mass,
\[V(\xi) = \mu^{2}_{\xi}\xi^{\dagger}\xi+\lambda_{\xi}(\xi^{\dagger}\xi)^{2} +\lambda_{13}(\xi^{\dagger}\xi)(\rho^{\dagger}\rho)+\lambda_{14}(\xi^{\dagger }\xi)(\chi^{\dagger}\chi)+\lambda_{15}(\xi^{\dagger}\xi)(\eta^{\dagger}\eta) \tag{29}\] \[+\lambda_{16}(\xi^{\dagger}\xi)(\phi^{\dagger}\phi)+(f_{1}\phi\xi \xi+f_{2}\xi\eta^{\dagger}\chi+\lambda_{17}\phi^{*}\xi^{*}\eta^{\dagger}\chi+H. c.).\]
Above, \(h\)'s and \(\lambda\)'s are dimensionless, while \(\mu\)'s and \(f\)'s have a mass dimension. We can consider the parameters \(f\), \(f_{1,2}\), and \(\lambda_{17}\) to be real by absorbing their phases (if any) into appropriate scalar fields \(\eta\), \(\rho\), \(\chi\), \(\phi\), and \(\xi\). That said, the potential conserves CP. We also suppose that CP is not broken by vacua, i.e. the VEVs \(u\), \(v\), \(w\), and \(\Lambda\) are all real too. It is further noted that there is neither a mixing between a scalar (CP-even) and a pseudo-scalar (CP-odd), due to CP conservation, nor a mixing between a \(P_{D}\)-even field and a \(P_{D}\)-odd field, due to dark parity conservation.
## III Scalar and gauge boson masses
### Scalar mass spectrum
The potential \(V(\rho,\chi,\eta,\phi)\) has been explicitly examined in [43]. Let us summarize its result. First, expand the scalar fields around their VEVs,
\[\eta=\left(\begin{array}{c}\frac{u}{\sqrt{2}}\\ 0\\ 0\end{array}\right)+\left(\begin{array}{c}\frac{S_{1}+iA_{1}}{\sqrt{2}}\\ \eta_{2}^{-}\\ \frac{S_{1}^{\prime}+iA_{1}^{\prime}}{\sqrt{2}}\end{array}\right),\ \ \ \ \rho=\left(\begin{array}{c}0\\ \frac{v}{\sqrt{2}}\\ 0\end{array}\right)+\left(\begin{array}{c}\rho_{1}^{+}\\ \frac{S_{2}+iA_{2}}{\sqrt{2}}\\ \rho_{3}^{+}\end{array}\right), \tag{30}\]

\[\chi=\left(\begin{array}{c}0\\ 0\\ \frac{w}{\sqrt{2}}\end{array}\right)+\left(\begin{array}{c}\frac{S_{3}^{\prime}+iA_{3}^{\prime}}{\sqrt{2}}\\ \chi_{2}^{-}\\ \frac{S_{3}+iA_{3}}{\sqrt{2}}\end{array}\right),\ \ \ \ \phi=\frac{\Lambda}{\sqrt{2}}+\frac{S_{4}+iA_{4}}{\sqrt{2}}, \tag{31}\]
and notice that the following approximations "\(\simeq\)" are given up to \((u,v)/(-f,w,\Lambda)\) order. The usual Higgs field (\(H\)) and three new neutral scalars (\(H_{1,2,3}\)) are obtained by
\[H\simeq\frac{uS_{1}+vS_{2}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ H_{1} \simeq\frac{vS_{1}-uS_{2}}{\sqrt{u^{2}+v^{2}}}, \tag{32}\] \[H_{2}\simeq c_{\varphi}S_{3}-s_{\varphi}S_{4},\ \ \ \ H_{3} \simeq s_{\varphi}S_{3}+c_{\varphi}S_{4}, \tag{33}\]
with mixing angle \(t_{2\varphi}=\frac{\lambda_{11}w\Lambda}{\lambda\Lambda^{2}-\lambda_{2}w^{2}}\). The usual Higgs mass is appropriately achieved at the weak scale \(m_{H}\sim(u,v)\), while the new scalar masses are
\[m_{H_{1}}^{2}\simeq-\frac{fw}{\sqrt{2}}\left(\frac{u}{v}+\frac{ v}{u}\right), \tag{34}\] \[m_{H_{2,3}}^{2}\simeq\lambda_{2}w^{2}+\lambda\Lambda^{2}\mp \sqrt{(\lambda_{2}w^{2}-\lambda\Lambda^{2})^{2}+\lambda_{11}^{2}w^{2}\Lambda^ {2}}. \tag{35}\]
A massive pseudo-scalar with corresponding mass is identified as
\[\mathcal{A}=\frac{vwA_{1}+uwA_{2}+uvA_{3}}{\sqrt{u^{2}v^{2}+v^{2}w^{2}+u^{2}w^ {2}}},\ \ \ m_{\mathcal{A}}^{2}=-\frac{f}{\sqrt{2}}\left(\frac{vw}{u}+\frac{uw}{v}+ \frac{uv}{w}\right). \tag{36}\]
Two charged scalars are given by
\[H_{4}^{\pm}=\frac{v\chi_{2}^{\pm}+w\rho_{3}^{\pm}}{\sqrt{v^{2}+w^{2}}},\ \ \ \ H_{5}^{\pm}=\frac{v\eta_{2}^{\pm}+u\rho_{1}^{\pm}}{\sqrt{u^{2}+v^{2}}}, \tag{37}\]
with respective masses,
\[m_{H_{4}}^{2}=\left(\frac{\lambda_{7}}{2}-\frac{fu}{\sqrt{2}vw}\right)(v^{2}+ w^{2}),\ \ \ \ m_{H_{5}}^{2}=\left(\frac{\lambda_{8}}{2}-\frac{fw}{\sqrt{2}vu}\right)(v^{2}+ u^{2}). \tag{38}\]
A neutral complex scalar with corresponding mass is
\[H^{\prime 0}\equiv\frac{S^{\prime}+iA^{\prime}}{\sqrt{2}}=\frac{u\chi_{1}^{0*}+w \eta_{3}^{0}}{\sqrt{u^{2}+w^{2}}},\ \ \ \ m_{H^{\prime}}^{2}=\left(\frac{\lambda_{9}}{2}-\frac{fv}{\sqrt{2}uw}\right)(u^ {2}+w^{2}), \tag{39}\]
where the real \(S^{\prime}=(wS^{\prime}_{3}+uS^{\prime}_{1})/\sqrt{u^{2}+w^{2}}\) and imaginary \(A^{\prime}=(wA^{\prime}_{3}-uA^{\prime}_{1})/\sqrt{u^{2}+w^{2}}\) parts of \(H^{\prime}\) are degenerate with the same \(H^{\prime}\) mass.
Except for the usual Higgs mass, all new scalar masses are given at \((w,\Lambda,-f)\) scale. For the remaining fields, the massless Goldstone bosons of neutral gauge fields \(Z\), \(Z^{\prime}\), and \(Z^{\prime\prime}\) are identified as
\[G_{Z}=\frac{uA_{1}-vA_{2}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ G_{Z^{\prime}}=\frac{w(u^{2}+v^{2})A _{3}-uv(vA_{1}+uA_{2})}{\sqrt{(u^{2}+v^{2})(u^{2}v^{2}+v^{2}w^{2}+u^{2}w^{2})} },\ \ \ G_{Z^{\prime\prime}}=A_{4}, \tag{40}\]
while those of charged/complex gauge fields \(W^{\pm}\), \(Y^{\pm}\), and \(X^{0}\) take the form,
\[G_{W}^{\pm}=\frac{u\eta_{2}^{\pm}-v\rho_{1}^{\pm}}{\sqrt{u^{2}+v^{2}}},\ \ \ \ G_{Y}^{\pm}=\frac{w\chi_{2}^{\pm}-v\rho_{3}^{\pm}}{\sqrt{v^{2}+w^{2}}},\ \ \ \ G_{X}^{0}=\frac{w\chi_{1}^{0}-u\eta_{3}^{0*}}{\sqrt{u^{2}+w^{2}}}. \tag{41}\]
Because \(\langle\xi\rangle=0\), the potential \(V(\xi)\) does not affect the minimum conditions derived from \(V(\rho,\chi,\eta,\phi)\) as in [43]. In other words, \(u,v,w,\Lambda\) are uniquely given, assuming that \(\mu^{2}<0\), \(\mu_{1,2,3}^{2}<0\), \(\lambda>0\), \(\lambda_{1,2,3}>0\), and necessary conditions for \(\lambda_{4,5,\cdots,12}\). Additionally, conservations of dark parity and electric charge imply that the presence of \(\xi\), i.e. \(V(\xi)\), modifies only the mass spectrum of \(H^{\prime}\) and \(G_{X}\), or exactly \(S^{\prime}\) and \(A^{\prime}\), which includes
\[V \supset \frac{1}{2}\left(S^{\prime}\ \ S^{\prime}_{5}\right)\begin{pmatrix}m_{H ^{\prime}}^{2}&\left(\frac{f_{2}}{\sqrt{2}}+\frac{\lambda_{17}\Lambda}{2} \right)\sqrt{u^{2}+w^{2}}\\ \left(\frac{f_{2}}{\sqrt{2}}+\frac{\lambda_{17}\Lambda}{2}\right)\sqrt{u^{2}+ w^{2}}&m_{\xi}^{2}+\sqrt{2}f_{1}\Lambda\end{pmatrix}\begin{pmatrix}S^{\prime} \\ S^{\prime}_{5}\end{pmatrix} \tag{42}\] \[+\frac{1}{2}\left(A^{\prime}\ \ A^{\prime}_{5}\right)\begin{pmatrix}m_{H ^{\prime}}^{2}&\left(\frac{f_{2}}{\sqrt{2}}-\frac{\lambda_{17}\Lambda}{2} \right)\sqrt{u^{2}+w^{2}}\\ \left(\frac{f_{2}}{\sqrt{2}}-\frac{\lambda_{17}\Lambda}{2}\right)\sqrt{u^{2} +w^{2}}&m_{\xi}^{2}-\sqrt{2}f_{1}\Lambda\end{pmatrix}\begin{pmatrix}A^{\prime} \\ A^{\prime}_{5}\end{pmatrix},\]
where \(\xi\equiv(S^{\prime}_{5}+iA^{\prime}_{5})/\sqrt{2}\) and \(m_{\xi}^{2}\equiv\mu_{\xi}^{2}+\lambda_{13}v^{2}/2+\lambda_{14}w^{2}/2+ \lambda_{15}u^{2}/2+\lambda_{16}\Lambda^{2}/2\). Defining two mixing angles
\[t_{2\theta_{R}}=\frac{(\sqrt{2}f_{2}+\lambda_{17}\Lambda)\sqrt{u^{2}+w^{2}}}{m _{\xi}^{2}+\sqrt{2}f_{1}\Lambda-m_{H^{\prime}}^{2}},\ \ \ \ t_{2\theta_{I}}=\frac{(\sqrt{2}f_{2}-\lambda_{17}\Lambda)\sqrt{u^{2}+w^{2}}}{m _{\xi}^{2}-\sqrt{2}f_{1}\Lambda-m_{H^{\prime}}^{2}}, \tag{43}\]
we obtain physical fields
\[R_{1}=c_{\theta_{R}}S^{\prime}-s_{\theta_{R}}S^{\prime}_{5},\ \ \ \ R_{2}=s_{\theta_{R}}S^{\prime}+c_{\theta_{R}}S^{\prime}_{5}, \tag{44}\] \[I_{1}=c_{\theta_{I}}A^{\prime}-s_{\theta_{I}}A^{\prime}_{5},\ \ \ \ I_{2}=s_{\theta_{I}}A^{\prime}+c_{\theta_{I}}A^{\prime}_{5}, \tag{45}\]
with respective masses
\[m_{R_{1,2}}^{2} = \frac{1}{2}\left[m_{H^{\prime}}^{2}+m_{\xi}^{2}+\sqrt{2}f_{1}\Lambda\right. \tag{46}\] \[\left.\mp\sqrt{(m_{H^{\prime}}^{2}-m_{\xi}^{2}-\sqrt{2}f_{1} \Lambda)^{2}+(\sqrt{2}f_{2}+\lambda_{17}\Lambda)^{2}(u^{2}+w^{2})}\right],\] \[m_{I_{1,2}}^{2} = \frac{1}{2}\left[m_{H^{\prime}}^{2}+m_{\xi}^{2}-\sqrt{2}f_{1}\Lambda\right.\] (47) \[\left.\mp\sqrt{(m_{H^{\prime}}^{2}-m_{\xi}^{2}+\sqrt{2}f_{1} \Lambda)^{2}+(\sqrt{2}f_{2}-\lambda_{17}\Lambda)^{2}(u^{2}+w^{2})}\right].\]
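For orientation, the dark-scalar spectrum of Eqs. (43)-(47) can also be obtained by diagonalizing the two \(2\times 2\) mass matrices directly, as in the sketch below. All input values (masses squared in GeV\(^2\), dimensionful couplings in GeV) are purely illustrative assumptions, not fitted parameters of the model.

```python
import numpy as np

def dark_scalar_spectrum(mHp2, mxi2, f1, f2, lam17, u, w, Lam):
    """Masses of the CP-even (R_1, R_2) and CP-odd (I_1, I_2) dark scalars."""
    s = np.sqrt(u**2 + w**2)
    out = {}
    for tag, sgn in (('R', +1), ('I', -1)):
        diag2 = mxi2 + sgn * np.sqrt(2) * f1 * Lam      # xi-block entry
        off = (f2 / np.sqrt(2) + sgn * lam17 * Lam / 2) * s
        M2 = np.array([[mHp2, off], [off, diag2]])
        out[tag] = np.sqrt(np.linalg.eigvalsh(M2))      # ascending: (m1, m2)
    return out

print(dark_scalar_spectrum(mHp2=4e6, mxi2=9e6, f1=50.0, f2=40.0,
                           lam17=0.01, u=100.0, w=1e4, Lam=3e4))
```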
### Gauge boson mass spectrum
The gauge bosons obtain mass from \({\cal L}\supset\sum_{S}(D^{\mu}\langle S\rangle)^{\dagger}(D_{\mu}\langle S\rangle)\). Substituting the VEVs, we get physical non-Hermitian gauge bosons
\[W_{\mu}^{\pm}=\frac{A_{1\mu}\mp iA_{2\mu}}{\sqrt{2}},\ \ \ \ X_{\mu}^{0,0*}=\frac{A_{4\mu}\mp iA_{5\mu}}{\sqrt{2}},\ \ \ \ Y_{\mu}^{\mp}=\frac{A_{6\mu}\mp iA_{7\mu}}{\sqrt{2}}, \tag{48}\]
with respective masses,
\[m_{W}^{2}=\frac{g^{2}}{4}(u^{2}+v^{2}),\ \ \ \ m_{X}^{2}=\frac{g^{2}}{4}(u^{2}+w^{2}),\ \ \ \ m_{Y}^{2}=\frac{g^{2}}{4}(v^{2}+w^{2}). \tag{49}\]
\(W\) is identical to that of the standard model and \(u^{2}+v^{2}=(246\ {\rm GeV})^{2}\).
Neutral gauge bosons are identified as
\[A_{\mu}=s_{W}A_{3\mu}+c_{W}\left(-\frac{t_{W}}{\sqrt{3}}A_{8\mu} +\sqrt{1-\frac{t_{W}^{2}}{3}}B_{\mu}\right), \tag{50}\] \[Z_{\mu}=c_{W}A_{3\mu}-s_{W}\left(-\frac{t_{W}}{\sqrt{3}}A_{8\mu} +\sqrt{1-\frac{t_{W}^{2}}{3}}B_{\mu}\right),\] (51) \[{\cal Z}_{\mu}^{\prime}=\sqrt{1-\frac{t_{W}^{2}}{3}}A_{8\mu}+ \frac{t_{W}}{\sqrt{3}}B_{\mu}, \tag{52}\]
where \(s_{W}=e/g=\sqrt{3}t_{X}/\sqrt{3+4t_{X}^{2}}\), with \(t_{X}=g_{X}/g\), is the sine of the Weinberg angle. The photon \(A_{\mu}\) is massless and decoupled. The \(Z\) boson, which is identical to that of the standard model, is significantly lighter than the \({\cal Z}^{\prime}\) boson of the 3-3-1 model and the \(C\) boson of \(U(1)_{G}\). Although \(Z\) mixes with \({\cal Z}^{\prime}\) and \(C\), at order \((u,v)/(w,\Lambda)\) the field \(Z\) decouples as a physical field possessing a mass,
\[m_{Z}^{2}\simeq\frac{g^{2}}{4c_{W}^{2}}(u^{2}+v^{2}). \tag{53}\]
There remains a mixing between \({\cal Z}^{\prime}\) and \(C\), yielding physical fields by diagonalization,
\[Z^{\prime}=c_{\theta}{\cal Z}^{\prime}-s_{\theta}C,\ \ \ \ Z^{\prime\prime}=s_{ \theta}{\cal Z}^{\prime}+c_{\theta}C, \tag{54}\]
with mixing angle and respective masses,
\[t_{2\theta} = \frac{4\sqrt{3+t_{X}^{2}}t_{G}w^{2}}{4t_{G}^{2}(w^{2}+9\Lambda^{2 })-(3+t_{X}^{2})w^{2}}, \tag{55}\] \[m_{Z^{\prime},Z^{\prime\prime}}^{2} = \frac{g^{2}}{18}\left\{4t_{G}^{2}(w^{2}+9\Lambda^{2})+(3+t_{X}^{2 })w^{2}\right.\] (56) \[\left.\mp\sqrt{[4t_{G}^{2}(w^{2}+9\Lambda^{2})-(3+t_{X}^{2})w^{2 }]^{2}+16(3+t_{X}^{2})t_{G}^{2}w^{4}}\right\},\]
where \(t_{G}=g_{G}/g\).
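The following sketch is an added illustration rather than part of the original text: it evaluates Eqs. (55)-(56) at the benchmark points used later in Sec. VI D (whose inputs \(s_{W}^{2}=0.231\), \(t_{G}=1\), \(g=0.652\) are assumed here), reproducing the \(Z^{\prime},Z^{\prime\prime}\) masses quoted there:

```python
# Evaluating the Z'-Z'' mixing angle and masses, Eqs. (55)-(56), at two
# benchmark points; numerical inputs follow Sec. VI D (assumed here).
import numpy as np

sW2, tG, g = 0.231, 1.0, 0.652
tX2 = 3*sW2/(3 - 4*sW2)          # from s_W = sqrt(3) t_X / sqrt(3 + 4 t_X^2)

def z_spectrum(w, Lam):
    A = 4*tG**2*(w**2 + 9*Lam**2)
    B = (3 + tX2)*w**2
    root = np.sqrt((A - B)**2 + 16*(3 + tX2)*tG**2*w**4)
    mZp = np.sqrt(g**2/18*(A + B - root))                 # Eq. (56), lighter state
    mZpp = np.sqrt(g**2/18*(A + B + root))                # Eq. (56), heavier state
    theta = 0.5*np.arctan2(4*np.sqrt(3 + tX2)*tG*w**2, A - B)   # Eq. (55)
    return mZp, mZpp, theta

for w, Lam in [(5.0, 4.5), (9.0, 3.0)]:   # TeV
    mZp, mZpp, th = z_spectrum(w, Lam)
    print(f"(w, Lam) = ({w}, {Lam}) TeV -> mZ' = {mZp:.2f}, mZ'' = {mZpp:.2f} TeV")
```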
The above result is similar to that in [43], since the scalar multiplets have a dark charge value equal to that for \(B-L\). The difference appears explicitly in the couplings of \(Z^{\prime},Z^{\prime\prime}\) with matter fields, because the normal fermions carry \(B-L\) but have no dark charge. For comparison and further usage, we compute in Tab. 2 the couplings of \(Z^{\prime}\) with fermions; those for \(Z^{\prime\prime}\) can be obtained from \(Z^{\prime}\) by replacing \(c_{\theta}\to s_{\theta}\) and \(s_{\theta}\to-c_{\theta}\).
\begin{table}
\begin{tabular}{c c c} \hline \hline \(f\) & \(g_{V}^{Z^{\prime}}(f)\) & \(g_{A}^{Z^{\prime}}(f)\) \\ \hline \(\nu_{a}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(e_{a}\) & \(\frac{c_{\theta}(1-4s_{W}^{2})}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{ W}t_{G}\) & \(\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(N_{a}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}-\frac{4}{3}s_{\theta}c_{W}t_{G}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}+\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \(u_{\alpha}\) & \(-\frac{c_{\theta}(3-8s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(u_{3}\) & \(\frac{c_{\theta}(3+2s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(d_{\alpha}\) & \(-\frac{c_{\theta}(3-2s_{W}^{2})}{6\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}+\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(d_{3}\) & \(\frac{c_{\theta}\sqrt{3-4s_{W}^{2}}}{6}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) & \(\frac{c_{\theta}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{3}s_{\theta}c_{W}t_{G}\) \\ \(U\) & \(-\frac{c_{\theta}(3-7s_{W}^{2})}{3\sqrt{3-4s_{W}^{2}}}-\frac{4}{3}s_{\theta}c_ {W}t_{G}\) & \(-\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}+\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \(D_{\alpha}\) & \(\frac{c_{\theta}(3-5s_{W}^{2})}{3\sqrt{3-4s_{W}^{2}}}+\frac{4}{3}s_{\theta}c_ {W}t_{G}\) & \(\frac{c_{\theta}c_{W}^{2}}{\sqrt{3-4s_{W}^{2}}}-\frac{2}{3}s_{\theta}c_{W}t_{G}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Couplings of \(Z^{\prime}\) with fermions; the \(Z^{\prime\prime}\)–fermion couplings are derived from this table by the replacement \(c_{\theta}\to s_{\theta}\) and \(s_{\theta}\to-c_{\theta}\).
## IV Neutrino mass
In the 3-3-1-1 model obtained by gauging \(B-L\), right-handed neutrinos are required for anomaly cancellation. Consequently, neutrinos obtain a small mass via the canonical seesaw mechanism, suppressed by the large right-handed neutrino mass scale related to \(B-L\) breaking. In this kind of model, ordinary lepton doublets may couple to a scalar and to fermions that are both odd under the matter parity, revealing an interesting possibility for scotogenic neutrino mass generation as an alternative to the canonical seesaw [47; 48; 49; 50; 51]. The issue raised is how to suppress the canonical seesaw, since the \(B-L\) breaking scale is not necessarily large in the latter case. Most studies have chosen the \(B-L\) charges of the right-handed neutrinos to be \(-4,-4,+5\), which avoids their coupling to the usual leptons and Higgs boson. But one must then introduce two scalar singlets coupled to these right-handed neutrinos in order to make them appropriately heavy, resulting in a complicated \(U(1)_{N}\) Higgs sector with two undesirable pseudo Nambu–Goldstone bosons. Additionally, the fermions odd under the matter parity that are responsible for the mentioned scotogenic setup are not required on theoretical grounds, unlike the unwanted \(\nu_{aR}\). The present 3-3-1-1 model obtained by gauging dark charge properly overcomes these issues. Indeed, \(\nu_{aR}\) are not required by dark charge anomaly cancellation, so the canonical seesaw disappears. Additionally, \(N_{aR}\) must be present for dark charge anomaly cancellation; they are odd under dark parity and coupled to the usual leptons via a scalar triplet. We introduce only an extra scalar singlet \(\xi\) that necessarily splits the relevant \(H^{\prime}\) (i.e. \(S^{\prime},A^{\prime}\)) masses, yielding a neutrino mass generation scheme more economical than those of previous studies.
First note that the charged leptons and all (usual and exotic) quarks gain appropriate masses from the Yukawa Lagrangian, as in the 3-3-1 model. The neutral fermions obtain a mass matrix of the form,
\[\mathcal{L}_{\rm Yuk}\supset-\frac{1}{2}\left(\bar{N}_{aL}\ \ \bar{N}_{aR}^{c} \right)\begin{pmatrix}0&m_{ab}^{D}\\ m_{ba}^{D}&m_{ab}^{R}\end{pmatrix}\begin{pmatrix}N_{bL}^{c}\\ N_{bR}\end{pmatrix}+H.c., \tag{57}\]
where \(m^{D}=-h^{N}w/\sqrt{2}\) and \(m^{R}=-h^{\prime N}\Lambda/\sqrt{2}\) are Dirac and (right-handed) Majorana masses for \(N\), respectively. We can diagonalize the generic mass matrix, yielding
\[\mathcal{L}_{\rm Yuk}\supset-\frac{1}{2}\bar{N}_{k}^{c}M_{k}N_{k}, \tag{58}\]
for \(k=1,2,\cdots,6\), where \((N_{aL}^{c},N_{aR})=(U_{ak},V_{ak})N_{k}\) relates the gauge states to mass eigen
states \(N_{k}\) with mass eigenvalues \(M_{k}\).
What concerns us here is the neutrino mass generation Lagrangian, which is collected from the Yukawa interactions and the scalar potential, namely
\[\mathcal{L} \supset \frac{uh_{ab}^{N}V_{bk}}{\sqrt{2}\sqrt{u^{2}+w^{2}}}\bar{\nu}_{aL}( c_{\theta_{R}}R_{1}+s_{\theta_{R}}R_{2}-ic_{\theta_{I}}I_{1}-is_{\theta_{I}}I_{2})N_ {k} \tag{59}\] \[+\frac{wh_{ab}^{N}V_{bk}}{\sqrt{u^{2}+w^{2}}}\bar{\nu}_{aL}G_{X}^{ 0}N_{k}-\frac{1}{2}M_{k}N_{k}^{2}+H.c.\] \[-\frac{1}{2}m_{R_{1}}^{2}R_{1}^{2}-\frac{1}{2}m_{R_{2}}^{2}R_{2}^ {2}-\frac{1}{2}m_{I_{1}}^{2}I_{1}^{2}-\frac{1}{2}m_{I_{2}}^{2}I_{2}^{2},\]
where we have used \(\chi_{1}^{0}=(uH^{0*}+wG_{X}^{0})/\sqrt{u^{2}+w^{2}}=[u(c_{\theta_{R}}R_{1}+s_{\theta_{R}}R_{2}-ic_{\theta_{I}}I_{1}-is_{\theta_{I}}I_{2})/\sqrt{2}+wG_{X}^{0}]/\sqrt{u^{2}+w^{2}}\) and \(N_{bR}=V_{bk}N_{k}\). The Feynman diagram for neutrino mass generation is depicted in Fig. 1, in both the flavor basis (left panel) and the mass eigenbasis (right panel).
Neutrino mass is induced in the form \(\mathcal{L}\supset-\frac{1}{2}\bar{\nu}_{aL}(m_{\nu})_{ab}\nu_{bL}^{c}+H.c.\), in which
\[(m_{\nu})_{ab} = \frac{u^{2}}{u^{2}+w^{2}}\frac{(h^{N}V)_{ak}(h^{N}V)_{bk}M_{k}}{3 2\pi^{2}} \tag{60}\] \[\times\left(\frac{c_{\theta_{R}}^{2}m_{R_{1}}^{2}\ln\frac{M_{k}^ {2}}{m_{R_{1}}^{2}}}{M_{k}^{2}-m_{R_{1}}^{2}}-\frac{c_{\theta_{I}}^{2}m_{I_{1} }^{2}\ln\frac{M_{k}^{2}}{m_{I_{1}}^{2}}}{M_{k}^{2}-m_{I_{1}}^{2}}+\frac{s_{ \theta_{R}}^{2}m_{R_{2}}^{2}\ln\frac{M_{k}^{2}}{m_{R_{2}}^{2}}}{M_{k}^{2}-m_{ R_{2}}^{2}}-\frac{s_{\theta_{I}}^{2}m_{I_{2}}^{2}\ln\frac{M_{k}^{2}}{m_{I_{2}}^{2}}}{M _{k}^{2}-m_{I_{2}}^{2}}\right).\]
Remarks are in order:
1. The divergent one-loop contributions corresponding to \(R_{1,2}\) and \(I_{1,2}\) are cancelled out due to \(c_{\theta_{R}}^{2}-c_{\theta_{I}}^{2}+s_{\theta_{R}}^{2}-s_{\theta_{I}}^{2}=0\).
Figure 1: Neutrino mass generation in the scotoelectroweak theory, where left and right diagrams are given in flavor and mass eigenbases, respectively.
2. For a gauge realization of the matter parity, the inert scalar doublet \((\chi_{1},\chi_{2})\) may be approximated as the Goldstone modes of a gauge vector doublet \((X,Y)\), i.e. \((\chi_{1},\chi_{2})\sim(G_{X},G_{Y})\). Both \(G_{X}\) and \(X\) do not contribute to the neutrino mass since they possess a degenerate mass between particle and antiparticle, in contrast to their global versions [58; 63].
3. Contributing to the neutrino mass is the scalar singlet \(\eta_{3}\) that mixes with \(\chi_{1}\); the contribution is thus suppressed by \((u/w)^{2}\sim 10^{-3}\) besides the usual loop factor \((1/32\pi^{2})\sim 10^{-3}\), and it further involves the intermediate scalar singlet \(\xi\) that connects to \(\eta_{3}\), the singlet mass splittings \(\Delta m^{2}/m^{2}\sim f_{1}/\Lambda\sim f_{2}\lambda_{17}/\Lambda\), as well as the Majorana masses \(M_{k}\sim\Lambda\) for \(N_{k}\), all governed by the dark charge breaking field \(\langle\phi\rangle\sim\Lambda\). This translates to \[m_{\nu}\sim\left(\frac{h^{N}}{10^{-2}}\right)^{2}\times\left(\frac{f_{1},f_{2} \lambda_{17}}{\text{GeV}}\right)\times 0.1\ \text{eV},\] (61) which is appropriate to experiment, given that \(h^{N}\sim 10^{-2}\), while the soft couplings \(f_{1,2}\sim 1\) GeV are not necessarily small, in contrast to [50]. This is due to a double suppression by the ratio between the weak and new physics scales, \((u/w)^{2}\). A numerical illustration of the loop formula (60) is sketched below.
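As flagged above, here is a minimal numerical sketch of Eq. (60), with an entirely assumed spectrum; it also illustrates remark 1, since shifting the loop function by a constant (mimicking the divergent part) leaves the bracket unchanged:

```python
# Sketch of the loop sum in Eq. (60) for an assumed (illustrative) spectrum;
# the coefficients c_R^2 - c_I^2 + s_R^2 - s_I^2 sum to zero, so a constant
# shift of the loop function (the would-be divergence) cancels in the bracket.
import numpy as np

def F(m2, M2, shift=0.0):
    """m^2 ln(M^2/m^2)/(M^2 - m^2) plus an optional constant 'divergent' shift."""
    return m2*np.log(M2/m2)/(M2 - m2) + shift

M2 = 2.0**2                                   # M_k^2 (TeV^2), assumed
mR1, mR2, mI1, mI2 = 1.1, 3.1, 1.3, 2.9       # scalar masses (TeV), assumed
thR, thI = 0.30, 0.25                         # mixing angles, assumed

cR2, sR2 = np.cos(thR)**2, np.sin(thR)**2
cI2, sI2 = np.cos(thI)**2, np.sin(thI)**2

def bracket(shift):
    return (cR2*F(mR1**2, M2, shift) - cI2*F(mI1**2, M2, shift)
            + sR2*F(mR2**2, M2, shift) - sI2*F(mI2**2, M2, shift))

print(bracket(0.0), bracket(100.0))           # identical: divergences cancel
print(cR2 - cI2 + sR2 - sI2)                  # = 0, the cancellation condition
```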
## V Dark matter
Contributing to the scotogenic neutrino masses are two kinds of dark fields, the dark scalars \(R_{1,2},I_{1,2}\) and the dark fermions \(N_{1,2,\cdots,6}\). In contrast to the 3-3-1-1 model obtained by gauging \(B-L\), the dark scalars in the present model are separated in mass, \(m_{R_{1}}\neq m_{I_{1}}\) and \(m_{R_{2}}\neq m_{I_{2}}\). This presents interesting coannihilation phenomena between \(R_{1}\) and \(I_{1}\), as well as between \(R_{2}\) and \(I_{2}\), that set the relic density if either of them is interpreted as dark matter. Additionally, the dark scalar mass splitting would avoid dangerous scattering processes of \(R_{1}/I_{1}\) or \(R_{2}/I_{2}\) with nuclei in direct detection experiments mediated by \(Z,Z^{\prime},Z^{\prime\prime}\). The phenomenology of the dark scalar candidates is quite analogous to that studied in the 3-3-1 model with inert multiplets [34; 37; 38], which we will skip. In what follows we assume that the dark fermions contain the dark matter; namely, the dark matter candidate is assigned to be \(N_{1}\), which has a mass smaller than the other \(N\)'s, the dark scalars, and the dark vectors. Therefore, this \(N_{1}\) is absolutely stabilized by dark parity conservation.
A distinct feature between the 3-3-1-1 model by gauging \(B-L\) and the 3-3-1-1 model by gauging dark charge is that \(N_{1}\) in the former has \(B-L=0\), while \(N_{1}\) in the latter has
\(D=1\neq 0\). Therefore, in the present model \(N_{1}=U_{a1}^{*}N_{aL}^{c}+V_{a1}^{*}N_{aR}\) has both (left and right) chiral couplings to \(Z^{\prime},Z^{\prime\prime}\), such as
\[{\cal L} \supset -\left[\left(\frac{gc_{W}c_{\theta}}{\sqrt{3-4s_{W}^{2}}}+\frac{g _{G}s_{\theta}}{3}\right)U_{a1}^{*}U_{a1}-g_{G}s_{\theta}V_{a1}^{*}V_{a1} \right]\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime}_{\mu} \tag{62}\] \[-\left[\left(\frac{gc_{W}s_{\theta}}{\sqrt{3-4s_{W}^{2}}}-\frac{g _{G}c_{\theta}}{3}\right)U_{a1}^{*}U_{a1}+g_{G}c_{\theta}V_{a1}^{*}V_{a1} \right]\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime\prime}_{\mu},\]
where the terms with \(V_{a1}\) (precisely, of \(N_{aR}\)) exist only in the present model, which sets the neutrino mass above. Especially, we will examine the effect of \(N_{aR}\) by assuming \(||V_{a1}||\gg||U_{a1}||\), i.e. the dark matter \(N_{1}\simeq V_{a1}^{*}N_{aR}\) to be mostly right-handed. Combined with the unitarity condition, we have \(V_{a1}^{*}V_{a1}=1-U_{a1}^{*}U_{a1}\simeq 1\) while \(U_{a1}^{*}U_{a1}\simeq 0\). Eq. (62) becomes
\[{\cal L}\supset g_{G}s_{\theta}\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime}_{\mu}-g_ {G}c_{\theta}\bar{N}_{1}\gamma^{\mu}N_{1}Z^{\prime\prime}_{\mu}. \tag{63}\]
In the early universe, \(N_{1}\) annihilates to the usual fields via the \(Z^{\prime},Z^{\prime\prime}\) portals, as in Fig. 2, which set the relic density. Here the \(Z^{\prime},Z^{\prime\prime}\) couplings with the usual fermions (\(f=\nu,e,u,d\)) can be found in Tab. 2. It is stressed that there are no \(t\)-channel annihilations exchanged by the \(X,Y\) dark vectors, in contrast to [41]. Additionally, the Higgs portal interactions of \(N_{1}\) with normal matter are small and suppressed.
The dark matter annihilation cross-section is computed as
\[\langle\sigma v\rangle_{N_{1}}=\frac{g^{4}m_{N_{1}}^{2}}{16\pi c_{W}^{4}}\sum _{f,x,y}\frac{g_{V}^{x}(N_{1})g_{V}^{y}(N_{1})N_{C}(f)[g_{V}^{x}(f)g_{V}^{y}(f )+g_{A}^{x}(f)g_{A}^{y}(f)]}{(4m_{N_{1}}^{2}-m_{x}^{2})(4m_{N_{1}}^{2}-m_{y}^{ 2})}, \tag{64}\]
where \(x,y=Z^{\prime},Z^{\prime\prime}\), \(N_{C}(f)\) refers to the color number of \(f\), and \(g_{V}^{Z^{\prime}}(N_{1})=-s_{\theta}c_{W}t_{G}\) and \(g_{V}^{Z^{\prime\prime}}(N_{1})=c_{\theta}c_{W}t_{G}\) are given in the mass basis of \(N\), as mentioned. Further, the dark matter relic density can be approximated as \(\Omega_{N_{1}}h^{2}\simeq 0.1\) pb\(/\langle\sigma v\rangle_{N_{1}}\simeq 0.12\), where the last value is given by experiment [64].
Figure 2: Fermion dark matter annihilation to normal matter.
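To make Eq. (64) concrete, the sketch below is an added, heavily truncated illustration (not the computation behind Fig. 4): it keeps only the \(\nu\) and \(e\) channels of Tab. 2, so it underestimates \(\langle\sigma v\rangle\), and the spectrum values are assumptions taken from the \((w,\Lambda)=(5,4.5)\) TeV benchmark of Sec. VI D:

```python
# Truncated evaluation of Eq. (64), nu and e channels only (the full sum runs
# over all fermions); 1 TeV^-2 = 389.4 pb. All benchmark inputs are assumed.
import numpy as np

sW2, tG, g = 0.231, 1.0, 0.652
cW, c2W, r = np.sqrt(1 - sW2), 1 - 2*sW2, np.sqrt(3 - 4*sW2)

def lepton_couplings(th):
    """{f: (g_V, g_A, N_C)} of Z' from Tab. 2; Z'' follows from th -> th - pi/2,
    which realizes c_th -> s_th and s_th -> -c_th."""
    c, s = np.cos(th), np.sin(th)
    g_nu = c*c2W/(2*r) - s*cW*tG/3
    return {"nu": (g_nu, g_nu, 1.0),
            "e": (c*(1 - 4*sW2)/(2*r) - s*cW*tG/3, c/(2*r) - s*cW*tG/3, 1.0)}

def sigma_v(mN, mZp, mZpp, th):
    gVN = {"Zp": -np.sin(th)*cW*tG, "Zpp": np.cos(th)*cW*tG}   # g_V^x(N_1)
    tab = {"Zp": lepton_couplings(th), "Zpp": lepton_couplings(th - np.pi/2)}
    m2 = {"Zp": mZp**2, "Zpp": mZpp**2}
    total = 0.0
    for x in m2:
        for y in m2:
            chan = 0.0
            for f in tab[x]:
                gVx, gAx, NC = tab[x][f]
                gVy, gAy, _ = tab[y][f]
                chan += NC*(gVx*gVy + gAx*gAy)
            total += gVN[x]*gVN[y]*chan/((4*mN**2 - m2[x])*(4*mN**2 - m2[y]))
    return g**4*mN**2/(16*np.pi*cW**4)*total                   # in TeV^-2

mZp, mZpp, th = 1.85, 6.30, 0.12   # spectrum from Eqs. (55)-(56) at (5, 4.5) TeV
for mN in [0.80, 0.92, 1.50]:      # trial dark matter masses in TeV
    sv = sigma_v(mN, mZp, mZpp, th)*389.4
    print(f"mN1 = {mN} TeV: <sigma v> ~ {sv:.2e} pb, Omega h^2 ~ {0.1/sv:.2g}")
```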
Because \(N_{1}\) is a Majorana particle, it scatters off quarks in direct detection experiments only through the spin-dependent (SD) effective interaction exchanged by \(Z^{\prime},Z^{\prime\prime}\), analogous to the diagram in Fig. 2 for \(f=q\), namely
\[\mathcal{L}_{\rm eff}\supset\frac{g^{2}}{4c_{W}^{2}}\sum_{q,x}\frac{g_{A}^{x}(N _{1})g_{A}^{x}(q)}{m_{x}^{2}}(\bar{N}_{1}\gamma^{\mu}\gamma_{5}N_{1})(\bar{q} \gamma_{\mu}\gamma_{5}q), \tag{65}\]
where \(g_{A}^{x}(N_{1})=-g_{V}^{x}(N_{1})\) for \(x=Z^{\prime},Z^{\prime\prime}\). The SD cross-section determining scattering of \(N_{1}\) with a target neutron (\(n\)) is given by
\[\sigma_{N_{1}}^{\rm SD}=\frac{3g^{4}m_{n}^{2}}{4\pi c_{W}^{4}}\sum_{x,y}\frac{ g_{A}^{x}(N_{1})g_{A}^{y}(N_{1})[g_{A}^{x}(u)\lambda_{u}^{n}+g_{A}^{x}(d)( \lambda_{d}^{n}+\lambda_{s}^{n})][g_{A}^{y}(u)\lambda_{u}^{n}+g_{A}^{y}(d)( \lambda_{d}^{n}+\lambda_{s}^{n})]}{m_{x}^{2}m_{y}^{2}} \tag{66}\]
where \(x,y=Z^{\prime},Z^{\prime\prime}\), and the fractional quark-spin coefficients for the neutron are \(\lambda_{u}^{n}=-0.42\), \(\lambda_{d}^{n}=0.85\), and \(\lambda_{s}^{n}=-0.88\) [65]. Notice that dark matter scattering with the proton leads to a similar bound, which we therefore do not consider.
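Eq. (66) can be transcribed directly; the sketch below is an added illustration with an assumed benchmark spectrum (it is not the contour computation behind Fig. 5), using the first-generation axial couplings of Tab. 2 and the quoted spin fractions:

```python
# Direct transcription of Eq. (66); masses in TeV, 1 TeV^-2 = 3.894e-34 cm^2.
import numpy as np

sW2, tG, g = 0.231, 1.0, 0.652
cW, c2W, r = np.sqrt(1 - sW2), 1 - 2*sW2, np.sqrt(3 - 4*sW2)
lam_u, lam_d, lam_s = -0.42, 0.85, -0.88       # neutron spin fractions [65]
mn = 0.940e-3                                  # neutron mass in TeV

def gA_quarks(th):
    """g_A^x(u), g_A^x(d) of Z' for the first generation, from Tab. 2."""
    c, s = np.cos(th), np.sin(th)
    return (-c/(2*r) + s*cW*tG/3, -c*c2W/(2*r) + s*cW*tG/3)

def sigma_SD(th, mZp, mZpp):
    gAN = {"Zp": np.sin(th)*cW*tG, "Zpp": -np.cos(th)*cW*tG}   # g_A^x(N_1) = -g_V^x(N_1)
    m2 = {"Zp": mZp**2, "Zpp": mZpp**2}
    nuc = {}
    for x, thx in [("Zp", th), ("Zpp", th - np.pi/2)]:         # Z'': c->s, s->-c
        gu, gd = gA_quarks(thx)
        nuc[x] = gu*lam_u + gd*(lam_d + lam_s)
    tot = sum(gAN[x]*gAN[y]*nuc[x]*nuc[y]/(m2[x]*m2[y]) for x in m2 for y in m2)
    return 3*g**4*mn**2/(4*np.pi*cW**4)*tot*3.894e-34          # cm^2

print(sigma_SD(0.12, 1.85, 6.30))   # of order 1e-47 cm^2 for this benchmark,
                                    # below the 1e-46 cm^2 level quoted below
```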
## VI Constraints
As the neutrino mass is governed by \(h^{N}\) and \(f_{1,2},\lambda_{17}\), all independent of the gauge portal, the dark matter observables can appropriately be constrained independently of those for the neutrino.3 The only supplemental conditions relevant to dark matter are the mass regime for WIMP stability, the collider limits on the \(Z^{\prime},Z^{\prime\prime}\) masses, and FCNCs, studied in order.
Footnote 3: Note that the \(N_{1}\) mass that enters the dark matter observables can be induced by a \(h^{\prime N}\) coupling. The other \(h^{\prime N}\) and \(h^{N}\) couplings are sufficient to recover the neutrino data.
### WIMP stability
It is easy to adjust the relevant Yukawa couplings and scalar potential parameters so that \(N_{1}\) is lighter than the other dark fermions and the dark scalars. But for the dark vectors, we must impose
\[m_{N_{1}}<m_{X,Y}\simeq\frac{g}{2}w, \tag{67}\]
where \(m_{N_{1}}=M_{1}\) is the mass of \(N_{1}\), as mentioned, and the last approximation holds at leading order in \(u,v\ll w\).
### Collider bound
In our model, \(Z^{\prime}\) and \(Z^{\prime\prime}\) couple to leptons and quarks with similar strength. Hence, the LEPII [66] and LHC [67] experiments would place similar bounds on these gauge bosons, analogous to a sequential \(Z^{\prime}\) boson that has the same couplings as the standard model \(Z\). That said, it is sufficient to consider only the LEPII bound for the process \(e^{+}e^{-}\to f\bar{f}\) exchanged by \(Z^{\prime},Z^{\prime\prime}\), given by the effective interaction,
\[{\cal L}_{\rm eff}\supset\sum_{x}\frac{g^{2}}{c_{W}^{2}m_{x}^{2}}[\bar{e} \gamma^{\mu}(a_{L}^{x}(e)P_{L}+a_{R}^{x}(e)P_{R})e][\bar{f}\gamma_{\mu}(a_{L}^ {x}(f)P_{L}+a_{R}^{x}(f)P_{R})f], \tag{68}\]
for \(x=Z^{\prime},Z^{\prime\prime}\) and \(f=\mu,\tau\), where the chiral couplings \(a_{L,R}^{x}(f)=\frac{1}{2}[g_{V}^{x}(f)\pm g_{A}^{x}(f)]\) can be extracted from Tab. 2, particularly
\[a_{L}^{Z^{\prime}}(e)=\frac{c_{\theta}c_{2W}}{2\sqrt{3-4s_{W}^{2}}}-\frac{1}{ 3}s_{\theta}c_{W}t_{G},\hskip 14.226378pta_{L}^{Z^{\prime\prime}}(e)=a_{L}^{Z^ {\prime}}(e)|_{c_{\theta}\to s_{\theta},s_{\theta}\to-c_{\theta}}. \tag{69}\]
Since leptons possess universal couplings, we further write
\[{\cal L}_{\rm eff}\supset\sum_{x}\frac{g^{2}[a_{L}^{x}(e)]^{2}}{c_{W}^{2}m_{x }^{2}}(\bar{e}\gamma^{\mu}P_{L}e)(\bar{f}\gamma_{\mu}P_{L}f)+(LR)+(RL)+(RR), \tag{70}\]
where the last three terms (\(\cdots\)) differ from the first term only in their chiral structures. LEPII has studied such chiral interactions, typically indicating
\[\sum_{x}\frac{g^{2}[a_{L}^{x}(e)]^{2}}{c_{W}^{2}m_{x}^{2}}=\frac{g^{2}}{c_{W} ^{2}}\left\{\frac{[a_{L}^{Z^{\prime}}(e)]^{2}}{m_{Z^{\prime}}^{2}}+\frac{[a_{ L}^{Z^{\prime\prime}}(e)]^{2}}{m_{Z^{\prime\prime}}^{2}}\right\}<\frac{1}{(6~{}{\rm TeV })^{2}}. \tag{71}\]
### FCNC
Since quark families transform differently under the gauge symmetry, there must be FCNCs coupled to \(Z^{\prime},Z^{\prime\prime}\). They arise from the gauge interaction,
\[{\cal L}\supset-g\bar{F}\gamma^{\mu}[T_{3}A_{3\mu}+T_{8}A_{8\mu}+t_{X}(Q-T_{3 }+T_{8}/\sqrt{3})B_{\mu}+t_{G}(D+2T_{8}/\sqrt{3})C_{\mu}]F, \tag{72}\]
where we have substituted \(X,G\) from (4). It is noted that leptons and exotic quarks do not change flavor, while the couplings of \(Q\), \(T_{3}\), and \(D\) always conserve flavor, due to dark parity conservation. What remains are only the usual quarks coupled to \(T_{8}\),
\[{\cal L} \supset -g\bar{q}_{L}\gamma^{\mu}T_{8}q_{L}(A_{8\mu}+t_{X}/\sqrt{3} B_{\mu}+2t_{G}/\sqrt{3}C_{\mu}) \tag{73}\] \[\supset \bar{q}_{iL}^{\prime}\gamma^{\mu}q_{jL}^{\prime}(V_{qL}^{*})_{3 i}(V_{qL})_{3j}(g^{\prime}Z^{\prime}+g^{\prime\prime}Z^{\prime\prime}),\]
which flavor changes for \(i\neq j\) (\(i,j=1,2,3\)). Above, \(q\) denotes either \(u=(u_{1},u_{2},u_{3})\) or \(d=(d_{1},d_{2},d_{3})\) whose \(T_{8}\) value is \(T_{q8}=\frac{1}{2\sqrt{3}}\text{diag}(-1,-1,1)\). Additionally, \(q^{\prime}\) defines mass eigenstates, either \(u^{\prime}=(u,c,t)\) or \(d^{\prime}=(d,s,b)\), related to gauge states by \(q_{L,R}=V_{qL,R}q^{\prime}_{L,R}\) which diagonalizes relevant quark mass matrices. The \(g^{\prime},g^{\prime\prime}\) couplings are
\[g^{\prime}=2g_{G}s_{\theta}-\frac{gc_{\theta}c_{W}}{\sqrt{3-4s_{W}^{2}}},\ \ \ \ g^{ \prime\prime}=g^{\prime}(c_{\theta}\to s_{\theta},s_{\theta}\to-c_{\theta}). \tag{74}\]
Integrating \(Z^{\prime},Z^{\prime\prime}\) out, we obtain an effective Lagrangian describing meson mixing,
\[\mathcal{L}_{\text{eff}}\supset(\bar{q}^{\prime}_{iL}\gamma^{\mu}q^{\prime}_{ jL})^{2}[(V^{*}_{qL})_{3i}(V_{qL})_{3j}]^{2}\left(\frac{g^{\prime 2}}{m^{2}_{Z^{ \prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{\prime\prime}}}\right). \tag{75}\]
Aligning the quark mixing to the down quark sector, i.e. \(V_{uL}=1\), implies \(V_{dL}=V_{\text{CKM}}\). Since the neutral meson mixings \(K^{0}\)-\(\bar{K}^{0}\) and \(B^{0}_{d,s}\)-\(\bar{B}^{0}_{d,s}\) give quite similar bounds, we consider only the last one, the \(B^{0}_{s}\)-\(\bar{B}^{0}_{s}\) mixing, implying [64]
\[[(V^{*}_{dL})_{32}(V_{dL})_{33}]^{2}\left(\frac{g^{\prime 2}}{m^{2}_{Z^{ \prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{\prime\prime}}}\right)<\frac{1}{(100 \text{ TeV})^{2}}. \tag{76}\]
The CKM factor is \((V^{*}_{dL})_{32}(V_{dL})_{33}=4\times 10^{-2}\), leading to
\[\frac{g^{\prime 2}}{m^{2}_{Z^{\prime}}}+\frac{g^{\prime\prime 2}}{m^{2}_{Z^{ \prime\prime}}}<\frac{1}{(4\text{ TeV})^{2}}. \tag{77}\]
### Numerical estimation
We take \(s_{W}^{2}=0.231\), \(\alpha=1/128\), and \(t_{G}=1\), hence \(t_{X}=\sqrt{3}s_{W}/\sqrt{3-4s_{W}^{2}}\simeq 0.577\) and \(g_{G}=g=0.652\). It is clear from (55) and (56) that the \(\mathcal{Z}^{\prime}\)-\(C\) mixing angle \(\theta\) and the \(Z^{\prime},Z^{\prime\prime}\) masses \(m_{Z^{\prime},Z^{\prime\prime}}\) depend only on the two new physics scales, \(w,\Lambda\). Hence, the constraints (71) and (77) each directly yield a bound on \((w,\Lambda)\), as depicted in Fig. 3. Such a bound depends only marginally on \(t_{G}\), i.e. on the strength of the dark coupling \(g_{G}\), if it is varied. This is due to the fact that ordinary leptons and quarks have zero dark charge, so the effects come only from small mixings. The FCNCs yield a \((w,\Lambda)\) bound stronger than the collider one, which will be taken into account for neutrino mass and dark matter.
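The logic behind Fig. 3 can be reproduced schematically, as in the added sketch below: for a given \((w,\Lambda)\), Eqs. (55)-(56) give the spectrum, which is then tested against the LEPII bound (71) and the FCNC bound (77); for instance, at \(\Lambda\gg w\) it reproduces the \(w=4\) TeV FCNC boundary stated next:

```python
# Sketch of the bounds behind Fig. 3: Eqs. (55)-(56) give the spectrum, which
# is tested against the LEPII bound (71) and the FCNC bound (77); inputs
# follow this subsection (s_W^2 = 0.231, t_G = 1, g = 0.652).
import numpy as np

sW2, tG, g = 0.231, 1.0, 0.652
cW, c2W, r = np.sqrt(1 - sW2), 1 - 2*sW2, np.sqrt(3 - 4*sW2)
tX2 = 3*sW2/(3 - 4*sW2)

def constraints(w, Lam):
    """Return the left-hand sides of Eq. (71) and Eq. (77), in TeV^-2."""
    A, B = 4*tG**2*(w**2 + 9*Lam**2), (3 + tX2)*w**2
    root = np.sqrt((A - B)**2 + 16*(3 + tX2)*tG**2*w**4)
    m2Zp, m2Zpp = g**2/18*(A + B - root), g**2/18*(A + B + root)   # Eq. (56)
    th = 0.5*np.arctan2(4*np.sqrt(3 + tX2)*tG*w**2, A - B)         # Eq. (55)
    c, s = np.cos(th), np.sin(th)
    aL = lambda cc, ss: cc*c2W/(2*r) - ss*cW*tG/3                  # a_L^x(e), Eq. (69)
    lep = g**2/cW**2*(aL(c, s)**2/m2Zp + aL(s, -c)**2/m2Zpp)
    gp, gpp = 2*g*tG*s - g*c*cW/r, -2*g*tG*c - g*s*cW/r            # Eq. (74)
    fcnc = gp**2/m2Zp + gpp**2/m2Zpp
    return lep, fcnc

print(constraints(4.0, 1e4), 1/6.0**2, 1/4.0**2)  # Lam >> w: FCNC at its boundary
print(constraints(5.0, 4.5))                      # benchmark point: both obeyed
```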
The FCNC bound under consideration yields that
1. In the limit \(\Lambda\to\infty\) (or \(\Lambda\gg w\)), we obtain \(w=4\) TeV. In this case, \(Z^{\prime\prime}\) is superheavy and decoupled from the 3-3-1 particle spectrum, while the \(Z^{\prime}\) mass is \(m_{Z^{\prime}}=1.59\) TeV.
2. In the limit \(w\to\infty\) (or \(w\gg\Lambda\)), we obtain \(\Lambda=2.68\) TeV. In this case, \(Z^{\prime\prime}\) is superheavy and decoupled from the standard model particle spectrum with \(U(1)_{D}\) symmetry (see below), while the \(Z^{\prime}\) mass is \(m_{Z^{\prime}}=2.35\) TeV.
3. In the case of \(w\sim\Lambda\), both \(Z^{\prime},Z^{\prime\prime}\) effectively govern the new physics. We fix benchmark values to be \((w,\Lambda)=(5,4.5)\) or \((9,3)\), which translate to \((m_{Z^{\prime}},m_{Z^{\prime\prime}})=(1.85,6.3)\) or \((2.26,6.18)\) respectively, where all values are in TeV.
Using the parameter values and the last case given above, we plot the dark matter relic density (cf. Sec. V) as a function of the dark matter mass in Fig. 4. It is stressed that the \(Z^{\prime},Z^{\prime\prime}\) mass resonances (left and right funnels, respectively) are necessary to set the correct relic density, \(\Omega_{N_{1}}h^{2}\leq 0.12\). For the case \((w,\Lambda)=(5,4.5)\) TeV, the \(Z^{\prime}\) resonance \(m_{N_{1}}=m_{Z^{\prime}}/2\) plays the role, yielding \(m_{N_{1}}=0.89\)-\(0.96\) TeV for the correct abundance, whereas the \(Z^{\prime\prime}\) resonance is excluded by the WIMP stability regime, namely \(m_{N_{1}}<1.63\) TeV. However, for the case \((w,\Lambda)=(9,3)\) TeV, both the resonances \(m_{N_{1}}=m_{Z^{\prime}}/2\) by \(Z^{\prime}\) and \(m_{N_{1}}=m_{Z^{\prime\prime}}/2\) by \(Z^{\prime\prime}\) take place. They yield \(m_{N_{1}}=1.06\)-\(1.21\) TeV and \(m_{N_{1}}=2.68\)-\(2.93\) TeV, respectively, for the correct abundance. Note that the relic density is only satisfied for part of the second resonance by \(Z^{\prime\prime}\), since WIMP stability requires \(m_{N_{1}}<2.93\) TeV.
Figure 3: New physics scales \((w,\Lambda)\) bounded by LEPII and FCNC.
Using the above limits on the new physics scales \(w,\Lambda\) and the input values for the \(s_{W},\alpha,g_{X},g_{G}\) parameters, we make a contour plot of the SD cross-section for dark matter scattering off nuclei in direct detection experiments (cf. Sec. V) as a function of \((w,\Lambda)\), as given in Fig. 5. It is clear that the SD cross-section is more sensitive to \(\Lambda\) than to \(w\). Additionally, for the viable regime \(w\geq 4\) TeV and \(\Lambda\geq 2.68\) TeV, this model predicts a dark matter signal strength in direct detection of \(\sigma_{N_{1}}^{\rm SD}<10^{-46}\) cm\({}^{2}\), much below the current bound, of order \(10^{-42}\) cm\({}^{2}\), for a typical WIMP with mass beyond 1 GeV [68].
Figure 4: Dark matter relic density plotted as function of its mass according to two cases: \(w=5\) TeV and \(\Lambda=4.5\) TeV (upper panel); \(w=9\) TeV and \(\Lambda=3\) TeV (lower panel).
## VII Realization of the dark charge
In this section, we consider an alternative scenario that reveals the main role of the dark charge by assuming the scalar triplet \(\chi\) to be superheavy, possessing a VEV \(w\gg\Lambda\), and of course \(\Lambda\gg u,v\).4 Hence, the scheme of symmetry breaking is now
Footnote 4: This case presents two new phases of the new physics, similar to the situation discussed in [69].
\[SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{X}\otimes U(1)_{G}\] \[\downarrow w\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{D}\] \[\downarrow\Lambda\] \[SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes P_{D}\] \[\downarrow u,v\] \[SU(3)_{C}\otimes U(1)_{Q}\otimes P_{D}\]
Indeed, when \(\chi\) develops a VEV, \(\langle\chi\rangle=(0,0,w/\sqrt{2})\), it breaks all new charges \(T_{4,5,6,7,8}\), \(X\), and \(G\) but conserves \(T_{1,2,3}\), \(Y=-1/\sqrt{3}T_{8}+X\), and \(D=-2/\sqrt{3}T_{8}+G\), besides the color, which match the standard model symmetry and \(U(1)_{D}\), as expected. This breaking by \(\chi\) decomposes every \(SU(3)_{L}\) multiplet into a normal isomultiplet with \(D=0\) and a dark
isomultiplet with \(D\neq 0\), known as the dark isopartner of the normal isomultiplet; all of these can be seen in Tab. 1. Given that the scale \(w\) is very high, i.e. \(w\gg\Lambda\sim\) TeV, all the new physics related to it is decoupled/integrated out: the dark vectors \(X,Y\) coupled to the broken \(T_{4,5,6,7}\), the \(Z^{\prime\prime}\) coupled to the broken combination of \(T_{8},X,G\), the relevant Goldstone bosons \(G_{X}\), \(G_{Y}\), and \(G_{Z^{\prime\prime}}\) eaten by \(X\), \(Y\), and \(Z^{\prime\prime}\), respectively, and its Higgs fields. What is imprinted at the scale \(\Lambda\sim\) TeV is a novel theory \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{D}\), explicitly recognizing the dark charge \(D\) and directly affecting the standard model.
Notice that for \(w\gg\Lambda\), the \(Z^{\prime},Z^{\prime\prime}\) masses are
\[m_{Z^{\prime}}^{2}\simeq\frac{4g_{G}^{2}(3+t_{X}^{2})}{4t_{G}^{2}+3+t_{X}^{2} }\Lambda^{2},\ \ \ \ m_{Z^{\prime\prime}}^{2}\simeq\frac{g^{2}}{9}(4t_{G}^{2}+3+t_{X}^{2})w^{2}, \tag{78}\]
and the \({\cal Z}^{\prime}\)-\(C\) mixing angle is
\[t_{\theta}\simeq\frac{\sqrt{3+t_{X}^{2}}}{2t_{G}}. \tag{79}\]
As mentioned, \(Z^{\prime\prime}\) is decoupled, while \(Z^{\prime}\) now governs the collider and FCNC constraints, bounded by \(m_{Z^{\prime}}>2.35\) TeV for our choice of \(t_{G}=1\). In this case, \(t_{\theta}\simeq 0.91\), i.e. \(\theta\simeq 42.4^{\rm o}\), which determines the \(Z^{\prime}\) coupling with fermions, namely
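A one-line arithmetic check of the quoted angle (an added illustration):

```python
# Checking Eq. (79) and the quoted mixing angle, with t_G = 1 and s_W^2 = 0.231.
import numpy as np
sW2, tG = 0.231, 1.0
tX2 = 3*sW2/(3 - 4*sW2)
t_theta = np.sqrt(3 + tX2)/(2*tG)                 # Eq. (79)
print(t_theta, np.degrees(np.arctan(t_theta)))    # ~0.913 and ~42.4 degrees
```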
\[{\cal L}\supset g_{G}s_{\theta}\sum_{f}\bar{f}\gamma^{\mu}\left(-\frac{2}{3}t _{W}^{2}Y+D\right)fZ^{\prime}_{\mu}, \tag{80}\]
where \(f\) runs over the usual lepton and quark isomultiplets. The presence of the \(Y\) term, like that from a kinetic mixing effect, results from the 3-3-1-1 breaking. That said, even though the standard model fields have no dark charge \(D\), they may interact with the dark boson \(Z^{\prime}\) through the scotoelectroweak unification, governed by the hypercharge \(Y\). This effect is smaller than the dark force by one order of magnitude, namely \(2/3t_{W}^{2}\sim 0.1\).
Although \(\chi\) is superheavy, it can induce appropriate neutrino masses by the same mechanism and result as discussed above. However, the new physics contributions in (60) must be reordered: the suppression factor becomes \((u/w)^{2}=(u/\Lambda)^{2}\times(\Lambda/w)^{2}\sim 10^{-3}\times 10^{-3}=10^{-6}\), the loop factor \((1/32\pi^{2})\sim 10^{-3}\) is retained, the \(N\) mass matrix becomes pseudo-Dirac such that \((h^{N}V)^{2}M\sim(h^{N}\Lambda/w)^{2}\times w=10^{-3}(h^{N})^{2}w\), and the scalar mass splitting is \(\Delta m^{2}/m^{2}\sim(f_{1},f_{2}\lambda_{17})\Lambda/w^{2}\). Hence, the neutrino masses are of order eV,
\[m_{\nu}\sim(h^{N})^{2}\times\left(\frac{f_{1},f_{2}\lambda_{17}}{w}\right) \times\left(\frac{\Lambda}{\rm TeV}\right)\times{\rm eV}, \tag{81}\]
given that \(h^{N}\sim 1\), \(\Lambda\sim\) TeV, and \(f_{1,2}\sim w\), where the soft terms (\(f_{1,2}\)) would amount to the scale of the 3-3-1-1 breaking.
After the decoupling at the large scale \(w\), the intermediate TeV phase with \(U(1)_{D}\) symmetry can contain some surviving dark fields, such as \(N_{1}\), \(\xi\), and \(\phi\), by choosing appropriate Yukawa couplings and scalar potential parameters. The dark matter phenomenology is similar to that of the model above, but it is now governed only by the \(Z^{\prime}\) boson, coupled to normal matter via (80). For the dark fermion, the \(Z^{\prime}\) mass resonance sets its relic density. Alternatively, for the dark scalar, the new Higgs \(\phi\) portal takes place, annihilating to the standard model Higgs fields, since the dark scalar mass splitting in this case is large.
## VIII Conclusion
The idea of a dark photon associated with a conserved dark (abelian) charge is interesting, as it provides potential solutions to a number of current issues [70]. Just as electric charge is a result of electroweak breaking, this work has shown that a dark charge may result from a more fundamental theory, called scotoelectroweak. Moreover, the content of dark fields and the way they interact with normal matter are completely determined by the 3-3-1-1 symmetry of the theory.
We have examined the pattern of the 3-3-1-1 symmetry breaking, obtaining a residual dark parity that both stabilizes the dark matter candidates and governs the scotogenic neutrino mass generation. The small neutrino masses are suppressed by the loop factor and by the ratio between the electroweak and new physics scales, without requiring the soft terms to be too small. The fermion dark matter abundance is generically set by the \(Z^{\prime},Z^{\prime\prime}\) mass resonances. Even in a scenario where the 3-3-1-1 breaking scale is very high, the light boson \(Z^{\prime}\) associated with the dark charge still plays this role, due to a coupling to normal matter via the hypercharge.
We have investigated the model under constraints from LEPII and FCNCs. Should stronger bounds arise, they are easily evaded by enhancing \(w,\Lambda\) within the parameter space supplied in the figures. In all cases, the signal for fermion dark matter in direct detection is very small. Embedding the 3-3-1-1 symmetry in a GUT may be worth exploring, as the dark charge and its field content may contribute successfully to gauge coupling unification.
We propose a higher weak isospin, $SU(3)_L$, which unifies dark matter and normal matter in its isomultiplets, such that dark matter carries a conserved dark charge while normal matter does not. The resulting gauge symmetry is $SU(3)_C \otimes SU(3)_L \otimes U(1)_X \otimes U(1)_G$, where the first factor is the color group and the remainder defines the scotoelectroweak theory, in which $X$ and $G$ determine the electric charge $Q = T_3 - 1/\sqrt{3}T_8 + X$ and the dark charge $D = -2/\sqrt{3}T_8 + G$. This setup provides both appropriate scotogenic neutrino masses and dark matter stability.
2310.20626 | Groups of profinite type and profinite rigidity | We say that a group $G$ is of \textit{profinite type} if it can be realized as a Galois group of some field extension. Using Krull's theory, this is equivalent to the ability of $G$ to be equipped with a profinite topology. We also say that a group of profinite type is \textit{profinitely rigid} if it admits a unique profinite topology. In this paper we study when abelian groups and some group extensions are of profinite type or profinitely rigid. We also discuss the connection between the properties of profinite type and profinite rigidity and the injectivity and surjectivity of the cohomology comparison maps, which were studied by Sury and other authors. | Tamar Bar-On, Nikolay Nikolov | 2023-10-31T16:57:19 | http://arxiv.org/abs/2310.20626v3 | # Groups of profinite type and profinite rigidity
###### Abstract
We say that a group \(G\) is of _profinite type_ if it can be realized as a Galois group of some field extension. Using Krull's theory, this is equivalent to the ability of \(G\) to be equipped with a profinite topology. We also say that a group of profinite type is _profinitely rigid_ if it admits a unique profinite topology up to isomorphism. In this paper we study when abelian groups and some group extensions are of profinite type or profinitely rigid. We also discuss the connection between the properties of profinite type and profinite rigidity and the injectivity and surjectivity of the cohomology comparison maps, studied by Sury and other authors.
## 1 Introduction
Let \(G\) be a group. A classical result states that if \(G\) is finite then it can be realized as a Galois group of some field extension. It is still an open question, however, whether every finite group can be realized as a Galois group over \(\mathbb{Q}\); this question is known as "the inverse Galois problem".
As for infinite groups, the situation is much more mysterious. In his groundbreaking paper from 1928 [6], Wolfgang Krull proved that every Galois group can be endowed with a group topology, called "the Krull topology", which is compact, Hausdorff and totally disconnected. This work was completed in 1964 by Douady [2], who proved that every topological group which is compact, Hausdorff and totally disconnected can be realized as a Galois group over some field. Such groups are known as _profinite groups_ and are exactly the groups which are inverse limits of finite groups. Hence, they admit a local basis at the identity consisting of some of their finite-index subgroups. We call the Krull topology of a profinite group a _profinite topology_. This should not be confused with the (generally noncompact) topology which has all subgroups of finite index as a local basis at the identity. For clarity we will refer to the latter as the "finite-index topology". These two topologies might differ on a given profinite group (see [9]).
Let \(G\) be an abstract group. We say that \(G\) is of profinite type if it can be given a profinite group topology. Hence, these are precisely the groups that can be realized as the Galois group of some field extension. There are a few well-known restrictions on the algebraic structure of a profinite group, and hence of abstract groups of profinite type. For example, since the intersection of all open subgroups of a profinite group is trivial, a group of profinite type is residually finite. In addition, by Baire's Category Theorem, a group of profinite type is either finite or uncountable.
Identifying all groups of profinite type among the infinite abstract groups is an extremely difficult task. However, there are some natural candidates we shall examine, namely the residually finite groups which are extensions of one profinite group by another, and in particular finite-index subgroups of profinite groups.
Given a group of profinite type, it is natural to ask how many different profinite topologies can be put on it, up to continuous isomorphism. We say that a group \(G\) of profinite type is _profinitely rigid_ (or just rigid) if \(G\) admits a unique profinite topology. A first class of examples is the finite groups with the discrete topology. We say that \(G\) is _weakly rigid_ if all profinite topologies on \(G\) are equivalent as topological groups. For example, the profinite group \((\mathbb{Z}/p\mathbb{Z})^{\mathbb{N}}\) is weakly rigid but not rigid.
The object of this paper is to investigate groups of profinite type, profinite rigidity, and the connections between the different profinite topologies possible on a given abstract group.
The paper is organized as follows: In Section 2 we give a necessary and sufficient criterion for an abelian group to be of profinite type and discuss some applications. In Section 3 we prove some general results on groups of profinite type and profinite rigidity. In addition, we study profinite invariants of groups of profinite type, i.e., topological properties that are preserved under abstract isomorphism. In Section 4 we discuss connections with cohomological goodness of abstract and profinite groups.
## 2 Abelian groups of profinite type
We deal first with abelian groups of finite exponent.
**Proposition 1**.: _Let \(G\) be an abelian group of exponent \(n\). Let \(n=\prod_{i}p_{i}^{t_{i}}\) be the factorization of \(n\) as a product of prime powers. Being a module over \(\mathbb{Z}/n\mathbb{Z}\), \(G\) is isomorphic to_
\[\bigoplus_{i}(\bigoplus_{j_{i}=1}^{t_{i}}(\bigoplus_{\mathbf{m}_{j_{i}}}C_{p_{i }^{j_{i}}}))\]
_for some cardinals \(\mathbf{m}_{j_{i}}\)._
_Then \(G\) is of profinite type if and only if for every \(j_{i}\), either \(\mathbf{m}_{j_{i}}\) is finite, or there exists some cardinal \(\mathbf{n}_{j_{i}}\) such that \(2^{\mathbf{n}_{j_{i}}}=\mathbf{m}_{j_{i}}\)._
_Remark 2_.: Assuming the Generalised Continuum Hypothesis (**GCH**), Proposition 1 can be rephrased as: for every \(j_{i}\), \(\mathbf{m}_{j_{i}}\) is either finite or a successor cardinal.
Proof.: \(\Leftarrow\) First notice that if \(\{A_{i}\}_{i\in I}\) are groups of profinite type then so is \(\prod_{I}A_{i}\). So, it is sufficient to show that if \(\mathbf{m}_{j_{i}}\) is either finite or equal to \(2^{\mathbf{n}_{j_{i}}}\) then
\(\bigoplus_{\mathbf{m}_{j_{i}}}C_{p^{j_{i}}}\) is of profinite type. Indeed, for finite \(\mathbf{m}_{j_{i}}\) the claim is trivial. Otherwise, let us look at \(\prod_{\mathbf{n}_{j_{i}}}C_{p^{j_{i}}}\), which is a profinite group. One easily checks that it is a free module over \(\mathbb{Z}/p^{j_{i}}\mathbb{Z}\) and thus is isomorphic to \(\bigoplus_{I}C_{p^{j_{i}}}\). For cardinality reasons, \(|I|=2^{\mathbf{n}_{j_{i}}}=\mathbf{m}_{j_{i}}\).
\(\Rightarrow\) First notice that if an abelian group \(G\) of exponent \(n\) is of profinite type, then so is \(G[p]\). Indeed, let \(t=\max\{r\in\mathbb{N}:p^{r}\mid n\}\); then \(G[p]=\ker(g\to p^{t}g)\), the kernel of a map which is continuous in any group topology on \(G\). Now assume that \(G\) is of profinite type; then so is \(G[p_{i}]\cong\bigoplus_{j_{i}=1}^{n_{i}}(\bigoplus_{\mathbf{m}_{j_{i}}}C_{p_{i}^{j_{i}}})\). Let \(1\leq k\leq n_{i}\) and let us look at \({}_{p_{i}^{k}}G\), the subgroup of all elements of order dividing \(p_{i}^{k}\). Being the kernel of the continuous map \(g\to g^{p_{i}^{k}}\), it is closed. Similarly, \(G^{p_{i}^{n_{i}-k}}\) is the image of \(G\) under a continuous map, and thus closed. Hence, \({}_{p_{i}^{k}}G/({}_{p_{i}^{k}}G\cap G^{p_{i}^{n_{i}-k}})\) is profinite and isomorphic to \(H=\bigoplus_{j_{i}=1}^{k}(\bigoplus_{\mathbf{m}_{j_{i}}}C_{p_{i}^{j_{i}}})\). Thus, \(H^{p_{i}^{k-1}}\cong\bigoplus_{\mathbf{m}_{k}}p_{i}^{k-1}C_{p_{i}^{k}}\cong\bigoplus_{\mathbf{m}_{k}}C_{p_{i}}\) is of profinite type. The only abelian profinite groups of exponent \(p_{i}\) are the direct products of copies of \(C_{p_{i}}\) (see [9, Theorem 4.3.8]), so again by a dimension calculation, \(\mathbf{m}_{k}\) must be either finite or \(2^{\mathbf{n}_{k}}\) for some cardinal \(\mathbf{n}_{k}\).
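To illustrate the criterion with a concrete case (an added example, not part of the original argument): the countable direct sum \(\bigoplus_{\mathbb{N}}C_{p}\) has \(\mathbf{m}_{1}=\aleph_{0}\), which is neither finite nor of the form \(2^{\mathbf{n}}\) for any cardinal \(\mathbf{n}\), so it is not of profinite type; this is also visible directly, since a countably infinite group cannot carry a profinite topology by Baire's Category Theorem. In contrast, \(\bigoplus_{2^{\aleph_{0}}}C_{p}\cong\prod_{\mathbb{N}}C_{p}\) is of profinite type, with the product topology.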
Now we can give a criterion for a general abelian group to be of profinite type, since the finite exponent case is already known.
**Theorem 3**.: _Let \(G\) be an abstract abelian group. Then \(G\) is of profinite type if and only if the natural maps \(G\to G/G^{n}\) induce an isomorphism \(G\to\lim_{\leftarrow}G/G^{n}\), where \(n\) runs over the natural numbers ordered by divisibility, and for every \(n\), \(G/G^{n}\) is of profinite type._
Proof.: The first direction is trivial.
Now assume that every \(G/G^{n}\) admits a profinite topology; we shall construct a profinite topology on \(\lim_{\leftarrow}G/G^{n}\). For every \(n\), \(G/G^{n}\cong\bigoplus G/G^{p_{i}^{n_{i}}}\) where the \(p_{i}\)'s are the prime numbers dividing \(n\) and \(n_{i}\) is the maximal power of \(p_{i}\) dividing \(n\). By Proposition 1, for every prime number \(p\) and power \(k\), \(G/G^{p^{k}}\cong\bigoplus_{i=1}^{k}(\bigoplus_{\mathbf{m}_{k,i}}C_{p^{i}})\) where for every \(i\), \(\mathbf{m}_{k,i}\) is either finite or of the form \(2^{\mathbf{n}_{k,i}}\). We can induce a profinite topology on \(G/G^{p^{k}}\), and hence on \(G/G^{n}\) for every \(n\), by fixing isomorphisms \(\bigoplus_{\mathbf{m}_{k,i}}C_{p^{i}}\to\prod_{\mathbf{k}_{k,i}}C_{p^{i}}\) for \(\mathbf{k}_{k,i}=\mathbf{m}_{k,i}\) in case \(\mathbf{m}_{k,i}\) is finite, and \(\mathbf{k}_{k,i}=\mathbf{n}_{k,i}\) otherwise. It is enough to construct profinite topologies on \(G/G^{p^{k}}\) for every \(k\) such that the natural epimorphisms \(G/G^{p^{k+1}}\to G/G^{p^{k}}\) are continuous. Since constructing a topology is equivalent to choosing an isomorphism \(\bigoplus_{i=1}^{k}(\bigoplus_{\mathbf{m}_{k,i}}C_{p^{i}})\to\prod_{i=1}^{k}(\prod_{\mathbf{k}_{k,i}}C_{p^{i}})\), and since the maps \(x\to x\mod p^{k}\) are always continuous between profinite abelian groups, it is enough to choose compatible such isomorphisms. Since every basis of \(G/G^{p^{k}}\) over \(\mathbb{Z}/p^{k}\mathbb{Z}\) can be lifted to a basis of \(G/G^{p^{k+1}}\) over \(\mathbb{Z}/p^{k+1}\mathbb{Z}\), we are done.
**Corollary 4**.: _Let \(G\) be a profinite abelian group and \(U\leq G\) a subgroup of finite index which is not necessarily open. Then \(U\) is of profinite type._
Proof.: By induction we may assume \([G:U]=p\). Let us consider only natural numbers \(n\) which are divisible by \(p\). Constructing a topology on \(G\) is equivalent to constructing compatible topologies on each \(G/G^{n}\). We wish to construct a topology on \(G\) for which \(U\) is open. Notice that for every topology on \(G\), \(U\) is either open or dense, and the continuous image of a dense subgroup is dense. Hence it is enough to construct a topology on \(G/G^{p}\) for which \(U/G^{p}\) is open (note that \(G^{p}\leq U\) since \([G:U]=p\)), and then lift it to a topology on \(G\) as in Theorem 3. Let \(x\in G/G^{p}\setminus U/G^{p}\). Take a basis of \(U/G^{p}\) over \(\mathbb{Z}/p\mathbb{Z}\) and complete it to a basis of \(G/G^{p}\) by adding \(x\). Now \(U/G^{p}\) is either finite or of the same cardinality as \(G/G^{p}\), which is profinite; hence we can build an isomorphism \(U/G^{p}\to\prod_{\mathbf{n}}C_{p}\) and complete it to an isomorphism \(G/G^{p}\to\prod_{\mathbf{n}}C_{p}\times C_{p}\).
Later on we will see that the converse also holds: if \(G\) is an abelian group which possesses a finite-index subgroup of profinite type, then \(G\) is of profinite type. Although an extension of a finite abelian group by a profinite abelian group must be of profinite type, as we just stated, an extension of a profinite abelian group by a finite abelian group might not be of profinite type, as shown in the following example:
_Example 5_.: Let \(W\) be the two dimensional vector space over \(F_{p}\) with basis \(e_{1}\) and \(e_{2}\) and define an automorphism \(f\) to act on \(W\) by \(f(e_{1})=e_{1},f(e_{2})=e_{1}+e_{2}\). Note that \(f\) has order \(p\) and the centralizer of \(f\) in \(W\) is the span of \(e_{1}\).
Now let \(H_{0}\) be the direct sum \(W_{1}+W_{2}\) where \(W_{1}\) is a vector space over \(F_{p}\) of dimension \(2^{\mathbb{N}}\) on which \(f\) acts as the identity, and \(W_{2}\) is a direct sum of countably many copies of \(W\) with the action of \(f\) on \(W\) given above. Since \(H_{0}\) has dimension \(2^{\mathbb{N}}\) as a vector space over \(F_{p}\), we have by Proposition 1 that \(H_{0}\) is of profinite type. We claim that the semidirect product \(G\) of \(H_{0}\) with the cyclic group \(\langle f\rangle=C_{p}\), with the action by conjugation described above, is not of profinite type. Suppose it is. The centralizer \(C_{G}(f)\) of \(f\) in \(G\) is generated by \(f\) and the centralizer \(C_{H_{0}}(f)\) of \(f\) in \(H_{0}\), which is a subspace of countably infinite codimension in \(H_{0}\). Hence \(C_{G}(f)\) is a closed subgroup of countably infinite index in \(G\). But the coset space \(G/C_{G}(f)\) is then a countably infinite compact Hausdorff space, a contradiction by Baire's Category Theorem.
## 3 General results and profinite rigidity
We give here a necessary and sufficient condition for a profinite topology of a finite index normal subgroup to induce a profinite topology on the whole group.
**Lemma 6**.: _Let \(G\) be an abstract group and \(U\unlhd_{f}G\) a finite-index normal subgroup such that \(U\) is of profinite type. Then the topology on \(U\) can be extended to a profinite topology on \(G\) such that \(U\unlhd_{o}G\) if and only if for all \(g\in G\) and \(H\leq_{o}U\), \(gHg^{-1}\leq_{o}U\)._
Proof.: \(\Rightarrow\) If \(G\) is a profinite group and \(U\unlhd_{o}G\) then every open subgroup of \(U\) is open in \(G\). By a basic exercise in topology, a compact Hausdorff topology cannot be strictly contained in any other compact topology. Thus, the topology that \(G\) induces on \(U\) must be equal to the original topology on \(U\). Hence, by continuity of the multiplication, \(gHg^{-1}\leq_{o}U\) for all \(g\in G\) and \(H\leq_{o}U\).
\(\Leftarrow\) Let \(G\) and \(U\) be as in the proposition. We define a topology on \(G\) by letting the open subgroups of \(U\) serve as a set of neighborhoods of the identity. As the basic open sets are cosets of subgroups, the inverse operation is continuous. We shall show that the product operation is continuous. It is enough to show that if \(g,k\in G\) and \(H\leq_{o}U\) then \(gHk\) is open. By assumption \(gHg^{-1}\) is open, and \(gHk=(gHg^{-1})gk\) is a coset of a basic open subgroup of the identity and hence is open. We get that \(G\) is a topological group. As a finite union of cosets of the profinite group \(U\), \(G\) is profinite.
**Corollary 7**.: _Any abelian group possessing a finite-index normal subgroup of profinite type is of profinite type itself._
Recall that a profinite group \(G\) is called _profinitely rigid_ if it admits a unique profinite topology. In particular, every abstract automorphism of \(G\) is in fact continuous. A basic class of examples is the class of finite groups with the discrete topology. Another elementary class of examples is the class of _strongly complete_ profinite groups, and in particular, finitely generated profinite groups (see [8] for the full proof). Recall that a profinite group is called _strongly complete_ if every subgroup of finite index is open. As the finite-index topology on such a group is compact Hausdorff, any strictly coarser topology cannot be Hausdorff. Another class of examples was presented by Kiehlmann in his paper [5], via the following theorem:
**Theorem 8**.: _Let \(G\cong\prod_{I}A_{i}\) be a direct product of finite groups all having trivial center. Then \(G\) is profinitely rigid._
In particular, Kiehlmann's Theorem can be applied to all semi-simple profinite groups. The above discussion is the core of the following surprising result:
**Proposition 9**.: _There exists a profinite group whose finite-index normal subgroups are either open, and thus profinite, or are not of profinite type._
Proof.: Let \(S\) be a finite non-abelian simple group, and let \(G=\prod_{I}S\). Denote by \(\tau\) the product topology on \(G\); then \((G,\tau)\) becomes a profinite group. By [9, Example 4.2.12], \(G\) is not strongly complete, and thus admits finite-index normal subgroups which are not open. Let \(U\) be such a subgroup. First we claim that if \(U\) is of profinite type then it must be semisimple itself. For that it is enough to show that every finite abstract image of \(G\), and hence also of \(U\), is semisimple. Observe that every abstract quotient of \(G\) satisfies all laws of \(S\). Let \(D\) be a finite image of \(G\). We deduce that \(D\) is an image of the quotient \(Q\) of some finitely generated free group \(F\) by the verbal subgroup of \(F\) generated by all laws of \(S\). Let \(D=f(Q)\) for some homomorphism \(f\). In turn \(Q\) is a subgroup of some finite product \(\prod_{i}T_{i}\) where each \(T_{i}\) is either \(S\) or a proper subgroup of \(S\). When \(T_{i}\) is a proper subgroup of \(S\), let \(Q^{\prime}\) be the intersection of \(Q\) with the kernel of the projection onto \(T_{i}\). We have that \(|Q:Q^{\prime}|\) is at most \(|T_{i}|<|S|\). Hence \(f(Q^{\prime})\) is a normal subgroup of \(f(Q)=D\) of index less than \(|S|\). But \(D\) cannot have a proper quotient of size less than \(|S|\), since the finite simple images of \(G\) are only \(S\). In conclusion \(f(Q^{\prime})=D\), and so we can remove all factors \(T_{i}\) which are proper subgroups of \(S\) and assume that \(Q\) is a direct product of finitely many copies of \(S\). But then so is \(D\), by a standard argument.
Now let \(U\) be a finite-index normal subgroup of \(G\) and assume that \(U\) is of profinite type. Then, being an inverse limit of finite semisimple groups, \(U\) is semisimple. Hence by Kiehlmann's Theorem every automorphism of \(U\) is continuous. Applying this to the conjugation by an element of \(G\) and using Lemma 6, we deduce that there is a profinite topology \(\tau^{\prime}\) on \(G\) for which \(U\) is open. Since \(G\) is profinitely rigid, \(\tau=\tau^{\prime}\) and hence \(U\) is open.
We extend the result of Kiehlmann to a larger class of profinite groups, namely the profinite groups of finite semisimple length. These are the profinite groups \(G\) which admit a finite subnormal series of closed subgroups \(1=G_{n}\unlhd\cdots\unlhd G_{1}\unlhd G_{0}=G\) such that all the quotients \(G_{i}/G_{i+1}\) are semisimple. By successively replacing each \(G_{i}\) with the intersection of its \(G\)-conjugates we may assume that each \(G_{i}\) is a closed normal subgroup of \(G\).
**Theorem 10**.: _Every profinite group of finite semisimple length is profinitely rigid._
First we need a lemma. For a profinite group \(G\) we denote by \(G_{*}\) the largest closed normal semisimple subgroup of \(G\). Note that \(G_{*}\) exists since the closure of the product of a family of closed normal semisimple subgroups of G is also normal and semisimple. Moreover if \(G\) has semisimple length \(l\) then \(G/G_{*}\) has semisimple length \(l-1\).
**Lemma 11**.: _Let \(G\) be a profinite group of finite semisimple length. Let \(G_{*}=\prod_{i\in I}S_{i}\) where \(I\) is nonempty and each \(S_{i}\) is a finite nonabelian simple group. Then \(G_{*}=\cap_{i\in I}N_{G}(S_{i})\)._
Proof.: Let \(W=\cap_{i\in I}N_{G}(S_{i})\); this is a closed normal subgroup of \(G\) and hence has finite semisimple length. In particular \(W\) is topologically (in fact even abstractly) perfect. Clearly \(G_{*}\leq W\) and the action of \(W\) on \(G_{*}\) by conjugation induces a continuous homomorphism \(f:W\rightarrow\prod_{i\in I}Aut(S_{i})\). Since each \(Out(S_{i})\) is a finite solvable group and \(W\) is topologically perfect, \(f(W)\leq\prod_{i\in I}Inn(S_{i})\). It follows that \(W=G_{*}\times L\) where \(L=C_{G}(G_{*})\). We claim that \(L\) is the trivial group. Indeed \(L\) is a normal closed subgroup of \(G\), hence it also has finite semisimple length. If \(L\neq\{1\}\) then \(L_{*}\neq\{1\}\) is a characteristic closed normal semisimple subgroup of \(L\) and hence \(L_{*}\) is a closed normal subgroup of \(G\). It follows that \(G_{*}L_{*}\) is a normal closed semisimple subgroup of \(G\), contradicting the maximality of \(G_{*}\). Hence \(L=1\) and \(W=G_{*}\).
We can now prove Theorem 10.
Proof.: Let \(G\) be a profinite group with topology \(\tau\). Suppose that \(G\) has semisimple length \(l\). We will prove that \(G\) is profinitely rigid by induction on \(l\). The case \(l=1\) has been established by J. Kiehlmann. Let \(G_{*}=\prod_{i\in I}S_{i}\) be the largest normal semisimple closed subgroup of \(G\). By induction we may assume that
\(G/G_{*}\) is profinitely rigid. Let \(\tau^{\prime}\) be another profinite topology on \(G\). Lemma 11 gives \(G_{*}=\cap_{i\in I}N_{G}(S_{i})\). Each \(S_{i}\) is a finite subgroup of \(G\) and therefore \(S_{i}\) and \(N_{G}(S_{i})\) are \(\tau^{\prime}\)-closed subgroups of \(G\). It follows that \(G_{*}\) is \(\tau^{\prime}\)-closed.
Let \(U\) be a \(\tau\)-open normal subgroup of \(G\). It is sufficient to prove that \(U\) is open in \(\tau^{\prime}\). Since \(G_{*}\) is closed in both \(\tau\) and \(\tau^{\prime}\) and \(G/G_{*}\) is profinitely rigid, we obtain that \(G_{*}U\) is open in \(\tau^{\prime}\). Replacing \(G\) with \(G_{*}U\) we may assume that \(G_{*}U=G\). Moreover \(U\cap G_{*}\) is an open normal subgroup of \(G_{*}\). Since \(G_{*}\) is profinitely rigid, we have that \(G_{*}\cap U\) is open in \(G_{*}\) under the topology induced from \(\tau^{\prime}\), and hence \(G_{*}\cap U\) is a \(\tau^{\prime}\)-closed subgroup of \(G\). By replacing \(G\) with \(G/(G_{*}\cap U)\) we may assume that \(G_{*}\cap U=\{1\}\). Together with \(G_{*}U=G\) we deduce that \(G=G_{*}\times U\). Since \(G_{*}\) is a semisimple group it has trivial centre, and we deduce \(U=C_{G}(G_{*})\). Thus \(U\) is closed in \(\tau^{\prime}\), and since \(|G:U|\) is finite, \(U\) is open in \(\tau^{\prime}\). Theorem 10 is proved.
Although an extension of a profinite group of finite semisimple length by a profinite group of finite semisimple length remains rigid, this is not true for general profinitely rigid groups, as shown by the following example:
_Example 12_.: Assume there exists a profinitely rigid group \(G\) which admits a noncontinuous finite abelian image \(A\). Being finite, \(A\) is profinitely rigid as well. Look at the direct product \(A\times G\). It admits a natural profinite topology when \(G\) is considered as a closed subgroup. Now let \(\varphi:G\to A\) be a noncontinuous homomorphism. Then \(\{\varphi(g)g\}\) is another complement to \(A\) inside \(A\times G\). We claim that \(\{\varphi(g)g\}\) is not closed in \(A\times G\). Indeed, since \(\varphi:G\to A\) is noncontinuous, there exists an element \(h\in G\) such that \(h\in\overline{\ker(\varphi)}\setminus\ker(\varphi)\). Assume that \(h\in\{\varphi(g)g\}\). Then there exists an element \(x\in G\) such that \(h=x\varphi(x)\). So \(G\ni x^{-1}h=\varphi(x)\in A\). Thus, \(\varphi(x)=1\), meaning that \(x\in\ker(\varphi)\) and \(h=x\), a contradiction. Hence, \(h\notin\{\varphi(g)g\}\). However, as \(h\in\overline{\ker(\varphi)}\), for every normal open subgroup \(U\unlhd_{o}G\) there exists \(x\in\ker(\varphi)\) such that \(hU=xU\). It implies that \(hU=x\varphi(x)U\). As the set of all open normal subgroups of \(G\) is a basis for the profinite topology on \(A\times G\), we get that \(h\in\overline{\{\varphi(g)g\}}\). Now, as \(\{\varphi(g)g\}\cong G\), it has a (unique) profinite topology. So we can define on \(A\times G\) the topology induced from the decomposition \(A\times\{\varphi(g)g\}\) and the product topology. This is a different topology, since \(\{\varphi(g)g\}\) is then closed.
It remains to prove the existence of such a \(G\). Let \((L_{i})_{i\in\mathbb{N}}\) be a sequence of finite groups such that the word width of \(L_{i}\) with respect to squares tends to infinity. For example we may take \(L_{i}=F_{i}/[F_{i}^{2},F_{i}]\), where \(F_{i}\) is the free group of rank \(i\). Let \(T:=\prod_{i\in\mathbb{N}}L_{i}\). The algebraic subgroup \(T^{2}\) generated by all squares of \(T\) is not closed in \(T\) and therefore \(T\) has a nonopen subgroup of index \(2\). Hence \(T\) has a noncontinuous abelian homomorphic image of size \(2\). Let \(M_{i}\) be the standard wreath product \(A_{5}\wr L_{i}\) and observe that the centre of \(M_{i}\) is trivial. Let \(G=\prod_{i\in\mathbb{N}}M_{i}\). We claim that the profinite topology of \(G\) is unique. Let \(j\in\mathbb{N}\) and let \(P_{j}=\cap_{i\neq j}C_{G}(M_{i})\). Since the \(M_{i}\) are finite groups, the group \(P_{j}\) is closed in any profinite topology of \(G\). Note that \(P_{j}\) is the kernel of the projection \(G\to M_{j}\). Thus \(P_{j}\) is open with respect to any topology of \(G\) and it easily
follows that \(G\) is profinitely rigid. Since \(G\) maps continuously onto \(\prod_{i\in\mathbb{N}}L_{i}\) we deduce that \(G\) has a noncontinuous image isomorphic to \(C_{2}\).
The above example is in fact the first example in this paper of a profinite group which is not profinitely rigid; moreover, it can be regarded as a "recipe" for creating such examples. In fact the simplest examples are the infinite abelian groups of prime exponent, as constructing a profinite topology on such a group is equivalent to choosing an isomorphism \(G\to\prod_{I}C_{p}\). Such groups admit infinitely many different profinite topologies; however, these are all isomorphic as profinite groups, and so the groups are weakly rigid. The first example of profinite groups which are not weakly rigid was given by Kiehlmann in [5]:
_Example 13_.: Let \(p\) be a prime number. Then the groups \(G_{1}=\prod_{n\in\mathbb{N}}C_{p^{n}}\) and \(G_{2}=\mathbb{Z}_{p}\times\prod_{n\in\mathbb{N}}C_{p^{n}}\) are abstractly isomorphic, but are not isomorphic as profinite groups. Thus \(G_{1}\) is not weakly rigid.
So far all of the known examples of nonrigid profinite groups admit an abelian quotient. We use Kiehlmann's example of groups \(G_{1}\) and \(G_{2}\) above in order to construct a perfect profinite group which is not weakly rigid.
Let \(W_{i}\) be the closed subgroup of \(G_{i}^{5}\) defined as follows.
\[W_{i}=\{(x_{1},\ldots,x_{5})\in G_{i}^{5}\ |\ \textstyle\sum_{j=1}^{5}x_{j}=1\}.\]
Define an action of the alternating group \(A_{5}\) on \(G_{i}^{5}\) and on \(W_{i}\) by permutations of the five coordinates. Let \(L_{i}=W_{i}\rtimes A_{5}\); this is a profinite group with the topology induced by its open subgroup \(W_{i}\).
_Example 14_.: The profinite groups \(L_{1}\) and \(L_{2}\) above are perfect, abstractly isomorphic but not isomorphic as profinite groups.
Proof.: To show that \(L_{i}\) is perfect it is enough to show that \(a:=(x,x^{-1},1,1,1)\) belongs to \(L_{i}^{\prime}\) for any \(x\in G_{i}\): the conjugates of all such \(a\) under \(A_{5}\) generate \(W_{i}\), and \(A_{5}\), being perfect, is contained in \(L_{i}^{\prime}\) as well.
Let \(b=(x,1,x^{-1},1,1)\in W_{i}\) and \(g=(12)(45)\in A_{5}\). Then \(b^{-1}b^{g}=[b,g]\in L_{i}^{\prime}\) equals \(a\) up to replacing \(x\) by \(x^{-1}\), as required.
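For concreteness, here is the coordinate computation behind this step (a direct check; recall that \(g\) swaps the first two and the last two coordinates):

\[b=(x,1,x^{-1},1,1),\qquad b^{g}=(1,x,x^{-1},1,1),\qquad b^{-1}b^{g}=(x^{-1},\,x,\,x\cdot x^{-1},\,1,\,1)=(x^{-1},x,1,1,1),\]

which is \(a\) with \(x\) replaced by \(x^{-1}\); since \(x\in G_{i}\) is arbitrary, this suffices.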
Let \(f:G_{1}\to G_{2}\) be an abstract isomorphism between \(G_{1}\) and \(G_{2}\). We define an isomorphism \(F:L_{1}\to L_{2}\) by declaring \(F(g)=g\) for all \(g\in A_{5}\) and \(F((x_{1},\ldots,x_{5}))=(f(x_{1}),\ldots,f(x_{5}))\) for all \((x_{1},\ldots,x_{5})\in W_{1}\). It is trivial to check that \(F\) is indeed a group isomorphism.
On the other hand, suppose that \(h:L_{1}\to L_{2}\) is a continuous isomorphism between them. Then \(h(W_{1})=W_{2}\), since \(W_{i}\) is the largest normal abelian subgroup of \(L_{i}\). It follows that \(W_{1}\) and \(W_{2}\) are isomorphic as profinite groups. This is impossible because \(W_{1}\) is the closure of its torsion subgroup and \(W_{2}\) is not (since \(G_{2}\), and hence \(W_{2}\), has continuous homomorphisms onto \(\mathbb{Z}_{p}\)).
We say that a finite group is anabelian if it has no abelian composition factors. Similarly a profinite group is said to be anabelian if it is an inverse limit of anabelian finite groups. Note that a profinite anabelian group could have finite index subgroups which are not open, such as the group appearing in Proposition 9. This motivates the following.
**Open Question 15**.: _Is every anabelian profinite group profinitely rigid?_
Let \(G\) be a profinite group which is not weakly rigid. This means that \(G\) admits two profinite topologies \(\tau,\tau^{\prime}\) such that \((G,\tau)\) and \((G,\tau^{\prime})\) are not isomorphic as topological groups. How different can these topological groups be from each other? The first easy observation is that they have the same supernatural order. Recall that the supernatural order of a profinite group \(G\), \(o(G)\), is defined as the lcm of the orders of all finite continuous quotients of \(G\). By [9, Proposition 4.2.3], for every finite abstract quotient \(A\) of \(G\), \(o(A)\mid o(G)\). This implies that \(o(G)\) can be described as the lcm of the orders of all finite quotients of \(G\). Thus, \(o(G)\) is independent of the profinite topology on \(G\). As an immediate result, we get that if for some \(p\), a \(p\)-Sylow subgroup \(P_{\tau}\) of \((G,\tau)\) for some profinite topology \(\tau\) on \(G\) is finite, then for every profinite topology \(\tau^{\prime}\) on \(G\), \(P_{\tau^{\prime}}\cong P_{\tau}\). This result in fact holds in general:
**Theorem 16**.: _Let \(\varphi:G\to H\) be an abstract isomorphism of profinite groups. Let \(P\) be a \(p\)-Sylow subgroup of \(G\). Then \(\varphi(P)\) is a \(p\)-Sylow subgroup of \(H\)._
Proof.: First we show that the image of a \(p\)-Sylow subgroup of a profinite group is a \(p\)-Sylow subgroup in every finite abstract quotient. Let \(A\) be a finite abstract quotient of \(G\), and denote by \(\varphi\) the canonical projection \(G\to A\). Let \(x_{1},\ldots,x_{n}\in G\) be preimages of the elements of \(A\), and let \(H=\overline{\langle x_{1},\ldots,x_{n}\rangle}\) be the closed subgroup they generate. It is a (topologically) finitely generated profinite group and hence strongly complete. Thus, \(\varphi|_{H}\) is continuous and \(\varphi(H)=A\). Now let \(P_{0}\in\operatorname{Syl}_{p}(H)\). Since \(\varphi|_{H}\) is continuous, \(\varphi(P_{0})\in\operatorname{Syl}_{p}(A)\). Since \(H\) is closed, there exists some \(P_{1}\in\operatorname{Syl}_{p}(G)\) such that \(P_{1}\cap H=P_{0}\). So, \(\varphi(P_{1})\supseteq\varphi(P_{0})\). On the other hand, since \(P_{1}\) is a pro-\(p\) group, by [9, Proposition 4.2.3], \(\varphi(P_{1})\) is a finite \(p\)-group. So \(\varphi(P_{1})=\varphi(P_{0})\in\operatorname{Syl}_{p}(A)\). Finally, \(P\) and \(P_{1}\) are conjugate and so are their images, so \(\varphi(P)\in\operatorname{Syl}_{p}(A)\).
Now let \(P\) be a \(p\)-Sylow subgroup of \(G\). By the previous claim, the images of \(\varphi(P)\) in any continuous finite quotient of \(H\) are \(p\)-Sylow subgroups. So \(\varphi(P)\) is contained in their inverse limit \(Q\), which is a \(p\)-Sylow subgroup of \(H\). By the same argument, \(\varphi^{-1}(Q)\) is contained in a \(p\)-Sylow subgroup \(P^{\prime}\) of \(G\). So \(P\leq P^{\prime}\). Since \(P\) is a \(p\)-Sylow subgroup it cannot be properly contained in any \(p\)-subgroup, thus \(P=P^{\prime}\) and we get that \(\varphi(P)=Q\).
As for the rank of the groups we have the following:
_Remark 17_.: Let \(\tau,\tau^{\prime}\) be two profinite topologies on an abstract group \(G\) and assume that \((G,\tau)\) is finitely generated as a topological group; then so is \((G,\tau^{\prime})\). In fact, these two groups are isomorphic as profinite groups, since \((G,\tau)\) is strongly complete. For topologies of infinite rank, however, equality of the ranks depends on the Generalised Continuum Hypothesis **GCH**, since by [1], \(\omega_{0}(\hat{G})=2^{2^{\omega_{0}(G,\tau)}}\). Here \(\omega_{0}(G)\) stands for the _local weight_ of \(G\), which for profinite groups which are not finitely generated equals the minimal cardinality of a set of generators converging to \(1\), also referred to as the _rank_. Later on we will see that this equality of ranks is in fact equivalent to **GCH**.
## Connections with cohomological goodness
The notions of profinite type and profinite rigidity have a strong connection to group cohomology.
In [10] Serre defined the notion of cohomological goodness as follows. Let \(\varphi:G\to K\) be a homomorphism from an abstract group to a profinite group such that \(\varphi(G)\) is dense in \(K\), and let \(M\) be a finite continuous \(K\)-module. Then \(\varphi\) induces a series of maps \(\varphi^{i}:H^{i}(K,M)\to H^{i}(G,M)\). Of particular interest is the case where \(K=\hat{G}\) is the profinite completion of \(G\). The class \(\mathcal{A}_{n}\) consists of those groups \(G\) for which \(\varphi^{i}\) is an isomorphism for all \(0\leq i\leq n\) and every finite continuous \(\hat{G}\)-module \(M\). A residually finite abstract group \(G\) is called _cohomologically good_ if \(G\in\mathcal{A}_{n}\) for all \(n\). Observe that \(\mathcal{A}_{1}\) equals the class of all abstract groups. A lot of research is devoted to identifying cohomologically good groups, and in particular the class \(\mathcal{A}_{2}\), as can be seen, for example, in [4, 7]. In fact we have the following characterisation: a residually finite group \(G\) belongs to \(\mathcal{A}_{2}\) if and only if every finite extension of \(G\) is residually finite (see [7, Proposition 2.4]).
An analogue of Serre's cohomological goodness, which appears in the literature, is the following: Let \(G\) be a profinite group, \(K=G\) and let \(\varphi:G\to G\) be the identity map. Let \(\mathcal{A}_{n}^{\text{pro}}\) be the class of all profinite groups \(G\) for which \(\varphi^{n}:H^{n}_{\text{con}}(G,M)\to H^{n}_{\text{abs}}(G,M)\) is an isomorphism for every finite continuous \(G\)-module. The maps \(\varphi^{n}\) are also called the _comparison maps_ ([3]). For pro-\(p\) groups, the first comparison map is known to detect whether the group is finitely generated (see [3]). Using the proof of Theorem 16, we can generalize this result to arbitrary pronilpotent profinite groups.
**Proposition 18**.: _The pronilpotent groups in \(\mathcal{A}_{1}^{\text{pro}}\) are precisely the strongly complete pronilpotent groups._
Proof.: First assume that \(G\) is strongly complete. Then \(\hat{G}\cong G\), where the natural homomorphism is just the identity map. Since every group belongs to \(\mathcal{A}_{1}\), we are done.
Now assume that \(G\in\mathcal{A}_{1}^{\text{pro}}\). Let \(P\in\operatorname{Syl}_{p}(G)\) and let \(M=\mathbb{F}_{p}\) be the module with the trivial action. Since \(\varphi^{1}:H^{1}_{\text{con}}(\hat{G},\mathbb{F}_{p})\to H^{1}_{\text{abs}}(G,\mathbb{F}_{p})\) is always an isomorphism, the assumption is equivalent to saying that \(\varphi^{1}:H^{1}_{\text{con}}(\hat{G},\mathbb{F}_{p})\to H^{1}_{\text{con}}(G,\mathbb{F}_{p})\) is an isomorphism. Recall that by [10, Corollary I-11], for every \(P\in\operatorname{Syl}_{p}(G)\), the inclusion map induces an isomorphism \(H^{1}(G,\mathbb{F}_{p})\to H^{1}(P,\mathbb{F}_{p})\).
The following two facts are true in general for every profinite group \(G\) and \(P\in\operatorname{Syl}_{p}(G)\).
First step: \(\bar{P}\), the closure of \(P\) in \(\hat{G}\), is a \(p\)-Sylow subgroup of \(\hat{G}\). Indeed, \(\bar{P}\cong\lim_{\leftarrow}\varphi_{i}(P)\), where \(\varphi_{i}:G\to A_{i}\) runs over all the finite abstract quotients of \(G\); by the proof of Theorem 16, this is an inverse limit of \(p\)-Sylow subgroups.
Hence, in our case, \(H^{1}(\bar{P},\mathbb{F}_{p})\to H^{1}(P,\mathbb{F}_{p})\) is an isomorphism. Since the map \(P\to\bar{P}\) is injective, being the restriction of \(G\to\hat{G}\), by [9, Proposition 7.7.2] we get that \(\varphi:P\to\bar{P}\) is an isomorphism.
Second step: Assume that \(\varphi:P\to\bar{P}\) is an isomorphism. The identity map \(id:G\to G\) together with the universal property of the profinite completion (see [9, Chapter 3]) yields a continuous homomorphism \(\hat{id}:\hat{G}\to G\) fitting into the commutative diagram
\[G\xrightarrow{\ \varphi\ }\hat{G}\xrightarrow{\ \hat{id}\ }G,\]
such that for every \(g\in G\), \(\hat{id}(\varphi(g))=g\). Let \(K=\ker(\hat{id})\) and \(Q=\bar{P}\cap K\). Then \(Q\) is a \(p\)-Sylow subgroup of \(K\). Since \(\varphi:P\to\bar{P}\) is an isomorphism and \(\hat{id}\circ\varphi=id\), the map \(\hat{id}\) is injective on \(\bar{P}\), so we conclude that \(Q=1\). Applying this result for every prime \(p\), we get that \(K=1\), i.e., \(\varphi:G\to\hat{G}\) is an isomorphism.
The following Lemma is an immediate application of the correspondence between the second cohomology group and group extensions:
**Lemma 19**.: _Let \((G,\tau)\) be a profinite group and \(M\) a finite continuous module. Denote by \(\varphi^{2}:H^{2}_{con}(G,M)\to H^{2}_{abs}(G,M)\) the natural homomorphism. Then_
1. \(\varphi^{2}\) _is surjective if and only if for every group extension_ \(1\to M\to H\to G\to 1\)_,_ \(H\) _admits a profinite topology compatible with_ \(\tau\)_._
2. \(\varphi^{2}\) _is injective if and only if for every profinite extension_ \(H\) _of_ \(G\) _by_ \(M\)_,_ \(H\) _admits a unique profinite topology which induces_ \(\tau\) _on_ \(G\) _via the quotient topology._
Proof.:
1. Let \(c\in H^{2}_{\mathrm{abs}}(G,M)\). Choose some preimage \(\tilde{c}\) of \(c\) in \(C^{2}_{\mathrm{abs}}(G,M)\). Then \(\tilde{c}\) corresponds to some abstract group extension \(1\to M\to H\to G\to 1\). Assume \(H\) is given a profinite topology which is compatible with \(\tau\). Then \(H\) corresponds to some element \(c^{\prime}\in H^{2}_{\mathrm{con}}(G,M)\). Let \(\tilde{c^{\prime}}\) be a preimage of \(c^{\prime}\) in \(C^{2}_{\mathrm{con}}(G,M)\). Since \(\tilde{c}\) and \(\tilde{c^{\prime}}\) correspond to the same group extension, they are equivalent in \(H^{2}_{\mathrm{abs}}(G,M)\), i.e., \(\varphi^{2}(c^{\prime})=c\). For the second direction, assume that \(\varphi^{2}\) is onto. Let \(1\to M\to H\to G\to 1\) be an abstract group extension. It corresponds to some element \(c\in C^{2}_{\mathrm{abs}}(G,M)\). By assumption, the class of \(c\) in \(H^{2}_{\mathrm{abs}}(G,M)\) is the image of some element \(c^{\prime}\in H^{2}_{\mathrm{con}}(G,M)\). But \(c^{\prime}\) corresponds to a profinite group extension \(1\to M\to H^{\prime}\to G\to 1\). Hence, these extensions are equivalent. In particular, \(H\cong H^{\prime}\), which means that \(H\) can be given a profinite topology compatible with \(\tau\).
2. Let \(H_{1},H_{2}\) be profinite extensions of \(G\) by \(M\) such that \(H_{1}\cong_{\mathrm{abs}}H_{2}\) as extensions. Each one of them corresponds to an element \(c_{1},c_{2}\in H^{2}_{\mathrm{con}}(G,M)\), respectively. Notice that the images of \(c_{1},c_{2}\) in \(H^{2}_{\mathrm{abs}}(G,M)\) coincide, by the abstract isomorphism. By injectivity of \(\varphi^{2}\) we get that the classes of \(c_{1},c_{2}\) in \(H^{2}_{\mathrm{con}}(G,M)\) are equal, which means that \(H_{1}\cong H_{2}\) continuously. The second direction is similar.
A lot of effort is devoted to classifying those pro-\(p\) groups for which the second comparison map is an isomorphism. In [11] Surry proved that the isomorphism holds for every solvable and every Chevalley \(p\)-adic analytic group, and conjectured that it holds for every \(p\)-adic analytic group. For profinitely rigid groups these properties can be stated as follows:
**Corollary 20**.: _Let \(G\) be a rigid profinite group. The map \(\varphi^{2}\) is surjective if and only if every finite extension of \(G\) is of profinite type, while \(\varphi^{2}\) is injective if and only if every finite continuous extension of \(G\) is rigid._
_Example 21_.: Let \(G\) be a strongly complete profinite group, and \(1\to M\to H\to G\to 1\) a finite extension of \(G\). Assume that \(H\) is residually finite. Then taking profinite completions we get an exact sequence \(1\to M\to\hat{H}\to G\to 1\), which implies that the natural homomorphism \(H\to\hat{H}\) is an isomorphism. Hence \(H\) is of profinite type, and moreover \(H\) is strongly complete and thus rigid. We conclude that \(G\in\mathcal{A}_{2}^{pro}\) if and only if \(G\in\mathcal{A}_{2}\).
We end the paper with the following two results:
**Theorem 22**.: _GCH holds if and only if for every torsion abelian profinite group \(G\) and every finite discrete module \(M\) with trivial action, \(\varphi^{2}:H^{2}_{con}(G,M)\to H^{2}_{\rm abs}(G,M)\) is injective._
Proof.: First assume that **GCH** doesn't hold. Then there are some infinite cardinals \(\mathbf{n}\neq\mathbf{m}\) such that \(2^{\mathbf{m}}=2^{\mathbf{n}}\). We get that \(\prod_{\mathbf{m}}C_{p}\) is abstractly isomorphic to \(\prod_{\mathbf{n}}C_{p}\), being vector spaces over \(\mathbb{F}_{p}\) of the same dimension. In particular \(\prod_{\mathbf{m}}C_{p}\cong C_{p}\times\prod_{\mathbf{n}}C_{p}\). However, \(\prod_{\mathbf{m}}C_{p}\not\cong\prod_{\mathbf{n}}C_{p}\) as profinite groups since \(\omega_{0}(\prod_{\mathbf{m}}C_{p})=\mathbf{m}\neq\mathbf{n}=\omega_{0}(\prod_{\mathbf{n}}C_{p})\). Hence, taking \(G=\prod_{\mathbf{n}}C_{p}\) and \(M=C_{p}\), we get that \(\varphi^{2}:H^{2}_{\rm con}(G,M)\to H^{2}_{\rm abs}(G,M)\) is not injective.
Now assume that **GCH** holds. Let \(G\) be a torsion profinite abelian group, \(M\) a finite abelian module and \(H\) a profinite group such that \(H\) is abstractly isomorphic to \(M\times G\). Since \(H,M,\) and \(G\) are all profinite abelian groups, they are isomorphic to products of abelian pro-\(p\) groups. The \(p\)-Sylow subgroup of an abelian profinite group \(K\) is equal to \(\bigcap_{q\neq p,n\in\mathbb{N}}K^{q^{n}}\), and so the isomorphism induces isomorphisms of the \(p\)-Sylow subgroups. Since the groups are the products of their \(p\)-Sylow subgroups, it is enough to construct a continuous isomorphism for every \(p\)-Sylow subgroup. So let us assume all the groups are of exponent \(p^{n}\). As all the groups are profinite, we get that \(H\cong\oplus_{i=1}^{n}\prod_{\mathbf{m}_{i}}C_{p^{i}}\) and \(M\times G\cong\oplus_{i=1}^{n}\prod_{\mathbf{n}_{i}}C_{p^{i}}\), as profinite groups. We need to prove that \(\mathbf{n}_{i}=\mathbf{m}_{i}\). Indeed, if \(H\cong M\times G\) abstractly, then
\[\oplus_{i=k+1}^{n}\prod_{\mathbf{m}_{i}}C_{p^{i-k}}\cong H^{p^{k}}\cong(M\times G)^{p^{k}}\cong\oplus_{i=k+1}^{n}\prod_{\mathbf{n}_{i}}C_{p^{i-k}}\]
Now denote \(T=H^{p^{k}}\) and let \(T_{p}\leq T\) be the subgroup of elements of order dividing \(p\); then \(\prod_{\mathbf{m}_{k+1}}C_{p}\cong T_{p}/(T^{p}\cap T_{p})\). We apply the same method to \(M\times G\) in order to get that \(\prod_{\mathbf{m}_{k+1}}C_{p}\cong\prod_{\mathbf{n}_{k+1}}C_{p}\). Both groups are \(\mathbb{F}_{p}\) vector spaces of dimensions \(2^{\mathbf{m}_{k+1}}\) and \(2^{\mathbf{n}_{k+1}}\), respectively. As they are abstractly isomorphic, \(2^{\mathbf{n}_{k+1}}=2^{\mathbf{m}_{k+1}}\). By **GCH**, \(\mathbf{n}_{i}=\mathbf{m}_{i}\) for all \(i\), and hence \(\oplus_{i=1}^{n}\prod_{\mathbf{m}_{i}}C_{p^{i}}\) and \(\oplus_{i=1}^{n}\prod_{\mathbf{n}_{i}}C_{p^{i}}\) are isomorphic as profinite groups.
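For the reader's convenience, the identification of \(\prod_{\mathbf{m}_{k+1}}C_{p}\) can be checked directly from the decomposition above (a routine computation, spelled out here):

\[T\cong\oplus_{i=k+1}^{n}\prod_{\mathbf{m}_{i}}C_{p^{i-k}},\qquad T_{p}\cong\oplus_{i=k+1}^{n}\prod_{\mathbf{m}_{i}}C_{p},\qquad T^{p}\cap T_{p}\cong\oplus_{i=k+2}^{n}\prod_{\mathbf{m}_{i}}C_{p},\]

so that \(T_{p}/(T^{p}\cap T_{p})\cong\prod_{\mathbf{m}_{k+1}}C_{p}\), and both \(T_{p}\) and \(T^{p}\) are preserved by any abstract isomorphism.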
For the class \(\mathcal{A}_{2}\), Serre proved the following equivalence: \(G\) belongs to \(\mathcal{A}_{2}\) if and only if for every extension \(1\to N\to E\to G\to 1\) where \(N\) is finitely generated, the induced map \(\hat{N}\to\hat{E}\) is injective. The properties of \(\varphi^{2}\) in the context of \(\mathcal{A}_{2}^{\text{pro}}\) can also be stated in terms of finitely generated extensions:
**Lemma 23**.: _Let \((G,\tau)\) be a profinite group such that for every finite extension \(1\to A\to H\to G\to 1\), \(H\) admits a unique profinite topology inducing \(\tau\) on \(G\). Then for every group extension \(1\to A\to H\to G\to 1\) with \((A,\tau^{\prime})\) finitely generated, \(H\) admits a unique profinite topology inducing \(\tau\) on \(G\) and \(\tau^{\prime}\) on \(A\)._
Proof.: Let \(1\to A\to H\to G\to 1\) be a group extension such that \((A,\tau^{\prime}),(G,\tau)\) are profinite and \((A,\tau^{\prime})\) is finitely generated. Then \(A\) admits a series of open characteristic subgroups \(A=A_{0}\geq A_{1}\geq A_{2}\geq\cdots\geq A_{n}\geq\cdots\) such that \(\bigcap_{n\in\mathbb{N}}A_{n}=1\) (see, for example, [9, Proposition 2.5.1]). In particular, for every \(n\), \(A_{n}\unlhd H\). Let \(n\) be a natural number; then \(1\to A/A_{n}\to H/A_{n}\to G\to 1\) is a group extension with \(A/A_{n}\) finite. So \(H/A_{n}\) admits a profinite topology inducing \(\tau\) on \(G\), and by assumption this topology is unique. The natural projections \(H/A_{n}\to H/A_{m}\) (for \(n\geq m\)) are therefore continuous, since \(H/A_{n}\) induces on \(H/A_{m}\) a profinite topology compatible with \(\tau\), which by uniqueness must be the given one. Since \(A\cong\lim_{\leftarrow}A/A_{n}\), we get \(H\cong\lim_{\leftarrow}H/A_{n}\). Indeed, let \((h_{n})\) be a compatible sequence in \(\prod H/A_{n}\); every element \(h_{n}\) has the form \(c_{n}a_{n}A_{n}\), where \(c_{n}\) is some coset representative of \(A\) in \(H\) and \(a_{n}\in A\). Since the sequence is compatible, \(c_{n}=c\) is fixed for all \(n\). The sequence of the \(a_{n}\) is compatible and thus yields an element of \(A\). So we can define a profinite topology on \(H\), compatible with \(\tau\). On the other hand, for every profinite topology on \(H\) which induces \(\tau^{\prime}\) on \(A\), the groups \(A_{n}\) must be closed too, and so \(H\cong\lim_{\leftarrow}H/A_{n}\).
# Continuous stability conditions of type \(\mathbb{A}\) and measured laminations of the hyperbolic plane

Kiyoshi Igusa, Job Daisie Rock

arXiv:2302.14792 (http://arxiv.org/abs/2302.14792v1)
###### Abstract.
We introduce stability conditions (in the sense of King) for representable modules of continuous quivers of type \(\mathbb{A}\) along with a special criterion called the four point condition. The stability conditions are defined using a generalization of \(\delta\) functions, called half-\(\delta\) functions. We show that for a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, the stability conditions satisfying the four point condition are in bijection with measured laminations of the hyperbolic plane. Along the way, we extend an earlier result by the first author and Todorov regarding continuous cluster categories for linear continuous quivers of type \(\mathbb{A}\) and laminations of the hyperbolic plane to all continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources. We also give a formula for the continuous cluster character.
_Dedicated to Idun Reiten for her kind support and encouragement_
###### Contents
* 1 The Finite Case
* 2 Continuous stability conditions
* 3 Continuous tilting
* 4 Measured Laminations and Stability Conditions
## Introduction
### History
The type of stability conditions used in the present paper was introduced by King in order to study the moduli space of finitely generated representations of a finite-dimensional algebra [13].
There is recent work connecting stability conditions to wall and chamber structures for finite-dimensional algebras [6] and real Grothendieck groups [2]. There is also work studying the linearity of stability conditions for finite-dimensional algebras [9].
In 2015, the first author and Todorov introduced the continuous cluster category for type \(\mathbb{A}\)[12]. More recently, both authors and Todorov introduced continuous quivers of type \(\mathbb{A}\) and a corresponding weak cluster category [10, 11]. The second author also generalized the Auslander-Reiten quiver of type \(\mathbb{A}_{n}\) to the Auslander-Reiten space for continuous type \(\mathbb{A}\) and a geometric model to study these weak cluster categories [16, 17].
#### Contributions and Organization
In the present paper, we generalize stability conditions, in the sense of King, to continuous quivers of type \(\mathbb{A}\). In Section 1 we recall facts about stability conditions and reformulate them for our purposes. In Section 2 we recall continuous quivers of type \(\mathbb{A}\), representable modules, and then introduce our continuous stability conditions.
At the beginning of Section 2.2 we define a half-\(\delta\) function, which can be thought of as a Dirac \(\delta\) function that only exists on the "minus side" or "plus side" of a point. We use the half-\(\delta\) functions to define useful functions (Definition 2.8), which are equivalent to functions with bounded variation but better suited to our purposes. Then we define a stability condition as an equivalence class of pairs of useful functions with particular properties, modulo shifting the pair of functions up and down by a constant (Definitions 2.14 and 2.16).
We use some auxiliary constructions to define a semistable module (Definition 2.19). Then we recall \(\mathbf{N}_{\pi}\)-compatibility (Definition 2.23), which can be thought of as the continuous version of rigidity in the present paper. We link stability conditions to maximally \(\mathbf{N}_{\pi}\)-compatible sets using a criterion called the four point condition (Definition 2.21). By \(\mathcal{S}_{\mathrm{fpc}}(Q)\) we denote the set of stability conditions of a continuous quiver \(Q\) of type \(\mathbb{A}\) that satisfy the four point condition.
**Theorem A** (Theorem 2.25).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources and let \(\sigma\in\mathcal{S}(Q)\). Then the following are equivalent._
* \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\)_._
* _The set of_ \(\sigma\)_-semistable indecomposables is maximally_ \(\mathbf{N}_{\pi}\)_-compatible._
In Section 3 we define a continuous version of tilting. That is, for a continuous quiver \(Q\) of type \(\mathbb{A}\) we define a new continuous quiver \(Q^{\prime}\) of type \(\mathbb{A}\) together with an induced map on the set of indecomposable representable modules. This is not to be confused with reflection functors for continuous quivers of type \(\mathbb{A}\), introduced by Liu and Zhao [14]. For each stability condition \(\sigma\) of \(Q\) that satisfies the four point condition, we define a new stability condition \(\sigma^{\prime}\) of \(Q^{\prime}\) (Definition 3.12). We show that continuous tilting induces a bijection on indecomposable representable modules, preserves \(\mathbf{N}_{\pi}\)-compatibility, and induces a bijection on the stability conditions for \(Q\) and \(Q^{\prime}\) that satisfy the four point condition. Denote by \(\mathrm{mod}^{\mathrm{r}}(Q)\) the category of representable modules over \(Q\).
**Theorem B** (Theorems 3.2 and 3.17).: _Let \(Q\) and \(Q^{\prime}\) be continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources. Continuous tilting yields a triple of bijections: \(\phi\), \(\Phi\), and \(\Psi\)._
* _A bijection_ \(\phi:\mathrm{Ind}(\mathrm{mod}^{\mathrm{r}}(Q))\to\mathrm{Ind}(\mathrm{mod}^{ \mathrm{r}}(Q^{\prime}))\)_._
* _A bijection_ \(\Phi\) _from maximal_ \(\mathbf{N}_{\pi}\)_-compatible sets of_ \(\mathrm{mod}^{\mathrm{r}}(Q)\) _to maximal_ \(\mathbf{N}_{\pi}\)_-compatible sets of_ \(\mathrm{mod}^{\mathrm{r}}(Q^{\prime})\)_. Furthermore if_ \(\mu:T\to T^{\prime}\) _is a mutation then so is_ \(\Phi(\mu):\Phi T\to\Phi T^{\prime}\) _given by_ \(\phi(M_{I})\mapsto\phi(\mu(M_{I}))\)_._
* _A bijection_ \(\Psi:\mathcal{S}_{\mathrm{fpc}}(Q)\to\mathcal{S}_{\mathrm{fpc}}(Q^{\prime})\) _such that if_ \(T\) _is the set of_ \(\sigma\)_-semistable modules then_ \(\Phi(T)\) _is the set of_ \(\Psi(\sigma)\)_-semistable modules._
In Section 4 we define a measured lamination to be a lamination of the (Poincare disk model of the) hyperbolic plane together with a particular type of measure on the set of geodesics (Definition 4.1). We denote the Poincare disk model of the hyperbolic plane by \(\mathfrak{h}^{2}\). Then we recall the correspondence between laminations of \(\mathfrak{h}^{2}\) and maximally \(\mathbf{N}_{\pi}\)-compatible sets of indecomposable representable modules
over the straight descending continuous quiver of type \(\mathbb{A}\), from the first author and Todorov (Theorem 4.4 in the present paper) [12]. We extend this correspondence to stability conditions that satisfy the four point condition and measured laminations (Theorem 4.12). Combining this with Theorems A and B, we have the last theorem.
**Theorem C** (Corollary 4.13).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) and \(\mathcal{L}\) the set of measured laminations of \(\mathfrak{h}^{2}\). There are three bijections: \(\phi\), \(\Phi\), and \(\Psi\)._
* _The bijection_ \(\phi\) _from_ \(\operatorname{Ind}(\operatorname{mod}^{\tau}(Q))\) _to geodesics in_ \(\mathfrak{h}^{2}\)_._
* _The bijection_ \(\Phi\) _from maximally_ \(\mathbf{N}_{\pi}\)_-compatible sets to (unmeasured) laminations of_ \(\mathfrak{h}^{2}\) _such that, for each maximally_ \(\mathbf{N}_{\pi}\)_-compatible set_ \(T\)_,_ \(\phi|_{T}\) _is a bijection from the indecomposable modules in_ \(T\) _to the geodesics in_ \(\Phi(T)\)_._
* _The bijection_ \(\Psi:\mathcal{S}_{\text{fpc}}(Q)\to\mathcal{L}\) _such that if_ \(T\) _is the set of_ \(\sigma\)_-semistable modules then_ \(\Phi(T)\) _is the set of geodesics in_ \(\Psi(\sigma)\)_._
In Section 4.3, we give a formula for a continuous cluster character \(\chi(M_{ab})\). This is a formal expression in formal variables \(x_{t}\), one for every real number \(t\). We verify some cluster mutation formulas, but leave further work for a future paper.
In Section 4.5, we relate continuous tilting to cluster categories of type \(\mathbb{A}_{n}\). In particular, we discuss how a particular equivalence between type \(\mathbb{A}_{n}\) cluster categories is compatible with continuous tilting. We conclude our contributions with an example for type \(\mathbb{A}_{4}\) (Section 4.5.1). Then we briefly describe some directions for further related research.
### Acknowledgements
The authors thank Gordana Todorov for helpful discussions. KI was supported by Simons Foundation Grant #686616. Part of this work was completed while JDR was at the Hausdorff Research Institute for Mathematics (HIM); JDR thanks HIM for their support and hospitality. JDR is supported at Ghent University by BOF grant 01P12621. JDR thanks Aran Tattar and Shijie Zhu for helpful conversations.
## 1. The Finite Case
There is a relation between stability conditions and generic decompositions which will become more apparent in the continuous case. Here we examine the finite case and impose continuous structures onto the discrete functions in order to give a preview of what will happen in the continuous quiver case.
For a finite quiver of type \(\mathbb{A}_{n}\) with vertices \(1,\cdots,n\), we need a piecewise continuous function on the interval \([0,n+1]\) which has discontinuities at the vertices which are sources or sinks. The stability function will be the derivative of this function. It will have Dirac delta functions at the sources and sinks. Since this is a reformulation of well-known results, we will not give proofs. We also review the Caldero-Chapoton cluster character for representations of a quiver of type \(\mathbb{A}_{n}\) [7] in order to motivate the continuous case in Section 4.3.
### Semistability condition
Recall that a stability function is a linear map
\[\theta:K_{0}\Lambda=\mathbb{Z}^{n}\to\mathbb{R}.\]
A module \(M\) is \(\theta\)-semistable if \(\theta(\underline{\dim}M)=0\) and \(\theta(\underline{\dim}M^{\prime})\leq 0\) for all submodules \(M^{\prime}\subset M\). We say \(M\) is \(\theta\)-stable if, in addition, \(\theta(\underline{\dim}M^{\prime})<0\) for all \(0\neq M^{\prime}\subsetneq M\). For \(\Lambda\) of type \(\mathbb{A}_{n}\), we denote by \(M_{(a,b]}\) the indecomposable module with support
on the vertices \(a+1,\cdots,b\). For example \(M_{(i-1,i]}\) is the simple module \(S_{i}\). Let \(F:\{0,1,\cdots,n\}\to\mathbb{R}\) be the function
\[F(k)=\sum_{0<i\leq k}\theta(\underline{\dim}S_{i})=\theta(\underline{\dim}M_{(0,k]})\]
Then we have \(\theta(M_{(a,b]})=F(b)-F(a)\).
Thus, for \(M_{(a,b]}\) to be \(\theta\)-semistable we need \(F(a)=F(b)\) and another condition to make \(\theta(\underline{\dim}M^{\prime})\leq 0\). For example, take the quiver of type \(\mathbb{A}_{n}\) having a source at vertex \(c\) and sinks at \(1,n\). Then the indecomposable submodules of \(M_{(a,b]}\) are \(M_{(a,x]}\) for \(a<x<c\), \(x\leq b\) and \(M_{(y,b]}\) for \(c\leq y<b\), \(a\leq y\). Therefore, we also need \(F(x)\leq F(a)=F(b)\leq F(y)\) for such \(x,y\). (And strict inequalities \(F(x)<F(a)=F(b)<F(y)\) to make \(M_{(a,b]}\) stable.)
A simple characterization of \(x,y\) is given by numbering the arrows. Let \(\alpha_{i}\) be the arrow between vertices \(i,i+1\). Then the arrows connecting vertices in \((a,b]\) are \(\alpha_{i}\) for \(a<i<b\). \(M_{(a,x]}\subset M_{(a,b]}\) if \(\alpha_{x}\) points left (and \(a<x<b\)). \(M_{(y,b]}\subset M_{(a,b]}\) if \(\alpha_{y}\) points to the right (and \(a<y<b\)). More generally, we have the following.
**Proposition 1.1**.: \(M_{(a,b]}\) _is \(\theta\)-semistable if and only if the following hold._
1. \(F(a)=F(b)\)__
2. \(F(x)\leq F(a)\) _if_ \(\alpha_{x}\) _points left and_ \(a<x<b\)_._
3. \(F(y)\geq F(b)\) _if_ \(\alpha_{y}\) _points right and_ \(a<y<b\)_._
_Furthermore, if the inequalities in (2),(3) are strict, \(M_{(a,b]}\) is \(\theta\)-stable. _
For example, take the quiver
\[1\stackrel{{\alpha_{1}}}{{\longleftarrow}}2\stackrel{{ \alpha_{2}}}{{\longrightarrow}}3\stackrel{{\alpha_{3}}}{{ \longrightarrow}}4 \tag{1}\]
with \(\theta=(-1,2,-1,-1)\), \(F=(0,-1,1,0,-1)\). Then \(F(1)<F(0)=F(3)=0<F(2)\), with \(\alpha_{1}\) pointing left and \(\alpha_{2}\) pointing right. So, \(M_{(0,3]}\) is \(\theta\)-stable. Similarly, \(F(1)=F(4)=-1<F(2),F(3)\) implies \(M_{(1,4]}\) is also \(\theta\)-stable.
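The test of Proposition 1.1 is easy to mechanize. The following Python sketch hard-codes the arrow directions and weights of the example above (our own assumed encoding) and confirms the two semistable modules:

```python
# Quiver (1): directions of alpha_1..alpha_3, and the weights theta from the
# example above (hard-coded assumptions; F(k) = sum of theta over 1..k).
arrows = {1: 'L', 2: 'R', 3: 'R'}
theta = {1: -1, 2: 2, 3: -1, 4: -1}

F = {0: 0}
for k in range(1, 5):
    F[k] = F[k - 1] + theta[k]

def semistable(a, b):
    """The test of Proposition 1.1 for the interval module M_(a,b]."""
    if F[a] != F[b]:
        return False
    for x in range(a + 1, b):
        if arrows[x] == 'L' and F[x] > F[a]:   # left arrows must stay below
            return False
        if arrows[x] == 'R' and F[x] < F[b]:   # right arrows must stay above
            return False
    return True

print(semistable(0, 3), semistable(1, 4), semistable(0, 2))  # True True False
```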
One way to visualize the stability condition is indicated in Figure 1.
Figure 1. The graph of \(F:\{0,1,2,3,4\}\to\mathbb{R}\) shows the \(\theta\)-semistable modules. When \(M_{(a,b]}\) is \(\theta\)-stable, \(F(a)=F(b)\), making the line segment connecting \((a,F(a))\) to \((b,F(b))\) horizontal. Also, the intermediate red points are below and the blue points are above the line segment, where we draw the spot \((x,F(x))\) red/blue when \(\alpha_{x}\) points left/right, respectively.
### Generic decomposition
Stability conditions for quivers of type \(\mathbb{A}_{n}\) also give the generic decomposition for dimension vectors \(\mathbf{d}\in\mathbb{N}^{n}\). This becomes more apparent for large \(n\) and gives a preview of what happens in the continuous case.
Given a dimension vector \(\mathbf{d}\in\mathbb{N}^{n}\), there is, up to isomorphism, a unique \(\Lambda\)-module \(M\) of dimension vector \(\mathbf{d}\) which is rigid, i.e., \(\operatorname{Ext}^{1}(M,M)=0\). The dimension vectors \(\beta_{i}\) of the indecomposable summands of \(M\) add up to \(\mathbf{d}\) and the expression \(\mathbf{d}=\sum\beta_{i}\) is called the "generic decomposition" of \(\mathbf{d}\). We use the notation \(\beta_{ab}=\underline{\dim}M_{(a,b]}\) and \(\mathbf{d}=(d_{1},\cdots,d_{n})\).
There is a well-known formula for the generic decomposition of a dimension vector [1] which we explain with an example. Take the quiver of type \(\mathbb{A}_{9}\):
\[1\xleftarrow{\alpha_{1}}2\xleftarrow{\alpha_{2}}3\xleftarrow{\alpha_{3}}4\xleftarrow{\alpha_{4}}5\xleftarrow{\alpha_{5}}6\xrightarrow{\alpha_{6}}7\xrightarrow{\alpha_{7}}8\xrightarrow{\alpha_{8}}9 \tag{2}\]
with dimension vector \(\mathbf{d}=(3,4,1,3,2,4,3,1,3)\). To obtain the generic decomposition for \(\mathbf{d}\), we draw \(d_{i}\) spots in vertical columns as shown in (3) below.
[Diagram (3), omitted: the \(d_{i}\) spots drawn in vertical columns over the vertices \(1,\ldots,9\), aligned as described below.]
For arrows going left, such as \(3\gets 4\), \(5\gets 6\) the top spots should line up horizontally. For arrows going right, such as \(6\to 7,7\to 8\) the bottom spots should line up horizontally as shown. Consecutive spots in any row are connected by horizontal lines. For example, the spots in the first row are connected giving \(M_{(0,6]}\) but the second row of spots is connected in three strings to give \(M_{(0,2]},M_{(3,7]}\) and \(S_{9}=M_{(8,9]}\). The generic decomposition is given by these horizontal lines. Thus
\[\mathbf{d}=(3,4,1,3,2,4,3,1,3)=\beta_{06}+2\beta_{02}+\beta_{37}+2\beta_{89}+ \beta_{12}+\beta_{34}+\beta_{57}+\beta_{59}\]
is the generic decomposition of \(\mathbf{d}=(3,4,1,3,2,4,3,1,3)\).
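The row-matching rule above is a small algorithm: scan the arrows from left to right, keeping track of the horizontal strands, top-aligning rows across left arrows and bottom-aligning them across right arrows. A minimal Python sketch (the alignment rules are our reading of diagram (3)) reproduces the decomposition just stated:

```python
from collections import Counter

# d_1..d_9 and the arrow directions from quiver (2):
# alpha_1..alpha_5 point left ('L'), alpha_6..alpha_8 point right ('R').
d = [3, 4, 1, 3, 2, 4, 3, 1, 3]
arrows = ['L'] * 5 + ['R'] * 3

strands = Counter()                      # finished strands: (a, b) stands for beta_ab
live = {r: 1 for r in range(d[0])}       # row index (from the top) -> start column

for i, direction in enumerate(arrows, start=1):   # arrow between columns i and i+1
    h1, h2 = d[i - 1], d[i]
    off = 0 if direction == 'L' else h2 - h1      # bottom-alignment shifts rows
    new_live = {}
    for r, start in live.items():
        m = r + off
        if 0 <= m < h2:                  # the strand continues into column i+1
            new_live[m] = start
        else:                            # the strand ends at column i
            strands[(start - 1, i)] += 1
    for r in range(h2):                  # rows not continued start new strands
        if r not in new_live:
            new_live[r] = i + 1
    live = new_live

for r, start in live.items():            # strands reaching the last column
    strands[(start - 1, len(d))] += 1

print(dict(strands))
# {(0, 2): 2, (1, 2): 1, (3, 4): 1, (0, 6): 1, (3, 7): 1, (5, 7): 1, (5, 9): 1, (8, 9): 2}
```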
We construct this decomposition using a stability function based on (3). We explain this with two examples without proof. The purpose is to motivate continuous stability conditions.
Take real numbers \(d_{0},d_{1},\cdots,d_{n},d_{n+1}\) where \(d_{0}=d_{n+1}=0\). Extend the quiver by arrows \(\alpha_{0}\) and \(\alpha_{n}\), where \(\alpha_{i}\) connects \(i,i+1\), \(\alpha_{0}\) points in the same direction as \(\alpha_{1}\), and \(\alpha_{n}\) points in the same direction as \(\alpha_{n-1}\). To each arrow \(\alpha_{i}\) we associate the real number which is the value of \(d\) at the target minus the value of \(d\) at the source. We write this difference below the arrow if the arrow points left and above the arrow when the arrow points right. Then we compute the partial sums of the top numbers and of the bottom numbers. Let \(B,R\) denote these functions. Thus \(B(6)=0,B(7)=-1,B(8)=-3,B(9)=-1,B(10)=-4\) and \(R(0)=0,R(1)=-3\), etc., as shown in Figure 2.
The generic decomposition of \(\mathbf{d}\) is given by \(\mathbf{d}=\sum a_{i}\beta_{i}\) where the coefficient \(a_{i}\) of \(\beta_{i}=\beta_{ab}\) is the linear measure of the set of all \(c\in\mathbb{R}\) so that \(M_{(x,y]}\) is semistable with \(F(x)=c=F(y)\) and so that \(\mathbb{Z}\cap(x,y]=\{a+1,\cdots,b\}\). For example, in Figure 2, the coefficient of \(\beta_{02}\) is the measure of the vertical interval \([-3,-1]\) which is \(2\). For \(c\) in this vertical interval the horizontal line at level \(c\) goes from the red line between \(\frac{1}{3}\) and \(1\) to the blue line between \(\frac{7}{3}\) and \(3\) with blue lines above and red lines below. (We extend the red and blue functions to the interval \((0,10]\) as indicated.) We require \(R(x)\leq B(x)\) for all \(x\in\mathbb{R}\).
We interpret the stability function \(\theta\) to be the derivative of \(F\), where we consider \(R,B\) separately. So, \(\theta\) is a step function equal to \(-3,-1,3,-2,1,-2\) on the six red unit intervals between \(0\) and \(6\), and \(\theta\) is \(-1,-2,2,-3\) on the four blue intervals from \(6\) to \(10\). In addition, \(\theta\) has \(4\) times the Dirac delta function at \(6\). For example,
\[\theta(M_{(a,b]})=\int_{a}^{b}\theta(x)\,\mathrm{d}x=F(b)-F(a)=0\]
for \(a=3+\varepsilon,b=7+\varepsilon\) with \(0\leq\varepsilon\leq 1\) since \(F(a)=F(b)=-1-2\varepsilon\) in this range. However, \(F(5)=-2\) which is greater than \(-1-2\varepsilon\) for \(\varepsilon>1/2\). So, \(M_{(3+\varepsilon,7+\varepsilon]}\) is semistable only when \(0\leq\varepsilon\leq 1/2\). Taking only the integers in the interval \((3+\varepsilon,7+\varepsilon]\), we get \(M_{(3,7]}\) to be semistable.
Figure 2. The function \(F:(0,n+1]\to\mathbb{R}\) is given by the red function \(R\) on \((0,6]\) since the first \(5\) arrows point left and by the blue function \(B\) on \((6,10]\). A module \(M_{(a,b]}\) is semistable if there is a horizontal line from \((a,y)\) to \((b,y)\) so that \(R(x)\leq y\leq B(x)\) for all \(a\leq x\leq b\). “Islands” \(M_{(a,b]}\) for \(b<5\) are shaded.
### APR-tilting
For quivers of type \(\mathbb{A}_{n}\), we would like all arrows to be pointing in the same direction. We accomplish this with APR-tilting [3].
We recall that APR-tilting of a quiver \(Q\) is given by choosing a sink and reversing all the arrows pointing to that sink, making it a source in a new quiver \(Q^{\prime}\). Modules \(M\) of \(Q\) correspond to modules \(M^{\prime}\) of \(Q^{\prime}\) with the property that
\[\operatorname{Hom}(M^{\prime},N^{\prime})\oplus\operatorname{Ext}(M^{\prime},N^{\prime})\cong\operatorname{Hom}(M,N)\oplus\operatorname{Ext}(M,N)\]
for all pairs of \(\Bbbk Q\)-modules \(M,N\). This gives a bijection between exceptional sequences for \(\Bbbk Q\) and for \(\Bbbk Q^{\prime}\). However, generic modules are given by sets of ext-orthogonal modules. So, we need to modify this procedure.
In our example, we have a quiver \(Q\) with \(5\) arrows pointing left. By a sequence of APR-tilts we can make all of these point to the right. The new quiver \(Q^{\prime}\) will have all arrows pointing to the right. Any \(\Bbbk Q\)-module \(M_{(a,b]}\) with \(a\leq 5<b\) gives the \(\Bbbk Q^{\prime}\)-module \(M_{(5-a,b]}\). For example \(M_{(0,6]},M_{(3,7]},M_{(5,7]},M_{(5,9]}\) become \(M_{(5,6]},M_{(2,7]},M_{(0,7]},M_{(0,9]}\). See Figure 3. For \(a>5\), such as \(M_{(8,9]}=S_{9}\), the module is "unchanged". For \(b\leq 5\), the APR-tilt of \(M_{(a,b]}\) is \(M_{(5-b,5-a]}\). However, these are not in general ext-orthogonal to the other modules in our collection. For example, the APR-tilt of \(S_{4}=M_{(3,4]}\) is \(M_{(1,2]}=S_{2}\), which extends \(M_{(2,7]}\). So we need to shift it by \(\tau^{-1}\) to get \(\tau^{-1}M_{(5-b,5-a]}=M_{(4-b,4-a]}\). There is a problem when \(b=5\) since, in that case, \(4-b=-1\). This problem will disappear in the continuous case. We call modules \(M_{(a,b]}\) with \(b<5\)_islands_. We ignore the problem case \(b=5\). Islands are shaded in Figure 2. Shifts of their APR-tilts are shaded in Figure 3.
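The case analysis of this paragraph fits in a few lines. The sketch below (Python, with the three rules above taken as given and the source fixed at vertex \(6\), i.e. \(c=5\)) sends the list of modules from Figure 2 to the eight objects listed at the start of the next subsection:

```python
def apr_tilt(a, b, c=5):
    """Interval of the kQ'-module corresponding to M_(a,b] under the composite
    APR-tilt of this example (c = 5; the three rules are those of the text)."""
    if a <= c < b:                # modules crossing the source at 6
        return (c - a, b)
    if a > c:                     # modules supported to the right: unchanged
        return (a, b)
    if b < c:                     # "islands": tilt and shift by tau^{-1}
        return (c - 1 - b, c - 1 - a)
    raise ValueError("the problem case b = 5, ignored in the text")

figure2 = [(0, 6), (0, 2), (1, 2), (3, 4), (3, 7), (5, 7), (5, 9), (8, 9)]
print(sorted(apr_tilt(a, b) for a, b in figure2))
# [(0, 1), (0, 7), (0, 9), (2, 3), (2, 4), (2, 7), (5, 6), (8, 9)]
```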
### Clusters and cluster algebras
The components of a generic decomposition of any module form a partial tilting object since they do not extend each other. In the example shown in Figure 3, we have \(8\) objects:
\[M_{01},M_{07},M_{09},M_{23},M_{24},M_{27},M_{56},M_{89}.\]
Figure 3. This is given by APR-tilting of Figure 2. The modules \(M_{(a,b]}\) from Figure 2 with \(a\leq 5<b\) become \(M_{(5-a,b]}\) by APR-tilting. The “islands” \(M_{(a,b]}\) in Figure 2 gave \(\tau^{-1}M_{(5-b,5-a]}=M_{(4-b,4-a]}\) above (shaded).
Since the quiver \(\mathbb{A}_{9}\) has rank 9, we need one more to complete the tilting object. There are two other modules that we could add to complete this tilting object. They are \(X=M_{26}\) and \(X^{*}=M_{57}\). There are always at most two objects that will complete a tilting object with \(n-1\) components. Tilting objects are examples of clusters and, in the cluster category [5], there are always exactly two objects which complete a cluster with \(n-1\) components.
These two objects \(M_{26}\), \(M_{57}\) extend each other in the cluster category with extensions:
\[M_{57}\to M_{27}\oplus M_{56}\to M_{26}\]
and
\[M_{26}\to M_{24}\to M_{46}[1]=\tau^{-1}M_{57}[1].\]
In the cluster category, a module \(M\) over any hereditary algebra is identified with \(\tau^{-1}M[1]\). Thus, an exact sequence \(\tau^{-1}M\hookrightarrow A\twoheadrightarrow B\) gives an exact triangle \(M[-1]\to A\to B\to M\) in the cluster category since \(\tau^{-1}M=M[-1]\).
In the cluster algebra [8], which is the subalgebra of \(\mathbb{Q}(x_{1},\cdots,x_{n})\) generated by "cluster variables", we have a formula due to Caldero and Chapoton [7] which associates a cluster variable \(\chi(M)\) to every rigid indecomposable module \(M\) and, in this case, satisfies the equation:
\[\chi(X)\chi(X^{*})=\chi(M_{27})\chi(M_{56})+\chi(M_{24}) \tag{4}\]
The Caldero-Chapoton formula for the cluster character of \(M_{ab}\) for \(1<a<b<n\) with arrows going right is the sum of the inverses of exponential \(g\)-vectors of all submodules \(x^{g(M_{ib})}=x_{i}/x_{b}\) times that of the duals of their quotients \(M_{ab}/M_{ib}=M_{ai}\) (see [15]):
\[\chi(M_{ab})=\sum_{i=a}^{b}x^{-g(M_{ib})}x^{-g(DM_{ai})}=\sum_{i=a}^{b}\frac{x _{b}x_{a-1}}{x_{i}x_{i-1}}. \tag{5}\]
So, \(\chi(M_{aa})=\chi(0)=1\). When \(b=n+1\), \(M_{ab}\) is projective with support \([a,n+1)=[a,n]\). So,
\[\chi(P_{a})=\chi(M_{a,n+1})=\sum_{i=a}^{n+1}\frac{x_{a-1}}{x_{i}x_{i-1}}\]
where \(x_{n+1}=1\). This yields:
\[\chi(M_{ab})=x_{b}\chi(P_{a})-x_{a-1}\chi(P_{b+1}).\]
Then, the mutation equation (4) becomes the Plucker relation for the \(2\times 4\) matrix:
\[\begin{bmatrix}x_{1}&x_{4}&x_{6}&x_{7}\\ \chi(P_{2})&\chi(P_{5})&\chi(P_{7})&\chi(P_{8})\end{bmatrix}.\]
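Both the identity \(\chi(M_{ab})=x_{b}\chi(P_{a})-x_{a-1}\chi(P_{b+1})\) and the mutation equation (4) can be checked symbolically. Here is a short sympy sketch (the convention \(x_{n+1}=1\) is from the text; the extra \(x_{0}=1\) is our assumption for the boundary case):

```python
import sympy as sp

n = 9
x = {i: sp.Symbol(f"x{i}") for i in range(1, n + 1)}
x[0] = sp.Integer(1)       # boundary convention (assumed)
x[n + 1] = sp.Integer(1)   # x_{n+1} = 1, as in the text

def chi(a, b):
    """Cluster character of M_(a,b] from equation (5)."""
    return sum(x[b] * x[a - 1] / (x[i] * x[i - 1]) for i in range(a, b + 1))

chi_P = lambda a: chi(a, n + 1)  # chi(P_a) = chi(M_{a,n+1})

# chi(M_ab) = x_b chi(P_a) - x_{a-1} chi(P_{b+1}), e.g. for X = M_26:
assert sp.simplify(chi(2, 6) - (x[6] * chi_P(2) - x[1] * chi_P(7))) == 0

# the mutation equation (4): chi(X) chi(X*) = chi(M_27) chi(M_56) + chi(M_24)
lhs = chi(2, 6) * chi(5, 7)
rhs = chi(2, 7) * chi(5, 6) + chi(2, 4)
assert sp.simplify(lhs - rhs) == 0
print("identities verified")
```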
## 2. Continuous stability conditions
### Continuous quivers of type \(\mathbb{A}\)
Recall that in a partial order \(\preceq\), an element \(x\) is a **sink** if \(y\preceq x\) implies \(y=x\). Dually, \(x\) is a **source** if \(x\preceq y\) implies \(y=x\).
**Definition 2.1**.: Let \(\preceq\) be a partial order on \(\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,+\infty\}\) with finitely many sinks and sources such that, between sinks and sources, \(\preceq\) is either the same as the usual order or the opposite. Let \(Q=(\mathbb{R},\preceq)\) where \(\preceq\) is the same partial
order on \(\mathbb{R}\subseteq\overline{\mathbb{R}}\). We call \(Q\) a **continuous quiver of type \(\mathbb{A}\)**. We consider \(Q\) as a category where the objects of \(Q\) are the points in \(\mathbb{R}\) and
\[\operatorname{Hom}_{Q}(x,y)=\begin{cases}\{*\}&y\preceq x\\ \emptyset&\text{otherwise}.\end{cases}\]
**Definition 2.2**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). A **pointwise finite-dimensional \(Q\) module** over the field \(\Bbbk\) is a functor \(V:Q\to\operatorname{vec}(\Bbbk)\). An **interval indecomposable module** \(M_{I}\) is given by
\[M_{I}(x): =\begin{cases}\Bbbk&x\in I\\ 0&x\notin I\end{cases} M_{I}(x,y): =\begin{cases}1_{\Bbbk}&y\preceq x,\,x,y\in I\\ 0&\text{otherwise},\end{cases}\]
where \(I\subseteq\mathbb{R}\) is an interval.
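As an illustration of Definitions 2.1 and 2.2 in the simplest case, here is a small Python sketch (our own toy model, assuming a single sink at \(0\) and sources at \(\pm\infty\)) of the order \(\preceq\) and of an interval module \(M_{I}\):

```python
# A toy model of the simplest continuous type-A quiver: one sink at 0,
# sources at +-infinity (this choice of Q is our assumption for the example).
# y precedes x, i.e. Hom_Q(x, y) is nonempty, iff y lies between x and the sink.

def precedes(y, x):
    return 0 <= y <= x or x <= y <= 0

def M(in_I, x, y=None):
    """Interval module M_I: dimension at x, or the map M_I(x, y) when y is given."""
    if y is None:
        return 1 if in_I(x) else 0
    return 1 if in_I(x) and in_I(y) and precedes(y, x) else 0

in_I = lambda t: -1 <= t < 2      # the interval I = [-1, 2)
print(M(in_I, 0.5), M(in_I, 3), M(in_I, 0.5, 0.2), M(in_I, 0.5, -0.5))
# 1 0 1 0   (no map 0.5 -> -0.5: the points sit on opposite sides of the sink)
```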
By results in [4, 10] we know that every pointwise finite-dimensional \(Q\) module is isomorphic to a direct sum of interval indecomposables. In particular, this decomposition is unique up to isomorphism and permutation of summands. In [10] it is shown that the category of pointwise finite-dimensional modules is abelian, interval indecomposable modules are indecomposable, and there are indecomposable projectives \(P_{a}\) for each \(a\in\mathbb{R}\) given by
\[P_{a}(x)=\begin{cases}\Bbbk&x\preceq a\\ 0&\text{otherwise}\end{cases} P_{a}(x,y) =\begin{cases}1_{\Bbbk}&y\preceq x\preceq a\\ 0&\text{otherwise}.\end{cases}\]
These projectives are representable as functors.
**Definition 2.3**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). We say \(V\) is **representable** if there is a finite direct sum \(P=\bigoplus_{i=1}^{n}P_{a_{i}}\) and an epimorphism \(P\twoheadrightarrow V\) whose kernel is a direct sum \(\bigoplus_{j=1}^{m}P_{b_{j}}\).
By [10, Theorem 3.0.1], \(V\) is isomorphic to a finite direct sum of interval indecomposables. By results in [16], the subcategory of representable modules is abelian (indeed, a wide subcategory) but has no injectives. When \(\preceq\) is the standard total order on \(\mathbb{R}\), the representable modules are the same as those considered in [12].
**Notation 2.4**.: We denote the abelian subcategory of representable modules over \(Q\) by \(\operatorname{mod}^{\operatorname{r}}(Q)\). We denote the set of isomorphism classes of indecomposables in \(\operatorname{mod}^{\operatorname{r}}(Q)\) by \(\operatorname{Ind}^{\operatorname{r}}(Q)\).
**Definition 2.5**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\), \(s\in\overline{\mathbb{R}}\) a sink, and \(s^{\prime}\in\overline{\mathbb{R}}\) an adjacent source.
* If \(s<s^{\prime}\) and \(x\in(s,s^{\prime})\) we say \(x\) is **red** and \((s,s^{\prime})\) is **red**.
* If \(s^{\prime}<s\) and \(x\in(s^{\prime},s)\) we say \(x\) is **blue** and \((s^{\prime},s)\) is **blue**.
Let \(I\) be an interval in \(\mathbb{R}\) such that neither \(\inf I\) nor \(\sup I\) is a source. We will need to refer to the endpoints of \(I\) as being red or blue in the following way.
* If \(\inf I\) is a sink and \(\inf I\in I\) we say \(\inf I\) is **blue**.
* If \(\inf I\) is a sink and \(\inf I\notin I\) we say \(\inf I\) is **red**.
* If \(\sup I\) is a sink and \(\sup I\in I\) we say \(\sup I\) is **red**.
* If \(\sup I\) is a sink and \(\sup I\notin I\) we say \(\sup I\) is **blue**.
* If \(\inf I\) is not a sink (\(\sup I\) is not a sink) then we say \(\inf I\) (\(\sup I\)) is red or blue according to the first part of the definition.
Note that \(\inf I\) could be \(-\infty\), in which case it is red. Similarly, if \(\sup I=+\infty\) then it is blue.
**Definition 2.6**.: We say \(I\) is **left red** (respectively, **left blue**) if \(\inf I\) is red (respectively, if \(\inf I\) is blue).
We say \(I\) is **right red** (respectively, **right blue**) if \(\sup I\) is red (respectively, if \(\sup I\) is blue).
We have the following characterization of support intervals.
**Proposition 2.7**.: _Let \(I\subset\mathbb{R}\) be the support of an indecomposable representable module \(M_{I}\in\operatorname{Ind}^{r}(Q)\). Then \(\inf I\in I\) if and only if \(I\) is left blue, and \(\sup I\in I\) if and only if \(I\) is right red (both occur in the case \(I=[s,s]\) where \(s\) is a sink)._
### Half-\(\delta\) functions and red-blue function pairs
To define continuous stability conditions we need to introduce half-\(\delta\) functions. A half-\(\delta\) function \(\delta_{x}^{-}\) at \(x\in\overline{\mathbb{R}}\) has the following property. Let \(f\) be some integrable function on \([a,b]\subset\overline{\mathbb{R}}\) where \(a<x<b\). Then the following equations hold:
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{-}\right)\mathrm{d}t=\left(\int_{a}^{x}f(t )\,\mathrm{d}t\right)+1,\hskip 14.226378pt\int_{x}^{b}\left(f(t)+\delta_{x}^{-} \right)\mathrm{d}t=\int_{x}^{b}f(t)\,\mathrm{d}t.\]
The half-\(\delta\) function \(\delta_{x}^{+}\) at \(x\in\mathbb{R}\) has a similar property for an \(f\) integrable on \([a,b]\subset\overline{\mathbb{R}}\) with \(a<x<b\):
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{+}\right)\mathrm{d}t=\int_{a}^{x}f(t)\, \mathrm{d}t,\hskip 14.226378pt\int_{x}^{b}\left(f(t)+\delta_{x}^{+}\right) \mathrm{d}t=\left(\int_{x}^{b}f(t)\,\mathrm{d}t\right)+1.\]
Consider \(f+\delta_{x}^{-}-\delta_{x}^{+}\). Then we have
\[\int_{a}^{x}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\left(\int_{a}^{x}f(t)\,\mathrm{d}t\right)+1,\] \[\int_{x}^{b}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\left(\int_{x}^{b}f(t)\,\mathrm{d}t\right)-1,\] \[\int_{a}^{b}\left(f(t)+\delta_{x}^{-}-\delta_{x}^{+}\right) \mathrm{d}t =\int_{a}^{b}f(t)\,\mathrm{d}t.\]
For each \(x\in\mathbb{R}\), denote the functions
\[\Delta_{x}^{-}(z) =\int_{-\infty}^{z}\delta_{x}^{-}\,\mathrm{d}t=\begin{cases}0&z<x \\ 1&z\geq x\end{cases}\] \[\Delta_{x}^{+}(z) =\int_{-\infty}^{z}\delta_{x}^{+}\,\mathrm{d}t=\begin{cases}0&z \leq x\\ 1&z>x.\end{cases}\]
Though not technically correct, we write that a function \(f+u_{x}^{-}\Delta_{x}^{-}+u_{x}^{+}\Delta_{x}^{+}\) is from \(\mathbb{R}\) to \(\mathbb{R}\). See Figure 4 for an example. We also allow \(\delta_{+\infty}^{-}\) and \(\delta_{-\infty}^{+}\), which satisfy the relevant parts of the equations above. We don't allow the other half-\(\delta\) functions at \(\pm\infty\) because they do not make sense in terms of integration.
Our stability conditions will be equivalence classes of pairs of useful functions.
**Definition 2.8**.: We call a function \(F:\mathbb{R}\to\mathbb{R}\)**useful** if it satisfies the following.
1. \(F=f+\sum_{x\in\mathbb{R}\cup\{+\infty\}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in \mathbb{R}\cup\{-\infty\}}u_{x}^{+}\Delta_{x}^{+}\), where \(f:\mathbb{R}\to\mathbb{R}\) is a continuous function of bounded variation and each \(u_{x}^{-},u_{x}^{+}\) are in \(\mathbb{R}\).
2. The sums \(\sum_{x\in\mathbb{R}\cup\{+\infty\}}|u_{x}^{-}|\) and \(\sum_{x\in\mathbb{R}\cup\{-\infty\}}|u_{x}^{+}|\) both converge in \(\mathbb{R}\).
**Remark 2.9**.: Note Definition 2.8(2) implies the set \(\{u_{x}^{-}\mid u_{x}^{-}\neq 0\}\cup\{u_{x}^{+}\mid u_{x}^{+}\neq 0\}\) is at most countable. Combining (1) and (2) in Definition 2.8 means we think of \(F\) as having the notion of bounded variation.
We think of the value of a useful function \(F\) at \(x\) as being "the integral from \(-\infty\) to \(x\)" where the integrand is some function that includes at most countably-many half-\(\delta\) functions.
**Proposition 2.10**.: _Let \(F\) be a useful function and let \(a\in\overline{\mathbb{R}}\)._
1. _If_ \(a>-\infty\) _then_ \(\lim_{x\to a^{-}}F(x)\) _exists._
2. _If_ \(a<+\infty\) _then_ \(\lim_{x\to a^{+}}F(x)\) _exists._
3. _If_ \(a\in\mathbb{R}\) _then_ \(F(a)=\lim_{x\to a^{-}}F(x)+u_{a}^{-}\) _and_ \(F(a)+u_{a}^{+}=\lim_{x\to a^{+}}F(x)\)_._
Proof.: (1) and (2). Straightforward computations show that
\[\lim_{x\to a^{-}}F(x) =\lim_{x\to a^{-}}f(x)+\sum_{-\infty<x<a}u_{x}^{-}+\sum_{- \infty\leq x<a}u_{x}^{+} \text{if }a>-\infty\] \[\lim_{x\to a^{+}}F(x) =\lim_{x\to a^{+}}f(x)+\sum_{-\infty<x\leq a}u_{x}^{-}+\sum_{- \infty\leq x\leq a}u_{x}^{+} \text{if }a<+\infty.\]
Thus, (1) and (2) hold.
(3). By definition, we see that
\[F(a)=f(a)+\sum_{-\infty<x\leq a}u_{x}^{-}+\sum_{-\infty\leq x<a}u_{x}^{+}.\]
Thus, using (1) and (2), we see that (3) holds.
**Notation 2.11**.: Let \(F\) be a useful function. For each \(x\in\mathbb{R}\), we define
\[F_{\min}(a): =\min\{F(a),\lim_{x\to a^{-}}F(x),\lim_{x\to a^{+}}F(x)\}\] \[F_{\max}(a): =\max\{F(a),\lim_{x\to a^{-}}F(x),\lim_{x\to a^{+}}F(x)\}.\]
We also define
\[F(-\infty): =\lim_{x\to-\infty^{+}}F(x)-u_{-\infty}^{+}\qquad\quad F(+\infty): =\lim_{x\to+\infty^{-}}F(x)+u_{+\infty}^{-}\] \[F_{\min}(-\infty): =\min\{F(-\infty),\lim_{x\to-\infty^{+}}F(x)\}\] \[F_{\min}(+\infty): =\min\{F(+\infty),\lim_{x\to+\infty^{-}}F(x)\}\] \[F_{\max}(-\infty): =\max\{F(-\infty),\lim_{x\to-\infty^{+}}F(x)\}\] \[F_{\max}(+\infty): =\max\{F(+\infty),\lim_{x\to+\infty^{-}}F(x)\}.\]
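The bookkeeping in Proposition 2.10 and Notation 2.11 can be made concrete. Below is a small Python sketch (our own finite-support model: a continuous part plus finitely many half-jumps, which automatically satisfies Definition 2.8(2)):

```python
# A finite-support sketch of a "useful" function: a continuous part f plus
# finitely many half-jumps u^-, u^+ (Definition 2.8 with finite sums).
class Useful:
    def __init__(self, f, u_minus=None, u_plus=None):
        self.f = f                        # continuous part, a Python callable
        self.um = dict(u_minus or {})     # x -> u_x^-
        self.up = dict(u_plus or {})      # x -> u_x^+

    def __call__(self, a):                # F(a) = f(a) + jumps accumulated up to a
        return (self.f(a)
                + sum(u for x, u in self.um.items() if x <= a)
                + sum(u for x, u in self.up.items() if x < a))

    def left_limit(self, a):              # Proposition 2.10(3)
        return self(a) - self.um.get(a, 0)

    def right_limit(self, a):
        return self(a) + self.up.get(a, 0)

    def fmin(self, a):                    # Notation 2.11
        return min(self(a), self.left_limit(a), self.right_limit(a))

    def fmax(self, a):
        return max(self(a), self.left_limit(a), self.right_limit(a))

# Example: one half-jump of each kind at 0 on top of f(x) = x.
F = Useful(lambda x: x, u_minus={0: 1}, u_plus={0: -2})
print(F.left_limit(0), F(0), F.right_limit(0), F.fmin(0), F.fmax(0))
# 0 1 -1 -1 1
```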
**Definition 2.12**.: Let \(F\) be a useful function. We define the **graph**\(\mathcal{G}(F)\) of \(F\) to be the following subset of \(\mathbb{R}^{2}\):
\[\left\{(x,y)\,|\,x\in\mathbb{R},\ F_{\min}(x)\leq y\leq F_{\max}(x)\right\}\,.\]
The **completed graph**, denoted \(\overline{\mathcal{G}(F)}\) of \(F\) is the following subset of \(\overline{\mathbb{R}}\times\mathbb{R}\):
\[\left\{(x,y)\,\big{|}\,x\in\overline{\mathbb{R}},\ F_{\min}(x)\leq y\leq F_{ \max}(x)\right\}\,.\]
**Remark 2.13**.: Let \(F=f+\sum_{x\in\mathbb{R}\cup\{+\infty\}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in \mathbb{R}\cup\{-\infty\}}u_{x}^{+}\Delta_{x}^{+}\) be a useful function. For any \(a\leq b\in\mathbb{R}\) there exists \(c\leq d\in\mathbb{R}\), such that \(\mathcal{G}(F)\cap([a,b]\times\mathbb{R})=\mathcal{G}(F)\cap([a,b]\times[c,d])\).
We now define red-blue function pairs, which are used to define equivalence classes of pairs of useful functions. The red-blue function pairs are analogs of the red and blue functions from Section 1.
**Definition 2.14**.: Let \(R=r+\sum_{x\in\mathbb{R}}u_{x}^{-}\Delta_{x}^{-}+\sum_{x\in\mathbb{R}}u_{x}^{ +}\Delta_{x}^{+}\) and let \(B=b+\sum_{x\in\mathbb{R}}v_{x}^{-}\Delta_{x}^{-}+\sum_{x\in\mathbb{R}}v_{x}^{ +}\Delta_{x}^{+}\) be useful functions. We say the pair \((R,B)\) is a **red-blue function pair** if the following criteria are satisfied.
1. For all \(x\in\mathbb{R}\), we have \(R_{\max}(x)=R(x)\) and \(B_{\min}(x)=B(x)\).
2. If \(s\) is a source, \(u_{s}^{-}=u_{s}^{+}=v_{s}^{-}=v_{s}^{+}=0\).
3. For all \(x\in\overline{\mathbb{R}}\), \[R(x)\leq B_{\max}(x)\qquad\qquad\text{ and }\qquad\qquad R_{\min}(x)\leq B(x).\]
4. We have \(R(-\infty)=B(-\infty)\) and \(R(+\infty)=B(+\infty)\).
5. The useful function \(R\) is constant on blue intervals. That is: for \(s\leq x<y<s^{\prime}\) in \(\mathbb{R}\) where \((s,s^{\prime})\) is blue, we have \(r(x)=r(y)\) and \(u_{y}^{-}=u_{y}^{+}=0\).
6. The useful function \(B\) is constant on red intervals. That is: for \(s<x<y\leq s^{\prime}\) in \(\mathbb{R}\) where \((s,s^{\prime})\) is red, we have \(b(x)=b(y)\) and \(v_{x}^{-}=v_{x}^{+}=0\).
**Lemma 2.15**.: _Let \((R,B)\) be a red-blue function pair._
1. _For any_ \(a\leq b\) _and_ \(c\leq d\) _in_ \(\mathbb{R}\)_, the set_ \(\mathcal{G}(R)\cap([a,b]\times[c,d])\) _is closed in_ \(\mathbb{R}^{2}\)_._
2. _For any_ \(a\leq b\in\mathbb{R}\) _the useful function_ \(R\) _has a local maximum on_ \([a,b]\) _in the sense that there exists_ \(x\in[a,b]\) _such that for all_ \(y\in[a,b]\)_:_ \(R_{\max}(y)\leq R_{\max}(x)\)_._
3. _For any_ \(a\leq b\in\mathbb{R}\) _the useful function_ \(R\) _has a local minimum on_ \([a,b]\) _in the sense that there exists_ \(x\in[a,b]\) _such that for all_ \(y\in[a,b]\)_:_ \(R_{\min}(x)\leq R_{\min}(y)\)_._
_Statements (1)-(3) are true when we replace \(R\), \(r\), and \(u\) with \(B\), \(b\), and \(v\), respectively._
Proof.: We first prove (1) for \(R\), as the proof for \(B\) is identical. Let \(\{(x_{i},y_{i})\}\) be a sequence in \(\mathcal{G}(R)\cap([a,b]\times[c,d])\) that converges to \((w,z)\). Since \(\{x_{i}\}\) converges to \(w\) we assume, without loss of generality, that \(\{x_{i}\}\) is monotonic. If there exists \(i\in\mathbb{N}\) such that \(x_{i}=w\) then, by monotonicity, \(x_{j}=w\) for all \(j\geq i\), so \(z=\lim y_{j}\in[R_{\min}(w),R_{\max}(w)]\) and \((w,z)\in\mathcal{G}(R)\). Thus, assume \(x_{i}\neq w\) for all \(i\in\mathbb{N}\).
Without loss of generality, assume \(\{x_{i}\}\) is increasing. The decreasing case is similar. Since \(\sum_{x\in\mathbb{R}}|u_{x}^{-}|+|u_{x}^{+}|<\infty\), we know that
\[\lim_{i\to\infty}|R_{\max}(x_{i})-R_{\min}(x_{i})|=0.\]
And so,
\[\lim_{i\to\infty}R_{\max}(x_{i})=\lim_{i\to\infty}R_{\min}(x_{i})=\lim_{i\to\infty}R(x_{i})=\lim_{x\to w^{-}}R(x).\]
Then we must have
\[\lim_{i\to\infty}y_{i}=\lim_{x\to w^{-}}R(x).\]
Therefore, \((w,z)\in\mathcal{G}(R)\).
Next, we only prove (2) for \(R\) as the remaining proofs are similar and symmetric. By Remark 2.13 there exists \(c\leq d\in\mathbb{R}\) such that
\[\mathcal{G}(R)\cap([a,b]\times\mathbb{R})=\mathcal{G}(R)\cap([a,b]\times[c,d]).\]
Then let \(d_{0}\geq c\) be the greatest lower bound of all \(d\) such that the equation above holds. Since \(\mathcal{G}(R)\cap([a,b]\times[c,d_{0}])\) must be closed by part (1), there must be a point \((x,d_{0})\in\mathcal{G}(R)\) with \(a\leq x\leq b\). This is the desired \(x\).
### Stability conditions
**Definition 2.16**.: Let \((R,B)\) and \((R^{\prime},B^{\prime})\) be red-blue function pairs. We say \((R,B)\) and \((R^{\prime},B^{\prime})\) are **equivalent** if there exists a constant \(\mathfrak{c}\in\mathbb{R}\) such that, for all \(x\in\overline{\mathbb{R}}\) and \(y\in\mathbb{R}\), we have
\[(x,y)\in\overline{\mathcal{G}(R)}\text{ if and only if }(x,y+\mathfrak{c})\in \overline{\mathcal{G}(R^{\prime})}\]
and
\[(x,y)\in\overline{\mathcal{G}(B)}\text{ if and only if }(x,y+\mathfrak{c})\in \overline{\mathcal{G}(B^{\prime})}.\]
A **stability condition on \(Q\)**, denoted \(\sigma\), is an equivalence class of red-blue function pairs. We denote by \(\mathcal{S}(Q)\) the set of stability conditions on \(Q\).
We now define the **modified** versions of a continuous quiver \(Q\) of type \(\mathbb{A}\), of an interval \(I\) determining a module \(M_{I}\) in \(\operatorname{Ind}^{\mathrm{r}}(Q)\), and of graphs of red-blue function pairs. This makes it easier to check whether or not an indecomposable module is semistable with respect to a particular stability condition.
**Definition 2.17**.:
1. Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources. We define a totally ordered set \(\widehat{Q}\), called the **modified quiver** of \(Q\), in the following way. First we define the elements.
* For each \(x\in\mathbb{R}\) that is neither a sink nor a source of \(Q\), \(x\in\widehat{Q}\).
* If \(s\in\mathbb{R}\) is a source of \(Q\), then \(s\notin\widehat{Q}\).
* If \(s\in\mathbb{R}\) is a sink of \(Q\), then \(s_{-},s_{+}\in\widehat{Q}\). These are distinct elements, neither of which is in \(\mathbb{R}\).
* If \(-\infty\) (respectively, \(+\infty\)) is a sink then \(-\infty_{+}\in\widehat{Q}\) (respectively, \(+\infty_{-}\in\widehat{Q}\)).
Now, the total order on \(\widehat{Q}\) is defined in the following way. Let \(x,y\in\widehat{Q}\).
* Suppose \(x,y\in\mathbb{R}\cap\widehat{Q}\). Then \(x\leq y\) in \(\widehat{Q}\) if and only if \(x\leq y\) in \(\mathbb{R}\).
* Suppose \(x\in\mathbb{R}\) and \(y=s_{\pm}\), for some sink \(s\) of \(Q\) in \(\mathbb{R}\). If \(x<s\) in \(\mathbb{R}\) then \(x<y\) in \(\widehat{Q}\). If \(s<x\) in \(\mathbb{R}\) then \(y<x\) in \(\widehat{Q}\).
* Suppose \(x=s_{\varepsilon}\) and \(y=s^{\prime}_{\varepsilon^{\prime}}\) for two sinks \(s,s^{\prime}\) of \(Q\) in \(\mathbb{R}\). We consider \(-<+\). Then \(x\leq y\) if and only if (i) \(s<s^{\prime}\) or (ii) \(s=s^{\prime}\) and \(\varepsilon\leq\varepsilon^{\prime}\).
* If \(x=-\infty_{+}\in\widehat{Q}\) (respectively, \(y=+\infty_{-}\in\widehat{Q}\)), then \(x\) is the minimal element (respectively, \(y\) is the maximal element) of \(\widehat{Q}\).

Finally, we color the elements of \(\widehat{Q}\). If \(s\in\mathbb{R}\) is a sink of \(Q\) then we say \(s_{-}\) is blue and \(s_{+}\) is red. If \(-\infty_{+}\in\widehat{Q}\) we say \(-\infty_{+}\) is blue. All other \(x\in\widehat{Q}\) are red (respectively, blue) if and only if \(x\in\mathbb{R}\) is red (respectively, blue). (A concrete encoding of this total order is sketched after the definition.)
2. Let \(I\) be an interval of \(\mathbb{R}\) such that \(M_{I}\) is in \(\operatorname{Ind}^{\mathrm{r}}(Q)\). The **modified interval** \(\widehat{I}\) of \(\widehat{Q}\) has minimum given by the following conditions.
* If \(\inf I\) is neither \(-\infty\) nor a sink of \(Q\) then \(\min\widehat{I}=\inf I\).
* If \(\inf I\) is a sink \(s\) of \(Q\) then (i) \(\min\widehat{I}=s_{-}\) if \(\inf I\in I\) or (ii) \(\min\widehat{I}=s_{+}\) if \(\inf I\notin I\).
* If \(\inf I=-\infty\) then \(\min\widehat{I}=-\infty_{+}\).

The maximal element of \(\widehat{I}\) is defined similarly.
3. Let \((R,B)\) be a red-blue function pair. The **modified graph** of \((R,B)\), denoted \(\widehat{\mathcal{G}}(R,B)\), is a subset of \(\widehat{Q}\times\mathbb{R}\). It is defined as follows.
* For each \(x\in\mathbb{R}\) that is neither a sink nor a source of \(Q\) and each \(y\in\mathbb{R}\), we have \((x,y)\in\widehat{\mathcal{G}}(R,B)\) if and only if either \((x,y)\in\mathcal{G}(B)\) and \(x\) is blue, or \((x,y)\in\mathcal{G}(R)\) and \(x\) is red.
* For each sink \(s\in\mathbb{R}\) of \(Q\) and each \(y\in\mathbb{R}\), \[(s_{-},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(s,y)\in\mathcal{G}(B)\] \[(s_{+},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(s,y)\in\mathcal{G}(R).\]
* If \(-\infty_{+}\in\widehat{Q}\), then for all \(y\in\mathbb{R}\), \[(-\infty_{+},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(-\infty,y)\in\overline{\mathcal{G}(R)}.\]
* If \(+\infty_{-}\in\widehat{Q}\), then for all \(y\in\mathbb{R}\), \[(+\infty_{-},y)\in\widehat{\mathcal{G}}(R,B)\text{ if and only if }(+\infty,y)\in\overline{\mathcal{G}(B)}.\]
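To make the total order of Definition 2.17(1) concrete, the following Python sketch (our illustration only, not part of the construction) encodes each element of \(\widehat{Q}\) as a pair and compares pairs lexicographically; the encoding works because a sink \(s\) contributes the two elements \(s_{-},s_{+}\) and no element \((s,0)\).

```python
from math import inf

# A minimal sketch of the total order on the modified quiver Q-hat
# (Definition 2.17(1)).  Elements are encoded as pairs (value, eps):
#   an ordinary point x (neither sink nor source)  ->  (x, 0)
#   a sink s, split into s_- and s_+               ->  (s, -1) and (s, +1)
#   -infinity_+ and +infinity_-                    ->  (-inf, +1) and (inf, -1)
# Lexicographic comparison of these pairs realizes the order in the definition.

def pt(x):          return (x, 0)
def sink_minus(s):  return (s, -1)   # the blue copy s_-
def sink_plus(s):   return (s, +1)   # the red copy s_+

elements = [pt(2.5), sink_plus(0.0), pt(-3.0), (inf, -1), sink_minus(0.0), (-inf, +1)]
print(sorted(elements))
# [(-inf, 1), (-3.0, 0), (0.0, -1), (0.0, 1), (2.5, 0), (inf, -1)]
```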
The following proposition follows from straightforward checks.
**Proposition 2.18**.: _There is a bijection between \(\operatorname{Ind}^{\mathrm{r}}(Q)\) and intervals of \(\widehat{Q}\) with distinct minimal and maximal elements._
Using the modified definitions, we now define what it means to be semistable.
**Definition 2.19**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, \(\sigma\in\mathcal{S}(Q)\), and \((R,B)\) be a representative of \(\sigma\).
We say an indecomposable module \(M_{I}\) in \(\operatorname{Ind}^{\mathrm{r}}(Q)\) is \(\boldsymbol{\sigma}\)**-semistable** if there exists a horizontal line \(\ell=\widehat{I}\times\{h\}\subset\widehat{Q}\times\mathbb{R}\) satisfying the following conditions.
1. The endpoints of \(\ell\) touch \(\widehat{\mathcal{G}}(R,B)\). That is, \((\min\widehat{I},h),(\max\widehat{I},h)\in\widehat{\mathcal{G}}(R,B)\).
2. The line \(\ell\) may touch but not cross \(\widehat{\mathcal{G}}(R,B)\). That is, for each \(x\in\widehat{I}\) such that \(x\notin\{\max\widehat{I},\min\widehat{I}\}\), we have \[R_{\max}(x)\leq h\leq B_{\min}(x),\] where if \(x=s_{\pm}\) then \(R_{\max}(x)=R_{\max}(s)\) and \(B_{\min}(x)=B_{\min}(s)\).
**Remark 2.20**.: Notice that \(M_{I}\) is \(\sigma\)-semistable whenever the following are satisfied:
* We have \([F_{\min}(\inf I),F_{\max}(\inf I)]\cap[{F^{\prime}}_{\min}(\sup I),{F^{\prime}} _{\max}(\sup I)]\neq\emptyset\), where \(F\) is \(R\) if \(\inf I\) is red and is \(B\) if \(\inf I\) is blue and similarly for \(F^{\prime}\) and \(\sup I\).
* For all submodules \(M_{J}\) of \(M_{I}\), \({F^{\prime}}_{\min}(\sup J)\leq F_{\min}(\inf J)\), where \(F,\inf J\) and \(F^{\prime},\sup J\) are similar to the previous point.
Thus, this is a continuous analogue of the semistability condition in the finite case.
**Definition 2.21**.: Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, \(\sigma\in\mathcal{S}(Q)\), and \((R,B)\) be a representative of \(\sigma\).
We say \(\sigma\) satisfies the **four point condition** if, for any \(\sigma\)-semistable module \(M_{I}\), we have \(|(\widehat{I}\times\{h\})\cap\widehat{\mathcal{G}}(R,B)|\leq 3\), where \(\widehat{I}\times\{h\}\) is as in Definition 2.19. We denote the set of stability conditions that satisfy the four point condition by \(\mathcal{S}_{\mathrm{fpc}}(Q)\).
Recall Definition 2.5.
**Lemma 2.22**.: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources and let \(M_{I}\), \(M_{J}\) be indecomposables in \(\mathrm{Ind}^{r}(Q)\). Let \(a=\inf I\), \(b=\sup I\), \(c=\inf J\), and \(d=\sup J\). Then \(\mathrm{Ext}^{1}(M_{J},M_{I})\cong\Bbbk\cong\mathrm{Hom}(M_{I},M_{J})\) if and only if one of the following hold:_
* \(a<c<b<d\)_, and_ \(b,c\) _are red;_
* \(c<a<d<b\)_, and_ \(a,d\) _are blue;_
* \(c<a\leq b<d\)_, and_ \(a\) _is blue, and_ \(b\) _is red; or_
* \(a<c<d<b\)_, and_ \(c\) _is red, and_ \(d\) _is blue._
Proof.: It is shown in [10] that \(\mathrm{Hom}\) and \(\mathrm{Ext}\) between indecomposables must be \(0\) or \(1\) dimensional.
Since \(\mathrm{Hom}(M_{I},M_{J})\neq 0\) we obtain one of the items in the list where the first or last inequality may not be strict. Since \(\mathrm{Ext}(M_{J},M_{I})\neq 0\) we see all the inequalities must be strict.
The itemized list implies \(\mathrm{Hom}(M_{I},M_{J})\neq 0\). Then there is a short exact sequence \(M_{I}\hookrightarrow M_{I\cup J}\oplus M_{I\cap J}\twoheadrightarrow M_{J}\) and so \(\mathrm{Ext}(M_{J},M_{I})\neq 0\).
**Definition 2.23**.: Let \(M_{I}\) and \(M_{J}\) be indecomposables in \(\mathrm{Ind}^{r}(Q)\) for some continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources. We say \(M_{I}\) and \(M_{J}\) are \(\mathbf{N}_{\pi}\)**-compatible** if both of the following are true:
\[\dim_{\Bbbk}(\mathrm{Ext}(M_{J},M_{I})\oplus\mathrm{Hom}(M_{I},M_ {J})) \leq 1\] \[\dim_{\Bbbk}(\mathrm{Ext}(M_{I},M_{J})\oplus\mathrm{Hom}(M_{J},M _{I})) \leq 1.\]
One can verify this is equivalent to Igusa and Todorov's compatibility condition in [12] when \(Q\) has the straight descending orientation.
In terms of colors and set operations, \(\mathbf{N}_{\pi}\)-compatibility can be expressed as follows.
**Lemma 2.24**.: \(M_{I}\) _and \(M_{J}\) are \(\mathbf{N}_{\pi}\)-compatible if and only if one of the following is satisfied._
1. \(I\cap J=\emptyset\)_,_
2. \(I\subset J\) _and_ \(J\setminus I\) _is connected, or vice versa,_
3. \(I\subset J\) _and both endpoints of_ \(I\) _are the same color, or vice versa,_
4. \(I\cap J\neq I\)_,_ \(I\cap J\neq J\)_, and_ \(I\cap J\) _has endpoints of opposite color._
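Lemma 2.24 is directly machine checkable. The following Python sketch (ours, purely illustrative) implements the criterion under the simplifying assumption that all four endpoints are distinct real numbers, so that case (2) of the lemma, which requires a shared endpoint, does not arise.

```python
# Check N_pi-compatibility of two interval modules, following Lemma 2.24.
# An interval is encoded as (a, b, ca, cb): endpoints a < b with colors
# ca, cb in {"red", "blue"}.  All four endpoints are assumed distinct.

def n_pi_compatible(I, J):
    a1, b1, ca1, cb1 = I
    a2, b2, ca2, cb2 = J
    if b1 < a2 or b2 < a1:        # (1) disjoint supports
        return True
    if a2 < a1 and b1 < b2:       # (3) I nested in J:
        return ca1 == cb1         #     endpoints of I must share a color
    if a1 < a2 and b2 < b1:       # (3) J nested in I
        return ca2 == cb2
    if a1 < a2 < b1 < b2:         # (4) crossing: I cap J = (a2, b1) needs
        return ca2 != cb1         #     endpoints of opposite color
    if a2 < a1 < b2 < b1:         # (4) crossing the other way
        return ca1 != cb2
    return True

# Crossing intervals whose overlap has two red endpoints are incompatible,
# matching the first case of Lemma 2.22:
print(n_pi_compatible((0.0, 2.0, "blue", "red"), (1.0, 3.0, "red", "blue")))  # False
```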
**Theorem 2.25**.: _Let \(\sigma\in\mathcal{S}(Q)\). The following are equivalent._
* \(\sigma\in\mathcal{S}_{\mathit{fpc}}(Q)\)_._
* _The set of_ \(\sigma\)_-semistable indecomposables is maximally_ \(\mathbf{N}_{\pi}\)_-compatible._
Proof.: Let \((R,B)\) be a representative of \(\sigma\).
\(\Leftarrow\)**.** We prove the contrapositive. Suppose \(\sigma\) does not satisfy the four point condition. Then there are \(a<b<c<d\) in \(\widehat{Q}\) that determine indecomposable modules \(M_{a,b}\), \(M_{a,c}\), \(M_{a,d}\), \(M_{b,c}\), \(M_{b,d}\), \(M_{c,d}\). Here, the notation \(M_{x,y}\) means the interval indecomposable with interval \(I\) such that \(\min\widehat{I}=x\) and \(\max\widehat{I}=y\). Using Lemma 2.24 we see that at least two of these modules must fail to be \(\mathbf{N}_{\pi}\)-compatible with each other.
\(\Rightarrow\)**.** Now suppose \(\sigma\) satisfies the four point condition. By Lemma 2.24 we see that the set of \(\sigma\)-semistable indecomposables is \(\mathbf{N}_{\pi}\)-compatible. We now check maximality.
Let \(M_{J}\) be an indecomposable in \(\operatorname{Ind}^{\mathrm{r}}(Q)\) such that \(M_{J}\) is not \(\sigma\)-semistable. Recall left and right colors from Definition 2.6. There are four cases depending on whether \(J\) is left red or left blue and whether \(J\) is right red or right blue. However, the case where \(J\) is both left and right red is similar to the case where \(J\) is both left and right blue. Furthermore, the case where \(J\) is left red and right blue is similar to the case where \(J\) is left blue and right red. Thus we reduce to two cases where \(J\) is left red: either (1) \(J\) is right blue or (2) \(J\) is right red. (Notice the case where \(M_{J}\) is a simple projective \(M_{[s,s]}\) is similar to the case where \(J\) is left red and right blue.)
**Case (1)**. Since \(M_{J}\) is not \(\sigma\)-semistable, we first consider the case that \(M_{J}\) fails Definition 2.19(1) but satisfies Definition 2.19(2). Notice that, in this case, it is not possible that \(\inf J=-\infty\) or \(\sup J=+\infty\). Since \(M_{J}\) is left red, right blue, and fails Definition 2.19(1), we must have \(R_{\max}(\inf J)<B_{\min}(\sup J)\). Otherwise, we could create a horizontal line segment in \(\widehat{Q}\times\mathbb{R}\) satisfying Definition 2.19(1). Choose \(\varepsilon\) such that \(0<\varepsilon<B_{\min}(\sup J)-R_{\max}(\inf J)\). Let
\[\ell=\widehat{Q}\times\{R_{\max}(\inf J)+\varepsilon\}.\]
By Lemma 2.15(1), there exist \(w<\min\widehat{J}\) and \(z>\max\widehat{J}\) in \(\widehat{Q}\) such that the module \(M_{I}\) corresponding to \([w,z]\subset\widehat{Q}\) (Proposition 2.18) is \(\sigma\)-semistable. Since \(M_{J}\) is nested in \(M_{I}\) and the endpoints of \(J\) have different colors, \(M_{I}\) and \(M_{J}\) are not \(\mathbf{N}_{\pi}\)-compatible by Lemma 2.24.
Now suppose \(M_{J}\) does not satisfy Definition 2.19(2). First suppose there exists \(x\in J\) such that \(R_{\max}(x)>R_{\max}(\inf J)\). We extend the argument of the proof of Lemma 2.15 to show that \(\overline{\mathcal{G}(R)}\) must have global maxima in the following sense. There is some set \(X\) such that, for all \(x\in X\) and \(y\notin X\), we have \(R_{\max}(y)<R_{\max}(x)\) and, for each \(x,x^{\prime}\in X\), we have \(R_{\max}(x)=R_{\max}(x^{\prime})\). In particular, there is \(z\in\widehat{Q}\) such that \(\min\widehat{J}<z<\max\widehat{J}\) and for all \(x\) such that \(\min\widehat{J}\leq x<z\) we have \(R_{\max}(x)<R_{\max}(z)\). If there is \(x\in[\min\widehat{J},z]\) such that \(B_{\min}(x)<R_{\max}(z)\) then there is \(w\in[\min\widehat{J},z]\) such that the module \(M_{I}\) corresponding to \([w,z]\) is \(\sigma\)-semistable. In particular, \(M_{I}\) is left blue and right red. By Lemma 2.24 we see that \(M_{J}\) and \(M_{I}\) are not \(\mathbf{N}_{\pi}\)-compatible. If no such \(x\in[\min\widehat{J},z]\) exists then there
is a \(w<\min\widehat{J}\) such that the module \(M_{I}\) corresponding to \([w,z]\) is \(\sigma\)-semistable. Since \(M_{I}\) is right red we again use Lemma 2.24 and see that \(M_{I}\) and \(M_{J}\) are not \(\mathbf{N}_{\pi}\)-compatible.
**Case (2)**. If \(M_{J}\) satisfies Definition 2.19(2) but fails Definition 2.19(1), then the function \(R_{\max}(x)\) must be monotonic. If \(R_{\max}(x)\) is decreasing then let \(x^{\prime}=\inf J+\varepsilon\) be red. By Lemma 2.15(1) we can find some \(\widehat{I}\) with left endpoint \(x^{\prime}\) and blue right endpoint \(y^{\prime}\) such that \(y^{\prime}>\sup J\) and \(M_{I}\) is \(\sigma\)-semistable. By Lemma 2.24, \(M_{I}\) and \(M_{J}\) are not \(\mathbf{N}_{\pi}\)-compatible. A similar argument holds if \(R_{\max}(x)\) is monotonically increasing.
Now suppose \(M_{J}\) fails Definition 2.19(2). The argument for the second half of Case (1) does not depend on whether \(J\) is right red or right blue. Therefore, the theorem is true.
Let \(T\) and \(T^{\prime}\) be maximally \(\mathbf{N}_{\pi}\)-compatible sets. We call a bijection \(\mu:T\to T^{\prime}\) a **mutation** if \(T^{\prime}=(T\setminus\{M_{I}\})\cup\{M_{J}\}\), for some \(M_{I}\in T\) and \(M_{J}\in T^{\prime}\), and \(\mu(M_{K})=M_{K}\) for all \(K\neq I\). (Then \(\mu(M_{I})=M_{J}\).)
## 3. Continuous tilting
We construct a continuous version of tilting. Consider a stability condition \(\sigma\) on a continuous quiver \(Q\) of type \(\mathbb{A}\) where \(-\infty\) is a sink and \(s\) is either the smallest source or a real number less than the smallest source. Then continuous tilting at \(s\) will replace the red interval \(K=[-\infty,s)\) with the blue interval \(K^{*}=(-\infty,s]\) and keep the rest of \(Q\) unchanged; we denote the resulting quiver by \(Q^{\prime}\). Thus, \(\widehat{Q}=K\coprod Z\) is replaced with \(\widehat{Q}^{\prime}=K^{*}\coprod Z\). We have an order reversing bijection \(\mathfrak{t}:K\to K^{*}\) given by
\[\mathfrak{t}(x)=\tan\left(\tan^{-1}s-\tan^{-1}x-\frac{\pi}{2}\right).\]
This extends, by the identity on \(Z\), to a bijection \(\overline{\mathfrak{t}}:\widehat{Q}\to\widehat{Q}^{\prime}\).
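The formula for \(\mathfrak{t}\) is easy to test numerically. The following Python sketch (our illustration; the value \(s=1\) is an arbitrary choice) checks on sample points that \(\mathfrak{t}\) is order reversing and that applying the same formula twice returns the original point, so \(\mathfrak{t}\) is an involution between \(K\) and \(K^{*}\).

```python
from math import atan, tan, pi, isclose

s = 1.0  # tilting point; any real at most the smallest source works

def t(x):
    # The order-reversing bijection t : [-inf, s) -> (-inf, s].
    return tan(atan(s) - atan(x) - pi / 2)

xs = [-5.0, -1.0, 0.0, 0.9]
ys = [t(x) for x in xs]
assert all(ys[i] > ys[i + 1] for i in range(len(ys) - 1))   # order reversing
assert all(isclose(t(t(x)), x, abs_tol=1e-6) for x in xs)   # involution
print(ys)   # roughly [0.66, 0.0, -1.0, -19.0]
```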
### Compatibility conditions
We start with the continuous compatibility conditions for representable modules over the real line. Given a continuous quiver \(Q\) of type \(\mathbb{A}\), we consider intervals \(I\) in \(\mathbb{R}\). Let \(M_{I}\) denote the indecomposable module with support \(I\). We say that \(I\) is **admissible** if \(M_{I}\) is representable. It is straightforward to see that \(I\) is admissible if and only if the following hold.
1. \(\inf I\in I\) if and only if it is blue, and
2. \(\sup I\in I\) if and only if it is red.
By Definition 2.3, neither endpoint of \(I\) can be a source. When \(I=[s,s]\) for a sink \(s\), \(\widehat{I}=[s_{-},s_{+}]\). We use notation to state this concisely: for any \(a<b\in\widehat{Q}\), let \(\widehat{I}(a,b)\) be the unique admissible interval in \(\mathbb{R}\) with endpoints \(a,b\). Thus \(a\in\widehat{I}(a,b)\) if and only if \(a\) is blue, and \(b\in\widehat{I}(a,b)\) if and only if \(b\) is red. (Recall that every element of \(\widehat{Q}\) is colored red or blue.)
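These conventions are mechanical, so the following Python sketch (ours, illustrative only) simply prints the interval \(\widehat{I}(a,b)\) determined by the colors of its endpoints.

```python
# Endpoint membership for an admissible interval I-hat(a, b):
# the left endpoint is included iff it is blue, the right iff it is red.
def hat_I(a, b, color_a, color_b):
    left = "[" if color_a == "blue" else "("
    right = "]" if color_b == "red" else ")"
    return f"{left}{a}, {b}{right}"

print(hat_I(0, 1, "blue", "red"))   # [0, 1]
print(hat_I(0, 1, "red", "blue"))   # (0, 1)
```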
Recall that for each \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\), the set of \(\sigma\)-semistable modules form a maximally \(\mathbf{N}_{\pi}\)-compatible set (Theorem 2.25).
### Continuous tilting on modules
**Lemma 3.1**.:
1. _Continuous tilting gives a bijection between admissible intervals_ \(I=\widehat{I}(a,b)\) _for_ \(Q\) _and admissible intervals_ \(I^{\prime}\) _for_ \(Q^{\prime}\) _given by_ \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b))\) _if_ \(\overline{\mathfrak{t}}(a)<\overline{\mathfrak{t}}(b)\) _in_ \(\widehat{Q}^{\prime}\) _and_ \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) _if_ \(\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\)_._
2. _Furthermore,_ \(M_{I},M_{J}\) _are_ \(\mathbf{N}_{\pi}\)_-compatible for_ \(Q\) _if and only if_ \(M_{I^{\prime}},M_{J^{\prime}}\) _are_ \(\mathbf{N}_{\pi}\)_-compatible for_ \(Q^{\prime}\)_._
For each admissible interval \(I\) for \(Q\), denote by \(\phi(M_{I})\) the module \(M_{I^{\prime}}\), where \(I^{\prime}\) is the admissible interval of \(Q^{\prime}\) obtained from \(I\) by continuous tilting.
Lemma 3.1 immediately implies the following.
**Theorem 3.2**.: _Continuous tilting gives a bijection \(\Phi\) between maximal compatible sets of representable indecomposable modules over \(Q\) and those of \(Q^{\prime}\). Furthermore if \(\mu:T\to T^{\prime}\) is a mutation then so is \(\Phi(\mu):\Phi T\to\Phi T^{\prime}\) given by \(\phi(M_{I})\mapsto\phi(\mu(M_{I}))\)._
Proof of Lemma 3.1.: (a) Since \(\overline{\mathfrak{t}}:\widehat{Q}\to\widehat{Q}^{\prime}\) is a bijection and \(\widehat{I}(a,b)\) is admissible by notation, we get a bijection with admissible \(Q^{\prime}\) intervals by definition.
(b) Suppose that \(I=\widehat{I}(a,b)\) and \(J=\widehat{I}(c,d)\) with \(a\leq c\) by symmetry. We use Lemma 2.24 to check \(\mathbf{N}_{\pi}\)-compatibility. For this proof, we say "\(I\) and \(J\) are compatible" to mean "\(M_{I}\) and \(M_{J}\) are \(\mathbf{N}_{\pi}\)-compatible".
1. If \(a,b,c,d\) are not distinct then \(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b),\overline{\mathfrak{t} }(c),\overline{\mathfrak{t}}(d)\) are also not distinct. So, \(I,J\) are compatible for \(Q\) and \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\) in this case. So, suppose \(S=\{a,b,c,d\}\) has size \(|S|=4\).
2. If \(S\cap K=\emptyset\) then \(I,J\subset Z\). So, \(I^{\prime}=I\) and \(J^{\prime}=J\) are compatible for \(Q^{\prime}\) if and only if \(I,J\) are compatible for \(Q\).
3. If \(|S\cap K|=1\) then \(S\cap K=\{a\}\). Then \(\overline{\mathfrak{t}}\) does not change the order of \(a,b,c,d\) and does not change the colors of \(b,c,d\). So, \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
4. If \(|S\cap K|=2\) there are three cases: (a) \(a<b<c<d\), (b) \(a<c<b<d\), or (c) \(a<c<d<b\). If \(I,J\) are in case (a) then so are \(I^{\prime},J^{\prime}\) and both pairs are compatible. If \(I,J\) are in case (b) then \(I^{\prime},J^{\prime}\) are in case (c) and vice versa. Since the colors of \(a,c\) change in both cases (from red to blue), \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
5. If \(|S\cap K|=3\) there are the same three cases as in case (4). If \(I,J\) are in case (a), then \(I^{\prime},J^{\prime}\) are in case (c) and vice versa. Since the middle two vertices are the same color, both pairs are compatible. If \(I,J\) are in case (b) then so are \(I^{\prime},J^{\prime}\) and both pairs are not compatible.
6. If \(S\subset K\) then \(a,b,c,d\) reverse order and all become blue. So, \(I,J\) are compatible if and only if they are in cases (a) or (c) and \(I^{\prime},J^{\prime}\) are in the same case and are also compatible.
In all cases, \(I,J\) are compatible for \(Q\) if and only if \(I^{\prime},J^{\prime}\) are compatible for \(Q^{\prime}\).
We can relate continuous tilting to cluster theories, introduced by the authors and Todorov in [11].
**Definition 3.3**.: Let \(\mathcal{C}\) be an additive, \(\Bbbk\)-linear, Krull-Remak-Schmidt, skeletally small category and let \(\mathbf{P}\) be a pairwise compatibility condition on the isomorphism classes of indecomposable objects in \(\mathcal{C}\). Suppose that for any maximally \(\mathbf{P}\)-compatible set \(T\) and \(X\in T\) there exists at most one \(Y\notin T\) such that \((T\setminus\{X\})\cup\{Y\}\) is \(\mathbf{P}\)-compatible.
Then we call maximally \(\mathbf{P}\)-compatible sets \(\mathbf{P}\)**-clusters**. We call bijections \(\mu:T\to(T\setminus\{X\})\cup\{Y\}\) of \(\mathbf{P}\)-clusters \(\mathbf{P}\)**-mutations**. We call the groupoid whose objects are \(\mathbf{P}\)-clusters and whose morphisms are \(\mathbf{P}\)-mutations (and identity functions) the \(\mathbf{P}\)**-cluster theory of \(\mathcal{C}\).** We denote this groupoid by \(\mathscr{T}_{\mathbf{P}}(\mathcal{C})\) and denote the
inclusion functor into the category of sets and functions by \(I_{\mathcal{C},\mathbf{P}}:\mathscr{T}_{\mathbf{P}}(\mathcal{C})\to\mathscr{S}et\). We say \(\mathbf{P}\) **induces** the \(\mathbf{P}\)-cluster theory of \(\mathcal{C}\).
The isomorphism of cluster theories was introduced by the second author in [17].
**Definition 3.4**.: An **isomorphism of cluster theories** is a pair \((F,\eta)\) with source \(\mathscr{T}_{\mathbf{P}}(\mathcal{C})\) and target \(\mathscr{T}_{\mathbf{Q}}(\mathcal{D})\). Here \(F:\mathscr{T}_{\mathbf{P}}(\mathcal{C})\to\mathscr{T}_{\mathbf{Q}}(\mathcal{D})\) is a functor that induces a bijection on objects and morphisms, and \(\eta:I_{\mathcal{C},\mathbf{P}}\to I_{\mathcal{D},\mathbf{Q}}\circ F\) is a natural transformation such that each component morphism \(\eta_{T}:T\to F(T)\) is a bijection.
We see that, for any continuous quiver \(Q\) of type \(\mathbb{A}\), the pairwise compatibility condition \(\mathbf{N}_{\pi}\) induces the cluster theory \(\mathscr{T}_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q))\). The following corollary follows immediately from Theorem 3.2.
**Corollary 3.5** (to Theorem 3.2).: _For any pair of continuous quivers \(Q\) and \(Q^{\prime}\) of type \(\mathbb{A}\) with finitely many sinks and sources, there is an isomorphism of cluster theories \(\mathscr{T}_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q))\to\mathscr{T }_{\mathbf{N}_{\pi}}(\operatorname{mod}^{\mathrm{r}}(Q^{\prime}))\)._
### Continuous tilting of stability conditions
Given a stability condition \(\sigma\) for \(Q\), we obtain a stability condition \(\sigma^{\prime}\) for \(Q^{\prime}\) having the property that the \(\sigma^{\prime}\)-semistable modules are related to the \(\sigma\)-semistable modules for \(Q\) by continuous tilting (the bijection \(\Phi\) of Theorem 3.2). Later we will see that these stability conditions give the same measured lamination on the Poincaré disk.
We continue with the notation from Sections 3.1 and 3.2 above. If the stability condition \(\sigma\) on \(Q\) is given by the red-blue pair \((R,B)\), the tilted stability condition \(\sigma^{\prime}\) on \(Q^{\prime}\) will be given by a pair \((R^{\prime},B^{\prime})\) defined as follows.
1. The pair \((R^{\prime},B^{\prime})\) will be the same as \((R,B)\) on \([s,\infty)\).
2. On \(K^{\prime}=(-\infty,s_{-}]\subseteq\widehat{Q}^{\prime}\), the new red function \(R^{\prime}\) will be constantly equal to \(R_{-}(s)\).
3. On \(K^{\prime}=(-\infty,s_{-}]\), the new blue function \(B^{\prime}\) can be given by "flipping" \(R\) horizontally and flipping each "island" vertically, in either order.
**Notation 3.6**.: Let \(F\) be a useful function. By \(F_{-}(a)\) we denote \(\lim_{x\to a^{-}}F(x)\), for any \(a\in(-\infty,+\infty]\). By \(F_{+}(a)\) we denote \(\lim_{x\to a^{+}}F(x)\), for any \(a\in[-\infty,+\infty)\).
**Definition 3.7**.: A (red) **island** in \(K=[-\infty,s)\subseteq\widehat{Q}\) is an open interval \((x,y)\) in \(K\) which is either:
1. \((x,s)\) where \(x<s\) so that \(R(x)\geq R_{-}(s)\) and \(R(z)<R_{-}(s)\) for all \(x<z<s\), or
2. \((x,y)\) where \(x<y<s\), \(R(x)\geq R(y)\geq R_{-}(s)\), \(R(z)<R(y)\) for all \(x<z<y\) and \(R(w)\leq R(y)\) for all \(y<w<s\).
**Lemma 3.8**.: \(z\in(-\infty,s)\) _is in the interior of some island in \(K\) if and only if there exists \(y\in(z,s)\) so that \(R(z)<R(y)\)._
Proof.: \((\Rightarrow)\) If \(z\) lies in the interior of an island \((x,y)\) there are two cases. (1) For \(y<s\), \(R(z)<R(y)\). (2) For \(y=s\), \(R(z)<R_{-}(s)\). But \(R_{-}(s)\) is a limit, so there is a \(y<s\) arbitrarily close to \(s\) so that \(R(z)<R(y)\) and \(z<y<s\).
\((\Leftarrow)\) Let \(y\in(z,s)\) so that \(R(z)<R(y)\). Let \(r=\sup\{R(y)\,:\,y\in(z,s)\}\). If \(r=R(y)\) for some \(y\in(z,s)\), let \(y\) be minimal. (By the four point condition there are at most 2 such \(y\).) Then \(z\) lies in an island \((x,y)\) for some \(x<z\).
If the maximum is not attained, there exists a sequence \(y_{i}\) so that \(R(y_{i})\) converges to \(r\). Then \(y_{i}\) converges to some \(w\in[z,s]\). If \(w\in(z,s)\) then \(R(w)=r\) and we are reduced to the previous case. Since \(R(z)<r\), \(w\neq z\). So, \(w=s\) and \(r=R_{-}(s)\). Then \(z\) lies in an island \((x,s)\) for some \(x<z\), namely with \(x=\max\{w<z\,:\,R(w)\geq r\}\). In both cases, \(z\) lies in an island as claimed.
To define the new blue function \(B^{\prime}\), we need a function \(H\) defined as follows.
\[H(z):=\begin{cases}R(y)&\text{if $z\in(x,y]$ for some island $(x,y)$ where $y<s$}\\ R_{-}(s)&\text{if $z\in(x,s)$ and $(x,s)$ is an island}\\ R(z)&\text{for all other $z\in[-\infty,s)$}\end{cases}\]
**Remark 3.9**.: Note that \(H(z)>R(z)\) if \(z\) is in the interior of an island and \(H(z)=R(z)\) otherwise.
**Lemma 3.10**.: \(H\) _is a nonincreasing function, i.e., \(H(x)\geq H(y)\) for all \(x<y<s\). Also, \(H(z)=H_{-}(z)=\lim_{y\to z-}H(y)\) for all \(z<s\) and \(H_{-}(s)=R_{-}(s)\)._
**Remark 3.11**.: Since \(H\) is nonincreasing and converges to \(R_{-}(s)\) we must have: \(H(x)=H_{-}(x)\geq H_{+}(x)\geq R_{-}(s)\) for all \(x<s\).
Proof.: If \(H(u)<H(z)\) for some \(u<z<s\) then \(R(u)\leq H(u)<H(z)\). But \(H(z)\) is equal to either \(R(z),R_{-}(s)\) or \(R(y)\) for some \(y>z\). So, \(R(u)<R(y)\) for some \(y\in(u,s)\). By Lemma 3.8, \(u\) lies in the interior of some island, say \((x,y)\) and, by definition of \(H\), \(H(u)=R(y)\geq R(w)\) for all \(w\geq y\) and \(H(u)=H(z)=H(y)\) for all \(u\leq z\leq y\). Thus, \(H\) is nonincreasing.
To see that \(H(z)=H_{-}(z)\) suppose first that \(z\in(x,y]\) for some island \((x,y)\). Then \(H(z)=R(y)\) is constant on the interval \((x,y]\). So, \(H(z)=H_{-}(z)=R(y)\). Similarly, \(H(z)=H_{-}(z)\) if \(z\in(x,s)\) and \((x,s)\) is an island. If \(z\) is not in any island, \(H(z)=R(z)\) and \(R(z)=R_{-}(z)\) since, otherwise, \(z\) would be on the right end of an island. And, \(H_{-}(z)\) would be the limit of those \(H(x)\) where \(x<z\) and \(H(x)=R(x)\). So, \(H_{-}(z)=R_{-}(z)=H(z)\) as claimed.
Since \(H(y)\geq R(y)\), we have: \(H_{-}(s)=\lim_{y\to s-}H(y)\geq R_{-}(s)\). If \(H_{-}(s)>R_{-}(s)\), say \(H_{-}(s)=R_{-}(s)+c\) then there is a sequence \(z_{i}\to s-\) so that \(H(z_{i})>R_{-}(s)+c/2\). For each \(z_{i}\) there is \(y_{i}\in[z_{i},s)\) so that \(H(z_{i})=R(y_{i})\). Then \(R(y_{i})>R_{-}(s)+c/2\) for all \(i\) which is not possible since \(y_{i}\to s-\). So, \(H_{-}(s)=R_{-}(s)\).
The monotonicity of \(H\) implies that its variation \(\mathsf{var}_{H}I\) on any interval \(I\) is the difference of its limiting values on the endpoints. The formula is:
\[\mathsf{var}_{H}(a,b)=H_{+}(a)-H_{-}(b).\]
Using \(H=H_{-}\) and \(H_{+}\) we can "flip" the islands up to get \(\widetilde{R}\):
\[\widetilde{R}(z)=H(z)+H_{+}(z)-R(z).\]
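On a discretized grid this construction is a one-line suffix maximum. The following Python sketch (our illustration; it uses a continuous sample function, so jump terms vanish, \(H_{+}=H\), and \(\widetilde{R}\approx 2H-R\)) computes \(H\) and \(\widetilde{R}\) for a sample red function and checks numerically that \(H\) dominates \(R\), is nonincreasing, and that the total variation is preserved, as Lemma 3.13 below asserts.

```python
import numpy as np

s = 1.0
x = np.linspace(-4.0, s - 1e-3, 2000)
R = np.sin(3 * x) - 0.2 * x                 # a sample red function with "islands"

# H(z) = sup of R over [z, s): a suffix running maximum on the grid.
H = np.maximum.accumulate(R[::-1])[::-1]
R_tilde = 2 * H - R                          # islands reflected upward

assert np.all(H >= R)                        # H flattens the islands from above
assert np.all(np.diff(H) <= 1e-12)           # H is nonincreasing (Lemma 3.10)

var = lambda f: np.abs(np.diff(f)).sum()     # discrete total variation
print(var(R), var(R_tilde))                  # approximately equal (Lemma 3.13)
```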
**Definition 3.12**.: The new blue function \(B^{\prime}\), shown in Figure 5, is given on \(K^{*}=(-\infty,s]\) by
\[B^{\prime}(z)=\widetilde{R}(\mathfrak{t}(z)).\]
The new red function is constant on \(K^{*}\) with value \(R^{\prime}(x)=R_{-}(s)\) for all \(x\in K^{*}\). On the complement of \(K^{*}\) in \(\widehat{Q}^{\prime}\), the red-blue pair \((R^{\prime},B^{\prime})\) is the same as before.
We will now show \(B^{\prime}\) is a useful function with the same variation on \((-\infty,s]\) as \(R\) has on \([-\infty,s)\). More precisely:
**Lemma 3.13**.: _The variation of \(R\) on any open interval \((a,b)\subset[-\infty,s)\) is equal to the variation of \(B^{\prime}\) on \((\mathfrak{t}(b),\mathfrak{t}(a))\)._
Proof.: Since \(B^{\prime}\) is obtained from \(\widetilde{R}\) by reversing the order of the first coordinate, we have \(\mathsf{var}_{B^{\prime}}(\mathfrak{t}(b),\mathfrak{t}(a))=\mathsf{var}_{ \widetilde{R}}(a,b)\). Thus, it suffices to show that \(\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{R}(a,b)\).
First, we do the case when \((a,b)\) is an island. Then, for all \(z\in(a,b)\), \(H(z)=H_{+}(z)=R(b)>R(z)\), with \(H\) and \(H_{+}\) constant on \((a,b)\). So, \(\widetilde{R}=H+H_{+}-R\) has the same variation as \(R\) on \((a,b)\).
Write \(R=H+(R-H)\). Then we claim that
\[\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b).\]
To see this take any sequence \(a<x_{0}<x_{1}<\cdots<x_{n}<b\). Then the sum
\[\sum_{i=1}^{n}|R(x_{i})-R(x_{i-1})|\]
can be broken up into parts. Let \(A_{1},\cdots,A_{m}\) be the sequence of disjoint subsets of \(S=\{x_{0},\cdots,x_{n}\}\) so that \(A_{j}\) is the intersection of \(S\) with some island \((a_{j},b_{j})\). We may assume that \(a_{j}\) for \(1<j\leq m\) and \(b_{j}\) for \(1\leq j<m\) are in the set \(S\) since they lie in the interval \((a,b)\). For \(1<j\leq m\), if \(x_{i}\) is the smallest element of \(A_{j}\), then \(x_{i-1}=a_{j}\) and the \(x_{i},x_{i-1}\) term in the approximation of \(\mathsf{var}_{H}(a,b)+\mathsf{var}_{H-R}(a,b)\) is
\[|H(a_{j})-H(x_{i})|+|(R-H)(a_{j})-(R-H)(x_{i})|=|R(a_{j})-H(x_{i})|+|H(x_{i})- R(x_{i})|\]
since \(H(a_{j})=R(a_{j})\). This sum is equal to \(|R(a_{j})-R(x_{i})|\), the corresponding term in the approximation of \(\mathsf{var}_{R}(a,b)\), since \(R(a_{j})\geq H(x_{i})>R(x_{i})\). Similarly, \(H(b_{j})=R(b_{j})\) by definition and \(R(b_{j})=H(x_{k})>R(x_{k})\) for any \(x_{k}\in A_{j}\). So,
\[|H(b_{j})-H(x_{k})|+|(R-H)(b_{j})-(R-H)(x_{k})|=|R(b_{j})-R(x_{k})|.\]
If \(x_{i},x_{i+1}\) both lie in \(A_{j}\) then \(H(x_{i})=H(x_{i+1})\). So,
\[|R(x_{i})-R(x_{i+1})|=|(R-H)(x_{i})-(R-H)(x_{i+1})|+|H(x_{i})-H(x_{i+1})|.\]
This equation also holds if \(x_{i},x_{i+1}\) do not lie in any \(A_{j}\) since, in that case, \(R=H\) at both \(x_{i}\) and \(x_{i+1}\). Thus every term in the sum approximating \(\mathsf{var}_{R}(a,b)\) is equal to the sum of the corresponding terms for \(\mathsf{var}_{H}(a,b)\) and \(\mathsf{var}_{R-H}(a,b)\). Taking supremum we get the equation \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b)\) as claimed.
Figure 5. The function \(R\) is in red. \(H\), black, flattens the islands of \(R\). When the islands are flipped up, we get \(\widetilde{R}\) in green. The horizontal mirror image of this is the new blue function \(B^{\prime}\) on the right. Figures 8, 10 give another example.
A similar calculation shows that
\[\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{H_{+}}(a,b)+\mathsf{var}_{ \widetilde{R}-H_{+}}(a,b).\]
But this is equal to \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{H}(a,b)+\mathsf{var}_{R-H}(a,b)\) since \(H-R=\widetilde{R}-H_{+}\) by definition of \(\widetilde{R}\) and \(\mathsf{var}_{H}(a,b)=H_{+}(a)-H_{-}(b)=\mathsf{var}_{H_{+}}(a,b)\). Thus \(\mathsf{var}_{R}(a,b)=\mathsf{var}_{\widetilde{R}}(a,b)=\mathsf{var}_{B^{\prime}}(\mathfrak{t}(b),\mathfrak{t}(a))\).
For \(x_{0}\) in the interior of the domain of \(f\) let
\[\mathsf{var}_{f}(x_{0}):=\lim_{\delta\to 0}\mathsf{var}_{f}(x_{0}-\delta,x_{0}+ \delta)=\lim_{\delta\to 0}\mathsf{var}_{f}[x_{0}-\delta,x_{0}+\delta]\]
We call this the **local variation** of \(f\) at \(x_{0}\). If \(x_{0}\in(a,b)\) this is equivalent to:
\[\mathsf{var}_{f}(x_{0})=\mathsf{var}_{f}(a,b)-\mathsf{var}_{f}(a,x_{0})- \mathsf{var}_{f}(x_{0},b)\]
since this is the limit of \(\mathsf{var}_{f}(a,b)-\mathsf{var}_{f}(a,x_{0}-\delta)-\mathsf{var}_{f}[x_{0} +\delta,b)=\mathsf{var}_{f}[x_{0}-\delta,x_{0}+\delta]\).
To show that \(B^{\prime}\) is a useful function we need the following lemma.
**Lemma 3.14**.: _A real valued function \(f\) of bounded variation defined in a neighborhood of \(x_{0}\) is continuous at \(x_{0}\) if and only if its local variation, \(\mathsf{var}_{f}(x_{0})=0\). In particular, \(R\) is continuous at \(x\in K\) if and only if \(B^{\prime}\) is continuous at \(\mathfrak{t}(x)\in K^{*}\)._
Proof.: Suppose that \(\mathsf{var}_{f}(x_{0})=0\). Then, for any \(\varepsilon>0\) there is a \(\delta>0\) so that
\[\mathsf{var}_{f}(x_{0}-\delta,x_{0}+\delta)<\varepsilon.\]
Then \(|f(x)-f(x_{0})|<\varepsilon\) for all \(x\in(x_{0}-\delta,x_{0}+\delta)\). So, \(f\) is continuous at \(x_{0}\).
Conversely, suppose \(f\) is continuous at \(x_{0}\). Then, for any \(\varepsilon>0\) there is a \(\delta>0\) so that \(|f(x)-f(x_{0})|<\varepsilon\) for \(|x-x_{0}|<\delta\). Let \(V=\mathsf{var}_{f}[x_{0},x_{0}+\delta)\). By definition of variation there exist \(x_{0}<x_{1}<\cdots<x_{n}<x_{0}+\delta\) so that
\[\sum_{i=1}^{n}|f(x_{i})-f(x_{i-1})|>V-\varepsilon.\]
Since \(|f(x_{1})-f(x_{0})|<\varepsilon\) this implies \(\sum_{i=2}^{n}|f(x_{i})-f(x_{i-1})|>V-2\varepsilon\). So, \(\mathsf{var}_{f}[x_{0},x_{1})<2\varepsilon\). Similarly, there exists \(x_{-1}<x_{0}\) so that \(\mathsf{var}_{f}(x_{-1},x_{0})<2\varepsilon\). So, \(\mathsf{var}_{f}(x_{-1},x_{1})<4\varepsilon\) which is arbitrarily small.
For a useful function \(F\), recall that \(u_{a}^{-}=F(a)-\lim_{x\to a-}F(x)\) and \(u_{a}^{+}=\lim_{x\to a+}F(x)-F(a)\) (Proposition 2.10).
**Proposition 3.15**.: _Let \(F\) be a useful function. Then, the local variation of \(F\) at any point \(a\) is_
\[\mathsf{var}_{F}(a)=|u_{a}^{-}|+|u_{a}^{+}|.\]
Proof.: It follows from the triangle inequality that the variation of \(f+g\) on any open interval is bounded above and below by the sum and differences of the variations of \(f,g\) on that interval. This holds for local variations as well:
\[|\mathsf{var}_{g}(x)-\mathsf{var}_{f}(x)|\leq\mathsf{var}_{f+g}(x)\leq\mathsf{ var}_{f}(x)+\mathsf{var}_{g}(x)\]
Let \(g_{x}=u_{x}^{-}\Delta_{x}^{-}+u_{x}^{+}\Delta_{x}^{+}\). Then
\[\mathsf{var}_{F}(x)=\mathsf{var}_{g_{x}}(x)=|u_{x}^{-}|+|u_{x}^{+}|\]
since \(F-g_{x}\) is continuous at \(x\) and thus, by Lemma 3.14, has \(\mathsf{var}_{F-g_{x}}(x)=0\).
We can say slightly more for the functions \(R\) and \(B^{\prime}\). (See also Figure 6.)
**Lemma 3.16**.: _For any \(a\in K=[-\infty,s)\) let \(b=\mathfrak{t}(a)\in K^{*}\). Then \(v_{b}^{-}=B^{\prime}(b)-B^{\prime}_{-}(b)\leq 0\) and \(v_{b}^{+}=B^{\prime}_{+}(b)-B^{\prime}(b)\geq 0\). In particular, \(B^{\prime}(b)=B^{\prime}_{\min}(b)\)._
Proof.: Since \(B^{\prime}\) is the mirror image of \(\widetilde{R}\), \(v_{b}^{-}\) and \(v_{b}^{+}\) for \(B^{\prime}\) are equal to \(-v_{a}^{+},-v_{a}^{-}\) for \(\widetilde{R}\), respectively, where \(v_{a}^{-}=\widetilde{R}(a)-\widetilde{R}_{-}(a)\) and \(v_{a}^{+}=\widetilde{R}_{+}(a)-\widetilde{R}(a)\). Thus it suffices to show that \(v_{a}^{-}\leq 0\) and \(v_{a}^{+}\geq 0\).
We have \(u_{a}^{-}=R(a)-R_{-}(a)\geq 0\). Also, \(\widetilde{R}_{-}=(H+H_{+}-R)_{-}=2H-R_{-}\). So,
\[v_{a}^{-} =(\widetilde{R}(a)-H_{+}(a))-(\widetilde{R}_{-}(a)-H_{+}(a))\] \[=(H(a)-R(a))+R_{-}(a)-2H(a)+H_{+}(a)\] \[=-u_{a}^{-}-(H(a)-H_{+}(a))\leq 0\]
Similarly, we have \(u_{a}^{+}=R_{+}(a)-R(a)\leq 0\) and \(\widetilde{R}_{+}(a)=2H_{+}(a)-R_{+}(a)\). So,
\[v_{a}^{+} =(\widetilde{R}_{+}(a)-H_{+}(a))-(\widetilde{R}(a)-H_{+}(a))\] \[=(H_{+}(a)-R_{+}(a))-(H(a)-R(a))\] \[=(H_{+}(a)-H(a))-u_{a}^{+}\]
To show that \(v_{a}^{+}\geq 0\), there are two cases. If \(a\) lies in an island \((x,y)\), then \(H_{+}(a)=H(a)=R(y)\) (or \(R_{-}(s)\) if \(y=s\)) and \(v_{a}^{+}=-u_{a}^{+}\geq 0\). If \(a\) does not lie in an island then \(H(a)=R(a)\) and \(H_{+}(a)\geq R_{+}(a)\). So, \(v_{a}^{+}\geq 0\).
**Theorem 3.17**.: _The new pair \((R^{\prime},B^{\prime})\) is a red-blue pair for the quiver \(Q^{\prime}\) and the \(\sigma^{\prime}\)-semistable \(Q^{\prime}\) modules given by this pair are the continuous tilts of the \(\sigma\)-semistable \(Q\)-modules given by the original pair \((R,B)\)._
Proof.: Lemma 3.13 implies that \(R\) and \(B^{\prime}\) have the same local variation at the corresponding points \(x\) and \(\mathfrak{t}(x)\). In particular, \(R\) and \(B^{\prime}\) have discontinuities at corresponding points by Lemma 3.14, and \(B^{\prime}(a)=B^{\prime}_{\min}(a)\) by Lemma 3.16.
The new red function \(R^{\prime}\) is constantly equal to \(R_{-}(s)\) on \(K^{*}\) and equal to the old function \(R\) on the complement \(Z\). So, \(B^{\prime}(x)\geq R^{\prime}(x)\) and they have the same limit as \(x\to-\infty\) by Remark 3.11. Thus \((R^{\prime},B^{\prime})\) form a red-blue pair for \(Q^{\prime}\).
Let \(\sigma,\sigma^{\prime}\) be the stability conditions on \(Q,Q^{\prime}\) given by the red-blue pairs \((R,B)\) and \((R^{\prime},B^{\prime})\), respectively. It remains to show that the admissible interval \(I=\widehat{I}(a,b)\) is \(\sigma\)-semistable for \(Q\) if and only if the corresponding interval \(I^{\prime}\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\), where \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b))\) if \(\overline{\mathfrak{t}}(a)<\overline{\mathfrak{t}}(b)\) in \(\widehat{Q}^{\prime}\) and \(I^{\prime}=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) if \(\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\).

Figure 6. This red function \(R\) has a spike on the right end \(b\) of an island \((a,b)\) and a discontinuity at the left end \(a\). When the island is flipped, we get a downward spike at \(a\) and a discontinuity at \(b\). The function \(R\) is the maximum and the tilted functions \(\widetilde{R}\) and \(B^{\prime}\) are minimums on vertical lines.
Consider \(a<b\) in \(\overline{\mathbb{R}}\). There are three cases.
1. \(a=\overline{\mathfrak{t}}(a)\) and \(b=\overline{\mathfrak{t}}(b)\) both lie in \(Z\).
2. \(-\infty\leq a<b<s\)\((a,b\in K)\) and \(-\infty<\overline{\mathfrak{t}}(b)<\overline{\mathfrak{t}}(a)\leq s\)\((\overline{\mathfrak{t}}(a),\overline{\mathfrak{t}}(b)\in K^{*})\).
3. \(a\in K\), \(\overline{\mathfrak{t}}(a)\in K^{*}\) and \(b=\overline{\mathfrak{t}}(b)\in Z\).
In Case (1), the stability conditions \(\sigma,\sigma^{\prime}\) given by the red and blue functions are the same on \(Z\). So, \(\widehat{I}(a,b)\) is \(\sigma\)-semistable if and only if \(\widehat{I}^{\prime}(a,b)=\widehat{I}^{\prime}(\overline{\mathfrak{t}}(a), \overline{\mathfrak{t}}(b))\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\).
In Case (2), we claim that \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable at height \(h^{\prime}\) where \(h^{\prime}=2H(b)-h\).
An example can be visualized in Figure 6 by drawing horizontal lines at height \(h<H\) and \(h^{\prime}>H_{+}\) under the line \(H\) on the left and over \(H_{+}\) on the right.
To see this in general, note that if \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable then, for all \(z\in(a,b)\), \(R(z)\leq h\) (with equality holding for at most one value of \(z\), call it \(z=c\)) and \(R(a),R(b)\geq h\). Then for each \(z\in[a,b]\), \(H(z)\geq h\). So, for each \(z\in(a,b)\), \(z\neq c\), we have \(H(z)>R(z)\). By Remark 3.9, \(z\) lies in the interior of an island for \(R\). But \(\widetilde{R}(z)-H_{+}(z)=H(z)-R(z)>0\). So, the same values of \(z\) lie in islands for \(\widetilde{R}\) and \(\widetilde{R}(z)-h^{\prime}=h-R(z)\geq 0\). Also, \(\widetilde{R}(a),\widetilde{R}(b)\leq h^{\prime}\) since:
\[h^{\prime}-\widetilde{R}(b) =2H(b)-h-H(b)-H_{+}(b)+R(b)\] \[=(H(b)-H_{+}(b))+(R(b)-h)\geq 0\]
and, since \(H_{+}(a)=H(b)\) and \(H(a)=\) either \(H(b)\) or \(R(a)\),
\[h^{\prime}-\widetilde{R}(a) =2H(b)-h-H(a)-H_{+}(a)+R(a)\] \[=R(a)-h+H(b)-H(a)\] \[\text{either }=R(a)-h\geq 0\] \[\text{or }=H(b)-h\geq 0\]
Therefore, \([a,b]\times\{h^{\prime}\}\) is a chord for \(\widetilde{R}\), making its mirror image \([\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a)]\times\{h^{\prime}\}\) a chord for \(B^{\prime}\) and thus \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable for \(Q^{\prime}\) at height \(h^{\prime}\). An analogous argument shows the converse. So, \(\widehat{I}(a,b)\) at height \(h\) is \(\sigma\)-semistable for \(Q\) if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(a))\) is \(\sigma^{\prime}\)-semistable at height \(h^{\prime}\) for \(Q^{\prime}\).
In Case (3), we change notation to match Figure 6. Suppose we have \(b\in K\), \(\overline{\mathfrak{t}}(b)\in K^{*}\) and \(c=\overline{\mathfrak{t}}(c)\in Z\). We claim that \(\widehat{I}(b,c)\) is \(\sigma\)-semistable at height \(h\) if and only if \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(c))\) is \(\sigma^{\prime}\)-semistable at the same height \(h\).
In Figure 6, the chord \([b,c]\times h\) would be a horizontal line starting at any point on the vertical red line at \(b\) and going to the right. For \(\widetilde{R}\), we have \(H(b)\geq h\geq H_{+}(b)\), so a horizontal line at height \(h\) starting anywhere on the vertical segment \(b\times[H_{+}(b),H(b)]\) could go left without hitting the function \(\widetilde{R}\) except at height \(h=H_{+}(a)=H(b)\) where it would touch the function at \((a,H_{+}(a))\) then continue. For \(B^{\prime}\), the horizontal line starting at \((\overline{\mathfrak{t}}(b),h)\) would go right, possibly touch the curve at \(\overline{\mathfrak{t}}(a)\) and continue to the point \((c,h)\).
The situation in general is very similar. \(\widehat{I}(b,c)\) is \(\sigma\)-semistable at height \(h\) for some \(c\in Z\) if and only if \(H_{+}(b)\leq h\leq H(b)=R(b)\). Since \(H_{+}(b)\) is the supremum of \(R(x)\) for all \(b<x<s\), this is equivalent to saying the horizontal line at \(h\) does
not touch the curve \(R\) except possibly at one point (not more, by the four point condition). If \(h=H(b)\), this horizontal line might continue to the left of \((b,h)\) and hit at most one point \((a,h)\) on the curve \(R\).
If \(h<H(b)\) then the horizontal line at \((b,h)\) on \(\widetilde{R}\) would go to the left and not hit anything since, for all \(x<b\), we have \(\widetilde{R}(x)\geq H_{+}(x)\geq H(b)>h\). So, the line from \((\overline{\mathfrak{t}}(b),h)\) to \((\overline{\mathfrak{t}}(c),h)\) would not hit \(B^{\prime}\).
If \(h=H(b)\), then, for all \(x<b\), \(\widetilde{R}(x)\geq H_{+}(x)\geq H(b)=h\). So, the line going left from \((b,h)=(b,H(b))\) would stay under \(\widetilde{R}\), possibly touching it at most once, say at \((a,h)\). Then \((a,b)\) would be an island and we have the situation in Figure 6. By the four point condition we cannot have another point \(a^{\prime}\) with the same property since \((a,h),(b,h),(c,h)\) are already on a line. The horizontal line going right from \((\overline{\mathfrak{t}}(b),h)\) would touch the curve \(B^{\prime}\) at \((\overline{\mathfrak{t}}(a),h)\) and continue to \((\overline{\mathfrak{t}}(c),h)\).
So, \(\widehat{I}(b,c)\) being \(\sigma\)-semistable at height \(h\) implies that \(\widehat{I}^{\prime}(\overline{\mathfrak{t}}(b),\overline{\mathfrak{t}}(c))\) is \(\sigma^{\prime}\)-semistable at the same height \(h\). The converse is similar since going from \(B^{\prime}\) to \(R\) is analogous (change \(B^{\prime}\) to \(-B^{\prime}\) and make it red). This concludes the proof in all cases.
## 4. Measured Laminations and Stability Conditions
In this section we connect measured laminations of the hyperbolic plane to stability conditions for continuous quivers of type \(\mathbb{A}\). We first define measured laminations (Definition 4.1) of the hyperbolic plane and prove some basic results we need in Section 4.1. In Section 4.2 we describe the correspondence that connects stability conditions to measured laminations. In Section 4.3 we present a candidate for continuous cluster characters. In Section 4.4 we briefly describe how all maximally \(\mathbf{N}_{\pi}\)-compatible sets come from a stability condition. In Section 4.5 we describe maps between cluster categories of type \(\mathbb{A}_{n}\) that factor through our continuous tilting. We also give an example for type \(\mathbb{A}_{4}\).
### Measured Laminations
We denote by \(\mathfrak{h}^{2}\) the Poincaré disk model of the hyperbolic plane and by \(\partial\mathfrak{h}^{2}\) the boundary of the disk, so that \(\partial\mathfrak{h}^{2}\) is the unit circle in \(\mathbb{C}\). Recall that a **lamination** of \(\mathfrak{h}^{2}\) is a maximal set of noncrossing geodesics and that a geodesic in \(\mathfrak{h}^{2}\) is uniquely determined by a distinct pair of points on \(\partial\mathfrak{h}^{2}\).
Let \(L\) be a lamination of \(\mathfrak{h}^{2}\). Choose two open interval subsets \(A\) and \(B\) of \(\partial\mathfrak{h}^{2}\), each of which may be all of \(\partial\mathfrak{h}^{2}\) or empty. Let \(O_{A,B}\) be the set of geodesics with one endpoint in \(A\) and the other in \(B\). We call \(O_{A,B}\) a **basic open subset** of \(L\). Notice that \(O_{A,B}=O_{B,A}\). The basic open sets define a topology on \(L\).
**Definition 4.1**.: Let \(L\) be a lamination of \(\mathfrak{h}^{2}\) and \(\mathcal{M}:L\to\mathbb{R}_{\geq 0}\) a measure on \(L\). We say \((L,\mathcal{M})\) is a **measured lamination** if \(0<\mathcal{M}(O_{A,B})<\infty\) for every \(O_{A,B}\neq\emptyset\).
Notice that any measured lamination \((L,\mathcal{M})\) immediately has finite total measure. That is, \(0<\mathcal{M}(L)<\infty\).
We now define some useful pieces of laminations.
**Definition 4.2**.: Let \(L\) be a lamination of \(\mathfrak{h}^{2}\).
1. Let \(\gamma\in L\) be a geodesic determined by \(a,b\in\partial\mathfrak{h}^{2}\). We say \(\gamma\) is a **discrete arc** if there exist disjoint open subsets \(A\ni a\) and \(B\ni b\) of \(\partial\mathfrak{h}^{2}\) such that \(O_{A,B}=\{\gamma\}\).
2. Let \(a\in\partial\mathfrak{h}^{2}\). Let \(A\) be some interval subset of \(\partial\mathfrak{h}^{2}\) with more than one element such that for every geodesic \(\gamma\in L\) determined by some \(a^{\prime}\in A\) and \(b\in\partial\mathfrak{h}^{2}\), we have \(b=a\). Then the set \(K\) of geodesics determined by the pair \(a,A\) is called a **fountain**. We say \(K\) is **maximal** if a fountain determined by \(a,A^{\prime}\), where \(A^{\prime}\supseteq A\), is precisely \(K\).
3. Let \(A,B\) be interval subsets of \(\partial\mathfrak{h}^{2}\) whose intersection contains at most one point. Suppose that for every geodesic \(\gamma\in L\) determined by \(a,b\in\partial\mathfrak{h}^{2}\), we have \(a\in A\setminus\partial A\) if and only if \(b\in B\setminus\partial B\). If there is more than one such geodesic, we call the set \(K\) of all such geodesics determined by \(a,b\) with \(a\in A\) and \(b\in B\) a **rainbow**. We say \(K\) is **maximal** if a rainbow determined by \(A^{\prime}\supseteq A\) and \(B^{\prime}\supseteq B\) is precisely \(K\).
From the definitions we have a result about discrete arcs, fountains, and rainbows.
**Proposition 4.3**.: _Let \((L,\mathcal{M})\) be a measured lamination of \(\mathfrak{h}^{2}\) and let \(K\) be a discrete arc, a fountain, or a rainbow. Then \(\mathcal{M}(K)>0\)._
Proof.: By definition, if \(K=\{\gamma\}\) is a discrete arc then \(K=O_{A,B}\) and so \(\mathcal{M}(K)>0\). Additionally, if \(K=L\) then \(K=O_{\partial\mathfrak{h}^{2},\partial\mathfrak{h}^{2}}\) and so \(\mathcal{M}(K)>0\). So we will assume \(K\) is either a fountain or a rainbow and \(K\neq L\); in particular \(K\) has more than one element.
First suppose \(K\) is a fountain determined by \(a\in\partial\mathfrak{h}^{2}\) and \(A\subset\partial\mathfrak{h}^{2}\). By definition \(K\) has more than one element and so \(A\setminus\partial A\neq\emptyset\). If \(a\notin A\) then let \(B\ni a\) be a small open ball around \(a\) in \(\partial\mathfrak{h}^{2}\) such that \(B\cap A=\emptyset\). Now consider \(O_{A\setminus\partial A,B}\). We see \(O_{A\setminus\partial A,B}\subset K\) and \(\mathcal{M}(O_{A\setminus\partial A,B})>0\). If \(a\in A\) then every geodesic determined by an \(a^{\prime}\) and \(b\) with \(a^{\prime}\in A\setminus(\{a\}\cup\partial A)\) has \(b=a\). Let \(A^{\prime}=A\setminus(\{a\}\cup\partial A)\) and let \(B\ni a\) be an open ball such that \(A\setminus\partial A\not\subset B\). Now we have \(O_{A^{\prime},B}\subset K\) and \(\mathcal{M}(O_{A^{\prime},B})>0\). Therefore \(\mathcal{M}(K)>0\).
Now suppose \(K\) is a rainbow determined by \(A\) and \(B\). Again we know \(K\) has more than one element so both \(A\setminus\partial A\) and \(B\setminus\partial B\) are nonempty. Take \(A^{\prime}=A\setminus\partial A\) and \(B^{\prime}=B\setminus\partial B\). Then \(O_{A^{\prime},B^{\prime}}\subset K\) and \(\mathcal{M}(O_{A^{\prime},B^{\prime}})>0\). Therefore, \(\mathcal{M}(K)>0\).
### The Correspondence
In this section, we recall the connection between \(\mathbf{N}_{\pi}\)-clusters and (unmeasured) laminations of \(\mathfrak{h}^{2}\) for the straight descending orientation of a continuous quiver of type \(\mathbb{A}\), from [12]. We then extend this connection to measured laminations and stability conditions that satisfy the four point condition, obtaining a "2-bijection" (Theorem 4.12). Then we further extend this "2-bijection" between measured laminations and stability conditions to all continuous quivers of type \(\mathbb{A}\) with finitely many sinks and sources (Corollary 4.13). We conclude that section with an explicit statement that tilting a stability condition \(\sigma\in\mathcal{S}_{\mathrm{fpc}}(Q)\) to a stability condition \(\sigma^{\prime}\in\mathcal{S}_{\mathrm{fpc}}(Q^{\prime})\) yields the _same_ measured lamination, for continuous quivers \(Q,Q^{\prime}\) of type \(\mathbb{A}\) (Theorem 4.14).
**Theorem 4.4** (from [12]).: _There is a bijection \(\Phi\) from maximally \(\mathbf{N}_{\pi}\)-compatible sets to laminations of \(\mathfrak{h}^{2}\). For each maximally \(\mathbf{N}_{\pi}\)-compatible set \(T\) and corresponding lamination \(\Phi(T)\), there is a bijection \(\phi_{T}:T\to\Phi(T)\) that takes objects in \(T\) to geodesics in \(\Phi(T)\)._
Before we proceed we introduce some notation to make some remaining definitions and proofs in this section more readable. First, we fix an indexing on \(\partial\mathfrak{h}^{2}\) in
the following way. To each point \(x\in\mathbb{R}\cup\{-\infty\}\) we assign the point \(e^{2i\arctan(x)}\) in \(\partial\mathfrak{h}^{2}\), with the convention \(\arctan(-\infty)=-\pi/2\). We now refer to points in \(\partial\mathfrak{h}^{2}\) as points in \(\mathbb{R}\cup\{-\infty\}\).
**Notation 4.5**.: Let \((L,\mathcal{M})\) be a measured lamination of \(\mathfrak{h}^{2}\).
* For each \(\gamma\in L\) we denote by \(a_{\gamma}\) and \(b_{\gamma}\) the unique points in \(\partial\mathfrak{h}^{2}\) that determine \(\gamma\) such that \(a_{\gamma}<b_{\gamma}\) in \(\mathbb{R}\cup\{-\infty\}\).
* For each \(x\in\partial\mathfrak{h}^{2}\) such that \(x\neq-\infty\), \[\frac{L}{x}:= \{\gamma\in L\mid a_{\gamma}<x<b_{\gamma}\}\] \[L\cdot x:= \{\gamma\in L\mid b_{\gamma}=x\}\] \[x\cdot L:= \{\gamma\in L\mid a_{\gamma}=x\}.\]
* For \(-\infty\), \[\frac{L}{-\infty}:= \emptyset\] \[L\cdot(-\infty):= \emptyset\] \[(-\infty)\cdot L:= \{\gamma\in L\mid a_{\gamma}=-\infty\}.\]
* Finally, for some interval \(I\subset\mathbb{R}\), \[I\cdot L:= \bigcup_{x\in I}x\cdot L=\{\gamma\in L\mid a_{\gamma}\in I\}\] \[L\cdot I:= \bigcup_{x\in I}L\cdot x=\{\gamma\in L\mid b_{\gamma}\in I\}.\]
We denote by \(\mathcal{L}\) the set of measured laminations of \(\mathfrak{h}^{2}\) and by \(\overline{\mathcal{L}}\) the set of laminations of \(\mathfrak{h}^{2}\) (without a measure).
Now we define how to obtain a useful function \(F\) from any measured lamination \(L\in\mathcal{L}\). We will use this to define a function \(\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\), where \(Q\) is the continuous quiver of type \(\mathbb{A}\) with straight descending orientation.
**Definition 4.6**.: Let \((L,\mathcal{M})\in\mathcal{L}\). We will define a useful function \(F\) on \(-\infty\), \(+\infty\), and then all of \(\mathbb{R}\). For \(-\infty\), define
\[u_{-\infty}^{-}:= 0 u_{-\infty}^{+}:= -\mathcal{M}((-\infty)\cdot L)\] \[F(-\infty):= 0 f(-\infty):= 0.\]
For \(+\infty\), define
\[u_{+\infty}^{-}=u_{+\infty}^{+}=F(+\infty)=f(+\infty)=0.\]
For each \(a\in\mathbb{R}\), define
\[u_{a}^{-}:= \mathcal{M}(L\cdot a) u_{a}^{+}:= -\mathcal{M}(a\cdot L)\] \[F(a):= -\mathcal{M}\left(\frac{L}{a}\right) f(a):= F(a)-\left(\sum_{x\leq a}u_{x}^{-}\right)-\left(\sum_{x<a}u_{x}^{+} \right).\]
First, note that since \(\mathcal{M}(L)<\infty\), each of the assignments is well-defined. It remains to show that \(F\) is a useful function.
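Before verifying this in general, it may help to see Definition 4.6 in a toy case. The following Python sketch (ours, for illustration) computes \(F\), \(u_{x}^{-}\), and \(u_{x}^{+}\) for a measure supported on finitely many weighted geodesics; the jump relations of Proposition 2.10 can then be checked directly.

```python
from math import inf

# A geodesic is a triple (a, b, m): endpoints a < b in R u {-inf} and an
# atom of measure m supported on it.
geodesics = [(-inf, 0.0, 1.0), (-1.0, 2.0, 0.5), (0.0, 3.0, 2.0)]

def F(x):
    # F(x) = -M(L/x): minus the measure of the geodesics passing over x.
    return -sum(m for a, b, m in geodesics if a < x < b)

def u_minus(x):   # u_x^- =  M(L . x): mass of geodesics ending at x
    return sum(m for a, b, m in geodesics if b == x)

def u_plus(x):    # u_x^+ = -M(x . L): minus the mass starting at x
    return -sum(m for a, b, m in geodesics if a == x)

print(F(-0.5), F(0.0), F(0.5))    # -1.5  -0.5  -2.5
print(u_minus(0.0), u_plus(0.0))  #  1.0  -2.0
# F(0) - lim_{x -> 0^-} F(x) = -0.5 - (-1.5) = 1.0 = u_0^-, as expected.
```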
**Proposition 4.7**.: _Let \((L,\mathcal{M})\in\mathcal{L}\) and let \(F\) be as in Definition 4.6. Then \(F\) is useful._
Proof.: Since \(\mathcal{M}(L)<\infty\), we see \(\sum_{x\in\mathbb{R}\cup\{+\infty\}}|u_{x}^{-}|+\sum_{x\in\mathbb{R}\cup\{-\infty \}}|u_{x}^{+}|<\infty\). Now we show \(f\) is continuous. Consider \(\lim_{x\to a^{-}}f(x)\) for any \(a\in\mathbb{R}\):
\[\lim_{x\to a^{-}}f(x) =\lim_{x\to a^{-}}\left[F(x)-\left(\sum_{y\leq x}u_{y}^{-}\right)-\left(\sum_{y<x}u_{y}^{+}\right)\right]\] \[=\lim_{x\to a^{-}}\left[-\mathcal{M}\left(\frac{L}{x}\right)-\left(\sum_{y\leq x}\mathcal{M}(L\cdot y)\right)+\left(\sum_{y<x}\mathcal{M}(y\cdot L)\right)\right]\] \[=-\mathcal{M}\left(\frac{L}{a}\right)-\mathcal{M}(L\cdot a)-\left(\sum_{x<a}\mathcal{M}(L\cdot x)\right)+\left(\sum_{x<a}\mathcal{M}(x\cdot L)\right)\] \[=F(a)-\left(\sum_{x\leq a}u_{x}^{-}\right)-\left(\sum_{x<a}u_{x}^{+}\right)\] \[=f(a).\]
A similar computation shows \(\lim_{x\to a^{+}}f(x)=f(a)\). Therefore, \(f\) is continuous on \(\mathbb{R}\). We also note that \(\lim_{x\to\pm\infty}f(x)=0\), using similar computations.
It remains to show that \(f\) has bounded variation. Let \(a<b\in\mathbb{R}\) and let \(F_{0}=f\). Denote by \(\mathsf{var}_{f}([a,b))\) the variation of \(f\) over \([a,b)\). We see that
\[\mathsf{var}_{f}([a,b))=\mathcal{M}(([a,b)\cdot L)\cup(L\cdot[a,b)))-\sum_{x\in[a,b)}(\mathcal{M}(x\cdot L)+\mathcal{M}(L\cdot x)).\]
That is, \(\mathsf{var}_{f}([a,b))\) is the measure of the geodesics with endpoints in \([a,b)\) that are not discrete and do not belong to a fountain. So,
\[\mathsf{var}_{f}([a,b))\leq\mathcal{M}(([a,b)\cdot L)\cup(L\cdot[a,b))).\]
Then we have
\[\mathsf{var}_{f}(\mathbb{R})=\sum_{i\in\mathbb{Z}}\mathsf{var}_{f}([i,i+1)) \leq\sum_{i\in\mathbb{Z}}\mathcal{M}(([i,i+1)\cdot L)\cup(L\cdot[i,i+1)))<\infty.\]
Thus, \(f\) has bounded variation.
We state the following lemma without proof, since the proof follows directly from Definitions 2.14 and 4.6 and Proposition 4.7.
**Lemma 4.8**.: _Let \((L,\mathcal{M})\in\mathcal{L}\) and let \(F\) be as in Definition 4.6. Then \((F,0)\) is a red-blue function pair for the continuous quiver \(Q\) of type \(\mathbb{A}\) with straight descending orientation._
We now define the function \(\mathcal{L}\to\mathcal{S}(Q)\).
**Definition 4.9**.: Let \((L,\mathcal{M})\in\mathcal{L}\), let \(F\) be as in Definition 4.6, and let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. The map \(\Phi:\mathcal{L}\to\mathcal{S}(Q)\) is defined by setting \(\Phi((L,\mathcal{M}))\) equal to the equivalence class of \((F,0)\).
**Lemma 4.10**.: _Let \(L\in\mathcal{L}\) and let \(\partial\mathfrak{h}^{2}\) be indexed as \(\mathbb{R}\cup\{-\infty\}\), as before. Suppose there are points \(a,b\in\partial\mathfrak{h}^{2}\) such that for all \(x\in(a,b)\) we have \(\mathcal{M}(\frac{L}{x})\geq\mathcal{M}(\frac{L}{a})\) and \(\mathcal{M}(\frac{L}{x})\geq\mathcal{M}(\frac{L}{b})\). Then the geodesic in \(\mathfrak{h}^{2}\) uniquely determined by \(a\) and \(b\) is in \(L\)._
Proof.: For contradiction, suppose there is \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(a<c<b<d\). Then, we must have \(\beta\in L\) uniquely determined by \(c\) and \(b\), or else there is a set \(K\) with positive measure such that \(K\subset\frac{L}{b}\) but \(K\not\subset\frac{L}{c}\). Similarly, we must have \(\gamma\in L\) uniquely determined by \(a\) and \(c\). Now, we cannot have a fountain at \(c\) or else we will have a set with positive measure \(K\) such that \(K\subset\frac{L}{b}\) or \(K\subset\frac{L}{a}\) but \(K\not\subset\frac{L}{c}\). Since \(c\) has a geodesic to both the left and right, \(\alpha\) must be discrete. But then \(\{\alpha\}\) has positive measure, a contradiction. Thus, there is no \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(a<c<b<d\). Similarly, there is no \(\alpha\in L\) such that \(\alpha\) is uniquely determined by \(c\) and \(d\), where \(c<a<d<b\). Therefore, since \(L\) is maximal, we must have the geodesic uniquely determined by \(a\) and \(b\) in \(L\).
**Proposition 4.11**.: _Let \((L,\mathcal{M})\in\mathcal{L}\), let \(F\) be as in Definition 4.6, and let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. Then \(\Phi((L,\mathcal{M}))\in\mathcal{S}_{\text{fpc}}(Q)\)._
Proof.: For contradiction, suppose there exists a \(\Phi((L,\mathcal{M}))\)-semistable module \(M_{I}\) such that \(|(\widehat{I}\times\{h\})\cap\widehat{\mathcal{G}}(F,0)|\geq 4\). Choose four points \(a<b<c<d\) in \(\widehat{Q}\) corresponding to four intersection points.
For the remainder of this proof, write \(x\)-\(y\) to mean the geodesic in \(\mathfrak{h}^{2}\) uniquely determined by \(x\neq y\in\partial\mathfrak{h}^{2}\). By Lemma 4.10, we have the following geodesics in \(L\): \(a\)-\(b\), \(a\)-\(c\), \(a\)-\(d\), \(b\)-\(c\), \(b\)-\(d\), and \(c\)-\(d\). However, this is a quadrilateral with _both_ diagonals, as shown in Figure 7. Since \(L\) is a lamination, this is a contradiction.
**Theorem 4.12**.: _Let \(Q\) be the continuous quiver of type \(\mathbb{A}\) with straight descending orientation. Then \(\Phi:\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\) is a bijection. Furthermore, for a measured lamination \(L\) and stability condition \(\Phi(L)\), there is a bijection \(\phi_{L}\) from \(L\) to \(\Phi(L)\)-semistable indecomposable modules._
Proof.: By the proof of Proposition 4.11, we see that the second claim follows. Thus, we now show \(\Phi\) is a bijection.
**Injectivity.** Consider \((L,\mathcal{M})\) and \((L^{\prime},\mathcal{M}^{\prime})\) in \(\mathcal{L}\). Let \(\sigma=\Phi(L,\mathcal{M})\) and \(\sigma^{\prime}=\Phi(L^{\prime},\mathcal{M}^{\prime})\). If \(L\neq L^{\prime}\) then we see that the set of \(\sigma\)-semistable modules is different from the set of \(\sigma^{\prime}\)-semistable modules. Thus, \(\sigma\neq\sigma^{\prime}\). If \(L=L^{\prime}\) but \(\mathcal{M}\neq\mathcal{M}^{\prime}\) there must be some \(x\in\mathbb{R}\cup\{-\infty\}\) such that \(\mathcal{M}(\frac{L}{x})\neq\mathcal{M}^{\prime}(\frac{L^{\prime}}{x})\). But the functions \(F\) and \(F^{\prime}\) from \(L\) and \(L^{\prime}\), respectively using Definition 4.6, both have the same limits at \(\pm\infty\). Thus, \(\widehat{\mathcal{G}}(F,0)\) is not a vertical translation of \(\widehat{\mathcal{G}}(F^{\prime},0)\) in \(\mathbb{R}^{2}\). Therefore, \(\sigma\neq\sigma^{\prime}\).
Figure 7. The geodesics \(a\)-\(b\), \(a\)-\(c\), \(a\)-\(d\), \(b\)-\(c\), \(b\)-\(d\), and \(c\)-\(d\) used in the proof of Proposition 4.11. Notice \(a\)-\(b\), \(b\)-\(c\), \(c\)-\(d\), and \(a\)-\(d\) form a quadrilateral and its diagonals, \(a\)-\(c\) and \(b\)-\(d\), cross.
**Surjectivity.** Let \(\sigma\) be a stability condition. Let \(T\) be the maximal \(\mathbf{N}_{\pi}\)-compatible set of indecomposable modules determined by \(\sigma\) (Theorem 2.25). Let \(L\) be the lamination of \(\mathfrak{h}^{2}\) uniquely determined by \(T\) (Theorem 4.4). In particular, the indecomposable \(M_{I}\) corresponds to the geodesic uniquely determined by \(\inf I\) and \(\sup I\).
Let \((R,B)\) be the representative of \(\sigma\) such that \(B=0\); that is, \((R,B)=(R,0)\). For each \(x\in\partial\mathfrak{h}^{2}\), let
\[\mathcal{M}(L\cdot x)=u_{x}^{-},\qquad\mathcal{M}(x\cdot L)=-u_{x}^{+},\qquad\mathcal{M}\left(\frac{L}{x}\right)=R(x).\]
Since \(R\) must have bounded variation and \(\sum_{x\in\overline{\mathbb{R}}}|u_{x}^{-}|+|u_{x}^{+}|<\infty\), we see \(\mathcal{M}(L)<\infty\).
Let \(O_{A,B}\subset L\) be a basic open subset. If \(O_{A,B}=\emptyset\) then we're done.
Now we assume \(O_{A,B}\neq\emptyset\) and let \(\gamma\in O_{A,B}\). If there exist two stability indicators for the indecomposable \(M_{I}\) corresponding to \(\gamma\), with heights \(h_{0}<h_{1}\), then we know \(\mathcal{M}(\{\gamma\})>|h_{1}-h_{0}|>0\) and so \(\mathcal{M}(O_{A,B})>0\).
We now assume there is a unique stability indicator of height \(h\) for the indecomposable \(M_{I}\) corresponding to \(\gamma\). Without loss of generality, since \(O_{A,B}=O_{B,A}\), assume \(a=\gamma_{a}\in A\) and \(b=\gamma_{b}\in B\). We know that, for all \(a<x<b\), we have \(R(x)\leq R(a)\) and \(R(x)\leq R(b)\). There are two cases: (1) \(R(x)<R(a)\) and \(R(x)<R(b)\), for all \(x\in(a,b)\), and (2) there exists \(x\in(a,b)\) such that \(R(x)=R(a)\) or \(R(x)=R(b)\).
**Case (1).** Let \(e=\tan(\frac{1}{2}(\tan^{-1}(a)+\tan^{-1}(b)))\). Let \(\{h_{i}\}_{i\in\mathbb{N}}\) be a strictly increasing sequence such that \(h_{0}=R(e)\) and \(\lim_{i\to\infty}h_{i}=h\). By Lemma 2.15(1) and our assumption that \(\mathcal{M}(\{\gamma\})=0\), for each \(i>0\), there is a stability indicator with height \(h_{i}\) and endpoints \(a_{i},b_{i}\) such that \(a<a_{i}<b_{i}<b\). Then \(\lim_{i\to\infty}a_{i}=a\) and \(\lim_{i\to\infty}b_{i}=b\), again by Lemma 2.15(1). Since \(A\) and \(B\) are open, there is some \(N\in\mathbb{N}\) such that, for all \(i\geq N\), we have \(a_{i}\in A\) and \(b_{i}\in B\). Let \(C=(a,a_{N})\) and \(D=(b_{N},b)\). Then, \(\mathcal{M}(O_{C,D})\geq|h-h_{N}|\) and so \(\mathcal{M}(O_{A,B})>0\).
**Case (2).** Assume there exists \(x\in(a,b)\) such that \(R(x)=R(a)\) or \(R(x)=R(b)\). Let \(e\) be this \(x\). If \(R(x)=R(a)\) and \(a=-\infty\), then \(R(b)=0\) (or else \(\gamma\notin O_{A,B}\subset L\)). Then we use the technique from Case (1) with \(b\) and \(+\infty\) to obtain some \(C=(b,d)\) and \(D=(c,+\infty)\) such that \(\mathcal{M}(O_{C,D})>0\). Thus, \(\mathcal{M}(O_{A,B})>0\).
Now we assume \(a>-\infty\) and \(R(x)=R(a)\) or \(R(x)=R(b)\). We consider \(R(x)=R(b)\) as the other case is similar. Since \(\sigma\) satisfies the four point condition, we know that for any \(\varepsilon>0\) such that \(R(b+\varepsilon)<R(b)\) we must have \(0<\lambda<\varepsilon\) such that \(R(b+\lambda)>R(b)\). Similarly, for any \(\varepsilon>0\) such that \(R(a-\varepsilon)<R(b)\) we must have \(0\leq\lambda<\varepsilon\) such that \(R(a-\lambda)>R(b)\). Notice the strict bound \(0<\lambda\) in the statement about \(R(b+\lambda)\) and the weak bound \(0\leq\lambda\) in the statement about \(R(a-\lambda)\).
Let \(\{h_{i}\}\) be a strictly decreasing sequence such that \(h_{0}=0\) and \(\lim_{i\to\infty}h_{i}=h\). By Lemma 2.15(1) and our assumption that \(\mathcal{M}(\{\gamma\})=0\), for each \(i>0\), there is a stability indicator with height \(h_{i}\) and endpoints \(a_{i},b_{i}\) such that \(a_{i}\leq a<b<b_{i}\). Since \(\sigma\) satisfies the four point condition, and again by Lemma 2.15(1), \(\lim_{i\to\infty}b_{i}=b\). Since \(A\) and \(B\) are open, there is \(N\in\mathbb{N}\) such that, if \(i\geq N\), we have \(a_{i}\in A\) and \(b_{i}\in B\). If \(a_{i}=a\) for any \(i>N\), let \(C\) be a small open interval around \(a\) that does not contain \(b\). Otherwise, let \(C=(a_{N},a)\). Let \(D=(b,b_{N})\). Then \(\mathcal{M}(O_{C,D})\geq|h_{N}-h|\) and so \(\mathcal{M}(O_{A,B})>0\).
**Conclusion.** Since \(\mathcal{M}(L)<\infty\), we know \(\mathcal{M}(O_{A,B})<\infty\) for each \(O_{A,B}\). This proves \((L,\mathcal{M})\) is a measured lamination. By the definition of \(\Phi\), we see that \(\Phi(L,\mathcal{M})=\sigma\). Therefore, \(\Phi\) is surjective and thus bijective.
**Corollary 4.13** (to Theorems 3.17 and 4.12).: _Let \(Q\) be a continuous quiver of type \(\mathbb{A}\). Then there is a bijection \(\Phi:\mathcal{L}\to\mathcal{S}_{\text{fpc}}(Q)\). Furthermore, for a measured lamination \(L\) and stability condition \(\Phi(L)\), there is a bijection \(\phi_{L}\) from \(L\) to \(\Phi(L)\)-semistable indecomposable modules._
**Theorem 4.14**.: _Let \(\sigma\in\mathcal{S}_{\text{fpc}}(Q)\) be the stability condition given by \((R,B)\) and let \(\sigma^{\prime}\in\mathcal{S}_{\text{fpc}}(Q^{\prime})\) be given by \((R^{\prime},B^{\prime})\), obtained from \(\sigma\) by continuous tilting. Then \(\sigma\) and \(\sigma^{\prime}\) give the same measured lamination on the Poincaré disk._
Proof.: The set of geodesics going from intervals \((a,b)\) to \((x,y)\) has the same measure as those going from \((\mathfrak{t}(a),\mathfrak{t}(b))\) to \((\mathfrak{t}(x),\mathfrak{t}(y))\) where we may have to reverse the order of the ends. We can break up the intervals into pieces and assume that \((a,b),(x,y)\) are either both in \(K\), both in \(Z\) or one is in \(K\) and the other in \(Z\). The only nontrivial case is when \((a,b)\) is in \(K\) and \((x,y)\) is in \(Z\). In that case, the measure of this set of geodesics for \(\sigma\) is equal to the variation of \(H\) on \((a,b)\) since the islands don't "see" \(Z\). Similarly, the measure of the same set of geodesics for \(\sigma^{\prime}\), now parametrized as going from \((\mathfrak{t}(b),\mathfrak{t}(a))\) to \((x,y)\) is equal to the variation of \(H^{\prime}\) on \((\mathfrak{t}(b),\mathfrak{t}(a))\) where \(H^{\prime}(z)=H_{+}(\mathfrak{t}(z))\).
There is one other case that we need to settle: We need to know that the local variation of \(H\) at \(-\infty\) is equal to the local variation of \(H^{\prime}\) at \(r\). But this holds by definition of \(H,H^{\prime}\).
An example of a stability condition \(\sigma\) and corresponding measured lamination are shown in Figures 8 and 9. The continuously tilted stability condition \(\sigma^{\prime}\) is shown in Figure 10.
Figure 8. The modified graph of the red-blue function pair \((R,B)\). Horizontal lines indicate semistable indecomposable representations. The rectangle labeled \(X\) represents one object with positive measure. The measure of a region is given by its height.
### Continuous cluster character
We present a candidate for the continuous cluster character using the formula from [15] which applies in the continuous case. This lives in a hypothetical algebra having a variable \(x_{t}\) for every real number \(t\). In this algebra, which we have not defined, we give a simple formula for the cluster
Figure 10. This is the continuous tilting of Figure 8. There are two islands \(F1\) and \(R3\) which have been flipped up. The measured lamination (Figure 9) is unchanged, only relabeled.
Figure 9. The lamination of the hyperbolic plane corresponding to the stability condition shown in Figure 8. The thick arc labeled \(X\) is an isolated geodesic with positive measure. The measure is equal to the height of the rectangle labeled \(X\) in Figure 8.
variable of an admissible module \(M_{ab}\) where \(a<b\) and the quiver \(Q\) is oriented to the left (is red) in a region containing \((a,b]\) in its interior. In analogy with the cluster character in the finite case (5) or [15], replacing summation with integration, we define \(\chi(M_{ab})\) to be the formal expression:
\[\chi(M_{ab})=\int_{a}^{b}\frac{x_{a}x_{b}\,\mathrm{d}t}{x_{t}^{2}}. \tag{6}\]
This could be interpreted as an actual integral of some function \(x_{t}\). For example, if we let \(x_{t}=t\) then we get \(\chi(M_{ab})=b-a\), the length of the support of \(M_{ab}\). The constant function \(x_{t}=1\) gives the same result.
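In more detail, assuming \(0\notin[a,b]\) so that the integrand is defined, the case \(x_{t}=t\) is a direct computation:

\[\chi(M_{ab})=\int_{a}^{b}\frac{ab\,\mathrm{d}t}{t^{2}}=ab\left(\frac{1}{a}-\frac{1}{b}\right)=b-a.\]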
The same cluster character formula will be used for modules with support \([a,b)\) in the blue region (where the quiver is oriented to the right).
This can also be written as
\[\chi(M_{ab})=x_{a}\chi(P_{b})-x_{b}\chi(P_{a})\]
where \(P_{b}\) is the projective module at \(b\) with cluster character
\[\chi(P_{b})=\int_{-\infty}^{b}\frac{x_{b}\,\mathrm{d}t}{x_{t}^{2}}.\]
Then the cluster mutation equation
\[\chi(M_{ac})\chi(M_{bd})=\chi(M_{ab})\chi(M_{cd})+\chi(M_{bc})\chi(M_{ad})\]
follows, as in the finite \(A_{n}\) case, from the Plücker relation on the matrix:
\[\begin{bmatrix}x_{a}&x_{b}&x_{c}&x_{d}\\ \chi(P_{a})&\chi(P_{b})&\chi(P_{c})&\chi(P_{d})\end{bmatrix}.\]
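To spell out the standard computation: write \(p_{uv}=x_{u}\chi(P_{v})-x_{v}\chi(P_{u})\) for the \(2\times 2\) minor of this matrix on columns \(u,v\). Then

\[p_{uv}=x_{u}x_{v}\left(\int_{-\infty}^{v}\frac{\mathrm{d}t}{x_{t}^{2}}-\int_{-\infty}^{u}\frac{\mathrm{d}t}{x_{t}^{2}}\right)=\int_{u}^{v}\frac{x_{u}x_{v}\,\mathrm{d}t}{x_{t}^{2}}=\chi(M_{uv}),\]

so the three-term Plücker relation \(p_{ac}p_{bd}=p_{ab}p_{cd}+p_{ad}p_{bc}\) is exactly the cluster mutation equation above.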
In Figures 8 and 9, if the measure of \(X=M_{df}\) is decreased to zero, the height of the rectangle in Figure 8 will go to zero, the four point condition will be violated and we can mutate \(X\) to \(X^{*}=M_{bh}\). Then the cluster characters are mutated by the Ptolemy equation:
\[\chi(X)\chi(X^{*})=\chi(M_{fh})\chi(M_{bd})+\chi(M_{dh})\chi(M_{bf})\]
where \(\chi(M_{bd})\) and \(\chi(M_{fh})\) are given by (6) and the other four terms have a different equation since there is a source (0) in the middle (\(d<0<f\)):
\[\chi(X)=\chi(M_{df})=\int_{d}^{0}\int_{0}^{f}\frac{x_{d}x_{0}x_{f}}{x_{s}^{2} x_{t}^{2}}\,\mathrm{d}t\,\mathrm{d}s+\frac{x_{d}x_{f}}{x_{0}}.\]
The double integral counts the proper submodules \(M_{ds}\oplus M_{tf}\subset X\) and there is one more term for the submodule \(X\subseteq X\).
The continuous cluster character will be explained in more detail in another paper.
### Every \(\mathbf{N}_{\pi}\)-cluster comes from a stability condition
Let \(L\) be a lamination of \(\mathfrak{h}^{2}\). Then there exists a measured lamination \((L,\mathcal{M})\) in the following way. There are at most countably many discrete arcs in \(L\). Assign each discrete arc a natural number \(n\). Then, set \(\mathcal{M}(\{\gamma_{n}\})=\frac{1}{1+n^{2}}\), for \(n\in\mathbb{N}\). Let \(K\) be the set of all discrete geodesics in \(L\). On \(L\setminus K\), give each \(O_{A,B}\) its transversal measure. Thus, we have given \(L\) a finite measure satisfying Definition 4.1. Therefore, \((L,\mathcal{M})\) is a measured lamination. This means the set of measured laminations, \(\mathcal{L}\), surjects onto the set of laminations, \(\overline{\mathcal{L}}\), by "forgetting" the measure. Then, the set \(\mathcal{S}_{\mathrm{fpc}}(Q)\)
for some continuous quiver of type \(\mathbb{A}\) with finitely many sinks and sources, surjects onto the set of \(\mathbf{N}_{\pi}\)-clusters, \(\mathcal{T}_{\mathbf{N}_{\pi}}\), in the following way.
Essentially, there is a surjection \(\mathcal{S}_{\mathrm{fpc}}(Q)\twoheadrightarrow\mathcal{T}_{\mathbf{N}_{\pi}}\) defined using the surjection \(\mathcal{L}\twoheadrightarrow\overline{\mathcal{L}}\). If we follow the arrows around, we see that each stability condition \(\sigma\) is sent to the set of \(\sigma\)-semistable modules, which form an \(\mathbf{N}_{\pi}\)-cluster.
### Maps between cluster categories of type \(\mathbb{A}_{n}\)
Let \(Q\) be a quiver of type \(\mathbb{A}_{n}\), for \(n\geq 2\). Label the vertices \(1,\ldots,n\) in \(Q\) such that there is an arrow between \(i\) and \(i+1\) for each \(1\leq i<n\).
For each \(i\in\{-1,0,\ldots,n,n+1,n+2\}\) let
\[x_{i}=\tan\left(\frac{i+1}{n+3}\pi-\frac{\pi}{2}\right).\]
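For instance, when \(n=4\) this reads \(x_{i}=\tan\left(\frac{(2i-5)\pi}{14}\right)\), so

\[x_{0},\ldots,x_{5}=\tan(-5\pi/14),\ \tan(-3\pi/14),\ \tan(-\pi/14),\ \tan(\pi/14),\ \tan(3\pi/14),\ \tan(5\pi/14),\]

with \(x_{-1}=-\infty\) and \(x_{6}=+\infty\); these are precisely the interval endpoints that appear in the example of Section 4.5.1 below.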
We define a continuous quiver \(\mathcal{Q}\) of type \(\mathbb{A}\) based on \(Q\), called the **continuification** of \(Q\). If \(1\) is a sink (respectively, source) then \(-\infty\) is a sink (respectively, source) in \(\mathcal{Q}\). If \(n\) is a sink (respectively, source) then \(+\infty\) is a sink (respectively, source) in \(\mathcal{Q}\). For all \(i\) such that \(2\leq i\leq n-1\), we have \(x_{i}\) is a sink (respectively, source) in \(\mathcal{Q}\) if and only if \(i\) is a sink (respectively, source) in \(Q\).
Define a map \(\Omega:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}^{\mathrm{r}}( \mathcal{Q})\) as in Figure 11.
Let \(1\leq m<n\) be such that there is a path \(1\to m\) in \(Q\) or a path \(m\to 1\) in \(Q\) (possibly trivial). Let \(Q^{\prime}\) be obtained from \(Q\) by reversing the path between \(1\) and \(m\) (if \(m=1\) then \(Q=Q^{\prime}\)). It is well known that \(\mathcal{D}^{b}(Q)\) and \(\mathcal{D}^{b}(Q^{\prime})\) are equivalent as triangulated categories. Let \(F:\mathcal{D}^{b}(Q)\to\mathcal{D}^{b}(Q^{\prime})\) be a triangulated equivalence determined by sending \(P_{n}[0]\) to \(P_{n}[0]\). Furthermore, we know \(\tau\circ F(M)\cong F\circ\tau(M)\) for every object \(M\) in \(\mathcal{D}^{b}(Q)\), where \(\tau\) is the Auslander-Reiten translation. Then this induces a functor \(\overline{F}:\mathcal{C}(Q)\to\mathcal{C}(Q^{\prime})\). Overloading notation, we denote by \(\overline{F}:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}( \mathcal{C}(Q^{\prime}))\) the induced map on isomorphism classes of indecomposable objects.
Let \(\mathcal{Q}^{\prime}\) be the continuification of \(Q^{\prime}\) and \(\Omega^{\prime}:\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\to \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\) the inclusion defined in the same way as \(\Omega\). Notice that the orientation of \(\mathcal{Q}\) and \(\mathcal{Q}^{\prime}\) agree above \(x_{m}\). Furthermore, if \(m>1\), the interval \((-\infty,x_{m})\) is blue in \(\mathcal{Q}\) if and only if it is red in \(\mathcal{Q}^{\prime}\) and vice versa. Using Theorem 3.2, there is a map \(\phi:\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})\to\operatorname{Ind}^{ \mathrm{r}}(\mathcal{Q}^{\prime})\) such that \(\{M,N\}\subset\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q})\) are \(\mathbf{N}_{\pi}\)-compatible if and only if \(\{\phi(M),\phi(N)\}\subset\operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\) are \(\mathbf{N}_{\pi}\)-compatible. After tedious computations, we have the following commutative diagram that preserves compatibility:
\[\begin{array}{ccc}\operatorname{Ind}(\mathcal{C}(Q)) & \xrightarrow{\ \overline{F}\ } & \operatorname{Ind}(\mathcal{C}(Q^{\prime}))\\ {\scriptstyle\Omega}\downarrow & & \downarrow{\scriptstyle\Omega^{\prime}}\\ \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}) & \xrightarrow{\ \phi\ } & \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\end{array}\]
#### 4.5.1. An example for \(\mathbb{A}_{4}\) quivers
Let \(Q,Q^{\prime}\) be the following quivers and let \(\mathcal{Q},\mathcal{Q}^{\prime}\) be the respective continuifications defined above with functions \(\Omega,\Omega^{\prime}\).
Let \(\overline{F}:\mathcal{C}(Q)\to\mathcal{C}(Q^{\prime})\) be defined as above. A visualization of the commutative diagram above in \(\mathfrak{h}^{2}\) is contained in Figure 12.
For \(\Omega:\operatorname{Ind}(\mathcal{C}(Q))\to\operatorname{Ind}^{\mathrm{r}}( \mathcal{Q})\):
\[A =\Omega(P_{4}) F =\Omega(M_{23}) K =\Omega(P_{3}[1])\] \[B =\Omega(P_{3}) G =\Omega(I_{3}) L =\Omega(I_{1})\] \[C =\Omega(P_{2}) H =\Omega(P_{4}[1]) M =\Omega(P_{2}[1])\] \[D =\Omega(P_{1}) I =\Omega(S_{2}) N =\Omega(P_{1}[1])\] \[E =\Omega(S_{3}) J =\Omega(I_{2}).\]
To save space we will indicate an indecomposable module by its support interval.
\[A =[\tan(3\pi/14),+\infty) F =[\tan(-\pi/14),\tan(5\pi/14)) K =[\tan(-5\pi/14),\tan(3\pi/14))\] \[B =[\tan(\pi/14),+\infty) G =[\tan(-3\pi/14),\tan(5\pi/14)) L =[\tan(-3\pi/14),\tan(\pi/14))\] \[C =[\tan(-\pi/14),+\infty) H =[\tan(-5\pi/14),\tan(5\pi/14)) M =[\tan(-5\pi/14),\tan(\pi/14))\] \[D =[\tan(-3\pi/14),+\infty) I =[\tan(-\pi/14),\tan(3\pi/14)) N =[\tan(-5\pi/14),\tan(-\pi/14))\] \[E =[\tan(\pi/14),\tan(5\pi/14)) J =[\tan(-3\pi/14),\tan(3\pi/14)).\]
For \(\Omega^{\prime}:\operatorname{Ind}(\mathcal{C}(Q^{\prime}))\to \operatorname{Ind}^{\mathrm{r}}(\mathcal{Q}^{\prime})\):
\[A =\Omega^{\prime}(P_{4}^{\prime}) F =\Omega^{\prime}(I_{2}^{\prime}) K =\Omega^{\prime}(P_{3}^{\prime}[1])\] \[B =\Omega^{\prime}(P_{3}^{\prime}) G =\Omega^{\prime}(I_{3}^{\prime}) L =\Omega^{\prime}(P_{1}^{\prime})\] \[C =\Omega^{\prime}(M_{23}^{\prime}) H =\Omega^{\prime}(P_{4}^{\prime}[1]) M =\Omega^{\prime}(P_{2}^{\prime})\] \[D =\Omega^{\prime}(I_{4}^{\prime}) I =\Omega^{\prime}(P_{1}^{\prime}[1]) N =\Omega^{\prime}(S_{2}^{\prime})\] \[E =\Omega^{\prime}(I_{1}^{\prime}) J =\Omega^{\prime}(P_{2}^{\prime}[1]).\]
In \(\operatorname{mod}^{\mathrm{r}}(\mathcal{Q}^{\prime})\):
\[A =[\tan(3\pi/14),+\infty) F =(\tan(-5\pi/14),\tan(5\pi/14)) K =(\tan(-\pi/14),\tan(3\pi/14))\] \[B =(-\infty,+\infty) G =(\tan(-3\pi/14),\tan(5\pi/14)) L =(-\infty,\tan(-3\pi/14)]\] \[C =(\tan(-5\pi/14),+\infty) H =(\tan(-\pi/14),\tan(5\pi/14)) M =(-\infty,\tan(\pi/14))\] \[D =(\tan(-3\pi/14),+\infty) I =(\tan(-5\pi/14),\tan(3\pi/14)) N =(\tan(-5\pi/14),\tan(-\pi/14)]\] \[E =(-\infty,\tan(5\pi/14)) J =(\tan(-3\pi/14),\tan(3\pi/14)).\]
The orange highlights changes due to tilting. The purple highlights a _coincidental_ fixed endpoint (but notice the change in open/closed).
## Future Work
There are a few questions that naturally arise from our results. What is the connection between our tilting and the reflection functors introduced in [14]? What if we considered _all_ modules over a continuous quiver of type \(\mathbb{A}\), instead of just those that are representable? Can we expand Section 4.3 and describe a continuous cluster algebra? The authors plan to explore some of these questions in future research.
There is still much work to do with general continuous stability, as well. What can we learn by studying measured laminations of other surfaces? For example, can we connect a continuous type \(\mathbb{D}\) quiver to measured laminations of the punctured (Poincaré) disk? In the present paper, we consider stability conditions in the sense of King. What about other kinds of stability conditions? Furthermore, can the connections between stability conditions and moduli spaces be generalized to the continuous case?
Figure 12. \(\mathbb{A}_{4}\) example – arcs in \(\mathfrak{h}^{2}\). Continuous tilting doesn’t move arcs in the hyperbolic plane. We can see this by relabeling the boundary of \(\mathfrak{h}^{2}\) accordingly. We also see how the diagonals of the heptagon (which models the cluster combinatorics for \(\mathcal{C}(Q)\) and \(\mathcal{C}(Q^{\prime})\)) are preserved by \(\overline{F}\). | ```
A type A continuous quiverの representable moduleに対する安定条件を、キングの意味で導入します。これは、四点条件と呼ばれる特別な条件とともに、安定条件を定義します。この安定条件は、デルタ関数の一種である半デルタ関数の一般化を用いて定義されます。有限個のsinkとsourceを持つtype Aの連続的 quiversの場合、四点条件を満たす安定条件は、双対性のある測度されたラミナレーションと一致する。この過程で、最初の作者とTodorov が、連続的クラストカテゴリーを、線形連続 quivers type A とラミナレーションの hyperbolic plane に対する、既存の結果を、有限個の sink と source を持つtype A の連続的 quivers にまで拡張します。また、連続的クラストキャラクタの公式を与える。
``` |
2309.11598 | A theory satisfying a strong version of Tennenbaum's theorem | We answer a question of Pakhomov by showing that there is a consistent, c.e.
theory $T$ such that no theory which is definitionally equivalent to $T$ has a
computable model. A key tool in our proof is the model-theoretic notion of
mutual algebraicity. | Patrick Lutz, James Walsh | 2023-09-20T19:21:11 | http://arxiv.org/abs/2309.11598v1 | # A theory satisfying a strong version of Tennenbaum's theorem
###### Abstract.
We answer a question of Pakhomov by showing that there is a consistent, c.e. theory \(T\) such that no theory which is definitionally equivalent to \(T\) has a computable model. A key tool in our proof is the model-theoretic notion of mutual algebraicity.
## 1. Introduction
Tennenbaum's theorem states that there is no computable nonstandard model of \(\mathsf{PA}\)[15]. Often, this result is viewed as giving us one reason the standard model of \(\mathsf{PA}\) is special--it is the only computable model--but another perspective is possible: Tennenbaum's theorem is a source of examples of consistent, c.e. theories with no computable models.
To explain this perspective, let us say that a theory \(T\) has the **Tennenbaum property** if \(T\) has no computable models. Tennenbaum's theorem implies that there are many consistent extensions of \(\mathsf{PA}\) with the Tennenbaum property. For example, the theory \(\mathsf{PA}+\neg\mathsf{Con}(\mathsf{PA})\) (which asserts that \(\mathsf{PA}\) is inconsistent) is a consistent extension of \(\mathsf{PA}\) with only nonstandard models and hence, by Tennenbaum's theorem, with no computable models. Furthermore, a slight extension of the proof of Tennenbaum's theorem can be used to prove that many other theories have the Tennenbaum property. For example, it is not hard to show that \(\mathsf{ZFC}\) has no computable models [Ham] and likewise for much weaker theories like \(\mathsf{Z}_{2}\) (the theory of full second order arithmetic), or even \(\mathsf{RCA}_{0}\) (at least if "model" is understood in the usual sense of first order logic). More generally, it seems to be an empirical fact that every natural theory which interprets even a small fragment of second order arithmetic has the Tennenbaum property.
Recently, however, Pakhomov showed that this phenomenon is somewhat fragile: it depends on the specific language in which the theory is presented [14]. To make this idea precise, Pakhomov used the notion of **definitional equivalence** (also known as **synonymy**), a strong form of bi-interpretability introduced by de Bouvere in [1]. Roughly speaking, theories \(T\) and \(T^{\prime}\) in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are definitionally equivalent if they can be viewed as two instances of a single theory, but with different choices of which notions to take as primitive.
**Theorem 1.1** (Pakhomov).: _There is a theory \(T\) which is definitionally equivalent to \(\mathsf{PA}\) such that any consistent, c.e. extension of \(T\) has a computable model._
This theorem implies that every consistent, c.e. extension of \(\mathsf{PA}\) is definitionally equivalent to a theory with a computable model. Moreover, the techniques used by Pakhomov are not restricted to extensions of \(\mathsf{PA}\). For example, Pakhomov notes that they are sufficient to prove that \(\mathsf{ZF}\) is definitionally equivalent to a theory with a computable model. More generally, Pakhomov's techniques seem sufficient to prove that each example we have given so far of a theory with the Tennenbaum property is definitionally equivalent to a theory without the Tennenbaum property.
In light of these observations, Pakhomov asked how general this phenomenon is [10]. In particular, does it hold for every consistent, c.e. theory?
**Question 1** (Pakhomov).: _Is every consistent, c.e. theory definitionally equivalent to a theory with a computable model?_
The purpose of this paper is to answer this question in the negative; in other words, to give an example of a consistent, c.e. theory which satisfies a strong version of the Tennenbaum property.
**Theorem 1.2**.: _There is a consistent, c.e. theory \(T\) such that no theory which is definitionally equivalent to \(T\) has a computable model._
To prove this theorem, we construct a theory \(T\) which has no computable models but is also model-theoretically tame. A key observation in our proof is that if a theory \(T\) is sufficiently tame then any theory definitionally equivalent to \(T\) must also be fairly tame. In particular, if \(T\) is sufficiently tame then every theory which is definitionally equivalent to \(T\) satisfies a weak form of quantifier elimination.
Here's why this is useful. Suppose that \(M\) is a model of a theory \(T^{\prime}\) which is definitionally equivalent to \(T\). It follows from the definition of "definitionally equivalent" that within \(M\), we can define a model of \(T\). If \(T^{\prime}\) had quantifier elimination then we could assume that this definition is quantifier free and thus \(M\) can compute a model of \(T\). Since \(T\) has no computable models, this would imply that \(M\) itself is not computable. Unfortunately, we can't quite follow this strategy: we don't know that \(T^{\prime}\) has full quantifier elimination, but only a weak version of it. However, using this weak form of quantifier elimination we can show that \(M\) can computably approximate a model of \(T\) and, by picking \(T\) so that its models cannot even be computably approximated, this is enough to show that \(M\) is not computable.
The specific form of model-theoretic tameness that we use in our proof is known as **mutual algebraicity**, first defined in [1] and subsequently developed by Laskowski and collaborators (e.g. [12, 13, 14]). The main result we need from the theory of mutual algebraicity is a quantifier elimination theorem proved by Laskowski in [12].
Our use of tame model theory in this paper is somewhat reminiscent of techniques used by Emil Jerabek in the paper [15]. In that paper, Jerabek separated two conditions which imply that a theory \(T\) is essentially undecidable: the condition that \(T\) can represent all partially recursive functions and the condition that \(T\) interprets Robinson's \(R\). To accomplish this, he used the fact that the model completion of the empty theory in an arbitrary language is model-theoretically tame--in particular, it eliminates \(\exists^{\infty}\) and is \(\mathsf{NSOP}\). He ended the paper by asking whether there are more connections between formal arithmetic and tame model theory. We believe our results constitute a partial answer to his question.
### Acknowledgements
We thank Peter Cholak, Nick Ramsey, Charlie McCoy, Andrew Marks, Forte Shinko, Mariana Vicaria and Kyle Gannon for helpful conversations, James Hanson for pointing us to the literature on mutual algebraicity and Chris Laskowski for help in understanding that literature.
## 2. Preliminaries on definitional equivalence and mutual algebraicity
In this section we will give the formal definition of definitional equivalence, fix some notation related to it and review the facts about mutual algebraicity that we need.
### Definitional equivalence
To define definitional equivalence, we first need the concept of a definitional extension of a theory.
**Definition 2.1**.: Given a theory \(T\) in language \(\mathcal{L}\), a **definitional extension** of \(T\) is a theory \(T^{\prime}\supseteq T\) in a language \(\mathcal{L}^{\prime}\supseteq\mathcal{L}\) such that
1. \(\boldsymbol{T^{\prime}}\) **is conservative over \(\boldsymbol{T}\):** for each sentence \(\varphi\in\mathcal{L}\), \(T^{\prime}\vdash\varphi\) if and only if \(T\vdash\varphi\).
2. **The symbols in \(\boldsymbol{\mathcal{L}^{\prime}}\) are definable in \(\boldsymbol{\mathcal{L}}\):** for each constant symbol \(c\), relation symbol \(R\) and function symbol \(f\) in \(\mathcal{L}^{\prime}\), there is a corresponding formula \(\varphi_{c}\), \(\varphi_{R}\), or \(\varphi_{f}\) in \(\mathcal{L}\) such that \[T^{\prime}\vdash\forall x\,(x=c\leftrightarrow\varphi_{c}(x))\] \[T^{\prime}\vdash\forall\overline{x}\,(R(\overline{x}) \leftrightarrow\varphi_{R}(\overline{x}))\] \[T^{\prime}\vdash\forall\overline{x},y\,(f(\overline{x})=y \leftrightarrow\varphi_{f}(\overline{x},y)).\]
**Definition 2.2**.: Theories \(T\) and \(T^{\prime}\) in disjoint signatures are **definitionally equivalent** if there is a single theory which is a definitional extension of both \(T\) and \(T^{\prime}\).
More generally, theories \(T\) and \(T^{\prime}\) are definitionally equivalent if they are definitionally equivalent after renaming their symbols to make their signatures disjoint. However, there is no loss of generality from ignoring theories with overlapping signatures, so we will do that for the rest of this paper.
**Example 2.3**.: The theories of the integers with plus and with minus--i.e. \(T=\operatorname{Th}(\mathbb{Z},+)\) and \(T^{\prime}=\operatorname{Th}(\mathbb{Z},-)\)--are definitionally equivalent because plus and minus can both be defined in terms of the other. More formally, the theory \(T^{\prime\prime}=\operatorname{Th}(\mathbb{Z},+,-)\) is a definitional extension of both \(T\) and \(T^{\prime}\). In contrast, it is well-known that the theories \(\operatorname{Th}(\mathbb{Z},+)\) and \(\operatorname{Th}(\mathbb{Z},\times)\) are _not_ definitionally equivalent, because neither plus nor times can be defined in terms of the other.
A key point about definitional equivalence is that if \(T\) and \(T^{\prime}\) are definitionally equivalent theories in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), respectively, then every model of \(T\) can be viewed as a model of \(T^{\prime}\) and vice-versa. Likewise, every \(\mathcal{L}\)-formula can be viewed as an \(\mathcal{L}^{\prime}\)-formula and vice-versa. It will be useful to us to make this idea precise and to fix some notation.
**Translating models.** Suppose that \(T\) and \(T^{\prime}\) are definitionally equivalent theories in languages \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), respectively. Let \(T^{\prime\prime}\) be an \(\mathcal{L}^{\prime\prime}\)-theory witnessing the definitional equivalence of \(T\) and \(T^{\prime}\)--i.e. \(\mathcal{L}\cup\mathcal{L}^{\prime}\subseteq\mathcal{L}^{\prime\prime}\) and \(T^{\prime\prime}\) is a definitional extension of both \(T\) and \(T^{\prime}\).
Suppose that \(R\) is a relation symbol in \(\mathcal{L}\). Since \(T^{\prime\prime}\) is a definitional extension of \(T^{\prime}\), there is an \(\mathcal{L}^{\prime}\)-formula, \(\varphi_{R}\), which \(T^{\prime\prime}\) proves is equivalent to \(R\). We will refer to this formula as the \(\boldsymbol{\mathcal{L}^{\prime}}\)**-definition of \(\boldsymbol{R}\)**. Similarly, every other constant, relation and function symbol of \(\mathcal{L}\) has an \(\mathcal{L}^{\prime}\)-definition and vice-versa.
Given a model \(M\) of \(T^{\prime}\), we can turn \(M\) into an \(\mathcal{L}\)-structure by interpreting each constant, relation and function symbol of \(\mathcal{L}\) according to its \(\mathcal{L}^{\prime}\)-definition.1 Furthermore, it is not hard to check that the resulting \(\mathcal{L}\)-structure is always a model of \(T\). We will denote the model produced in this way by \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\). Likewise, if \(M\) is a model of \(T\) then we can transform it into a model of \(T^{\prime}\), which we will denote \(M^{\mathcal{L}\to\mathcal{L}^{\prime}}\).
It is important to note that for any model \(M\) of \(T^{\prime}\), \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) have the same underlying set and \((M^{\mathcal{L}^{\prime}\to\mathcal{L}})^{\mathcal{L}\to\mathcal{L}^{\prime}}=M\). Thus we may think of \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) as two different ways of viewing the same structure.
**Translating formulas.** A similar transformation is possible for formulas. Suppose \(\varphi\) is an \(\mathcal{L}\)-formula. Then by replacing each constant, relation and function symbol in \(\varphi\) by the corresponding \(\mathcal{L}^{\prime}\)-definition, we obtain an \(\mathcal{L}^{\prime}\)-formula, which we will denote \(\varphi^{\mathcal{L}\to\mathcal{L}^{\prime}}\). Likewise we can transform any \(\mathcal{L}^{\prime}\)-formula \(\varphi\) into an \(\mathcal{L}\)-formula, which we will denote \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\).
**Example 2.4**.: Suppose \(f\) is a unary function symbol in \(\mathcal{L}\), \(\varphi_{f}(x,y)\) is its \(\mathcal{L}^{\prime}\)-definition and \(\psi\) is the \(\mathcal{L}\)-formula \(\forall x,y\left(f(f(x))=f(y)\right)\). Then \(\psi^{\mathcal{L}\to\mathcal{L}^{\prime}}\) is the formula \(\forall x,y\,\exists z_{1},z_{2},z_{3}\left(\varphi_{f}(x,z_{1})\wedge \varphi_{f}(z_{1},z_{2})\wedge\varphi_{f}(y,z_{3})\wedge z_{2}=z_{3}\right)\).
It is not hard to check that our translations of models and of formulas are compatible with each other. In particular, if \(M\) is a model of \(T^{\prime}\), \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula and \(\overline{a}\) is a tuple in \(M\) then \(M\vDash\varphi(\overline{a})\) if and only if \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash\varphi^{\mathcal{L}^{\prime}\to \mathcal{L}}(\overline{a})\). Note that this implies that \(M\) and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) have the same algebra of definable sets.
### Mutual algebraicity
As mentioned in the introduction, we will use the model-theoretic notion of mutual algebraicity. The key definitions are of mutually algebraic formulas and mutually algebraic structures.
**Definition 2.5**.: Given a structure \(M\), a formula \(\varphi(\overline{x})\) with parameters from \(M\) is **mutually algebraic over \(M\)** if there is some number \(k\in\mathbb{N}\) such that for every nontrivial partition \(\overline{x}=\overline{x}_{0}\cup\overline{x}_{1}\) and every tuple \(\overline{a}_{0}\) in \(M\), there are at most \(k\) tuples \(\overline{a}_{1}\) such that \(M\vDash\varphi(\overline{a}_{0},\overline{a}_{1})\).
Note that the mutual algebraicity of a formula depends on what its free variables are. In particular, it is not preserved by adding dummy variables. Also note that any formula with at most one free variable is mutually algebraic.
**Example 2.6**.: If \(M\) is the structure \((\mathbb{N},+)\) then the formula \(x=y+5\) is mutually algebraic over \(M\) because if we fix \(x\) there is at most one \(y\) satisfying the formula, and vice-versa. On the other hand, the formula \(x=y+z+5\) is not mutually algebraic over \(M\) because when we fix \(z\) there are infinitely many pairs \(x,y\) which satisfy the formula.
**Definition 2.7**.: A structure \(M\) is **mutually algebraic** if every formula is equivalent to a Boolean combination of formulas which are mutually algebraic over \(M\) (and which are allowed to have parameters from \(M\)).
**Example 2.8**.: The structure \((\mathbb{N},\operatorname{Succ})\) of natural numbers with the successor function has quantifier elimination and thus every formula is equivalent to a Boolean combination of atomic formulas. It is easy to check that the atomic formulas are all mutually algebraic and thus that the structure itself is. In contrast, it is possible to show that the structure \((\mathbb{Q},\leq)\), despite having quantifier elimination, is not mutually algebraic (for example, one can show that the formula \(x\leq y\) is not equivalent to a Boolean combination of mutually algebraic formulas).
### Quantifier elimination for mutually algebraic structures
We will make use of two quantifier elimination theorems for mutually algebraic structures. The first is due to Laskowski.
**Theorem 2.9** ([10], Theorem 4.2).: _If \(M\) is mutually algebraic then every formula \(\varphi(\overline{x})\) is equivalent over \(M\) to a Boolean combination of formulas of the form \(\exists\overline{z}\,\theta(\overline{y},\overline{z})\) (which may have parameters from \(M\)) where \(\theta\) is quantifier free and mutually algebraic over \(M\) and \(\overline{y}\) is a subset of \(\overline{x}\)._
**Theorem 2.10**.: _If \(M\) is a mutually algebraic structure and \(\varphi(\overline{x})\) is mutually algebraic over \(M\), then there is a quantifier free formula \(\theta(\overline{x},\overline{y})\) (which may have parameters from \(M\)) such that \(\exists\overline{y}\,\theta(\overline{x},\overline{y})\) is mutually algebraic over \(M\) and \(M\vDash\varphi(\overline{x})\to\exists\overline{y}\,\theta(\overline{x}, \overline{y})\)._
The second theorem is a relatively straightforward consequence of the first one, together with some facts from the theory of mutual algebraicity. Our goal for the rest of this section is to give the proof. To do so, we will need a lemma about mutually algebraic formulas, due to Laskowski and Terry.
**Lemma 2.11** ([10], Lemma A.1).: _Suppose \(M\) is a structure and_
\[\varphi(\overline{x}):=\bigwedge_{i}\alpha_{i}(\overline{x}_{i})\wedge \bigwedge_{j}\neg\beta_{j}(\overline{x}_{j})\]
_is a formula such that_
1. \(\varphi(\overline{x})\) _is mutually algebraic over_ \(M\)_._
2. \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) _contains an infinite set of pairwise disjoint tuples._
3. _Each_ \(\alpha_{i}(\overline{x}_{i})\) _and_ \(\beta_{j}(\overline{x}_{j})\) _is mutually algebraic over_ \(M\)_._
_Then \(\alpha(\overline{x})=\bigwedge_{i}\alpha_{i}(\overline{x}_{i})\) is mutually algebraic over \(M\)._
Actually we need a slightly stronger version of this lemma. In particular, we need to replace the second condition on \(\varphi\) with the apparently weaker assumption that \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) is infinite. The next lemma, also due to Laskowski, tells us that since \(\varphi\) is mutually algebraic, the two conditions are actually equivalent.
**Lemma 2.12** ([11], Lemma 3.1).: _Suppose \(M\) is a structure and \(\varphi(\overline{x})\) is a formula which is mutually algebraic over \(M\). If \(\{\overline{a}\mid M\vDash\varphi(\overline{a})\}\) is infinite then it contains an infinite set of pairwise disjoint tuples._
We can now prove Theorem 2.10.
Proof of Theorem 2.10.: By applying Laskowski's theorem and writing the resulting formula in disjunctive normal form, we get
\[M\vDash\varphi(\overline{x})\leftrightarrow\bigvee_{i}\left(\bigwedge_{j} \alpha_{i,j}(\overline{x}_{i,j})\wedge\bigwedge_{k}\neg\beta_{i,k}(\overline{ x}_{i,k})\right)\]
where each \(\alpha_{i,j}(\overline{x}_{i,j})\) and each \(\beta_{i,k}(\overline{x}_{i,k})\) is existential and mutually algebraic over \(M\).
For each \(i\), define
\[\varphi_{i}(\overline{x}) :=\bigwedge_{j}\alpha_{i,j}(\overline{x}_{i,j})\wedge\bigwedge_{k }\neg\beta_{i,k}(\overline{x}_{i,k})\] \[\alpha_{i}(\overline{x}) :=\bigwedge_{j}\alpha_{i,j}(\overline{x}_{i,j})\] \[A_{i} =\{\overline{a}\mid M\vDash\varphi_{i}(\overline{a})\}\]
Note that since \(\varphi(\overline{x})\) is mutually algebraic and \(M\vDash\varphi_{i}(\overline{x})\to\varphi(\overline{x})\), \(\varphi_{i}(\overline{x})\) is also mutually algebraic. Thus by Lemma 2.11 above (or rather, its slightly strengthened version), we have that either \(A_{i}\) is finite or \(\alpha_{i}(\overline{x})\) is mutually algebraic.
In the former case, define \(\gamma_{i}(\overline{x}):=\bigvee_{\overline{a}\in A_{i}}\overline{x}= \overline{a}\) and in the latter case, define \(\gamma_{i}(\overline{x}):=\alpha_{i}(\overline{x})\). In either case, note that \(\gamma_{i}\) is existential and mutually algebraic over \(M\) and that
\(M\vDash\varphi_{i}(\overline{x})\to\gamma_{i}(\overline{x})\). Since \(\varphi(\overline{x})\) and \(\bigvee_{i}\varphi_{i}(\overline{x}_{i})\) are equivalent in \(M\), this gives us
\[M\vDash\varphi(\overline{x})\to\bigvee_{i}\gamma_{i}(\overline{x}).\]
Since each \(\gamma_{i}(\overline{x})\) is mutually algebraic, so is their disjunction. Pulling the existential quantifiers to the front, we have the desired formula.
## 3. The counterexample
In this section we will describe the theory we use to answer Pakhomov's question. In order to do so, we need to fix a computable infinite binary tree \(R\) with the property that none of its paths can be computably approximated. More precisely, say that a sequence \(x\in 2^{\omega}\) is **guessable** if there is an algorithm which, for each number \(n\), enumerates a list of at most \(O(n^{2})\) strings of length \(n\), one of which is \(x\!\upharpoonright\!n\). We need a computable infinite binary tree \(R\), none of whose paths are guessable.
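As an informal illustration of this definition (not part of the formal development), a guesser can be viewed as a total algorithm that, at each length \(n\), outputs at most \(C\cdot n^{2}\) candidate strings for some fixed constant \(C\). The following minimal Python sketch fixes this interface; the names `Guesser`, `respects_bound` and `is_guessed_up_to`, and the constant `C` standing in for the \(O(n^{2})\) bound, are our own.

```python
from typing import Callable, List

# A guesser maps each length n to a list of at most C * n**2 candidate
# binary strings of length n (strings over {'0', '1'}).
Guesser = Callable[[int], List[str]]

def respects_bound(guesser: Guesser, n: int, C: int = 10) -> bool:
    """Check the O(n^2) cardinality bound at a single length n."""
    guesses = guesser(n)
    return len(guesses) <= C * n * n and all(len(g) == n for g in guesses)

def is_guessed_up_to(x: str, guesser: Guesser, N: int) -> bool:
    """Check that the guesser captures every initial segment of x up to N."""
    return all(x[:n] in guesser(n) for n in range(1, N + 1))

# The brute-force guesser listing all strings of length n is computable,
# but violates the bound once 2**n > C * n**2 -- so it does not make
# every sequence guessable for trivial reasons.
if __name__ == "__main__":
    brute: Guesser = lambda n: [format(i, "b").zfill(n) for i in range(2 ** n)]
    print(respects_bound(brute, 3), respects_bound(brute, 10))  # True False
```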
It is not hard to directly construct such a tree \(R\) but we can also simply pick a computable infinite binary tree whose paths are all Martin-Löf random. Such a tree is known to exist2 and it is also easy to check that Martin-Löf random sequences are not guessable. See the book _Algorithmic Randomness and Complexity_ by Downey and Hirschfeldt for more details about Martin-Löf randomness [1].
Footnote 2: For example, we can simply take the complement of any of the levels of the universal Martin-Löf test.
Essentially, our theory is the simplest theory all of whose models code an infinite path through \(R\). We now give a more precise description.
**The language.** Let \(\mathcal{L}\) be the language whose signature consists of:
1. A constant symbol, \(0\).
2. Two unary function symbols, \(S\) and \(P\).
3. A unary relation symbol, \(A\).
Also, although it is not officially part of the language \(\mathcal{L}\), we will often use the following notation. Given any \(n\in\mathbb{N}\),
* \(\underline{n}\) denotes the \(\mathcal{L}\)-term \(S^{n}(0)\), e.g. \(\underline{3}\) denotes \(S(S(S(0)))\).
* \(\underline{-n}\) denotes the \(\mathcal{L}\)-term \(P^{n}(0)\), e.g. \(\underline{-3}\) denotes \(P(P(P(0)))\).
* \(x+\underline{n}\) denotes the \(\mathcal{L}\)-term \(S^{n}(x)\) and \(x+\underline{-n}\) denotes the \(\mathcal{L}\)-term \(P^{n}(x)\). We will also sometimes use \(x-\underline{n}\) to denote \(x+\underline{-n}\).
* We will often refer to \(S\) as "successor" and \(P\) as "predecessor."
**The theory.** Fix a computable infinite binary tree \(R\), none of whose infinite paths are guessable, and let \(T\) be the \(\mathcal{L}\)-theory consisting of:
1. The theory of the integers with \(0\), successor and predecessor, i.e. \(\operatorname{Th}(\mathbb{Z},0,\operatorname{Succ},\operatorname{Pred})\).
2. Axioms stating that \(A\) (restricted to the elements \(\underline{0},\underline{1},\underline{2},\ldots\)) describes a path through \(R\). More precisely, for each \(n\in\mathbb{N}\), \(T\) contains the sentence \[\bigvee_{\sigma\in R_{n}}\left[\bigg{(}\bigwedge_{\sigma(i)=0}\neg A(\underline {i})\bigg{)}\wedge\bigg{(}\bigwedge_{\sigma(i)=1}A(\underline{i})\bigg{)}\right]\] where \(R_{n}\) denotes the set of strings in \(R\) of length \(n\).
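To make the schema concrete, here is a small Python sketch (our own toy rendering, in which the underlined numeral \(\underline{i}\) is written as the plain index \(i\) and `~`, `&`, `|` stand for negation, conjunction and disjunction) that produces the level-\(n\) axiom from the finite set \(R_{n}\):

```python
def axiom_for_level(R_n, A="A"):
    """Render the level-n axiom as a string, given the set R_n of
    length-n binary strings in the tree (toy encoding)."""
    disjuncts = []
    for sigma in sorted(R_n):
        lits = [f"{A}({i})" if bit == "1" else f"~{A}({i})"
                for i, bit in enumerate(sigma)]
        disjuncts.append("(" + " & ".join(lits) + ")")
    return " | ".join(disjuncts)

# Toy tree containing the strings 00, 01 and 10 at level 2:
print(axiom_for_level({"00", "01", "10"}))
# -> (~A(0) & ~A(1)) | (~A(0) & A(1)) | (A(0) & ~A(1))
```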
The second set of axioms ensures that from any model of \(T\), we can computably recover a path through the tree \(R\). We will now explain how this works.
Given a sentence \(\varphi\) and a model \(M\), let's use the notation \(\llbracket\varphi\rrbracket^{M}\) to denote **the truth-value of \(\varphi\) in \(M\)**. We will often identify sequences of truth values with binary sequences by
thinking of "true" as \(1\) and "false" as \(0\). Now suppose that \(M\) is a model of \(T\). We claim that the sequence \(\llbracket A(0)\rrbracket^{M},\llbracket A(1)\rrbracket^{M},\llbracket A(2) \rrbracket^{M},\ldots\) is an infinite path through \(R\). The point is that the axioms above guarantee that, for each \(n\in\mathbb{N}\), the length \(n\) initial segment of this sequence agrees with some _specific_ length \(n\) string in \(R\). Since all of its proper initial segments are in \(R\), the sequence \(\llbracket A(0)\rrbracket^{M},\llbracket A(1)\rrbracket^{M},\llbracket A(2) \rrbracket^{M},\ldots\) is indeed a path through \(R\).
Note that this immediately implies that no model of \(T\) is computable--any such model computes an infinite path through \(R\), but no such path is computable. In spite of this, we will see later that models of \(T\) have quantifier elimination and so are very well-behaved in model-theoretic terms.
### Models of \(T\)
It will help to have a clear picture of the structure of models of \(T\) and to fix some terminology for later. Since \(T\) includes the theory of the integers with successor and predecessor, \(T\) proves that \(S\) and \(P\) are injective functions with no cycles and that they are inverses. Thus any model of \(T\) consists of a disjoint union of one or more \(\mathbb{Z}\)-chains, with \(S\) moving forward along each chain, \(P\) moving backward and the constant \(0\) sitting in the middle of one of the chains. There is also a well-defined notion of distance: the distance between two elements of the same chain is simply the number of steps apart they are on the chain (and the distance between elements of two different chains is \(\infty\)).
Furthermore, each element of each chain is labelled with a truth value (corresponding to whether the predicate \(A\) holds of that element or not) and thus each chain gives rise to a bi-infinite binary sequence. If we start at the element \(0\) and move forward along its chain, then, as we saw above, the binary sequence we get is guaranteed to be a path through the tree \(R\).
Given a model \(M\) of \(T\) and elements \(a,b\in M\), we will use the following terminology.
* The **signed distance** from \(a\) to \(b\) is the unique integer \(k\) (if it exists) such that \(b=a+\underline{k}\). If no such \(k\) exists then the signed distance is \(\infty\).
* The **distance between**\(a\) and \(b\) is the absolute value of the signed distance (where the absolute value of \(\infty\) is \(\infty\)).
* For \(k\in\mathbb{N}\), the \(\boldsymbol{k}\)**-neighborhood** of \(a\) is the set \(\{a-\underline{k},a-\underline{k-1},\ldots,a+\underline{k}\}\).
Note that if the signed distance from \(a\) to \(b\) is \(k<\infty\), the signed distance from \(b\) to \(a\) is \(-k\).
**Remark 3.1**.: By choosing a somewhat more complicated theory, it is possible to simplify some of the proofs later in this paper. In particular, we can add axioms to \(T\) which state that \(A\) behaves _generically_, in the sense that every finite pattern of values of \(A\) occurs somewhere. More precisely, for every finite binary string \(\sigma\in 2^{<\omega}\) we add the axiom
\[\exists x\bigg{[}\bigg{(}\bigwedge_{\sigma(i)=0}\neg A(x+\underline{i}) \bigg{)}\wedge\bigg{(}\bigwedge_{\sigma(i)=1}A(x+\underline{i})\bigg{)}\bigg{]}.\]
Equivalently, we can replace \(T\) with its model completion. Making this change would allow us to simplify the proofs of Propositions 4.1 and 4.4 and Lemma 4.7.
## 4. Proof of the main theorem
Let \(\mathcal{L}\) and \(T\) be the language and theory described in the previous section. In this section, we will prove that no theory which is definitionally equivalent to \(T\) has a computable model. Since \(T\) is a consistent, c.e. theory, this is enough to prove Theorem 1.2.
In order to prove this, let's fix a language \(\mathcal{L}^{\prime}\) and an \(\mathcal{L}^{\prime}\)-theory \(T^{\prime}\) which is definitionally equivalent to \(T\). Note that since the language \(\mathcal{L}\) has finite signature, we may assume that \(\mathcal{L}^{\prime}\) does as well.3 Now fix a model \(M\) of \(T^{\prime}\). Our goal is to prove that \(M\) is not computable.4
Footnote 3: The point is that if a theory \(T\) is in a language with finite signature and \(T^{\prime}\) is any theory definitionally equivalent to \(T\) then \(T^{\prime}\) has a subtheory in a language with finite signature which is also definitionally equivalent to \(T\).
Footnote 4: Recall that a model is computable if its underlying set is \(\mathbb{N}\) and all of its functions and relations are computable as functions or relations on \(\mathbb{N}\). Note that since we are assuming \(\mathcal{L}^{\prime}\) has finite signature, we don’t need to worry about whether these functions and relations are uniformly computable.
Before beginning, it will be useful to fix a few conventions. First, recall from section 2.1 that \(M\) gives rise to a model \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) of \(T\) which has the same underlying set and the same algebra of definable sets as \(M\). We will often abuse notation slightly and use \(M\) to refer to both \(M\) itself and \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\). For example, if \(\varphi\) is an \(\mathcal{L}\)-formula, we will use \(M\vDash\varphi(\overline{a})\) to mean \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash\varphi(\overline{a})\). Also, we will say things like "\(b\) is the successor of \(a\)" to mean \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\vDash b=S(a)\). Second, unless explicitly stated otherwise, we assume that formulas do not contain parameters.
### Proof strategy
To prove that \(M\) is not computable, we will show that the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) is guessable (in the sense of section 3) relative to an oracle for \(M\). Since the axioms of \(T\) ensure that this sequence is a path through the tree \(R\), and hence not guessable, this is enough to show that \(M\) is not computable.
To show that the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) is guessable from an oracle for \(M\), we will first prove that \(M\) is mutually algebraic. To do so, we will essentially show that models of \(T\) have quantifier elimination and use this to prove that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic. The mutual algebraicity of \(M\) itself follows because mutual algebraicity is preserved under definitional equivalence (because mutual algebraicity depends only on the algebra of definable sets, which is itself preserved under definitional equivalence).
Once we know that \(M\) is mutually algebraic, we can apply the quantifier elimination results of section 2.3 to infer that \(S\) and \(A\) are close to being quantifier-free definable in \(M\). In particular, the formula \(S(x)=y\) is mutually algebraic and so, by Theorem 2.10, there is an existential \(\mathcal{L}^{\prime}\)-formula \(\psi_{S}(x,y)\) such that \(\psi_{S}\) is mutually algebraic and \(M\vDash S(x)=y\to\psi_{S}(x,y)\).
We can think of \(\psi_{S}\) as a multi-valued function which takes each element \(a\in M\) to the set of elements \(b\in M\) such that \(M\vDash\psi_{S}(a,b)\). Since \(\psi_{S}\) is an existential formula, the graph of this multi-valued function is computably enumerable from an oracle for \(M\). Since \(\psi_{S}\) is mutually algebraic, there are only finitely many elements in the image of each \(a\). And since \(M\vDash S(x)=y\to\psi_{S}(x,y)\), the successor of \(a\) is always in the image of \(a\). Putting this all together, we can think of this multi-valued function as giving us, for each \(a\in M\), a finite list of guesses for \(S(a)\) which is computably enumerable relative to an oracle for \(M\).
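Schematically, the enumeration of guesses looks as follows. This is a minimal Python sketch only: `sat_matrix` (an \(M\)-oracle deciding the quantifier-free matrix of \(\psi_{S}\)) and `elements` (listing the first \(s\) elements of \(M\)) are hypothetical interfaces, and a single witness variable stands in for a witness tuple.

```python
from itertools import count

def successor_guesses(sat_matrix, a, elements):
    """Computably enumerate, relative to an oracle for M, the finitely
    many b with M |= psi_S(a, b), by dovetailing over existential
    witnesses.  One of the yielded b's is the true successor S(a)."""
    seen = set()
    for stage in count(1):
        for b in elements(stage):
            if b in seen:
                continue
            # Search for an existential witness w among the elements
            # enumerated so far; mutual algebraicity of psi_S guarantees
            # that only finitely many b are ever yielded.
            if any(sat_matrix(a, b, w) for w in elements(stage)):
                seen.add(b)
                yield b
```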
To finish the proof, we can leverage our ability to enumerate a finite list of guesses for the successor of each element to enumerate a short list of guesses for each initial segment of the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\). To accomplish this, we will have to make use of our understanding of the structure of definable subsets of \(M\), which we first develop in order to prove mutual algebraicity.
### Model-theoretic tameness of \(M\)
Our first goal is to prove that \(M\) is mutually algebraic. One way to do this is to show that models of \(T\) satisfy quantifier elimination and then note that all atomic \(\mathcal{L}\)-formulas are mutually algebraic over \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\)--this implies that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic and hence that \(M\) is as well. However, it will be helpful for
us later to have a more detailed understanding of the structure of definable subsets of \(M\). Thus, instead of just proving quantifier elimination for models of \(T\), we will prove a stronger statement, which is essentially a quantitative version of quantifier elimination.
To explain this stronger statement, let's first consider the meaning of quantifier elimination in models of \(T\). By examining the atomic formulas of \(\mathcal{L}\), we can see that it means that for every \(\mathcal{L}\)-formula \(\varphi(\overline{x})\) and tuple \(\overline{a}\), the truth of \(\varphi(\overline{a})\) depends only on which elements of \(\overline{a}\) are close to each other (and to \(0\)), how close they are, and the values of the predicate \(A\) in a small neighborhood of each element. In our stronger statement, we will quantify exactly what "close" and "small" mean in this description. We will also extend this to \(\mathcal{L}^{\prime}\)-formulas. We will refer to the resulting statement as the **indiscernibility principle** for \(M\). In order to make all of this precise, we first need to introduce some terminology.
**The radius of a formula.** For any \(\mathcal{L}\)-formula \(\varphi\) written in prenex normal form, inductively define the **radius** of \(\varphi\), written \(\operatorname{rad}(\varphi)\), as follows.
1. If \(\varphi\) is quantifier free then \(\operatorname{rad}(\varphi)\) is the total number of occurrences of \(S\) and \(P\) in \(\varphi\).
2. If \(\varphi\) has the form \(\exists x\,\psi\) or \(\forall x\,\psi\) then \(\operatorname{rad}(\varphi)=2\cdot\operatorname{rad}(\psi)\).
If \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula in prenex normal form then we define \(\operatorname{rad}(\varphi)\) in the same way, except that when \(\varphi\) is quantifier free we define \(\operatorname{rad}(\varphi)\) to be \(\operatorname{rad}(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}})\) (after first putting \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) in prenex normal form). The idea of the radius of a formula is that in the description of quantifier elimination for \(M\) above, we should interpret "close" to mean "within distance \(\operatorname{rad}(\varphi)\)."
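Here's a toy rendition of the radius computation, given only to make the bookkeeping concrete; the formula representation (quantifier-free formulas as plain strings, quantified formulas as nested tuples) is an ad-hoc choice made for illustration and is not part of the formal development.

```python
def radius(phi):
    """rad(phi): for a quantifier-free formula, count the occurrences of
    S and P; each leading quantifier doubles the radius of its body."""
    if isinstance(phi, tuple) and phi[0] in ("exists", "forall"):
        return 2 * radius(phi[2])
    return phi.count("S") + phi.count("P")

# rad( exists x . S(S(x)) = P(y) ) = 2 * 3 = 6
print(radius(("exists", "x", "S(S(x)) = P(y)")))  # -> 6
```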
**The \(r\)-type of a tuple.** Given a tuple \(\overline{a}=(a_{1},\dots,a_{n})\) in \(M\) and a number \(r\in\mathbb{N}\), define the following (a small computational sketch appears after the list):
* The \(r\)**-distance table** of \(\overline{a}\) records the signed distances between the coordinates of \(\overline{a}\) and between each coordinate of \(\overline{a}\) and \(0\), treating any distance greater than \(r\) as \(\infty\). More precisely, it is the function \(f\colon\{0,1,\dots,n\}^{2}\to\{-r,-(r-1),\dots,r,\infty\}\) such that if the distance between \(a_{i}\) and \(a_{j}\) is at most \(r\) then \(f(i,j)\) is the signed distance from \(a_{i}\) to \(a_{j}\) and otherwise \(f(i,j)=\infty\) (and where we interpret \(a_{0}\) as \(0\)).
* The \(r\)**-neighborhood type** of any element \(a\in M\) is the sequence of truth values \([\![A(a-\underline{r})]\!]^{M},[\![A(a-\underline{r-1})]\!]^{M},\dots,[\![A(a+ \underline{r})]\!]^{M}\).
* The \(r\)**-type** of \(\overline{a}\) is the \(r\)-distance table of \(\overline{a}\) together with the sequence recording the \(r\)-neighborhood type of each coordinate of \(\overline{a}\).
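Here's the promised sketch of the \(r\)-distance table, modeling elements of \(M\) as integers so that the signed distance from \(a_{i}\) to \(a_{j}\) is just \(a_{j}-a_{i}\); in a genuine model of \(T\), only elements lying in the same chain are a finite distance apart, so this is purely illustrative.

```python
INF = "inf"  # placeholder for "distance greater than r"

def r_distance_table(a, r):
    pts = (0,) + tuple(a)        # index 0 plays the role of the constant 0
    table = {}
    for i in range(len(pts)):
        for j in range(len(pts)):
            d = pts[j] - pts[i]  # signed distance from a_i to a_j
            table[(i, j)] = d if abs(d) <= r else INF
    return table

t = r_distance_table((3, 5, 100), r=4)
print(t[(1, 2)], t[(1, 3)])  # -> 2 inf  (3 to 5 is 2; 3 to 100 exceeds r)
```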
**The indiscernibility principle.** We can now state a formal version of the indiscernibility principle described above.
**Proposition 4.1**.: _If \(\varphi\) is an \(\mathcal{L}\)-formula in prenex normal form and of radius \(r\) and \(\overline{a},\overline{b}\) are tuples in \(M\) with the same \(r\)-type then \(M\vDash\varphi(\overline{a})\) if and only if \(M\vDash\varphi(\overline{b})\)._
Proof.: By induction on the number of quantifiers in \(\varphi\). For quantifier free formulas, this is easy to verify. If \(\varphi\) has quantifiers then it suffices to assume \(\varphi=\exists x\,\psi\) since the case of a universal quantifier is symmetric (i.e. by considering \(\neg\varphi\) instead of \(\varphi\) and pushing the negation past the quantifiers to get it into prenex normal form). Also, it's enough to assume \(M\vDash\varphi(\overline{a})\) and prove \(M\vDash\varphi(\overline{b})\)--the other direction also follows by symmetry.
So let's assume that \(M\vDash\exists x\,\psi(\overline{a},x)\). Thus there is some \(c\) such that \(M\vDash\psi(\overline{a},c)\). We need to find some \(d\) such that \(M\vDash\psi(\overline{b},d)\). Note that it is enough to find \(d\) such that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type, because if this holds then we can apply the induction hypothesis to \(\psi\) to get that \(M\vDash\psi(\overline{b},d)\).
There are two cases depending on whether \(c\) is close to any element of \(\overline{a}\) or not. Also, to reduce casework, we adopt the convention that \(a_{0}=b_{0}=0\) (note that this does not change the fact that \(\overline{a}\) and \(\overline{b}\) have the same \(r\)-type).
**Case 1.** First suppose that \(c\) is distance at most \(r/2\) from some coordinate of \(\overline{a}\). In particular, there is some \(i\leq n\) and \(-r/2\leq k\leq r/2\) such that \(c=a_{i}+\underline{k}\). In this case, we can pick \(d\) to be close to the corresponding element of \(\overline{b}\), i.e. \(d=b_{i}+\underline{k}\). We claim that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type.
First, we need to check that the \(r/2\)-distance tables are the same. It suffices to check that for each \(j\), either \(a_{j},c\) and \(b_{j},d\) have the same signed distance or both have distance greater than \(r/2\). Suppose that \(a_{j}=c+\underline{k^{\prime}}\) for some integer \(-r/2\leq k^{\prime}\leq r/2\). By substitution, \(a_{j}=(a_{i}+\underline{k})+\underline{k^{\prime}}=a_{i}+\underline{k+k^{\prime}}\). Since \(|k+k^{\prime}|\leq r\) and since \(\overline{a},\overline{b}\) have the same \(r\)-distance table, this implies that \(b_{j}=b_{i}+\underline{k+k^{\prime}}\) and hence that \(b_{j}=d+\underline{k^{\prime}}\). The other cases can be handled similarly.
Second, we need to check that the \(r/2\)-neighborhood type of \(c\) is the same as that of \(d\). This follows from the fact that the \(r/2\)-neighborhood of \(c\) is contained in the \(r\)-neighborhood of \(a_{i}\), the \(r/2\)-neighborhood of \(d\) is contained in the \(r\)-neighborhood of \(b_{i}\) and the \(r\)-neighborhood types of \(a_{i}\) and \(b_{i}\) are the same.
**Case 2.** Now suppose that \(c\) is distance more than \(r/2\) from every coordinate of \(\overline{a}\). It is enough to find some \(d\) which has the same \(r/2\)-neighborhood type as \(c\) and which is distance more than \(r/2\) from every coordinate of \(\overline{b}\). The point is that for such a \(d\), it is easy to see that \(\overline{a}c\) and \(\overline{b}d\) have the same \(r/2\)-type.
We now claim that some such \(d\) must exist.5 Suppose for contradiction that this is false. Then every element of \(M\) with the same \(r/2\)-neighborhood type as \(c\) must be contained in the \(r/2\) neighborhood of some element of \(\overline{b}\). In particular, this implies that there are a finite number of such elements and they all have the form \(b_{i}+\underline{k}\) for some \(i\leq n\) and \(-r/2\leq k\leq r/2\).
Footnote 5: This case becomes more or less trivial if \(T\) is modified in the way described in Remark 3.1. This is because the existence of such an element \(d\) is guaranteed by the extra axioms described in that remark.
Suppose there are exactly \(m\) such elements and they are equal to \(b_{i_{1}}+\underline{k_{1}},\ldots,b_{i_{m}}+\underline{k_{m}}\) (where for each \(j\), \(-r/2\leq k_{j}\leq r/2\)). It follows from the fact that \(\overline{a}\) and \(\overline{b}\) have the same \(r\)-type that the corresponding elements \(a_{i_{1}}+\underline{k_{1}},\ldots,a_{i_{m}}+\underline{k_{m}}\) are also all distinct and have the same \(r/2\)-neighborhood type as \(c\). However, since only \(m\) elements of \(M\) have this \(r/2\)-neighborhood type, \(c\) must be among this list of elements, which contradicts the assumption that \(c\) is not within distance \(r/2\) of any coordinate of \(\overline{a}\).
**Corollary 4.2**.: _Proposition 4.1 also holds for all \(\mathcal{L}^{\prime}\)-formulas in prenex normal form._
Proof.: Suppose \(\varphi\) is an \(\mathcal{L}^{\prime}\)-formula of radius \(r\) and that \(\overline{a},\overline{b}\) are tuples in \(M\) with the same \(r\)-type. In the case where \(\varphi\) is quantifier free, the radius of \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is also \(r\), for the trivial reason that the radius of a quantifier-free \(\mathcal{L}^{\prime}\)-formula is defined as the radius of its \(\mathcal{L}\)-translation. Hence, we can apply the indiscernibility principle to \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) to get
\[M\vDash\varphi(\overline{a})\iff M\vDash\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}(\overline{a})\iff M\vDash\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}(\overline{b})\iff M\vDash\varphi(\overline{b}).\]
When \(\varphi\) has quantifiers, the inductive argument that we gave in the proof of Proposition 4.1 still works.
### \(M\) is mutually algebraic
For a fixed \(r\)-type, the assertion that a tuple \(\overline{x}=(x_{1},\ldots,x_{n})\) has that \(r\)-type is expressible as a Boolean combination of \(\mathcal{L}\)-formulas of the following forms.
1. \(x_{i}=x_{j}+\underline{k}\) for some indices \(i,j\leq n\) and some \(-r\leq k\leq r\).
2. \(x_{i}=\underline{k}\) for some index \(i\leq n\) and some \(-r\leq k\leq r\).
3. \(A(x_{i}+\underline{k})\) for some index \(i\leq n\) and some \(-r\leq k\leq r\).
It is easy to check that each type of formula listed above is mutually algebraic over \(M\) (for the second and third there is actually nothing to check because they both involve only one free variable). Furthermore, for any fixed \(r\), there are only finitely many possible \(r\)-types. Thus the indiscernibility principle implies that every \(\mathcal{L}\)-formula \(\varphi\) is equivalent to a finite disjunction of Boolean combinations of mutually algebraic \(\mathcal{L}\)-formulas (namely, a disjunction over all \(r\)-types whose tuples satisfy \(\varphi\)).
This shows that \(M\) is mutually algebraic when considered as an \(\mathcal{L}\)-structure (i.e. that \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is mutually algebraic). However, it is easy to conclude that \(M\) is also mutually algebraic when considered as an \(\mathcal{L}^{\prime}\)-structure. For a given formula \(\varphi\), we know from our reasoning above that \(\varphi^{\mathcal{L}^{\prime}\to\mathcal{L}}\) is equivalent to a Boolean combination of mutually algebraic \(\mathcal{L}\)-formulas. Next, we can replace each formula in this Boolean combination by its corresponding \(\mathcal{L}^{\prime}\)-formula. Since the mutual algebraicity of a formula only depends on the set that it defines, and since this is invariant under translating between \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), we conclude that \(\varphi\) is equivalent to a Boolean combination of mutually algebraic \(\mathcal{L}^{\prime}\)-formulas.
**Remark 4.3**.: The reasoning above also shows that \(M\) has quantifier elimination when considered as an \(\mathcal{L}\)-structure (i.e. \(M^{\mathcal{L}^{\prime}\to\mathcal{L}}\) has quantifier elimination). The point is just that a tuple having a certain \(r\)-type is expressible as a quantifier free \(\mathcal{L}\)-formula.
### The satisfaction algorithm
We will now explain how the indiscernibility principle implies that the satisfaction relation for \(\mathcal{L}^{\prime}\)-formulas over \(M\) is very nearly computable relative to an oracle for \(M\). At the end of this subsection, we will explain why this is useful.
The main idea (of computing the satisfaction relation) is that to check whether \(M\vDash\exists x\,\varphi(\overline{a},x)\), we don't need to try plugging in every element of \(M\) for \(x\), just those elements which are close to some coordinate of \(\overline{a}\) (or to \(0\)), plus one element of each possible \(\operatorname{rad}(\varphi)\)-neighborhood type which is far from all the coordinates of \(\overline{a}\). In other words, checking the truth of an existential formula can be reduced to checking the truth of a finite number of atomic formulas. This intuition is formalized by the next proposition, whose proof essentially just consists of this idea, but with a number of messy details in order to make precise the idea of trying all the different \(\operatorname{rad}(\varphi)\)-neighborhood types which are far from elements of \(\overline{a}\).
**Proposition 4.4** (Satisfaction algorithm for existential formulas).: _Suppose \(\varphi(\overline{x})\) is an existential \(\mathcal{L}^{\prime}\)-formula with radius \(r\). There is an algorithm which, given a tuple \(\overline{a}\) in \(M\) and the following data_
_(1) an oracle for \(M\)_
_(2) and a finite set \(U\subseteq M\),_
_tries to check whether \(M\vDash\varphi(\overline{a})\). Furthermore, if \(U\) contains the \(r\)-neighborhood of every coordinate of \(\overline{a}\) then the output of the algorithm is correct._
Proof.: Let \(\theta(\overline{x},\overline{y})\) be a quantifier free formula such that \(\varphi(\overline{x})=\exists\overline{y}\,\theta(\overline{x},\overline{y})\) and let \(n=|\overline{x}|\) and \(m=|\overline{y}|\) (i.e. the number of free and bound variables in \(\varphi\), respectively). Next, fix a finite set \(V\) such that for each possible \(r\)-neighborhood type \(p\), \(V\) contains at least \((2r+1)(n+m+1)\) points of type \(p\) (or if fewer than \((2r+1)(n+m+1)\) points have \(r\)-neighborhood type \(p\) then \(V\) contains every such point).6 Also \(V\) should contain \(0\). Let \(V^{\prime}\) be the set consisting
of all elements within distance \(r\) of some element of \(V\). Note that since \(V^{\prime}\) is finite, we can "hard-code" it into our algorithm.
_Algorithm description._ To check if \(M\vDash\varphi(\overline{a})\), look at each tuple \(\overline{b}\) of elements of \(U\cup V^{\prime}\) and check if \(M\vDash\theta(\overline{a},\overline{b})\). If this occurs for at least one such \(\overline{b}\) then output "true." Otherwise, output "false." Note that checking the truth of a quantifier free formula (such as \(\theta\)) is computable from an oracle for \(M\).
_Verification._ Let's assume that \(U\) contains the \(r\)-neighborhood of each coordinate of \(\overline{a}\) and check that the output of the algorithm is correct. It is obvious that the algorithm has no false positives: if \(M\vDash\theta(\overline{a},\overline{b})\) for some \(\overline{b}\) then \(M\vDash\varphi(\overline{a})\). Thus it suffices to assume that \(M\vDash\varphi(\overline{a})\) and show that there is some tuple \(\overline{b}\) in \(U\cup V^{\prime}\) such that \(M\vDash\theta(\overline{a},\overline{b})\).
To accomplish this, we will pick elements of \(\overline{b}\) one at a time and, at each step, ensure that all the elements we have picked so far come from the set \(U\cup V^{\prime}\). More precisely, we will pick elements \(b_{1},\ldots,b_{m}\) such that for each \(i\leq m\),
\[M\vDash\exists y_{i+1}\ldots\exists y_{m}\,\theta(\overline{a},b_{1},\ldots,b_ {i},y_{i+1},\ldots,y_{m})\]
and we will try to ensure that for each \(i\), \(b_{i}\in U\cup V^{\prime}\). However, in order to do this, we will need a somewhat stronger inductive assumption.
Let's first explain on an informal level how the induction works and why we need a stronger inductive assumption. On the first step of the induction, things work pretty well. It is possible to use the indiscernibility principle to show that we can pick some \(b_{1}\) which satisfies the condition above and which is close to some element of either \(\overline{a}\) or \(V\). Since \(U\) contains a reasonably large neighborhood around each element of \(\overline{a}\) and \(V^{\prime}\) contains a reasonably large neighborhood around each element of \(V\), this means we can pick \(b_{1}\) from \(U\cup V^{\prime}\). On the second step of the induction, however, things start to go wrong. We can again use the indiscernibility principle to show that we can pick some \(b_{2}\) which satisfies the condition above and which is close to either \(b_{1}\) or to some element of either \(\overline{a}\) or \(V\). In the latter case, there is no problem: we can still pick \(b_{2}\) from \(U\cup V^{\prime}\). But in the former case, there may be a problem. If the element \(b_{1}\) we picked on the first step happens to be near the "boundary" of \(U\cup V^{\prime}\) then even a \(b_{2}\) which is relatively close to it might no longer be inside \(U\cup V^{\prime}\).
We can fix this problem by requiring not just that \(b_{1}\) is in \(U\cup V^{\prime}\), but also that it is far from the "boundary" of \(U\cup V^{\prime}\). In other words, we need to require that \(b_{1}\) is close to \(\overline{a}\) or \(V\) in some stronger way than simply requiring that it be in \(U\cup V^{\prime}\). In fact, it is enough to require that \(b_{1}\) be within distance \(r/2\) of some element of \(\overline{a}\) or \(V\) and more generally, that each \(b_{i}\) is within distance \(r/2+\ldots+r/2^{i}\) of some element of \(\overline{a}\) or \(V\).
To state this formally, we define sets \(W_{0}\subseteq W_{1}\subseteq W_{2}\subseteq\ldots\subseteq W_{m}\) as follows. \(W_{0}\) consists of the coordinates of \(\overline{a}\) together with the elements of \(V\). For each \(0<i\leq m\), \(W_{i}\) consists of all points in \(M\) which are within distance \(r/2^{i}\) of some element of \(W_{i-1}\) (note that this is equivalent to being within distance \(r/2+r/4+\ldots+r/2^{i}\) of some element of \(W_{0}\)). Note that by assumption, \(U\cup V^{\prime}\) contains the \(r\)-neighborhood of each element of \(W_{0}\). It follows that each \(W_{i}\) is contained in \(U\cup V^{\prime}\).
Also, define a sequence of formulas \(\varphi_{0},\varphi_{1},\ldots,\varphi_{m}\) by removing the quantifiers from \(\varphi\) one at a time. More precisely, define
\[\varphi_{i}(\overline{x},y_{1},\ldots,y_{i}):=\exists y_{i+1}\ldots\exists y_{m}\,\theta(\overline{x},\overline{y}).\]
So, for example,
* \(\varphi_{0}(\overline{x})=\exists y_{1}\ldots\exists y_{m}\theta(\overline{x},\overline{y})=\varphi(\overline{x})\)
* \(\varphi_{1}(\overline{x},y_{1})=\exists y_{2}\ldots\exists y_{m}\theta( \overline{x},\overline{y})\)
* \(\varphi_{2}(\overline{x},y_{1},y_{2})=\exists y_{3}\ldots\exists y_{m}\theta( \overline{x},\overline{y})\)
* \(\ldots\)
* \(\varphi_{m}(\overline{x},y_{1},\ldots,y_{m})=\theta(\overline{x},\overline{y})\).
We will now inductively construct a sequence of points \(b_{1},\ldots,b_{m}\) such that for each \(i\), \(b_{i}\in W_{i}\) and \(M\vDash\varphi_{i}(\overline{a},b_{1},\ldots,b_{i})\). Since \(W_{m}\subseteq U\cup V^{\prime}\) and \(\varphi_{m}=\theta\), this is sufficient to finish the proof.
The base case of this induction is simply the assertion that \(M\vDash\varphi(\overline{a})\) which we assumed above. Now assume that we have already found \(b_{1},\ldots,b_{i}\) and we will show how to find \(b_{i+1}\). Since \(M\vDash\varphi_{i}(\overline{a},b_{1},\ldots,b_{i})\), there is some \(c\) such that \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i},c)\). The idea is that we can pick \(b_{i+1}\) by mimicking \(c\). If \(c\) is within distance \(r/2^{i+1}\) of some coordinate of \(\overline{a}\), \(0\) or some \(b_{j}\) for \(j\leq i\) then we set \(b_{i+1}=c\). Otherwise, we can pick \(b_{i+1}\) to be some element of \(V\) with the same \(r\)-neighborhood type as \(c\) and which is also distance at least \(r/2^{i+1}\) from all coordinates of \(\overline{a}\), \(0\) and all \(b_{j}\). We can do this because either \(V\) contains many points of that \(r\)-neighborhood type (more than all the points within distance \(r/2^{i+1}\) of \(\overline{a}\), \(0\) and \(b_{1},\ldots,b_{i}\)--this is why we chose the number \((2r+1)(n+m+1)\)) or there are not very many such points and \(V\) contains \(c\) itself. Note that in the first case, \(b_{i+1}\) is within distance \(r/2^{i+1}\) of some element of \(W_{i}\), and in the second case, \(b_{i+1}\in V\). Thus in either case \(b_{i+1}\in W_{i+1}\).
Also, note that in either case \(\overline{a}b_{1}\ldots b_{i}c\) and \(\overline{a}b_{1}\ldots b_{i}b_{i+1}\) have the same \(r/2^{i+1}\)-type. Since the radius of \(\varphi_{i+1}\) can be seen to be \(r/2^{i+1}\) and \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i},c)\), the indiscernibility principle implies that \(M\vDash\varphi_{i+1}(\overline{a},b_{1},\ldots,b_{i+1})\), as desired.
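Before moving on, here's a toy rendition of the algorithm just verified, over a finite mock structure; the quantifier-free matrix `theta` and the pools `U` and `V_prime` are illustrative stand-ins for the objects in the proof, and the oracle for \(M\) is implicit in the ability to evaluate `theta`.

```python
from itertools import product

def check_existential(theta, m, a_bar, U, V_prime):
    """Try to decide M |= exists y_1 ... y_m theta(a_bar, y_bar) by testing
    only witness tuples drawn from the finite pool U ∪ V'."""
    pool = set(U) | set(V_prime)
    return any(theta(a_bar, y_bar) for y_bar in product(pool, repeat=m))

# Toy example over integers: "some y differs from a_1 by exactly 2"
theta = lambda a_bar, y_bar: y_bar[0] - a_bar[0] == 2
print(check_existential(theta, 1, (5,), U={3, 4, 5, 6, 7}, V_prime={0, 1}))  # -> True
```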
We now want to give an algorithm to compute the satisfaction relation of an arbitrary formula. One way to do this is to recursively apply the idea of Proposition 4.4 to reduce checking the truth of a formula with an arbitrary number of quantifiers to checking the truth of a finite number of atomic formulas. However, if we invoke the quantifier elimination results of section 2.3 then we can do something simpler. Recall that Theorem 2.9 tells us every formula is equivalent over \(M\) to a Boolean combination of existential formulas. Thus the algorithm for existential formulas almost immediately yields an algorithm for arbitrary formulas.
**Proposition 4.5** (Satisfaction algorithm for arbitrary formulas).: _Suppose \(\varphi(\overline{x})\) is an \(\mathcal{L}^{\prime}\)-formula. There is a number \(r\in\mathbb{N}\) and an algorithm which, given any tuple \(\overline{a}\) in \(M\) and the following data_
_(1) an oracle for \(M\)_
_(2) and a finite set \(U\subseteq M\),_
_tries to check whether \(M\vDash\varphi(\overline{a})\). Furthermore, if \(U\) contains the \(r\)-neighborhood of every coordinate of \(\overline{a}\) then the algorithm is correct._
**Definition 4.6**.: For convenience, we will refer to the number \(r\) in the statement of this proposition as the **satisfaction radius** of \(\varphi\).
Proof.: By Theorem 2.9, \(\varphi(\overline{x})\) is equivalent over \(M\) to a Boolean combination of existential \(\mathcal{L}^{\prime}\)-formulas, \(\psi_{1}(\overline{x}),\ldots,\psi_{m}(\overline{x})\) (which may have parameters from \(M\)). Let \(r_{1},\ldots,r_{m}\) denote the radii of these formulas and let \(r=\max(r_{1},\ldots,r_{m})\).
The algorithm is simple to describe, but is made slightly more complicated by the fact that the formulas \(\psi_{i}\) may contain parameters from \(M\). For clarity, we will first assume that they do not contain such parameters and then explain how to modify the algorithm in the case where they do.
Here's the algorithm (in the case where there are no parameters). For each \(i\leq m\), use the algorithm for existential formulas and the set \(U\) to check the truth of \(\psi_{i}(\overline{a})\). Then assume all the reported truth values are correct and use them to compute the truth value of \(\varphi(\overline{a})\).
If \(U\) contains an \(r\)-neighborhood around every coordinate of \(\overline{a}\) then for each \(i\leq m\), it contains an \(r_{i}\)-neighborhood around each coordinate of \(\overline{a}\). So in this case, the truth values we compute for \(\psi_{1}(\overline{a}),\ldots,\psi_{m}(\overline{a})\) are guaranteed to be correct and thus the final truth value for \(\varphi(\overline{a})\) is also correct.
Now suppose that the formulas \(\psi_{i}\) contain parameters from \(M\). Let \(\overline{b}_{i}\) be the tuple of parameters of \(\psi_{i}\). Let \(V\) be the set containing the \(r\)-neighborhood of each element of each tuple of parameters \(\overline{b}_{i}\). The only modification that is needed to the algorithm described above is that instead of using \(U\) itself, we should use \(U\cup V\) when applying the satisfaction algorithm for existential formulas (and note that since \(V\) is finite, we can simply hard-code it into our algorithm).
Here's why this algorithm is useful. Note that if we had some way of computably generating the set \(U\) then we would be able to outright compute the satisfaction relation for \(\varphi\) using just an oracle for \(M\). In turn, this would allow us to use an oracle for \(M\) to compute the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\), which is a path through \(R\). Since \(R\) has no computable paths, this would imply \(M\) is not computable. Thus to finish our proof of the uncomputability of \(M\), it is enough to find an algorithm for generating the set \(U\) needed by the satisfaction algorithm. Actually, we can't quite do this in general, but we can do something almost as good: we can enumerate a short list of candidates for \(U\). This is enough to show that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\) is guessable from an oracle for \(M\). Since \(R\) has no guessable paths, this is still enough to imply that \(M\) is not computable.
### The guessing algorithm
We will now prove that \(M\) is not computable. As discussed above, we will do so by proving that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},[\![A(\underline{2}) ]\!]^{M},\ldots\) is guessable relative to an oracle for \(M\). Since the axioms of \(T\) ensure that this sequence is a path through \(R\) and since no path through \(R\) is guessable, this implies that \(M\) is not computable.
In other words, we can complete our proof by constructing an algorithm which, given an oracle for \(M\) and a number \(n\), enumerates a list of at most \(O(n^{2})\) guesses (at least one of which is correct) for the finite sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots,[\![A( \underline{n})]\!]^{M}\).
The rest of this section is devoted to constructing this algorithm. Since it would become annoying to append the phrase "relative to an oracle for \(M\)" to every other sentence that follows, we will adopt the convention that we always implicitly have access to an oracle for \(M\), even if we do not say so explicitly. Thus whenever we say that something is computable or computably enumerable, we mean relative to an oracle for \(M\).
**Warm-up: when \(S\) has a quantifier free definition.** We will begin by constructing an algorithm for one especially simple case. Note that this case is included only to demonstrate how the satisfaction algorithm can be used and to motivate the rest of the proof; it can be skipped without missing any essential details.
The "especially simple case" we are referring to is the case in which \(S\) has a quantifier free \(\mathcal{L}^{\prime}\)-definition. We will see that in this case, the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots\) is not only guessable, but actually computable.
To begin, let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\)--i.e. for every \(a,b\in M\), \(M\vDash S(a)=b\) if and only if \(M\vDash\varphi_{S}(a,b)\). Note that since \(\varphi_{S}\) is quantifier-free, the successor function in \(M\) is computable: to find \(S(a)\) we can just enumerate elements of \(M\) until we see an element \(b\) such that \(M\vDash\varphi_{S}(a,b)\) (which we can check because \(\varphi_{S}\) is quantifier-free). Likewise, we
can also compute the predecessor function: instead of waiting for an element \(b\) such that \(M\vDash\varphi_{S}(a,b)\), we wait for an element \(b\) such that \(M\vDash\varphi_{S}(b,a)\).
We can now explain how to compute \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1})\rrbracket^{M},\ldots\). Let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) and let \(r\) be the satisfaction radius of \(\varphi_{A}\). Given a number \(n\), do the following (a sketch of this procedure appears after the list).
1. First use the fact that the successor function is computable to compute \(\underline{n}=S^{n}(0)\).
2. Next, use the fact that the successor and predecessor functions are computable to compute the \(r\)-neighborhood of \(\underline{n}\). Let \(U\) denote the set of elements in this \(r\)-neighborhood.
3. Finally, use the satisfaction algorithm for \(\varphi_{A}\), along with the set \(U\), to check whether \(M\vDash\varphi_{A}(\underline{n})\) and output the result as the truth value of \(A(\underline{n})\). Note that since \(U\) really does contain the \(r\)-neighborhood of \(\underline{n}\), the outcome of this step is guaranteed to be correct.
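Here's a sketch of this procedure; the oracle for \(M\) is mocked by a finite list of elements, and `phi_S` and `sat_A` stand in for the quantifier-free definition of \(S\) and for the satisfaction algorithm applied to \(\varphi_{A}\), so all names here are illustrative.

```python
def successor(a, elements, phi_S):
    # enumerate elements of M until we find b with M |= phi_S(a, b)
    return next(b for b in elements if phi_S(a, b))

def predecessor(a, elements, phi_S):
    return next(b for b in elements if phi_S(b, a))

def truth_of_A(n, zero, elements, phi_S, sat_A, r):
    nbar = zero
    for _ in range(n):                  # step 1: compute n-underline
        nbar = successor(nbar, elements, phi_S)
    U, lo, hi = {nbar}, nbar, nbar      # step 2: its r-neighborhood
    for _ in range(r):
        lo = predecessor(lo, elements, phi_S)
        hi = successor(hi, elements, phi_S)
        U |= {lo, hi}
    return sat_A(nbar, U)               # step 3: satisfaction algorithm

# Toy check with M mocked as an interval of Z and A(x) meaning "x is even"
elems = list(range(-50, 50))
print(truth_of_A(4, 0, elems, lambda a, b: b == a + 1,
                 lambda x, U: x % 2 == 0, r=3))  # -> True
```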
### Idea of the full algorithm
We have just seen an algorithm that computes the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\llbracket A(\underline{1}) \rrbracket^{M},\ldots\) (without needing to make guesses) in the special case where \(S\) is definable by a quantifier-free \(\mathcal{L}^{\prime}\)-formula. We can no longer assume that there is a quantifier-free definition of \(S\), but by applying the quantifier elimination theorem for mutually algebraic formulas over mutually algebraic structures from section 2.3, we have something almost as good. Namely, let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\). It is easy to see that \(\varphi_{S}\) is mutually algebraic and so, by Theorem 2.10, there is a mutually algebraic existential formula \(\psi_{S}(x,y)\) (possibly with parameters from \(M\)) such that \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\).
The formula \(\psi_{S}(x,y)\) should be thought of as an "approximation" to the successor relation in \(M\). In particular, for a fixed element \(a\), any \(b\) such that \(M\vDash\psi_{S}(a,b)\) holds should be thought of as a candidate for \(S(a)\) and any \(b\) such that \(M\vDash\psi_{S}(b,a)\) holds should be thought of as a candidate for \(P(a)\). This is justified by the following two facts.
1. Since \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\), we have \(M\vDash\psi_{S}(a,S(a))\) and \(M\vDash\psi_{S}(P(a),a)\). In other words, the candidates for the successor and predecessor of \(a\) include the true successor and predecessor of \(a\), respectively.
2. Since \(\psi_{S}\) is mutually algebraic, there are not very many such candidates.
The core idea of the algorithm is that since \(\psi_{S}(x,y)\) is existential, the set of candidates for \(S(a)\) and \(P(a)\) is computably enumerable: to check if \(M\vDash\psi_{S}(a,b)\), we simply wait until we see some tuple in \(M\) which can serve as a witness. Thus we have an algorithm which, given any \(a\in M\), enumerates a short list of candidates for \(S(a)\) and \(P(a)\).
Next, we can bootstrap this into an algorithm which, for any \(a\in M\) and any number \(n\in\mathbb{N}\), enumerates a list of guesses for the sequence \(a-\underline{n},a-\underline{n-1},\ldots,a+\underline{n}\): basically, enumerate guesses for the successor and predecessor of \(a\), then enumerate guesses for the successor and predecessor of each of those guesses and so on, for \(n\) rounds. This puts us in a situation much like the previous subsection (where the successor and predecessor functions were computable). In particular, we can enumerate guesses for the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\ldots,\llbracket A(\underline{n})\rrbracket^{M}\) as follows.
1. First, let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) and let \(r_{A}\) be the satisfaction radius of \(\varphi_{A}\).
2. Given a number \(n\), enumerate guesses for the sequence \(\underline{-r_{A}},\ldots,\underline{n+r_{A}}\).
3. For each such guess, use the satisfaction algorithm to compute a guess for the sequence \(\llbracket A(\underline{0})\rrbracket^{M},\ldots,\llbracket A(\underline{n}) \rrbracket^{M}\).
Note that if the guess from the second step is correct then the guess from the last step will be too because in this case we have correctly identified \(\underline{0},\ldots,\underline{n}\), along with the \(r_{A}\)-neighborhood of each one.
There is only one problem with this algorithm: we may enumerate too many guesses. Suppose that our algorithm for enumerating guesses for the successor of an element of \(M\) enumerates \(k\) guesses. Then it seems that we might end up enumerating up to \(k^{n}\) guesses for \(a+\underline{n}\): \(k\) guesses for \(a+\underline{1}\), \(k^{2}\) guesses for \(a+\underline{2}\) (since each guess for \(a+\underline{1}\) gives rise to \(k\) guesses for \(a+\underline{2}\)), and so on. Thus in the algorithm above, we might end up with about \(k^{n}\) guesses for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\), which is not enough to show that the sequence \([\![A(\underline{0})]\!]^{M},[\![A(\underline{1})]\!]^{M},\ldots\) is guessable.
The second key idea of our algorithm is that we actually don't end up with so many guesses. It is possible to show that since \(\psi_{S}\) is mutually algebraic, if \(M\vDash\psi_{S}(a,b)\) then--almost always--\(a\) and \(b\) are close to each other. In particular, if the radius of \(\psi_{S}\) is \(r\) then with only finitely many exceptions, \(a\) and \(b\) must be distance at most \(r\) apart (this will be proved in Lemma 4.7 below). If we ignore the finitely many exceptions, then this implies that for any \(a\), every candidate for \(S(a)\) is within distance \(r\) of \(a\). By induction, this implies that every candidate for \(a+\underline{n}\) is within distance \(rn\) of \(a\). The point is that this means there are at most \(rn\) such candidates (rather than \(k^{n}\)).
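Here's a toy illustration of this counting argument, with \(M\) mocked as \(\mathbb{Z}\) and a hypothetical guessing rule that always stays within distance \(r\); the point is only the linear (rather than exponential) growth of the candidate set.

```python
def candidate_successors(x, r):
    # hypothetical guessing rule: any point within distance r may be proposed
    return {x + k for k in range(-r, r + 1)}

def candidates_for(a, n, r):
    cands = {a}
    for _ in range(n):
        cands = set().union(*(candidate_successors(x, r) for x in cands))
    return cands

print(len(candidates_for(0, n=10, r=2)))  # -> 41 = 2*r*n + 1, linear in n
```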
This does not quite solve our problem: even if there are only \(rn\) candidates for \(a+\underline{n}\), there could still be exponentially many candidates for the sequence \(a-\underline{n},\ldots,a+\underline{n}\). However, it can be combined with other tricks to reduce the number of guesses to \(O(n^{2})\). This will be explained in detail in the proof of Lemma 4.9.
### Details of the algorithm
We will now describe the details of the algorithm and verify that it works correctly. We will break the algorithm (and its verification) into three parts, which work as follows.
1. **The successor and predecessor guessing algorithm:** an algorithm which takes as input an element \(a\in M\) and uses the existential formula approximating the successor relation to enumerate candidates for the successor and predecessor of \(a\). This is described in Lemma 4.8.
2. **The neighborhood guessing algorithm:** an algorithm which takes as input an element \(a\in M\) and a number \(n\) and uses the ideas discussed above to enumerate candidates for the sequence \(a-\underline{n},\ldots,a+\underline{n}\). This is described in Lemma 4.9.
3. **The \(A\) guessing algorithm:** an algorithm which takes as input a number \(n\) and uses the neighborhood guessing algorithm together with the satisfaction algorithm to enumerate candidates for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\). This is described in Lemma 4.10.
Before describing these algorithms and proving their correctness, we need to prove one technical lemma (which is related to our comment above stating that if \(M\vDash\psi_{S}(a,b)\) then \(a\) and \(b\) are usually close together).
**Lemma 4.7**.: _Suppose that \(\varphi(x,y)\) is a formula (possibly with parameters from \(M\)) of radius \(r\) which is mutually algebraic over \(M\). There is a finite set \(X\) of elements of \(M\) such that if \(M\vDash\varphi(a,b)\) then either \(a\) and \(b\) are distance at most \(r\) apart or at least one of \(a,b\) is in \(X\).7_
Footnote 7: Note that if \(T\) is modified in the way described in Remark 3.1 then both the statement and proof of this lemma can be simplified somewhat. In particular, we can replace the set \(X\) with the \(r\)-neighborhood of \(0\).
Proof.: It will help to first make explicit the parameters of \(\varphi\). Let \(\overline{c}\) denote the tuple of parameters and write \(\varphi^{\prime}(x,y,\overline{z})\) to denote the version of \(\varphi\) with the parameters exposed, i.e. \(\varphi(x,y)\) is \(\varphi^{\prime}(x,y,\overline{c})\).
Call a pair \((a,b)\)**exceptional** if \(a\) and \(b\) are more than distance \(r\) apart and both are more than distance \(r\) from every coordinate of \(\overline{c}\) and \(M\vDash\varphi(a,b)\). We will show that if
\((a,b)\) is exceptional then the \(r\)-neighborhood type of \(a\) occurs only finitely often in \(M\), and likewise for \(b\). Since there are only finitely many \(r\)-neighborhood types, this shows that there are only finitely many exceptional pairs. This is sufficient to finish the proof since we can take \(X\) to consist of all elements which are part of some exceptional pair, together with the \(r\)-neighborhood of each coordinate of \(\overline{c}\).
The claim about exceptional pairs follows from the indiscernibility principle. Suppose \((a,b)\) is an exceptional pair. If \(a^{\prime}\) is any element of \(M\) which is distance more than \(r\) from \(b\) and from every coordinate of \(\overline{c}\) and which has the same \(r\)-neighborhood type as \(a\) then by the indiscernibility principle we have
\[M\vDash\varphi(a,b)\implies M\vDash\varphi^{\prime}(a,b,\overline{c})\implies M \vDash\varphi^{\prime}(a^{\prime},b,\overline{c})\]
and hence \(M\vDash\varphi(a^{\prime},b)\). Since \(\varphi\) is mutually algebraic, there can only be finitely many such \(a^{\prime}\). Thus, outside of the \(r\)-neighborhood of \(b\) and of each coordinate of \(\overline{c}\), there are only finitely many elements with the same \(r\)-neighborhood type as \(a\). Since these \(r\)-neighborhoods are themselves finite, they also contain only finitely many elements with the same \(r\)-neighborhood type as \(a\) and thus we have shown that the \(r\)-neighborhood type of \(a\) only occurs finitely often in \(M\). Symmetric reasoning establishes the same result for \(b\).
**Lemma 4.8** (Guessing algorithm for successors and predecessors).: _There is an algorithm which, given any \(a\in M\) enumerates two lists of elements of \(M\) such that_
1. \(S(a)\) _is in the first list and_ \(P(a)\) _is in the second list._
2. _There is a constant upper bound (independent of_ \(a\)_) on the distance between any enumerated element and_ \(a\)_._
Proof.: Let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\) (i.e. \(M\vDash S(a)=b\) if and only if \(M\vDash\varphi_{S}(a,b)\)). Since \(\varphi_{S}(x,y)\) is mutually algebraic, we can apply Theorem 2.10 to obtain a mutually algebraic existential \(\mathcal{L}^{\prime}\)-formula \(\psi_{S}(x,y)\) (which may contain parameters from \(M\)) such that \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\). Let \(r\) be the radius of \(\psi_{S}\). By Lemma 4.7, there is a finite set \(X\) such that if \(M\vDash\psi_{S}(b,c)\) then either \(b\) and \(c\) are distance at most \(r\) apart or at least one of \(b,c\) is in \(X\). We will hard-code into our algorithm the elements of \(X\), along with the identity of their successors and predecessors.
Note that since \(\psi_{S}(x,y)\) is an existential formula, it follows that for a fixed \(a\), the set of elements \(b\) such that \(M\vDash\psi_{S}(a,b)\) is computably enumerable (to see why, note that we can simply enumerate tuples in \(M\) until we find one that witnesses the existential formula \(\psi_{S}(a,b)\)), and likewise for the set of elements \(b\) such that \(M\vDash\psi_{S}(b,a)\). Thus our algorithm may work as follows.
1. Begin enumerating elements \(b\) such that \(M\vDash\psi_{S}(a,b)\) or \(M\vDash\psi_{S}(b,a)\).
2. For each element \(b\) such that \(M\vDash\psi_{S}(a,b)\), check if either \(a\) or \(b\) is in \(X\). If so, use the hard-coded list of successors and predecessors of elements of \(X\) to check if \(b\) is a successor of \(a\). If this is true, enumerate \(b\) into the first list. If \(a\) and \(b\) are both not in \(X\) then enumerate \(b\) into the first list with no extra checks.
3. Do the same thing for each element \(b\) such that \(M\vDash\psi_{S}(b,a)\), but enumerate \(b\) into the second list instead of the first.
Since \(M\vDash\varphi_{S}(x,y)\to\psi_{S}(x,y)\), the true successor and predecessor of \(a\) will be successfully enumerated. Also, if \(b\) is some element of \(M\) which is distance more than \(r\) from \(a\) then either \(M\nvDash\psi_{S}(a,b)\) and \(M\nvDash\psi_{S}(b,a)\), in which case \(b\) will not be enumerated, or one of \(a,b\) is in \(X\), in which case \(b\) will still not be enumerated (because it is not a true successor or predecessor of \(a\)).
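Here's a sketch of the enumeration underlying this algorithm: for an existential \(\psi_{S}(x,y)=\exists z_{1}\ldots\exists z_{k}\,\theta(x,y,\overline{z})\) with \(\theta\) quantifier free, dovetail over ever-larger pools of elements and yield \(b\) as soon as a witness tuple for \(\psi_{S}(a,b)\) appears. The finite list `elements` stands in for the enumeration of \(M\)'s domain, so this is illustrative only.

```python
from itertools import product

def enumerate_candidates(a, theta, k, elements):
    seen = set()
    for depth in range(1, len(elements) + 1):
        pool = elements[:depth]
        for b in pool:
            if b in seen:
                continue
            # b is a candidate once some witness tuple from the pool works
            if any(theta(a, b, z_bar) for z_bar in product(pool, repeat=k)):
                seen.add(b)
                yield b

# Toy: psi_S(a, b) = exists z (b = a + 1 and z = b), over a slice of Z
print(list(enumerate_candidates(5, lambda a, b, z: b == a + 1 and z[0] == b,
                                1, list(range(10)))))  # -> [6]
```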
**Lemma 4.9** (Guessing algorithm for neighborhoods).: _There is an algorithm which, given any \(a\in M\) and number \(n\in\mathbb{N}\), enumerates a list of at most \(O(n^{2})\) guesses for the sequence \(a-\underline{n},\ldots,a+\underline{n}\), one of which is correct._
Proof.: It is easiest to describe our algorithm in the following way. We will first describe an algorithm which has access to certain extra information (which might not be computable from an oracle for \(M\)) and which uses this extra information to correctly compute the sequence \(a-\underline{n},\ldots,a+\underline{n}\). We then obtain an algorithm for enumerating guesses for the sequence by trying each possible value of the extra information and running the algorithm on each of these values in parallel.8 To finish, we will have to show that there are only \(O(n^{2})\) possible values for the extra information.
Footnote 8: A slightly subtle point here is that the algorithm which uses extra information to compute \(a-\underline{n},\ldots,a+\underline{n}\) might not terminate if the extra information it is given is incorrect. Thus some of the possible values that we try for the extra information will never actually output a guess. This is why we only say that our final algorithm _enumerates_ a list of guesses rather than that it _computes_ a list of guesses.
To begin, let \(r_{1}\) be the constant from the statement of Lemma 4.8 (i.e. the upper bound on the distance between any \(a\) and any element which is enumerated by the algorithm for guessing successors and predecessors of \(a\)). Let \(\varphi_{S}(x,y)\) be the \(\mathcal{L}^{\prime}\)-definition of \(S\) and let \(r_{2}\) be the satisfaction radius of \(\varphi_{S}\).
Suppose we are given an element \(a\in M\) and a number \(n\in\mathbb{N}\) as input. Let \(N=r_{1}n+r_{2}\). Our algorithm proceeds in two phases.
1. In the first phase, we will use the algorithm from Lemma 4.8 to collect candidates for \(a+\underline{i}\) for each \(-N\leq i\leq N\). More precisely, for each such \(i\) we will find a set \(U_{i}\) which contains \(a+\underline{i}\) and which is contained in the \(r_{1}|i|\)-neighborhood of \(a\).
2. In the second phase, we will use the sets of candidates collected in the first phase as input to the satisfaction algorithm (applied to \(\varphi_{S}\)) to determine the exact identities of \(a+\underline{i}\) for each \(-n\leq i\leq n\).
The "extra information" that we alluded to above is needed in the first phase of the algorithm. This is because the sets \(U_{i}\) are not quite computable from an oracle for \(M\), but only computably enumerable. However, since the they are all finite, it is possible to compute them exactly with only a small amount of additional information. Let \(i\) be the index of the last \(U_{i}\) to have a new element enumerated into it and let \(m\) be the size of \(U_{i}\) once all its elements have been enumerated (note that such an \(i\) and \(m\) exist because all the \(U_{j}\) are finite). We claim that the pair \((i,m)\) is enough information to allow us to compute all the sets \(U_{j}\) exactly and that there are only \(O(n^{2})\) possible values for this pair.
To see why we can compute all the \(U_{j}\) exactly, note that given \(i\) and \(m\) we can simply keep enumerating elements into all the \(U_{j}\) until we see that \(U_{i}\) has size \(m\). To see why there are only \(O(n^{2})\) possible values for the pair \((i,m)\), note that there are only \(2N+1\) possible values for \(i\) and at most \(r_{1}(2N+1)\) possible values for \(m\) (since \(U_{i}\) is contained in the \(r_{1}|i|\)-neighborhood of \(a\), which has \(2r_{1}|i|+1\leq r_{1}(2N+1)\) elements). Thus there are at most \(r_{1}(2N+1)^{2}=O(n^{2})\) possible values for \((i,m)\).
_Phase 1: collecting candidates._ The sets \(U_{i}\) for \(-N\leq i\leq N\) can be enumerated as follows. To begin with, set \(U_{0}=\{a\}\) and set all other \(U_{i}=\varnothing\). Then run the following processes in parallel: for each \(-N<i<N\) and each element \(b\) of \(U_{i}\), use the algorithm of Lemma 4.8 to enumerate candidates for the successor and predecessor of \(b\). If \(i\geq 0\) then add each such candidate for the successor of \(b\) to \(U_{i+1}\). If \(i\leq 0\) then add each candidate for the predecessor of \(b\) to \(U_{i-1}\). It is easy to show by induction that for each \(i\), \(a+\underline{i}\) will eventually be enumerated into \(U_{i}\) and that each element enumerated into \(U_{i}\) is distance at most \(r_{1}|i|\) from \(a\).
_Phase 2: computing neighbors exactly._ Given the sets \(U_{i}\) from phase 1, we can compute the exact identities of \(a-\underline{n},\ldots,a+\underline{n}\) as follows. First, let \(U=U_{-N}\cup\ldots\cup U_{N}\) and note that \(a+\underline{0}=a\). Next, loop over \(i=0,1,\ldots,n-1\). On step \(i\), we will compute \(a+\underline{i+1}\) and \(a-\underline{i+1}\). Suppose that we are on step \(i\) of the algorithm and assume for induction that we have already successfully computed \(a+\underline{i}\) and \(a-\underline{i}\) (note that for \(i=0\) this is trivial). Now do the following:
1. For each \(b\in U_{i+1}\), use the satisfaction algorithm (of Proposition 4.5) with the set \(U\) to check if \(M\vDash\varphi_{S}(a+\underline{i},b)\).
2. For each \(b\in U_{-(i+1)}\), use the satisfaction algorithm with the set \(U\) to check if \(M\vDash\varphi_{S}(b,a-\underline{i})\).
Note that each \(b\in U_{i+1}\) is within distance \(r_{1}(i+1)\) of \(a\). Since \(U\) contains the entire \(N\)-neighborhood of \(a\) and \(N=r_{1}n+r_{2}\geq r_{1}(i+1)+r_{2}\), \(U\) also contains the \(r_{2}\)-neighborhood of \(b\). Thus the conditions of the satisfaction algorithm are fulfilled and so we correctly compute whether \(b\) is the successor of \(a+\underline{i}\) or not. And since \(U_{i+1}\) is guaranteed to contain \(a+\underline{i+1}\), our algorithm will correctly identify \(a+\underline{i+1}\). Completely symmetric reasoning applies to show that our algorithm will correctly identify \(a-\underline{(i+1)}\).
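Here's a sketch of Phase 1, with the successor and predecessor guessers of Lemma 4.8 mocked by arbitrary finite-valued functions; in the real algorithm the sets \(U_{i}\) are only computably enumerable, which is exactly what the extra pair \((i,m)\) compensates for.

```python
def collect_candidates(a, N, guess_succ, guess_pred):
    U = {0: {a}}
    for i in range(N):
        U[i + 1] = set().union(*(guess_succ(b) for b in U[i]))
        U[-(i + 1)] = set().union(*(guess_pred(b) for b in U[-i]))
    return U

# Toy guessers over Z: two candidates per successor, one per predecessor
U = collect_candidates(0, 3, lambda b: {b + 1, b + 2}, lambda b: {b - 1})
print(U[3], U[-3])  # -> {3, 4, 5, 6} {-3}
```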
**Lemma 4.10** (Guessing algorithm for \(A\)).: _There is an algorithm which, given any number \(n\in\mathbb{N}\), enumerates a list of at most \(O(n^{2})\) guesses for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\), one of which is correct._
Proof.: Let \(\varphi_{A}(x)\) be the \(\mathcal{L}^{\prime}\)-definition of \(A\) (i.e. \(M\vDash A(a)\) if and only if \(M\vDash\varphi_{A}(a)\)) and let \(r\) be the satisfaction radius of \(\varphi_{A}\). This algorithm essentially just combines the algorithm for guessing neighborhoods with the satisfaction algorithm for \(\varphi_{A}\).
Given a number \(n\in\mathbb{N}\) as input, first use the algorithm from Lemma 4.9 to enumerate guesses for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) (this can be done by simply giving the element \(0\in M\) and the number \(n+r\) as input to that algorithm). Let \(b_{-(n+r)},\ldots,b_{n+r}\) be one such guess and let \(U=\{b_{i}\mid-(n+r)\leq i\leq n+r\}\). For each \(0\leq i\leq n\), use the satisfaction algorithm with the set \(U\) to check if \(M\vDash\varphi_{A}(b_{i})\). If so, report that \([\![A(\underline{i})]\!]^{M}\) is true and otherwise report that it is false.
So for each guess for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) we generate exactly one guess for the sequence \([\![A(\underline{0})]\!]^{M},\ldots,[\![A(\underline{n})]\!]^{M}\) and thus we generate at most \(O((n+r)^{2})=O(n^{2})\) guesses overall. Furthermore, one of the guesses for the sequence \(-\underline{(n+r)},\ldots,\underline{n+r}\) is guaranteed to be correct. For this guess, each \(b_{i}\) is actually equal to \(\underline{i}\) and for each \(i\leq n\), the set \(U\) really does contain the \(r\)-neighborhood of \(b_{i}\). Thus, for this guess, each \([\![A(\underline{i})]\!]^{M}\) is computed correctly.
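Here's a sketch of how these pieces fit together; `neighborhood_guesses` and `sat_A` are mocks of the routines from Lemma 4.9 and Proposition 4.5, and the names are illustrative.

```python
def guesses_for_A(n, r, neighborhood_guesses, sat_A):
    # each guess for the sequence -(n+r), ..., n+r yields exactly one guess
    # for the truth values of A at 0, ..., n
    for guess in neighborhood_guesses(0, n + r):
        U = set(guess)
        mid = len(guess) // 2       # position of 0-underline in the guess
        yield [sat_A(guess[mid + i], U) for i in range(n + 1)]

# Toy run: a single correct neighborhood guess over Z, A(x) = "x is even"
gen = guesses_for_A(3, 2, lambda a, m: [list(range(-m, m + 1))],
                    lambda x, U: x % 2 == 0)
print(next(gen))  # -> [True, False, True, False]
```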
## 5. Questions
### Bi-interpretability
Since definitional equivalence is a strong form of bi-interpretability, it seems reasonable to ask whether Theorem 1.2 still holds when definitional equivalence is replaced with bi-interpretability.
**Question 2**.: _Is there a consistent, c.e. theory such that no theory bi-interpretable with it has a computable model?_
It seems possible that the theory \(T\) we used in our proof of Theorem 1.2 could also be used to answer this question, but there are a few difficulties. One issue is that while the mutual algebraicity of a structure is preserved under definitional equivalence, it is not always preserved under bi-interpretability.
**Example 5.1** (Bi-interpretability fails to preserve mutual algebraicity).: Let \(\mathcal{L}\) be the language with just equality and let \(\mathcal{L}^{\prime}\) be a language with two sorts \(U\) and \(V\), and two function symbols \(f,g\colon V\to U\). Let \(T\) be the \(\mathcal{L}\)-theory describing an infinite set and let \(T^{\prime}\) be the \(\mathcal{L}^{\prime}\)-theory which states that \(U\) is infinite and \((f,g)\) is a bijection from \(V\) to \((U\times U)\setminus\{(x,x)\mid x\in U\}\).
Given a model of \(T^{\prime}\), we can obtain a model of \(T\) by forgetting the sort \(V\) and the functions \(f\) and \(g\). Given a model of \(T\), we can obtain a model of \(T^{\prime}\) as follows. Take as the underlying set for the model the set of all pairs \((x,y)\), with pairs of the form \((x,x)\) forming the sort \(U\) and pairs of the form \((x,y)\) for \(x\neq y\) forming the sort \(V\). For the functions \(f\) and \(g\), simply take \(f((x,y))=(x,x)\) and \(g((x,y))=(y,y)\).
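Here's a toy rendition of the second interpretation for a finite set \(X\) (ignoring that \(T\) demands an infinite set); it is meant only to make the construction concrete.

```python
def interpret(X):
    U = {(x, x) for x in X}
    V = {(x, y) for x in X for y in X if x != y}
    f = {v: (v[0], v[0]) for v in V}   # f((x, y)) = (x, x)
    g = {v: (v[1], v[1]) for v in V}   # g((x, y)) = (y, y)
    return U, V, f, g

U, V, f, g = interpret({0, 1, 2})
print(len(U), len(V), f[(0, 1)], g[(0, 1)])  # -> 3 6 (0, 0) (1, 1)
```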
It is not hard to check that these two interpretations give a bi-interpretation. However, while every model of \(T\) is clearly mutually algebraic, the same is not true for \(T^{\prime}\). For example, the formula \(f(y)=x\) is not equivalent to any Boolean combination of mutually algebraic formulas.
A second issue (not unrelated to the first) is that, in our proof, we relied on the fact that any model \(M\) of a theory definitionally equivalent to \(T\) carries a notion of distance inherited from \(T\). In particular, we used this to bound the number of guesses required by the neighborhood guessing algorithm of Lemma 4.9. However, if \(M\) is only a model of a theory bi-interpretable with \(T\), it is not clear if there is still a good notion of distance which can play this role.
### Natural theories
Arguably, the theory \(T\) that we used to prove Theorem 1.2 is not very natural. It would be interesting to know if this is necessary.
**Question 3**.: _Is there a natural theory witnessing Theorem 1.2?_
Of course, much depends on what the word "natural" means. In the interests of asking a somewhat more concrete question, let's say that a theory is natural if it has been studied (at least implicitly) by mathematicians who are not logicians.
We can rephrase our question as follows: is there any natural theory which satisfies the robust version of the Tennenbaum property implicit in Theorem 1.2? In light of Pakhomov's results, which seem to show that any theory interpreting a decent amount of arithmetic is definitionally equivalent to a theory without the Tennenbaum property, it seems like a good idea to first ask whether any natural theory satisfies the regular version of the Tennenbaum property but does not interpret any nontrivial fragment of arithmetic. We are not aware of any such theory and would consider any example highly interesting.
**Question 4**.: _Is there any natural (consistent) theory \(T\) such that \(T\) has no computable models and does not interpret any nontrivial fragment of arithmetic?_
One can ask a similar question on the level of models rather than theories. In analogy with our definition for theories, let's say that a countable structure is natural if it has been studied by mathematicians who are not logicians.
**Question 5**.: _Is there a natural countable structure with no computable presentation?_
Again, we are not aware of any completely convincing example of such a structure and would consider any such example to be very interesting.
| We are answering Pakhomov's question by showing that there is a consistent, c.e. theory $T$ such that no theory which is definitionally equivalent to $T$ has a computable model. A key tool in our proof is the model-theoretic notion of mutual algebraicity.
2307.00139 | 3D oxygen vacancy order and defect-property relations in multiferroic
(LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices | Oxide heterostructures exhibit a vast variety of unique physical properties.
Examples are unconventional superconductivity in layered nickelates and
topological polar order in (PbTiO$_3$)$_n$/(SrTiO$_3$)$_n$ superlattices.
Although it is clear that variations in oxygen content are crucial for the
electronic correlation phenomena in oxides, it remains a major challenge to
quantify their impact. Here, we measure the chemical composition in
multiferroic (LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices, revealing a
one-to-one correlation between the distribution of oxygen vacancies and the
electric and magnetic properties. Using atom probe tomography, we observe
oxygen vacancies arranging in a layered three-dimensional structure with a
local density on the order of 10$^{14}$ cm$^{-2}$, congruent with the
formula-unit-thick ferrimagnetic LuFe$_2$O$_4$ layers. The vacancy order is
promoted by the locally reduced formation energy and plays a key role in
stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO$_3$ and
LuFe$_2$O$_4$ layers, respectively. The results demonstrate the importance of
oxygen vacancies for the room-temperature multiferroicity in this system and
establish an approach for quantifying the oxygen defects with atomic-scale
precision in 3D, giving new opportunities for deterministic defect-enabled
property control in oxide heterostructures. | K. A. Hunnestad, H. Das, C. Hatzoglou, M. Holtz, C. M. Brooks, A. T. J. van Helvoort, D. A. Muller, D. G. Schlom, J. A. Mundy, D. Meier | 2023-06-30T21:14:47 | http://arxiv.org/abs/2307.00139v1 | D oxygen vacancy order and defect-property relations in multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices
###### Abstract
Oxide heterostructures exhibit a vast variety of unique physical properties. Examples are unconventional superconductivity in layered nickelates[1] and topological polar order in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices[2, 3]. Although it is clear that variations in oxygen content are crucial for the electronic correlation phenomena in oxides[4], it remains a major challenge to quantify their impact[5]. Here, we measure the chemical composition in multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices, revealing a one-to-one correlation between the distribution of oxygen vacancies and the electric and magnetic properties. Using atom probe tomography, we observe oxygen vacancies arranging in a layered three-dimensional structure with a local density on the order of 10\({}^{14}\) cm\({}^{-2}\), congruent with the formula-unit-thick ferrimagnetic LuFe\({}_{2}\)O\({}_{4}\) layers. The vacancy order is promoted by the locally reduced formation energy and plays a key role in stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) layers, respectively. The results demonstrate the importance of oxygen vacancies for the room-temperature multiferroicity in this system and establish an approach for quantifying the oxygen defects with atomic-scale precision in 3D, giving new opportunities for deterministic defect-enabled property control in oxide heterostructures.
## I Introduction
The concentration and distribution of oxygen in strongly correlated electron systems is essential for the material's response[5]. By introducing oxygen vacancies or interstitials, electronic and magnetic properties can be controlled, and even entirely new functional properties can be obtained[6]. For example, redox reactions can change the oxygen stoichiometry and drive topotactic transitions[7], resistive switching[8], and ferroelectric self-poling[9]. In structured materials, the oxygen diffusion length is usually comparable to the dimensions of the system[10] and local variations in oxygen content naturally arise due to varying defect formation energies[11]. The latter plays a crucial role for property-engineering in oxide heterostructures, where atomically precise interfaces in combination with defect engineering are used to tailor, e.g., polar order[12], magnetic exchange interactions[13], and the onset of superconductivity[14, 15].
Quantifying emergent spatial variations in oxygen content at the atomic level, however, is extremely challenging [5, 16]. Enabled by the remarkable progress in high-resolution transmission electron microscopy, it is possible to image individual oxygen defects in heterostructures [17] and, for sufficiently high defect densities, chemical fingerprints associated with their accumulation or depletion at interfaces/interlayers can be detected [18, 19, 20, 21]. Despite their outstanding capabilities, these electron-microscopy based methods are not quantitative and inherently restricted to 2D projections along specific zone axes. This restriction prevents the full three-dimensional (3D) analysis of oxygen defects and limits the microscopic understanding of the interplay between oxygen defects and the material's physical properties. An experimental approach that, in principle, facilitates the required chemical accuracy and sensitivity to overcome this fundamental challenge is atom probe tomography (APT). The potential of APT is demonstrated by previous work on bulk oxide superconductors [4] and ferroelectrics [22], measuring stoichiometric variations at the nanoscale and lattice positions occupied by individual dopant atoms, respectively.
Here, we quantify the distribution of oxygen vacancies in (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices and demonstrate its importance for the electric and magnetic orders that lead to room-temperature multiferroicity in this system. Using APT, we show that oxygen vacancies (\(\nu_{0}\)) have a propensity to accumulate in the LuFe\({}_{2}\)O\({}_{4}\) monolayers, forming a layered 3D structure with an average density of about \((7.8\pm 1.8)\cdot 10^{13}\) cm\({}^{-2}\). The oxygen vacancies facilitate the electrical screening that is essential for stabilizing the ferroelectric order and control the oxidation state of the iron (Fe), which is responsible for the emergent ferrimagnetism. The results clarify the defect-property relation and show that the multiferroic behavior in (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) is intertwined with - and promoted by - the 3D oxygen vacancy order.
Figure 1a shows a high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) image of the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice. The system exhibits spontaneous electric and magnetic order, facilitating magnetoelectric multiferroicity at room temperature [23]. The ferroelectricity relates to the displacement of the Lu atoms in the LuFeO\({}_{3}\) layers (up-up-down: +_P_; down-down-up: -_P_, see Fig. 1a), whereas the ferrimagnetism has been explained based on Fe\({}^{2+}\)\(\rightarrow\) Fe\({}^{3+}\) charge-transfer excitations in the LuFe\({}_{2}\)O\({}_{4}\) layers [24]. Interestingly, the multiferroic (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice develops an unusual ferroelectric domain state with extended positively charged domain walls in the LuFeO\({}_{3}\) layers, where the polarization meets head-to-head (\(\rightarrow\)\(\leftarrow\)) [25]. The formation of charged head-to-head domain walls is surprising as they have high electrostatic costs, which raises the question of how the material stabilizes them.
To understand the microscopic mechanism that leads to the distinct magnetic and electric order in (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\), we map the 3D chemical composition of the superlattice using APT. For the APT
Figure 1: **3D imaging of the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice structure.****a**, HAADF-STEM image recorded along the [100] zone axis and schematic showing the atomic structure of the superlattice. Ferroelectric +_P_ and -_P_ domains are colored blue and red, respectively. **b**, SEM image of an APT needle. Three different layers are visible, corresponding to the Cr protection layer (dark grey), the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice (bright), and the substrate. **c**, 3D reconstruction of the APT data. Superlattice and substrate are represented by the Fe and ZrO ionic species, respectively. The dark lines in the superlattice correspond to double-Fe columns of the LuFe\({}_{2}\)O\({}_{4}\) layers. **d**, Zoom-in to one of the LuFe\({}_{2}\)O\({}_{4}\) layers in **c**, resolving the double-Fe columns.
analysis, we deposit a protective capping layer (Pt, Cr or Ti) and prepare needle-shaped specimens using a focused ion beam (FIB, see Methods) as shown in Fig. 1b. The needle-like shape is a requirement in APT experiments and allows for producing the high electric fields required for field evaporation of surface atoms when a voltage > 2 kV is applied. The capping layer ensures that the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice is located below the tip of the needle, which enables us to analyze a larger volume and, hence, improve the chemical precision of the experiment. Figure 1c shows the 3D reconstruction of the measured volume, where Fe and ZrO ionic species are presented to visualize the superlattice and substrate, respectively (mass spectrum and bulk chemical composition are presented in Supplementary Fig. S1). The LuFe\({}_{2}\)O\({}_{4}\) layers are visible as darker lines due to their higher concentration of Fe atoms compared to LuFeO\({}_{3}\). The 3D reconstruction shows that the spacing between the LuFe\({}_{2}\)O\({}_{4}\) layers varies within the analyzed volume of the superlattice, ranging from approximately 2 nm to 6 nm. At the atomic scale, the LuFe\({}_{2}\)O\({}_{4}\) layers exhibit the characteristic double-Fe layers (Fig. 1d), consistent with the HAADF-STEM data in Fig. 1a. Furthermore, enabled by the 3D APT imaging, we observe step-like discontinuities in individual LuFe\({}_{2}\)O\({}_{4}\) layers in Fig. 1c. The observation of such growth-related imperfections leads us to the conclusion that the multiferroic response of the material is rather robust and resilient against such local disorder.
Most importantly for this work, the APT measurement provides information about the local chemical composition of the superlattice. Figure 2a displays the concentration of the different atomic species evaluated for the region marked by the white dashed line in Fig. 1c. The line plots are derived by integrating the data in the direction perpendicular to the long axis of the needle-shaped sample, showing pronounced anomalies at the position of the LuFe\({}_{2}\)O\({}_{4}\) layers (marked by dashed lines). In total, seven peaks are resolved, labelled 1 to 7; two peaks correspond to the discontinuous LuFe\({}_{2}\)O\({}_{4}\) layer (represented by the double-peak 1/2) and five peaks to the continuous LuFe\({}_{2}\)O\({}_{4}\) layers resolved in Fig. 1c (3 to 7). In all cases, we consistently find an enhancement in Fe concentration and a decrease in Lu and O concentration in the LuFe\({}_{2}\)O\({}_{4}\) layers relative to LuFeO\({}_{3}\). A more detailed analysis of the chemical composition of one of the continuous LuFe\({}_{2}\)O\({}_{4}\) layers (i.e., layer 3) is presented in Fig. 2b. Figure 2b compares measured and calculated concentration profiles for Lu, Fe, and O. The calculated concentration profile corresponds to a stoichiometric (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, assuming a realistic experimental resolution of about 0.6 nm, showing a good agreement
Figure 2: **3D oxygen vacancy order.****a**, Profiles of the relative chemical composition, with the surface to the left in the plot. Anomalies are observed at all the LuFe\({}_{2}\)O\({}_{4}\) layers, numbered 1 to 7. **b**, Measured (data points) and theoretically expected (solid line) chemical concentration profile at LuFe\({}_{2}\)O\({}_{4}\) layer 3. The shaded area highlights that the measured oxygen content is lower than in stoichiometric LuFe\({}_{2}\)O\({}_{4}\), indicating an accumulation of oxygen vacancies. **c**, 3D visualization of the oxygen stoichiometry based on the APT data set. Oxygen vacancies arrange in a layered three-dimensional structure congruent with the formula-unit-thick ferrimagnetic LuFe\({}_{2}\)O\({}_{4}\) layers. Within the LuFe\({}_{2}\)O\({}_{4}\) layers, oxygen vacancies form puddle-like regions of reduced LuFe\({}_{2}\)O\({}_{4-\delta}\) (blue).
with the experimental data for Lu and Fe. In contrast, the measured concentration of O is lower than expected, indicating an accumulation of oxygen vacancies, v\({}_{0}\). By integrating over the layer, we find a v\({}_{0}\) density of \((7.8\pm 1.8)\cdot 10^{13}\) cm\({}^{-2}\), which corresponds on average to an oxygen-deficient state LuFe\({}_{2}\)O\({}_{4-\delta}\) with \(\delta=0.5\).
The same trend is observed for other LuFe\({}_{2}\)O\({}_{4}\) layers with minor layer-to-layer variations in the v\({}_{0}\) density (see Supplementary Fig. S2), indicating that the oxygen vacancies form a layered 3D structure within the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, congruent with the arrangement of the LuFe\({}_{2}\)O\({}_{4}\) layers. It is important to note, however, that within the different layers the distribution of v\({}_{0}\) is inhomogeneous as shown by the 3D map in Fig. 2c. This map presents the local chemical composition and reflects the periodic variation in v\({}_{0}\) density in the LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) layers, consistent with the integrated data in Fig. 2a,b. Furthermore, it reveals a puddle-like distribution of the oxygen vacancies with puddle sizes in the order of a few nanometers and a maximum local v\({}_{0}\) density of up to \(\approx\) 10\({}^{14}\) cm\({}^{-2}\) (i.e., a reduction to LuFe\({}_{2}\)O\({}_{3.25}\)).
To better understand the propensity of the oxygen vacancies to accumulate at the LuFe\({}_{2}\)O\({}_{4}\) layers, we calculate and compare the v\({}_{0}\) defect formation energies for LuFeO\({}_{3}\) and LuFe\({}_{2}\)O\({}_{4}\) using density functional theory (DFT) calculations (Methods). Possible vacancy sites are located in the Lu- or Fe-layers (v\({}_{0}^{\text{LuO}_{2}}\) or v\({}_{0}^{\text{FeO}}\)) as illustrated in Fig. 3a,b. The respective defect formation energies as a function of temperature are plotted in Fig. 3c, showing that the formation energy for oxygen vacancies at v\({}_{0}^{\text{FeO}}\) sites is in general lower than at v\({}_{0}^{\text{LuO}_{2}}\) sites. In addition, the comparison of the data for LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\) indicates that it is energetically favorable to accommodate oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\), yielding an energy reduction of 0.5 eV per v\({}_{0}^{\text{FeO}}\) relative to oxygen vacancies in LuFeO\({}_{3}\). Thus, by accumulating oxygen vacancies in the LuFe\({}_{2}\)O\({}_{4}\) layers, the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice can substantially reduce its energy, which promotes the formation of
Figure 3: **Defect formation energy for oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\).****a,b**, Schematics illustrating possible defect sites for oxygen vacancies in LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\), respectively. **c**, Comparison of the oxygen vacancy formation energy as a function of temperature for an oxygen partial pressure of 10\({}^{-10}\) atm. The lowest energy is found for oxygen vacancies in the Fe layers of LuFe\({}_{2}\)O\({}_{4}\) (v\({}_{0}^{\text{FeO}}\)). **d**, Schematic of the superlattice structure, summarizing the APT and DFT results. Oxygen vacancies accumulate in the LuFe\({}_{2}\)O\({}_{4}\) layers due to the locally reduced formation energy. The oxygen vacancies stabilize a ferroelectric tail-to-tail configuration at these layers and provide the electrons that are needed to screen the head-to-head domain walls that form in the LuFeO\({}_{3}\) layers.
\(\nu_{0}^{\text{FeO}}\)-rich LuFe\({}_{2}\)O\({}_{4}\) layers and, hence, a layered 3D v\({}_{0}\) order consistent with the APT results.
The observed 3D oxygen vacancy order has a direct impact on the electric and magnetic properties and provides insight into their microscopic origin. The accumulation of v\({}_{0}\) effectively leads to electron-doping of the LuFe\({}_{2}\)O\({}_{4}\) layers. We find that the locally measured v\({}_{0}\) density (Fig. 2) is equivalent to a positive charge of 25 \(\pm\) 5 \(\mu\)C/cm\({}^{2}\). This explains why the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice develops the unusual ferroelectric tail-to-tail configuration at the LuFe\({}_{2}\)O\({}_{4}\) layers (seen in Fig. 1a and 3d). These layers carry a negative charge of about 12 \(\mu\)C/cm\({}^{2}\), which partially compensates the positive charge associated with the oxygen vacancies. As a consequence of the energetically favored tail-to-tail configuration at the LuFe\({}_{2}\)O\({}_{4}\) layers, the formation of positively charged head-to-head domain walls within the LuFeO\({}_{3}\) layers is enforced; these walls, in turn, are screened by a redistribution of the mobile electrons generated by the v\({}_{0}\). The importance of this redistribution of electrons, i.e., the charge transfer of free electrons to the head-to-head domain walls, goes beyond the ferroelectric properties. It also drives the change in the oxidation state of Fe in the LuFe\({}_{2}\)O\({}_{4}\) layers (Fe\({}^{2+}\)\(\rightarrow\) Fe\({}^{3+}\)) [23, 24, 25], which is crucial for the ferrimagnetic order of the material.
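As a quick consistency check (our own arithmetic, assuming each neutral oxygen vacancy donates two electrons), the quoted charge follows directly from the measured vacancy density:

\[\sigma=2e\,n_{\mathrm{v}_{0}}=2\times(1.602\times 10^{-19}\,\mathrm{C})\times(7.8\times 10^{13}\,\mathrm{cm}^{-2})\approx 2.5\times 10^{-5}\,\mathrm{C/cm^{2}}=25\,\mu\mathrm{C/cm^{2}}.\]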
In conclusion, both the electric and magnetic properties of (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) are closely related to the oxygen vacancy ordering. The results clarify the microscopic origin of the unusual ferroelectric domain structure and the enhancement of the magnetic response, revealing the importance of extrinsic defect-driven mechanisms for the emergence of room-temperature multiferroicity. Quantitative 3D imaging of oxygen defects and chemical profiling at the atomic scale is of interest beyond the defect-property relations discussed in this work and can provide insight into defect-driven effects and the general role that oxygen vacancies (or interstitials) play in emergent phenomena in oxide heterostructures. The approach shown demonstrates the benefit of quantitative atomic-scale characterization of oxygen at interfaces in oxides, which is crucial for a better understanding of their complex chemistry and physics, as well as for improved property engineering and their utilization in nanoelectronic and oxitronic technologies.
## Methods
**Sample preparation and characterization:** Thin films of (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) were grown by reactive-oxide molecular-beam epitaxy in a Veeco GEN10 system on (111) (ZrO\({}_{2}\))\({}_{0.905}\)(Y\({}_{2}\)O\({}_{3}\))\({}_{0.095}\) (i.e., 9.5 mol% yttria-stabilized zirconia, YSZ) substrates, as described in Ref. [23]. A 300 nm Ti or Cr protective layer was deposited on top of the film with e-beam evaporation using a Pfeiffer Vacuum Classic 500, at a rate of 1 Å/s. The characteristic needle-shaped specimens for APT were prepared with a Helios NanoLab DualBeam FIB as described in Ref. [26]. Cross-sectional TEM specimens were prepared using an FEI Strata 400 FIB with a final milling step at 2 keV to reduce surface damage.
**Transmission electron microscopy:** Selected specimens were inspected to ensure adequate sample quality with TEM using a JEOL JEM-2100F field-emission electron microscope operating at 200 kV. The high-resolution HAADF-STEM image in Fig. 1 was acquired on a 100-keV Nion UltraSTEM, a fifth-order aberration-corrected microscope. The lutetium distortions were quantified from HAADF-STEM images, as described in Ref. [23].
**Atom probe tomography:** APT measurements were recorded with a Cameca LEAP 5000XS instrument, operating in laser-pulsed mode. Data was collected at cryogenic temperature (\(T\) = 25 K) with an applied bias between 2 kV and 10 kV. Laser pulses with 30 pJ energy and 250 kHz frequency were used, and the detection rate was set to 0.5%, i.e., on average 5 ions detected every 1000 pulses. The raw APT data was reconstructed into 3D datasets with the Cameca IVAS 3.6.12 software, using the voltage profile to determine the radial evolution. The image compression factor and field reduction factor were adjusted to make the thin film flat relative to the substrate.
**First-principles calculations of oxygen vacancy formation.** To understand the tendency of formation of an oxygen vacancy (v\({}_{0}\)) in the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattice, we studied the formation energy (\(\Delta E_{f}\)) of an oxygen vacancy as a function of temperature (\(T\)) and oxygen partial pressure (\(p\)) by considering the bulk ferroelectric state of LuFeO\({}_{3}\) (space group \(P6_{3}cm\)) and the bulk ferroelectric Fe\({}^{2+}\)/Fe\({}^{3+}\) charge-ordered state of LuFe\({}_{2}\)O\({}_{4}\) (space group \(Cm\)). This is a reasonable consideration as in the (LuFeO\({}_{3}\))\({}_{9}\)/(LuFe\({}_{2}\)O\({}_{4}\))\({}_{1}\) superlattices the improper ferroelectric signature
trimer distortions are induced in the LuFe\({}_{2}\)O\({}_{4}\) layer by the LuFeO\({}_{3}\) layers [23]. The formation of oxygen vacancies was studied by extracting one oxygen atom from the supercell of the ferrite systems. We used the following equation to calculate \(\Delta E_{f}\), [27, 28]
\[\Delta E_{f}=E(\nu_{0})-E_{0}+\Delta x\mu_{0}\]
where \(E(\nu_{0})\) and \(E_{0}\) represent the total energies of the supercell with and without an oxygen vacancy, respectively, and \(\Delta x\) denotes the number of \(\nu_{0}\) created in the supercell. As we considered a neutral oxygen vacancy, \(\Delta E_{f}\) does not depend on the charge state of \(\nu_{0}\) or the Fermi level of the system [28]. The chemical potential of an oxygen atom, denoted as \(\mu_{0}\), was calculated by the following equation [29],
\[\mu_{0}(p,T)=\mu_{0}(p_{0},T_{0})+\mu_{0}(p_{1},T)+\frac{1}{2}k_{B}T\ln\left( \frac{p}{p_{1}}\right)\]
Here, \(\mu_{0}(p_{0},T_{0})\) represents the oxygen chemical potential at zero pressure (\(p_{0}=0\)) and zero temperature (\(T_{0}=0\)). According to the first-principles calculations, \(\mu_{0}(p_{0},T_{0})=\frac{1}{2}E(\text{O}_{2})\), where \(E(\text{O}_{2})\) denotes the total energy of an O\({}_{2}\) molecule. The second term, \(\mu_{0}(p_{1},T)\), which denotes the contribution of the temperature to the oxygen chemical potential at a particular pressure of \(p_{1}=1\) atm, was obtained from the experimental data [30]. The third term, \(\frac{1}{2}k_{B}T\ln\left(\frac{p}{p_{1}}\right)\), represents the contribution of pressure to the chemical potential of oxygen. Here, \(k_{B}\) is the Boltzmann constant. In the present study, we considered two kinds of oxygen vacancies, located in the Lu- (\(\nu_{0}^{\text{LuO}_{2}}\)) or Fe (\(\nu_{0}^{\text{FeO}}\)) layers, as illustrated in Fig. 3a,b. The oxygen vacancy formation was calculated in supercells consisting of 12 formula units and 24 formula units of LuFe\({}_{2}\)O\({}_{4}\) and LuFeO\({}_{3}\), respectively.
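The two equations above combine into a short numerical sketch. All energies below are placeholders for illustration only (they are not the DFT values of this work), and `dmu_T` stands in for the tabulated temperature term taken from Ref. [30]:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def mu_O(mu_ref, dmu_T, T, p, p1=1.0):
    """Oxygen chemical potential per atom (eV): mu_ref = 0.5*E(O2) from DFT,
    dmu_T = tabulated temperature contribution at p1 = 1 atm."""
    return mu_ref + dmu_T + 0.5 * K_B * T * np.log(p / p1)

def delta_E_f(E_vac, E_pristine, mu, dx=1):
    """Formation energy of dx neutral oxygen vacancies in a supercell."""
    return E_vac - E_pristine + dx * mu

# Placeholder supercell energies (eV), chosen only to illustrate the workflow:
mu = mu_O(mu_ref=-4.93, dmu_T=-1.20, T=1000.0, p=1e-10)
print(delta_E_f(E_vac=-810.0, E_pristine=-818.5, mu=mu))  # ~1.4 eV
```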
**Computational details.** We calculated the total energies by performing first-principles calculations employing density functional theory (DFT) and the projector augmented-wave (PAW) method as implemented in VASP (Vienna _Ab initio_ Simulation Package) [31, 32]. The Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) was used for the exchange-correlation functional [33]. A kinetic energy cut-off of 500 eV and appropriate k-point meshes were selected so that total ground-state energies were converged to \(10^{-6}\) eV and the Hellmann-Feynman forces were converged to 0.001 eV/Å. For each structure, the coordinates of all atoms and the lattice vectors were fully relaxed. The GGA+U method as developed by Dudarev _et al._[34] was employed to deal with electron correlation in the Fe \(3d\) states. All calculations were performed considering \(U_{\text{eff}}=U-J_{\text{H}}=4.5\) eV for the Fe \(3d\) states, where \(U\) and \(J_{\text{H}}\) represent the spherically averaged matrix elements of the on-site Coulomb interactions. Additionally, we cross-checked the value of \(\Delta E_{f}\) by varying \(U_{\text{eff}}\) from 3.5 to 5.5 eV, as used in previous studies [35, 36, 23]. We treated the Lu \(4f\) states as core states. All total-energy calculations were performed with a ferromagnetic collinear arrangement of Fe spins and without spin-orbit coupling.
**Estimation of O vacancy density:** Due to the change in unit-cell composition at the LuFe\({}_{2}\)O\({}_{4}\) layer, the O vacancy density cannot be extracted directly from the profile in Fig. 2. Instead, a simulation based on the ideal, defect-free superlattice structure was made (solid line in Fig. 2). Using a DFT-based structure of the superlattice, the ideal atomic distribution was simulated. The atoms were then displaced randomly to simulate the spatial resolution of the experiment, using a Gaussian distribution with a standard deviation of 0.55 nm. A chemical profile across the simulated structure was then computed to obtain the expected profile of an ideal superlattice. The difference between the simulated profile and the real data (shaded area in Fig. 2) represents a measure of the \(\nu_{0}\) concentration. This concentration was converted into a \(\nu_{0}\) density by multiplying it with the oxygen density of the simulated data, so that the limited APT detection efficiency does not affect the final value. The 3D map of the oxygen depletion (Fig. 2c) is derived by displaying the chemical composition in the lateral dimension within five 20 \(\times\) 20 \(\times\) 1.5 nm\({}^{3}\) volumes. The chemical composition is converted into formula units (i.e., LuFe\({}_{2}\)O\({}_{4}\)) by measuring the local chemical composition and compensating for the spatial resolution of the instrument, as the oxygen depletion is spread over regions larger than the actual LuFe\({}_{2}\)O\({}_{4}\) layer.
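A minimal sketch of the profile-broadening step described above; the ideal atom positions, layer spacing, and atom counts are hypothetical, and only the 0.55 nm Gaussian blur follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ideal z-positions (nm) of O atoms in a defect-free superlattice
z_ideal = np.repeat(np.arange(0.0, 30.0, 0.6), 400)

# Blur the positions by the experimental resolution (sigma = 0.55 nm)
z_blurred = z_ideal + rng.normal(0.0, 0.55, size=z_ideal.size)

# Expected 1D oxygen profile of the ideal (vacancy-free) structure; the
# measured-minus-simulated difference then estimates the v_O concentration
counts, _ = np.histogram(z_blurred, bins=np.arange(0.0, 30.0, 0.2))
profile = counts / counts.max()
```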
Oxide heterostructures exhibit a vast variety of unique physical properties. Examples are unconventional superconductivity in layered nickelates and topological polar order in (PbTiO$_3$)$_n$/(SrTiO$_3$)$_n$ superlattices. Although it is clear that variations in oxygen content are crucial for the electronic correlation phenomena in oxides, quantifying their impact remains a major challenge. Here, we measure the chemical composition in multiferroic (LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices, revealing a one-to-one correlation between the distribution of oxygen vacancies and the electric and magnetic properties. Using atom probe tomography, we observe oxygen vacancies arranging in a layered three-dimensional structure congruent with the formula-unit-thick ferrimagnetic LuFe$_2$O$_4$ layers. The vacancy order is promoted by the locally reduced formation energy and plays a key role in stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO$_3$ and LuFe$_2$O$_4$ layers, respectively. |
2306.00210 | PERFOGRAPH: A Numerical Aware Program Graph Representation for
Performance Optimization and Program Analysis | The remarkable growth and significant success of machine learning have
expanded its applications into programming languages and program analysis.
However, a key challenge in adopting the latest machine learning methods is the
representation of programming languages, which directly impacts the ability of
machine learning methods to reason about programs. The absence of numerical
awareness, aggregate data structure information, and improper way of presenting
variables in previous representation works have limited their performances. To
overcome the limitations and challenges of current program representations, we
propose a graph-based program representation called PERFOGRAPH. PERFOGRAPH can
capture numerical information and the aggregate data structure by introducing
new nodes and edges. Furthermore, we propose an adapted embedding method to
incorporate numerical awareness. These enhancements make PERFOGRAPH a highly
flexible and scalable representation that effectively captures programs
intricate dependencies and semantics. Consequently, it serves as a powerful
tool for various applications such as program analysis, performance
optimization, and parallelism discovery. Our experimental results demonstrate
that PERFOGRAPH outperforms existing representations and sets new
state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and
10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets
new state-of-the-art results in various performance optimization tasks like
Parallelism Discovery and NUMA and Prefetchers Configuration prediction. | Ali TehraniJamsaz, Quazi Ishtiaque Mahmud, Le Chen, Nesreen K. Ahmed, Ali Jannesari | 2023-05-31T21:59:50 | http://arxiv.org/abs/2306.00210v2 | # Perfograph: A Numerical Aware Program Graph Representation for Performance Optimization and Program Analysis
###### Abstract
The remarkable growth and significant success of machine learning have expanded its applications into programming languages and program analysis. However, a key challenge in adopting the latest machine learning methods is the representation of programming languages, which directly impacts the ability of machine learning methods to reason about programs. The absence of numerical awareness, composite data structure information, and improper way of presenting variables in previous representation works have limited their performances. To overcome the limitations and challenges of current program representations, we propose a novel graph-based program representation called Perfograph. Perfograph can capture numerical information and the composite data structure by introducing new nodes and edges. Furthermore, we propose an adapted embedding method to incorporate numerical awareness. These enhancements make Perfograph a highly flexible and scalable representation that can effectively capture programs' intricate dependencies and semantics. Consequently, it serves as a powerful tool for various applications such as program analysis, performance optimization, and parallelism discovery. Our experimental results demonstrate that Perfograph outperforms existing representations and sets new state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and 10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets new state-of-the-art results in various performance optimization tasks like Parallelism Discovery and NUMA and Prefetchers Configuration prediction.
## 1 Introduction
In recent years, the remarkable success of machine learning has led to transformative advancements across numerous fields, including compiler optimization and program analysis. The applications include compiler heuristics prediction, optimization decisions, parallelism detection, etc. [4; 14]. The training process generally involves feeding program data as input and transforming it into a representation suitable for machine learning models. The selection of program representation is crucial, as it can significantly impact the performance of the machine learning model [11]. With the development of graph neural networks (GNNs), an increasing number of graph representations of programs have been incorporated into GNN models for program analysis [3; 26]. One of the pioneering efforts in developing a comprehensive graph representation for programs is ProGraML [13]. ProGraML incorporates control, data, and call dependencies as integral components of a program's representation. In contrast to prior sequential learning systems for code, ProGraML closely resembles the intermediate representations used by compilers, and the propagation of information through these graphs
mimics the behavior of typical iterative data-flow analyses. Despite the success that ProGraML has achieved, there are shortcomings in this current state-of-the-art program representation, especially in performance-oriented downstream tasks. These limitations stem from neglecting numerical values available at compile time and the inadequate representation of composite data types.
In this paper, we present Perfograph* to address the limitations of the current state-of-the-art program representation. Additionally, we propose a novel way to embed numbers in programs in an elegant way so that our DL model will not face unknown numbers during inference time. Our experiments demonstrate that Perfograph sets new state-of-the-art results in numerous downstream tasks. For example, in the Device Mapping downstream task, Perfograph yields error rates as low as 6% and 10% depending on the target hardware. Moreover, Perfograph even outperforms the tools and models specifically designed for certain tasks such as parallelism discovery.
Footnote *: Code available at: [https://github.com/tehranixyz/perfograph](https://github.com/tehranixyz/perfograph)
Overall, the main contributions of this paper are:
* A new compiler- and language-agnostic program representation based on LLVM-IR that represents programs as graphs.
* The proposed representation supports composite data types and provides numerical awareness, making it highly effective for performance optimization tasks.
* Evaluation of the proposed representation on common downstream tasks and outperforming state-of-the-art representations such as ProGraML.
* Quantification of the proposed approach on a new set of downstream tasks such as parallelism discovery and configuration of NUMA systems.
The rest of the paper is structured as follows: Section 2 presents the related works. In Section 3, we provide a motivational example, showing the limitations of ProGraML, the state-of-the-art program representation. This section is followed by Section 4, where we present our proposed representation Perfograph along with the novel way of embedding numerical values. In Section 5, experimental results on downstream tasks are provided, and finally, Section 6 concludes the paper and discusses some future works.
## 2 Related Works
Machine learning has brought significant advancements in many fields, and program analysis and software engineering are no exceptions. However, Machine Learning (ML) and Deep Learning (DL) models cannot directly process raw source code to reason about programs. Therefore, researchers have explored different approaches to represent applications in a format suitable for DL models. Generally, there are three types of commonly used program representations: sequence of tokens, Abstract Syntax Tree (AST), and Intermediate Representation (IR).
**Sequence of tokens:** The initial attempts [15; 20; 25] represented source code as a sequence of tokens, such as identifiers, variable names, or operators. This approach intuitively treats programming languages similarly to natural languages. It allows for the utilization of advanced natural language processing (NLP) techniques, particularly with the advent of large language models [18; 21; 22]. However, this token-based representation overlooks the inherent dependency information within the program's structure. It fails to capture the unique relationships and dependencies between different elements of the code, which can limit its effectiveness in tasks such as compiler optimization and code optimization.
**AST:** An AST represents the structure of a program by capturing its hierarchical organization. It is constructed based on the syntactic rules of the programming language and provides a high-level abstraction of the code. Previous works have leveraged ASTs as inputs to tree-based models for various code analysis tasks like software defect prediction [16] and code semantic study [9]. Moreover, there have been efforts to augment ASTs into graphs that incorporate program analysis flows such as control flow and data flow. These AST-based graph representations capture more comprehensive code dependency information and have shown superior results compared to traditional approaches in previous works [2; 3].
**IR:** IR is an intermediate step between the source code and the machine code generated by a compiler. Previous work [36] has utilized IR to train an encoding infrastructure for representing programs as a distributed embedding in continuous space. It augments the symbolic encodings with the flow of information to capture the syntax as well as the semantics of the input programs. However, it
generates embedding at the program or function level and also requires data-flow analyses type for generating the embedding. In contrast, our approach derives embedding from the representation and works at the more fine-grained instruction level. More recent works [6; 13; 8] have leveraged IR-based graph representation to better capture essential program information, such as control flow, data flow, and dependencies. However, despite their success, IR-based graph representations have certain limitations. For example, these representations may not be numeric-aware or may lack the ability to adequately represent composite data types. In this work, we propose Perfograph, a graph representation based on IR, to address these limitations.
## 3 Motivation
As stated in the related work section, program representations based on the intermediate representation are very effective in enabling DL models to automate various optimizations. One such representation is ProGraML, whose performance surpasses other code representations, making it state-of-the-art in various optimizations and downstream tasks. However, despite its potential, it suffers from several limitations. To name a few: it is incapable of properly carrying information regarding read and write operations to a memory location, has no support for composite data types, and discards numerical values. Listing 1 shows a code snippet where a 3-dimensional array is defined. Figure 1 shows the ProGraML representation of this code snippet. For illustration purposes, instruction nodes and control flow edges are shown in blue, whereas red represents variable and constant nodes and data flow edges. Green edges show the call graph. As can be seen, ProGraML fails to represent some critical information. For instance, the code float arr[2][3][4] is converted to the LLVM IR type [2 x [3 x [4 x float]]]*, which is used to construct a node in ProGraML. This eliminates the structure information of the composite data, like the array's dimensions, leaving it up to the DL model to infer the meaning behind the numbers in [2 x [3 x [4 x float]]]*. Moreover, in this representation, only the types of numbers (e.g., int8, float) are considered, and the actual values of the numbers are not given attention. The absence of numerical awareness limits the performance of ProGraML in applications where numerical values play an important role. A numerically aware representation can help understand and optimize operations involving numeric data types, constants, and expressions. There are also some anomalies in the way temporary variables are depicted. For example, in Figure 1, we see the fourth alloca node allocates memory for a variable, and two store instructions are applied on two separate nodes representing the variable. Thus, the information about the first store instruction is not properly propagated when the second store instruction happens. In the following section, we will see how Perfograph effectively addresses many limitations in the current state-of-the-art program representation. Perfograph uses the ProGraML representation as its initial graph and reconstructs the graph by addressing the aforementioned limitations.
## 4 Perfograph: A fine-grained numerically aware graph representation
Perfograph is a graph representation based on LLVM IR. In fact, Perfograph is built on top of ProGraML; however, it does not suffer from the limitations that ProGraML has, helping DL models to reason over the complex structure of programs and enabling compilers to
Figure 1: ProGraML representation of Listing 1.
make more accurate optimization decisions, especially in terms of performance. Figure 2 shows how various enhancements and improvements are applied to construct a more precise representation. Consider a simple code example of defining a variable and increasing it by one {int i = 0; i++;}. Figure 2(a) shows the ProGraML representation of this code example. In the following subsections, we will explain how Perfograph is constructed by addressing the limitations shown in Figure 2(a).
### Representing Local Identifiers and store instruction
**Local Identifiers:** Local identifiers' names are preceded by % in the LLVM intermediate representation. Memory allocation on the stack is done by the alloca instruction. One of the limitations of the current state-of-the-art program representation, ProGraML, is that it is unable to carry information regarding the operations that happen to a memory location. For instance, in Figure 2(a), the two store nodes represent storing the values 0 and 1 to variable i. However, as shown, each store instruction accesses a separate memory location, making it difficult for the graph neural network to reason over the operations that happen to a memory location. For the embedding vector of the second store node in Figure 2(a) to represent the fact that some information regarding the variable i has changed, one has to increase the number of GNN layers to 3 to support up to 3 hops when propagating the messages in the GNN. This can potentially limit the ability of the GNN model if there is a greater number of hops between the two store nodes shown in Figure 2(a). To address this limitation, instead of having more than one variable node (oval-shaped nodes) per identifier, Perfograph only considers one variable node in its graph representation. Any load or store instruction will refer to the same variable node. These changes are shown in Figure 2(b). We see that the store nodes in Figure 2(b) access the same memory location, thus representing the fact that those store instructions modify the same memory location.
**Store instruction:** LLVM uses the store instruction to write into memory. The store instruction has two arguments: a value to store and the address at which to store it. ProGraML differentiates between these two arguments by adding a position feature to the edges, as shown in Figure 2(a). However, since the store instruction modifies the contents at the corresponding memory address, we posit that it is better to reflect the fact that the content of the identifier has changed. To present this information, Perfograph adds an extra edge from the store node to the node representing the identifier whose value is modified by the store instruction. Figure 2(c) shows these changes in the graph constructed by Perfograph.
### Numerical Awareness
Numbers can be a significant factor in optimization decisions. For example, they can indicate the loop bound, and different optimizations can be considered depending on the loop bound. Perfograph, unlike ProGraML, considers not only the types of numbers such as i32, i64, and float but also their actual values. Perfograph presents numbers as constant nodes (shown as diamond nodes in Figure 2(d)). As illustrated in Figure 2(d), numerical constant nodes have the actual value of the number in their feature set in addition to the type of the number. Even though numerical constant nodes have the value of the number as one of their features, there is a need to embed the numbers in a way that no unknown number will be seen in the test phase.
Figure 2: Perfograph addresses the existing limitations in program representation.
Unlike other tokens, numbers are harder to embed as infinitely many numbers exist, and handling all ranges of numbers would require a very large vocabulary. However, even with a very large vocabulary size, the DL models may still encounter numbers in the inference phase that they have not seen in the training phase. We propose a novel way of embedding numbers called Digit Embedding. Figure 3 shows our approach. To embed a number, we first break the number down into its digits; then, we consider a position for each one of the digits. The goal is to let DL models realize the place value of each digit. Then, each digit and its corresponding position are embedded and summed together. Therefore, we will have an embedding representing the information about the digits and their positions. For instance, in Figure 3, we embed each digit and its corresponding position with an output dimension of 3. Since the number has four digits, the result would be a vector/tensor of size \(4\times 3\). To make sure the Digit Embedding of numbers has the same length across numbers with varying numbers of digits, we apply an aggregation function over the digit dimension. Since the output embedding dimension is three in this example, we would have one vector of length three representing the number after aggregation. The aggregation function can be of any type (max, mean, etc.).
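A minimal PyTorch sketch of this scheme; the dimension sizes and the max aggregation are illustrative, and only non-negative integer tokens are handled:

```python
import torch
import torch.nn as nn

class DigitEmbedding(nn.Module):
    """Embed a numeric token by summing digit and position embeddings,
    then aggregating over the digit dimension."""
    def __init__(self, dim=3, max_digits=32):
        super().__init__()
        self.digit_emb = nn.Embedding(10, dim)        # digits 0-9
        self.pos_emb = nn.Embedding(max_digits, dim)  # place values

    def forward(self, number: int) -> torch.Tensor:
        digits = torch.tensor([int(d) for d in str(number)])
        positions = torch.arange(len(digits))
        per_digit = self.digit_emb(digits) + self.pos_emb(positions)  # (n_digits, dim)
        return per_digit.max(dim=0).values            # fixed-length (dim,)

emb = DigitEmbedding(dim=3)
print(emb(2023).shape)  # torch.Size([3]), regardless of the digit count
```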
### Composite Data Types
Composite data types, such as arrays and vectors, are an essential part of applications. They play an important role in many applications, such as matrix multiplication. Thus, presenting these data types helps the DL models better understand programs. Current LLVM IR-based program representations fail to present composite data types appropriately. For example, consider a three-dimensional integer array. In LLVM IR, this array is shown as [3 x [2 x [3 x i32]]]*. As can be seen, the lengths of the arrays and their data types are inferable. However, without proper representation, the DL model's capacity will be spent on learning these deterministic facts (i.e., the lengths of the arrays and their type). Perfograph considers composite data types as a new node type in its representation. Figure 4(b) shows how composite data types are supported by Perfograph, and a short sketch of this dimension-splitting step is given after the figure captions below. Unlike other LLVM IR-based representations, Perfograph supports multi-dimensional arrays and vectors. Perfograph creates a chain of nodes to present the different dimensions of the arrays. In Figure 4(a), we see there is a node representing the three-dimensional array [3 x [2 x [3 x i32]]]*. Perfograph breaks down the corresponding node into three white nodes (since it is a three-dimensional array), as shown in Figure 4(b). Then each node has a context representing that specific dimension of the array. For example, the context for the third dimension is [3 x i32], whereas for the second dimension, the context is [2 x [3 x i32]]. For each composite type node, in addition to the context of the node, we specifically add the length of the array and its type as additional features. For composite data types whose lengths are not known at compile time, we follow the LLVM convention by considering the length of those data types as vscale. These enhancements will help the DL models to reason over the dimensions and types of composite data types. As a result, Perfograph will ultimately enable the DL models to make more accurate predictions for applications that deal with arrays and vectors.
Figure 3: Overview of the digit embedding.
Figure 4: Perfograph supports composite data types.
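The dimension-splitting step can be sketched in a few lines of Python; the node dictionaries are illustrative, and Perfograph's actual node features additionally include the element type, as described above:

```python
import re

def split_ir_array(ir_type: str):
    """Turn a nested LLVM IR array type such as '[3 x [2 x [3 x i32]]]*'
    into one (length, context) entry per dimension, innermost context last."""
    nodes, ctx = [], ir_type.strip().rstrip('*')
    while (m := re.match(r'\[(\d+) x (.+)\]$', ctx)):
        nodes.append({'length': int(m.group(1)), 'context': ctx})
        ctx = m.group(2)
    return nodes, ctx  # ctx ends up as the element type, e.g. 'i32'

nodes, elem = split_ir_array('[3 x [2 x [3 x i32]]]*')
# nodes[1]['context'] == '[2 x [3 x i32]]', nodes[2]['context'] == '[3 x i32]'
```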
## 5 Experimental Results and Downstream Tasks
In this section, we evaluate Perfograph on 6 downstream tasks. For each downstream task, we will explain the task itself, the dataset, and the baselines. More details regarding the dataset, the models that are experimented with, and the baselines can be found in the supplementary material section.
### Experimental Setup
In our experiments, we use DGL's [37] RGCN [34] implementation for the Perfograph representation. The graphs from Perfograph are treated as heterogeneous and managed via the HeteroGraphConv module. We use a hardware setup of two 18-core Intel Skylake 6140 CPUs and two NVIDIA Tesla V100-32GB GPUs. The embedding space for numbers is generated by extracting digits and positions from a numeric token of an IR statement, which are then passed to a PyTorch [29] embedding layer for digit and position embeddings. These are combined for the final numeric token embedding. Non-numeric tokens directly go through the PyTorch embedding layer. Each Perfograph heterogeneous node converts to a 120-dimensional vector via this embedding. We use the Adam [24] optimizer, the ReLU [1] activation function, a learning rate of \(0.01\), and a hidden_dim parameter of \(60\). Mean aggregation is applied to combine the results of the different node types before a linear classification layer, which outputs a probability distribution over the classes. The class with the highest probability is the prediction.
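A skeletal version of this pipeline in DGL; the edge-type names and the pooling scheme are illustrative rather than the exact ones used in our implementation:

```python
import torch
import torch.nn as nn
import dgl.nn as dglnn

class PerfographRGCN(nn.Module):
    def __init__(self, in_dim=120, hidden_dim=60, n_classes=2,
                 rel_names=('control', 'data', 'call')):
        super().__init__()
        # One GraphConv per edge type, combined by HeteroGraphConv
        self.conv = dglnn.HeteroGraphConv(
            {rel: dglnn.GraphConv(in_dim, hidden_dim) for rel in rel_names},
            aggregate='sum')
        self.classify = nn.Linear(hidden_dim, n_classes)

    def forward(self, g, feats):                 # feats: dict node type -> (N, in_dim)
        h = self.conv(g, feats)                  # dict node type -> (N, hidden_dim)
        h = {k: torch.relu(v) for k, v in h.items()}
        # mean over nodes per type, then mean over node types
        pooled = torch.stack([v.mean(dim=0) for v in h.values()]).mean(dim=0)
        return self.classify(pooled)             # class logits
```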
### Device Mapping
**Problem Definition:** We apply Perfograph to the challenging heterogeneous device mapping problem [12]. In this task, given an OpenCL kernel, we need to predict which accelerator (CPU or GPU) yields higher performance. We compare Perfograph against DeepTune [12], Inst2Vec [6], and ProGraML [13]. The results of the baselines are quoted from [13].
**Dataset:** For this task, we use the dataset published in [12]. In this dataset, there are 256 OpenCL kernels available, and 680 LLVM IR instances are extracted from them. There are two types of GPUs: AMD and NVIDIA. For each of the GPUs, the runtimes of the kernels are recorded in the dataset. For AMD, 276 kernels show better performance on the GPU, and 395 kernels show better performance on the CPU, whereas for NVIDIA, 385 kernels have better runtimes on the GPU, and 286 kernels have better runtimes on the CPU. We consider this a binary CPU-or-GPU classification problem.
**Results:** As the dataset is small, we use the same 10-fold validation (with 80% training, 10% validation, and 10% testing) as ProGraML [13] and choose the model with the highest validation accuracy. The hand-crafted features of [19] are also used as graph-level features in our model to enhance the performance, following the approach in [13]. Tables 1 and 2 show the final precision, recall, F1-score, and accuracy for AMD and NVIDIA devices. Figure 5 compares Perfograph with state-of-the-art models on the Device Mapping dataset. We can see that Perfograph sets new state-of-the-art results by achieving the lowest error rate among the baselines, both for AMD and NVIDIA, indicating the effectiveness of Perfograph.
### Parallelism Discovery
**Problem Definition:** In this problem, given a sequential loop, we try to predict whether a loop can be executed in parallel. We treat this problem as a binary classification problem with two classes: Parallel and Non-Parallel.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Precision & Recall & F1-score & Accuracy \\ \hline CPU & 0.94 & 0.94 & 0.94 & 0.94 \\ GPU & 0.94 & 0.94 & 0.94 & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Perfograph results for AMD devices.
Figure 5: Performance comparison the device mapping task with state-of-the-art models [lower is better].
**Dataset:** The OMP_Serial dataset [10] is used for this task. It contains around 6k compilable source C files with Parallel and Non-Parallel loops. The training dataset contains around 30k IR files. The OMP_Serial dataset has three test subsets to compare the performance with three traditional parallelism assistant tools: Pluto (4032 IR files), autoPar (3356 IR files), and DiscoPoP (1226 IR files).
**Results:** We evaluate Perfograph on all three subsets and compare it with the traditional rule-based tools Pluto [7], autoPar [31], and DiscoPoP [27], as well as the deep-learning-based approaches Graph2Par [10] and ProGraML. Table 3 shows the results. The results of Pluto and Graph2par are reported from [10]. As ProGraML does not include this downstream task in its paper, we used the ProGraML representation in our pipeline to generate the results. The results show that the traditional rule-based tools have the highest precision but the lowest accuracy, because those tools are overly conservative when predicting parallel loops and therefore miss many parallelization opportunities. Perfograph achieves considerably good precision scores across all the test subsets. In terms of accuracy, Perfograph surpasses the current state-of-the-art approaches by 2% on the Pluto and autoPar subsets. On the DiscoPoP subset, it achieves an impressive 99% accuracy and surpasses ProGraML by 9%.
### Parallel Pattern Detection
**Problem Definition:** Parallel loops often follow some specific patterns. Identifying parallel patterns is important because it helps developers understand how to parallelize a specific program since each parallel pattern needs to be treated differently. As a result, we apply Perfograph to identify potential parallel patterns in sequentially written programs. Only the three most common parallel patterns are considered: Do-all (Private), Reduction, and Stencil [32]. Given a loop, the task is to predict the pattern.
**Dataset:** For this experiment, we also use the OMP_Serial dataset [10]. This dataset contains source codes of different parallel patterns. These programs are collected from well-known benchmarks like NAS Parallel Benchmark [23], PolyBench [30], BOTS benchmark [17], and the Starbench benchmark [5]. Then, template programming packages like Jinja [33] are used to create synthetic programs from the templates collected from the mentioned benchmarks. The dataset contains 200 Do-all (Private), 200 Reduction, and 300 Stencil loops.
**Results:** We used 80% of the dataset for training and 20% for testing. Table 4 presents our findings. The results of Pragformer and Graph2par are reported from [10]. We compare with these two approaches as they were specifically developed for solving this problem. For generating the results with ProGraML, we used the ProGraML representation in our pipeline. We can see that Perfograph achieves an impressive 99% accuracy on the OMP_Serial parallel pattern dataset, surpassing the state-of-the-art ProGraML model by 3%. This indicates the strength of Perfograph in capturing the syntactic and structural patterns embedded in source programs. From Table 4, we can also see that Perfograph achieves high precision for all three patterns, with perfect precision for the Do-all and Stencil patterns, while maintaining very good accuracy.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Subset & Approach & Precision & Recall & F1-score & Accuracy \\ \hline \multirow{3}{*}{Pluto} & Pluto & 1 & 0.39 & 0.56 & 0.39 \\ & Graph2par & 0.88 & 0.93 & 0.91 & 0.86 \\ & ProGraML & 0.88 & 0.88 & 0.87 & 0.89 \\ & **Perfograph** & 0.91 & 0.90 & 0.89 & **0.91** \\ \hline \multirow{3}{*}{autoPar} & autoPar & 1 & 0.14 & 0.25 & 0.38 \\ & Graph2par & 0.90 & 0.79 & 0.84 & 0.80 \\ & ProGraML & 0.92 & 0.69 & 0.67 & 0.84 \\ & **Perfograph** & 0.85 & 0.91 & 0.85 & **0.86** \\ \hline \multirow{3}{*}{DiscoPoP} & DiscoPoP & 1 & 0.54 & 0.70 & 0.63 \\ & Graph2par & 0.90 & 0.79 & 0.84 & 0.81 \\ \cline{1-1} & ProGraML & 0.92 & 0.94 & 0.92 & 0.91 \\ \cline{1-1} & **Perfograph** & 0.99 & 1 & 0.99 & **0.99** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of Perfograph on the OMP_Serial dataset.
### NUMA and Prefetchers Configuration Prediction
**Problem Definition:** An appropriate configuration of Non-Uniform Memory Access (NUMA) and Hardware Prefetchers significantly impacts program performance. In this experiment, we define the task of NUMA and prefetcher selection as predicting the right configuration within a given tuning parameter search space. We evaluate the performance of both ProGraML and Perfograph for this task by converting each program in the dataset to ProGraML and Perfograph graphs following the approach in [35].
**Dataset:** We use the dataset in [35], which includes a diverse set of intermediate representation files coupled with the optimal configuration. The dataset incorporates various LLVM compiler optimization flags to produce different forms of the same program. There are 57 unique kernels (IR files) in this dataset, and around 1000 optimization flags are applied, resulting in 57000 IR files in total. Each IR file within the dataset is accompanied by its runtime on two architectures, Sandy Bridge and Skylake, across thirteen different NUMA and prefetcher configurations.
**Results:** Following the approach in the study of TehraniJamsaz _et al._[35], we partition the dataset into ten folds for cross-validation. Figures 6(a) and 6(b) illustrate the performance results in terms of error rates. On average, Perfograph outperforms ProGraML by 3.5% and 1.8% for the Sandy Bridge and Skylake architectures, respectively. These improvements demonstrate the effectiveness of Perfograph compared to the state-of-the-art ProGraML.
### Thread Coarsening Factor (TCF) Prediction
**Problem Definition:** Thread coarsening is an optimization technique for parallel programs that fuses the work of two or more threads together. The number of threads that can be fused together is known as the Thread Coarsening Factor (TCF). For a given program, the task is to predict the coarsening factor value (1, 2, 4, 8, 16, 32) that leads to the best runtime. The running time with coarsening factor 1 is used as the baseline for calculating speedups. For this task, we compare Perfograph against DeepTune [12], Inst2Vec [6] and ProGraML [13]. The results of the baselines are quoted from [6]. However, since ProGraML has not been evaluated on this task in the past, we apply the ProGraML representation in our setup for comparison.
**Dataset:** We use the dataset of Ben-Nun et al. [12]. The dataset contains only 17 OpenCL kernels. For each kernel, the dataset has the runtime information on four different GPUs for the different
\begin{table}
\begin{tabular}{c c c c c c} \hline Approach & Pattern & Precision & Recall & F1-score & Accuracy \\ \hline \multirow{3}{*}{Pragformer} & Do-all & 0.86 & 0.85 & 0.86 & \\ & Reduction & 0.89 & 0.87 & 0.87 & 0.86 \\ & Stencil & N/A & N/A & N/A & \\ \hline \multirow{3}{*}{Graph2Par} & Do-all & 0.88 & 0.87 & 0.87 & \\ & Reduction & 0.9 & 0.89 & 0.91 & 0.9 \\ & Stencil & N/A & N/A & N/A & \\ \hline \multirow{3}{*}{ProGraML} & Do-all & 0.92 & 0.90 & 0.91 & \\ & Reduction & 0.92 & 0.92 & 0.92 & 0.96 \\ & Stencil & 0.98 & 1 & 0.99 & \\ \hline \multirow{3}{*}{**Perfograph**} & Do-all & 1 & 0.97 & 0.99 & \\ & Reduction & 0.97 & 1 & 0.99 & 0.99 \\ \cline{1-1} & Stencil & 1 & 1 & 1 & \\ \hline \end{tabular}
\end{table}
Table 4: Performance comparison for the parallel pattern detection task with Perfograph on the OMP_Serial dataset.
Figure 6: Breakdown of the NUMA and prefetchers configuration prediction per fold [lower is better].
thread coarsening factor values. Hence, for each kernel, we have the runtime corresponding to each thread coarsening factor value on a specific GPU device.
**Results:** We design the problem as a multi-class classification problem where, given a kernel, we try to predict which thread coarsening factor provides the highest performance. As the dataset is very small, we apply a 17-fold cross-validation approach. In each fold, we train our model on 16 data points, and the model is tested on the one unseen data point that is left out of the training set. Figure 7 compares the numbers of kernels for which the correct Thread Coarsening Factor (TCF) is found by ProGraML and Perfograph. Across the four platforms, Perfograph correctly predicts the TCF for 17 cases in total, whereas ProGraML finds only 9. On two of the platforms (AMD Radeon HD 5900 and NVIDIA GTX 480), where ProGraML fails to find any kernel with the correct TCF, Perfograph finds three kernels on each platform with the correct TCF value. As shown in Table 5, even though Perfograph outperforms ProGraML on most computing platforms, it falls behind inst2vec. We posit the reason is that inst2vec has a pretraining phase where it is trained using skip-gram, whereas the dataset of 17 kernels is too small for a DL-based model to generalize. However, we can see that even on a smaller dataset, Perfograph achieves comparable speedups with respect to the current state-of-the-art models.
### Algorithm Classification
**Problem Definition:** The previous downstream tasks showed that, in most cases, Perfograph outperforms the baselines. Those tasks were mostly performance-oriented. We go further by applying Perfograph to a different downstream task: algorithm classification. The task involves classifying a source program into 1 of 104 classes. In this task, we compare the results of Perfograph to those of inst2vec and ProGraML. The results of the baselines are quoted from [13].
**Dataset:** We use the POJ-104 dataset [28], which contains around 240k IR files for training and 10k files for testing, in a similar setup as [13].
**Results:** For this task, inst2vec has an error rate of 5.17, whereas ProGraML has an error rate of 3.38. Perfograph yields an error rate of 5.00, which is better than inst2vec and slightly behind ProGraML. One of the reasons is that ProGraML already has a very small error rate in this task, leaving a very small gap for improvement; still, Perfograph's result is very close to that of ProGraML. We could not reproduce the results of the ProGraML paper in our setup; in fact, when we applied ProGraML in our setup, its error rate was 6.00. Moreover, we posit that for algorithm classification, numbers are not a significant factor, so numerical awareness can slightly confuse the model. However, this experiment shows that Perfograph comes very close to ProGraML's performance in this task and demonstrates the applicability of Perfograph to a wider range of downstream tasks.
## 6 Conclusion and Future Work
In this paper, we presented Perfograph, an LLVM IR-based graph representation of programs that supports composite data types such as arrays and vectors and is numerically aware. Moreover, it addresses several limitations of previous IR-based graph representations. Perfograph is
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Computing Platform & DeepTune & inst2vec & ProGraML & Perfograph \\ \hline AMD Radeon HD 5900 & 1.1 & **1.37** & 1.15 & 1.19 \\ AMD Tahiti 7970 & 1.05 & 1.1 & 1.00 & **1.14** \\ NVIDIA GTX 480 & **1.1** & 1.07 & 0.98 & 1.03 \\ NVIDIA Tesla K20c & 0.99 & **1.06** & 1.03 & 1.01 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Speedups achieved by coarsening threads
Figure 7: Correct TCF found by ProGraML vs Perfograph [higher is better].
evaluated on various downstream tasks, and the experimental results indicate that Perfograph is indeed effective, outperforming the state of the art in most of the downstream tasks. Perfograph's numerical awareness is limited to the numerical values that are available at compile time. For future work, we intend to augment our representation by adding support for dynamic information and to explore integrating hardware performance counters with our representation. Moreover, we plan to develop a pre-trained embedding model using our representation. Having a pre-trained model will help to address the problem of limited training samples in some downstream tasks.
The remarkable growth and significant successes of machine learning have expanded its applications into programming languages and program analysis. However, a key challenge in adopting the latest machine learning methods is the representation of programs, which affects how well machine learning methods can reason about programs. The shortcomings of prior representations, namely missing numerical awareness, missing aggregate data structure information, and inadequate representation of variables, limit their performance. To address these limitations and challenges, we propose PERFOGRAPH, a graph-based program representation. PERFOGRAPH can capture numerical information and aggregate data structures by introducing new nodes and edges. Furthermore, we propose an adapted embedding method to incorporate numerical awareness. These enhancements make PERFOGRAPH a highly flexible and scalable representation that effectively captures programs' intricate dependencies and semantics, making it well suited for program analysis, performance optimization, and parallelism discovery.
2309.10686 | Interval Signal Temporal Logic from Natural Inclusion Functions | We propose an interval extension of Signal Temporal Logic (STL) called
Interval Signal Temporal Logic (I-STL). Given an STL formula, we consider an
interval inclusion function for each of its predicates. Then, we use minimal
inclusion functions for the $\min$ and $\max$ functions to recursively build an
interval robustness that is a natural inclusion function for the robustness of
the original STL formula. The resulting interval semantics accommodate, for
example, uncertain signals modeled as a signal of intervals and uncertain
predicates modeled with appropriate inclusion functions. In many cases,
verification or synthesis algorithms developed for STL apply to I-STL with
minimal theoretic and algorithmic changes, and existing code can be readily
extended using interval arithmetic packages at negligible computational
expense. To demonstrate I-STL, we present an example of offline monitoring from
an uncertain signal trace obtained from a hardware experiment and an example of
robust online control synthesis enforcing an STL formula with uncertain
predicates. | Luke Baird, Akash Harapanahalli, Samuel Coogan | 2023-09-19T15:13:27 | http://arxiv.org/abs/2309.10686v2 | # Interval Signal Temporal Logic from Natural Inclusion Functions
###### Abstract
We propose an interval extension of Signal Temporal Logic (STL) called Interval Signal Temporal Logic (I-STL). Given an STL formula, we consider an interval inclusion function for each of its predicates. Then, we use minimal inclusion functions for the \(\min\) and \(\max\) functions to recursively build an interval robustness that is a natural inclusion function for the robustness of the original STL formula. The resulting interval semantics accommodate, for example, uncertain signals modeled as a signal of intervals and uncertain predicates modeled with appropriate inclusion functions. In many cases, verification or synthesis algorithms developed for STL apply to I-STL with minimal theoretic and algorithmic changes, and existing code can be readily extended using interval arithmetic packages at negligible computational expense. To demonstrate I-STL, we present an example of offline monitoring from an uncertain signal trace obtained from a hardware experiment and an example of robust online control synthesis.
Autonomous systems, constrained control, fault detection.
## I Introduction
Signal Temporal Logic (STL) is an expressive language for encoding desired dynamic behavior of a system. STL specifications are built from predicate functions over the system output as well as Boolean and temporal connectives. For example, a warehouse robot may be required to visit regions defined by predicate functions in a prescribed order and deadline, or a building HVAC system might be allowed to violate a prescribed temperature range for only a limited period of time. STL is equipped with both qualitative logical semantics [1] and quantitative robustness semantics [2] that quantify the margin by which a specification is violated or satisfied.
Two major applications of STL include monitoring and control synthesis. For monitoring, the goal is to determine whether a given signal satisfies an STL specification [3]. There are several available tools and algorithms in the literature for efficient monitoring of an STL specification [4, 5, 6]. For control synthesis, the goal is to obtain a control strategy such that the resulting system output is guaranteed to satisfy a given STL specification. Control synthesis is usually posed as an optimal control problem by including the robustness metric in the cost or constraints. This problem is generally non-convex and non-smooth due to the composition of \(\min\) and \(\max\) appearing in the definition of the robustness metric and is often converted to a mixed-integer program [7, 8]. For example, a state-of-the-art mixed-integer linear program for STL control synthesis over affine predicates with quadratic costs using a minimal number of binary variables is proposed in [8] and implemented in the stlpy Python package. Alternate approaches to control synthesis include under-approximating the non-smooth robustness metric with a smooth approximation [9, 10] and using control barrier functions for certain fragments of STL [11, 12].
One major challenge is accommodating uncertainty in the system dynamics, the system output, and/or the STL specification itself. A variation of STL called pSTL allows satisfaction or violation of a specification over a signal to occur with some probability [13]. Similarly, the paper [14] propagates stochastic robustness intervals of STL robustness with linear predicates for safe motion planning. The paper [15] proposes a monitoring algorithm that accommodates uncertainty and time perturbations using intervals for finite-horizon STL formulas but is limited to monitoring and does not consider uncertainty in the STL predicates. In the context of online monitoring, the paper [6] presents an algorithm where the robustness of a partial signal is predicted as an interval before an entire signal is observed so that satisfaction or violation can be reported early if zero robustness is not in the interval. The paper [16] develops an offline monitoring algorithm for handling common models of sensor uncertainty within an STL framework.
The main contribution of this letter is an interval extension of STL called Interval-STL (I-STL) to accommodate interval-valued uncertainty in the system or specification. The syntax and semantics of I-STL are the same as STL except interval inclusion functions replace predicate functions and \(\min\) and \(\max\) are replaced with their interval inclusion counterparts, resulting in interval-valued quantitative robustness semantics and three-valued qualitative logical semantics for I-STL. Unlike previous works, our construction accommodates uncertainty in the predicate functions themselves. Our main theorem is a soundness result establishing that the interval robustness of I-STL over-approximates the usual STL robustness under any realization of the uncertainty, and similarly for the logical semantics. A main feature of I-STL is that, since its definition is built from inclusion functions and interval arithmetic, existing algorithms for STL are often easily extended to I-STL using
mature interval analysis packages at negligible computational expense. In particular, we extend stlpy to I-STL using our interval toolbox npinterval[17], and we demonstrate the resulting algorithms on two examples: monitoring an uncertain signal and synthesizing a controller for an uncertain system.
The rest of this letter is outlined as follows. Section 2 presents mathematical preliminaries needed for the interval arithmetic and STL. Section 3 is the primary theoretic contribution of this letter describing I-STL. Section 4 gives a brief discussion of advantages and limitations of I-STL. Section 5 provides examples of our method applied to monitoring and control synthesis followed by Section 6 which concludes this letter.
## 2 Mathematical Preliminaries
### Notation
We denote the standard partial order on \(\mathbb{R}^{n}\) by \(\leq\), i.e., for \(x,y\in\mathbb{R}^{n}\), \(x\leq y\) if and only if \(x_{i}\leq y_{i}\) for all \(i\in\{1,\ldots,n\}\). A (bounded) _interval_ of \(\mathbb{R}^{n}\) is a set of the form \(\{z:\underline{x}\leq z\leq\overline{x}\}=:[\underline{x},\overline{x}]\) for some endpoints \(\underline{x},\overline{x}\in\mathbb{R}^{n}\), \(\underline{x}\leq\overline{x}\). Let \(\mathbb{IR}^{n}\) denote the set of all intervals on \(\mathbb{R}^{n}\). We also use the notation \([x]\in\mathbb{IR}^{n}\) to denote an interval when its endpoints are not relevant or implicitly understood to be \(\underline{x}\) and \(\overline{x}\). For a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) and a set \(\mathcal{X}\subseteq\mathrm{dom}(f)\), define the set valued extension \(f(\mathcal{X}):=\{f(x):x\in\mathcal{X}\}\).
A discrete-time signal in \(\mathbb{R}^{n}\) is a function \(\mathbf{x}:\mathbb{N}\to\mathbb{R}^{n}\) where \(\mathbb{N}=\{0,1,2,\ldots\}\). A discrete-time interval signal in \(\mathbb{IR}^{n}\) is a function \([\mathbf{x}]:\mathbb{N}\to\mathbb{IR}^{n}\). If \(\mathbf{x}\) and \([\mathbf{x}]\) are such that \(\mathbf{x}(t)\in[\mathbf{x}](t)\) for all \(t\in\mathbb{N}\), we write \(\mathbf{x}\in[\mathbf{x}]\).
### Interval Analysis
Interval analysis extends operations and functions to intervals [18]. For example, if we know that \(a\in[\underline{a},\overline{a}]\), and \(b\in[\underline{b},\overline{b}]\), it is easy to see that the sum \((a+b)\in[\underline{a}+\underline{b},\overline{a}+\overline{b}]\). The same idea extends to general functions, using an inclusion function to over-approximate its output.
**Definition 1** (Inclusion Function [18]).: Given a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\), the interval function \([f]=[\underline{f},\overline{f}]:\mathbb{IR}^{n}\to\mathbb{IR}^{m}\) is an _inclusion function_ for \(f\) if, for every \([\underline{x},\overline{x}]\in\mathbb{IR}^{n}\), \(f([\underline{x},\overline{x}])\subseteq[f]([\underline{x},\overline{x}])\), or equivalently
\[\underline{f}([\underline{x},\overline{x}])\leq f(x)\leq\overline{f}([ \underline{x},\overline{x}])\quad\text{for all }x\in[\underline{x},\overline{x}].\]
An inclusion function is _minimal_ if for every \([\underline{x},\overline{x}]\), \([f]([\underline{x},\overline{x}])\) is the smallest interval containing \(f([\underline{x},\overline{x}])\), or equivalently
\[[f]_{i}([\underline{x},\overline{x}])=\left[\inf_{x\in[\underline{x}, \overline{x}]}f_{i}(x),\ \sup_{x\in[\underline{x},\overline{x}]}f_{i}(x)\right],\]
for each \(i\in\{1,\ldots,m\}\).
Of particular relevance to this letter are the minimal inclusion functions for \(\min\) and \(\max\).
**Proposition 1**.: _The minimal inclusion functions for \(\min(x_{1},x_{2})\) and for \(\max(x_{1},x_{2})\) with \(x_{1}\in[\underline{x}_{1},\overline{x}_{1}]\in\mathbb{IR}\), \(x_{2}\in[\underline{x}_{2},\overline{x}_{2}]\in\mathbb{IR}\), denoted as \([\min]\) and \([\max]\), are given by_
\[[\min]([x_{1}],[x_{2}]) =[\min(\underline{x}_{1},\underline{x}_{2}),\min(\overline{x}_{1}, \overline{x}_{2})], \tag{1}\] \[[\max]([x_{1}],[x_{2}]) =[\max(\underline{x}_{1},\underline{x}_{2}),\max(\overline{x}_{1}, \overline{x}_{2})]. \tag{2}\]
_Moreover, \([\min]\) and \([\max]\) extend inductively to multiple arguments in the usual way, e.g., \([\min]([x_{1}],[x_{2}],[x_{3}])=[\min(\underline{x}_{1},\underline{x}_{2}, \underline{x}_{3}),\min(\overline{x}_{1},\overline{x}_{2},\overline{x}_{3})]\), etc._
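As a concrete illustration, here is a minimal Python sketch of these two minimal inclusion functions, representing an interval as a `(lower, upper)` tuple (a representation chosen here purely for simplicity):

```python
def imin(*intervals):
    # Minimal inclusion function [min] from Proposition 1, extended
    # inductively to any number of interval arguments.
    return (min(lo for lo, _ in intervals), min(hi for _, hi in intervals))

def imax(*intervals):
    # Minimal inclusion function [max] from Proposition 1.
    return (max(lo for lo, _ in intervals), max(hi for _, hi in intervals))

# min(x1, x2) over x1 in [0, 2], x2 in [1, 3] lies exactly in [0, 2]:
assert imin((0, 2), (1, 3)) == (0, 2)
assert imax((0, 2), (1, 3)) == (1, 3)
```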
For some common functions, the minimal inclusion function is easily defined. For example, if a function is monotonic, the minimal inclusion function is simply the interval created by the function evaluated at its endpoints. However, when considering general functions, finding the minimal inclusion function is often not computationally viable. The following proposition provides a more computationally tractable approach.
**Proposition 2** (Natural Inclusion Functions).: _Given a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) defined by a composition of functions/operations with known inclusion functions as \(f=e_{\ell}\circ e_{\ell-1}\circ\cdots\circ e_{1}\), an inclusion function for \(f\) is formed by replacing each composite function with its inclusion function as \([f]=[e_{\ell}]\circ[e_{\ell-1}]\circ\cdots\circ[e_{1}]\), and is called a natural inclusion function._
Existing software tools such as CORA[19] and npinterval[17] automate the construction of natural inclusion functions from general functions. We refer to [18, Section 2.4] for further discussion and other techniques to obtain other inclusion functions.
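To make Proposition 2 concrete, below is a small hand-rolled sketch, reusing `imax` from the previous snippet, that builds a natural inclusion function by replacing each elementary operation with its inclusion function; in practice a package like npinterval automates this construction.

```python
def iadd(a, b):
    # Inclusion function for x + y on (lower, upper) interval tuples.
    return (a[0] + b[0], a[1] + b[1])

def ineg(a):
    # Inclusion function for -x.
    return (-a[1], -a[0])

# Natural inclusion function for f(x1, x2) = max(x1, -(x1 + x2)):
def f_natural(x1, x2):
    return imax(x1, ineg(iadd(x1, x2)))

# (0, 2): here this happens to match the true range exactly; in general the
# natural inclusion may be conservative when a variable occurs repeatedly.
print(f_natural((0, 1), (-2, 2)))
```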
### Signal Temporal Logic
Signal Temporal Logic (STL) is defined over a set \(\mathcal{P}\) of _predicate functions_ where each \(\mu\in\mathcal{P}\) is a function \(\mu:\mathbb{R}^{n}\to\mathbb{R}\). STL specifications are formed using the syntax [10, 7]
\[\phi\triangleq(\mu(x)\geq 0)\,|\,\neg\phi\,|\,\phi\wedge\psi\,|\,\phi\,\mathcal{U}_{[t_{1},t_{2}]}\psi \tag{3}\]
where \(\mu\in\mathcal{P}\). The operators conjunction \(\wedge\), until \(\mathcal{U}\), and negation \(\neg\) may be used to define disjunction \(\vee\), eventually \(\Diamond\), and always \(\Box\). We occasionally write \(\phi_{\mathcal{P}}\) to emphasize that \(\phi\) is over the set of predicate functions \(\mathcal{P}\).
An STL specification \(\phi\) is evaluated over a discrete-time signal \(\mathbf{x}:\mathbb{N}\to\mathbb{R}^{n}\). We adopt the quantitative robustness semantics of STL defined as follows. The robustness \(\rho^{\phi}\) of a specification \(\phi\) evaluated over signal \(\mathbf{x}\) at time \(t\) is calculated recursively as [10]
\[\begin{split}\rho^{\pi}(\mathbf{x},t)&=\mu(\mathbf{x}(t)),\quad\text{if }\pi=(\mu(x)\geq 0)\\ \rho^{\neg\phi}(\mathbf{x},t)&=-\rho^{\phi}(\mathbf{x},t)\\ \rho^{\phi\wedge\psi}(\mathbf{x},t)&=\min\left(\rho^{\phi}(\mathbf{x},t),\rho^{\psi}(\mathbf{x},t)\right)\\ \rho^{\phi\vee\psi}(\mathbf{x},t)&=\max\left(\rho^{\phi}(\mathbf{x},t),\rho^{\psi}(\mathbf{x},t)\right)\\ \rho^{\Box_{[t_{1},t_{2}]}\phi}(\mathbf{x},t)&=\min_{t'\in[t+t_{1},t+t_{2}]}\left(\rho^{\phi}(\mathbf{x},t')\right)\\ \rho^{\Diamond_{[t_{1},t_{2}]}\phi}(\mathbf{x},t)&=\max_{t'\in[t+t_{1},t+t_{2}]}\left(\rho^{\phi}(\mathbf{x},t')\right)\\ \rho^{\phi\,\mathcal{U}_{[t_{1},t_{2}]}\psi}(\mathbf{x},t)&=\max_{t'\in[t+t_{1},t+t_{2}]}\min\Big(\rho^{\phi}(\mathbf{x},t'),\,\min_{t''\in[t+t_{1},t']}\left(\rho^{\psi}(\mathbf{x},t'')\right)\Big).\end{split} \tag{4}\]
Qualitative semantics of STL formula \(\phi\) evaluated over signal \(\mathbf{x}\) are recovered from the robustness as [10]
\[\left[\mathbf{x}\models\phi\right]=\begin{cases}\text{True}&\text{if }\rho^{\phi}(\mathbf{x},0)\geq 0\\ \text{False}&\text{if }\rho^{\phi}(\mathbf{x},0)<0.\end{cases} \tag{5}\]
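For illustration, here is a minimal Python sketch of the temporal cases of (4) on a sampled signal; `rho` is assumed to be a callable returning the robustness of the subformula at a given time step.

```python
import numpy as np

def rob_always(rho, t, t1, t2):
    # Robustness of always_[t1,t2] phi at time t, following (4).
    return min(rho(tp) for tp in range(t + t1, t + t2 + 1))

def rob_eventually(rho, t, t1, t2):
    # Robustness of eventually_[t1,t2] phi at time t.
    return max(rho(tp) for tp in range(t + t1, t + t2 + 1))

# always_[0,3] (x >= 0.5) evaluated at t = 0 on a short sampled trace:
x = np.array([0.9, 0.8, 0.6, 0.4, 0.7])
print(rob_always(lambda tp: x[tp] - 0.5, 0, 0, 3))  # about -0.1 < 0: violated
```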
## 3 Interval Signal Temporal Logic
In standard STL, the robustness \(\rho^{\phi}\) of a specification \(\phi\) evaluated over a signal \(\mathbf{x}\) at a time \(t\) is a single number. With the aim of incorporating uncertainty in signal values and in predicate functions, in this section, we define and characterize _Interval Signal Temporal Logic_ that is evaluated over interval signals and whose quantitative semantics give an interval of robustness.
Interval Signal Temporal Logic (I-STL) is defined over a set of _interval predicate functions_ \(\mathcal{I}\) where each \(\mathcal{M}\in\mathcal{I}\) is an interval function \(\mathcal{M}:\mathbb{IR}^{n}\rightarrow\mathbb{IR}\). I-STL syntax is the same as STL except we exchange predicate functions for interval predicate functions.
**Definition 2**.: (I-STL Syntax) Given a set \(\mathcal{I}\) of interval predicate functions, I-STL syntax is defined by
\[\phi\triangleq(\mathcal{M}([x])\geq 0)\,|\,\neg\phi\,|\,\phi\wedge\psi\,|\,\phi\,\mathcal{U}_{[t_{1},t_{2}]}\psi \tag{6}\]
for \(\mathcal{M}\in\mathcal{I}\).
An I-STL specification \(\phi\) is evaluated over a discrete-time interval signal \([\mathbf{x}]:\mathbb{N}\rightarrow\mathbb{IR}^{n}\) where \([\mathbf{x}](t)\in\mathbb{IR}^{n}\) for each time \(t\in\mathbb{N}\). Using the minimal inclusion functions \([\min]\) and \([\max]\) given in (1) and (2), we now define the quantitative interval robustness semantics of I-STL as follows.
**Definition 3**.: (I-STL Quantitative Semantics) The _interval robustness_\([\rho]^{\phi}\) of an I-STL specification \(\phi\) evaluated over an interval signal \([\mathbf{x}]\) at time \(t\) is calculated recursively as
\[\begin{split}[\rho]^{\Pi}([\mathbf{x}],t)&=\mathcal{M}([\mathbf{x}](t)),\quad\text{if }\Pi=(\mathcal{M}([x])\geq 0)\\ [\rho]^{\neg\phi}([\mathbf{x}],t)&=-[\rho]^{\phi}([\mathbf{x}],t)\\ [\rho]^{\phi\wedge\psi}([\mathbf{x}],t)&=[\min]\big([\rho]^{\phi}([\mathbf{x}],t),[\rho]^{\psi}([\mathbf{x}],t)\big)\\ [\rho]^{\phi\vee\psi}([\mathbf{x}],t)&=[\max]\big([\rho]^{\phi}([\mathbf{x}],t),[\rho]^{\psi}([\mathbf{x}],t)\big)\\ [\rho]^{\Box_{[t_{1},t_{2}]}\phi}([\mathbf{x}],t)&=[\min]_{t'\in[t+t_{1},t+t_{2}]}\big([\rho]^{\phi}([\mathbf{x}],t')\big)\\ [\rho]^{\Diamond_{[t_{1},t_{2}]}\phi}([\mathbf{x}],t)&=[\max]_{t'\in[t+t_{1},t+t_{2}]}\big([\rho]^{\phi}([\mathbf{x}],t')\big)\\ [\rho]^{\phi\,\mathcal{U}_{[t_{1},t_{2}]}\psi}([\mathbf{x}],t)&=[\max]_{t'\in[t+t_{1},t+t_{2}]}[\min]\Big([\rho]^{\phi}([\mathbf{x}],t'),\;[\min]_{t''\in[t+t_{1},t']}\big([\rho]^{\psi}([\mathbf{x}],t'')\big)\Big).\end{split} \tag{7}\]
We also define three-valued logical semantics from the quantitative interval semantics as follows.
**Definition 4**.: (I-STL Three-Valued Logical Semantics) The truth-value of I-STL formula \(\phi\) evaluated over interval signal \([\mathbf{x}]\) is denoted \(\big{[}[\mathbf{x}]\models\phi\big{]}\) and given by
\[\big[[\mathbf{x}]\models\phi\big]=\begin{cases}\textsc{True}&\text{if }[\rho]^{\phi}([\mathbf{x}],0)\subseteq[0,\infty)\\ \textsc{False}&\text{if }[\rho]^{\phi}([\mathbf{x}],0)\subseteq(-\infty,0)\\ \textsc{Under}&\text{else.}\end{cases} \tag{8}\]
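Continuing the earlier sketches, the temporal cases of Definition 3 and the three-valued verdict of Definition 4 can be illustrated in a few lines, reusing `imin` from above.

```python
def irob_always(irho, t, t1, t2):
    # Interval robustness of always_[t1,t2] phi per Definition 3;
    # irho(t') returns the (lower, upper) robustness of the subformula.
    return imin(*[irho(tp) for tp in range(t + t1, t + t2 + 1)])

def verdict(interval_rho):
    # Three-valued semantics of Definition 4.
    lo, hi = interval_rho
    if lo >= 0:
        return "True"
    if hi < 0:
        return "False"
    return "Under"  # zero lies inside the interval: undetermined

print(verdict((-0.05, 0.12)))  # "Under"
```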
We now establish the key property of I-STL: it provides interval bounds on the robustness of an STL specification given interval uncertainty in the predicate functions and/or signal.
**Definition 5** (Predicate interval extensions).: Given a set of predicate functions \(\mathcal{P}\), a set of interval predicate functions \(\mathcal{I}\) is an _interval extension_ of \(\mathcal{P}\) if for each \(\mu\in\mathcal{P}\) there exists a unique \(\mathcal{M}\in\mathcal{I}\) such that \(\mathcal{M}\) is an inclusion function for \(\mu\).
When \(\mathcal{I}\) is an interval extension of \(\mathcal{P}\), we can obtain an I-STL specification over \(\mathcal{I}\) from an STL specification \(\phi\) over \(\mathcal{P}\) by replacing every instance of a predicate function \(\mu\) with the corresponding \(\mathcal{M}\).
**Definition 6** (Induced I-STL specification).: Given an STL specification \(\phi_{\mathcal{P}}\) over the set of predicate functions \(\mathcal{P}\) and a set of interval predicate functions \(\mathcal{I}\) that is an interval extension of \(\mathcal{P}\), the I-STL specification that is obtained by replacing each instance of a predicate function \(\mu(x)\) in \(\phi_{\mathcal{P}}\) with the corresponding interval predicate function \(\mathcal{M}([x])\) is the I-STL specification over \(\mathcal{I}\) _induced_ by \(\phi_{\mathcal{P}}\) and is denoted \(\phi_{\mathcal{I}}\). When no confusion arises, we sometimes drop the subscript and write \(\phi\) for an STL specification and its induced I-STL specification.
We now present the main theoretical result of this letter, linking the semantics of an STL specification to the semantics of its induced I-STL specification.
**Theorem 1** (Soundness of Quantitative Semantics).: _Let \(\phi_{\mathcal{P}}\) be an STL specification over the set of predicate functions \(\mathcal{P}\). Let \(\mathcal{I}\) be an interval extension of \(\mathcal{P}\) and \(\phi_{\mathcal{I}}\) the I-STL specification over \(\mathcal{I}\) induced by \(\phi_{\mathcal{P}}\). Then, for any interval signal \([\mathbf{x}]:\mathbb{N}\rightarrow\mathbbm{R}^{n}\) and any signal \(\mathbf{x}\in[\mathbf{x}]\), it holds that_
\[\rho^{\phi_{\mathcal{P}}}(\mathbf{x},t)\in[\rho]^{\phi_{\mathcal{I}}}([ \mathbf{x}],t)\quad\text{for all }t. \tag{9}\]
_Moreover,_
\[\big{[}[\mathbf{x}]\models\phi_{\mathcal{I}}\big{]} =\textsc{True}\quad\text{ implies}\quad\big{[}\mathbf{x}\models\phi_{\mathcal{P}}\big{]}=\textsc{ True},\text{ and }\] \[\big{[}[\mathbf{x}]\models\phi_{\mathcal{I}}\big{]} =\textsc{False}\quad\text{ implies}\quad\big{[}\mathbf{x}\models\phi_{\mathcal{P}}\big{]}= \textsc{False}. \tag{10}\]
Proof.: Because each \(\mathcal{M}\in\mathcal{I}\) is an inclusion function for its corresponding predicate function \(\mu\in\mathcal{P}\), and \([\min]\) and \([\max]\) are inclusion functions, each equation in (7) is an inclusion function for the corresponding equation in (4) by Proposition 2. Therefore, (9) follows immediately from the defining property of inclusion functions. For (10), we observe that
\[\big[[\mathbf{x}]\models\phi_{\mathcal{I}}\big]=\textsc{True}\implies\underline{\rho}^{\phi_{\mathcal{I}}}([\mathbf{x}],0)\geq 0\]
so by (9), \(\rho^{\phi_{\mathcal{P}}}(\mathbf{x},0)\geq 0\), that is, \(\big[\mathbf{x}\models\phi_{\mathcal{P}}\big]=\textsc{True}\), and symmetrically,
\[\big[[\mathbf{x}]\models\phi_{\mathcal{I}}\big]=\textsc{False}\implies\overline{\rho}^{\phi_{\mathcal{I}}}([\mathbf{x}],0)<0\]
so by (9), \(\rho^{\phi_{\mathcal{P}}}(\mathbf{x},0)<0\), that is, \(\big[\mathbf{x}\models\phi_{\mathcal{P}}\big]=\textsc{False}\), where \(\underline{\rho}^{\phi_{\mathcal{I}}}\) and \(\overline{\rho}^{\phi_{\mathcal{I}}}\) are the lower and upper bounds of \([\rho]^{\phi_{\mathcal{I}}}\), that is, \([\rho]^{\phi_{\mathcal{I}}}([\mathbf{x}],0)=[\underline{\rho}^{\phi_{\mathcal{I}}}([\mathbf{x}],0),\overline{\rho}^{\phi_{\mathcal{I}}}([\mathbf{x}],0)]\).
Note that if \(\big{[}[\mathbf{x}]\models\phi\big{]}=\textsc{Under}\) we cannot say anything about the truth value of \(\big{[}\mathbf{x}\models\phi\big{]}\). Also note that due to the composition of \([\min]\) and \([\max]\), our construction does not lead to minimal inclusion functions on the robustness \(\rho\) for arbitrary STL specifications.
## 4 Computational Considerations of I-STL
In practice, I-STL specifications most naturally arise by incorporating uncertainty in settings with STL constraints. Aside from the theoretical soundness guarantees of Theorem 1, a key feature of I-STL is that, algorithmically, it is often straightforward to modify existing STL algorithms such as offline monitoring, online monitoring, and control synthesis to incorporate the quantitative semantics in Definition 3. Concretely, as we demonstrate in the case studies, this is often as simple as replacing appropriate numerical computations with their interval counterparts using existing interval arithmetic computation packages, and in many settings, the increase in computational effort is negligible. A contribution of this letter, therefore, is an extension of the stlpy package for STL monitoring and control synthesis [8] to allow for I-STL monitoring and control synthesis using our interval arithmetic package npinterval[17], which implements intervals as a native datatype in the Python numpy package.
For example, consider a setting in which an STL specification is given over known and fixed predicate functions \(\mathcal{P}\). Suppose the objective is to monitor offline (i.e., after all measurements are collected) the robustness of \(\phi\) evaluated over a signal, but the true signal is not known exactly--with this uncertainty captured in the interval signal \([\mathbf{x}]\) instead. In this case, we construct a set of interval predicate functions \(\mathcal{I}\) as the natural interval extension of the original predicate functions, \(\mathcal{I}=\{[\mu]\mid\mu\in\mathcal{P}\}\), and then \([\rho]^{\phi}\) becomes an inclusion function for \(\rho^{\phi}\). We demonstrate this application below in Section 5.1.
We generalize further and consider a setting in which the predicate functions are parameter-dependent, and the parameter is not known exactly but known to be within an interval. For example, consider an affine predicate of the form
\[\mu(x)=a^{\top}x-b\]
for \(a\in\mathbb{R}^{n}\) and \(b\in\mathbb{R}\). If \(a\) and \(b\) are uncertain and only known to be within the intervals \([a]\) and \([b]\), it is natural to consider an interval predicate
\[\mathcal{M}([x])=[a]^{\top}[x]-[b]. \tag{11}\]
As an example, instantiating the predicate \(\mu(x)=a^{\top}x-b\) in stlpy is achieved with, _e.g._,
```python
stlpy.STL.LinearPredicate(a, b)
```
for numpy arrays a and b. Creating the interval predicate (11) is achieved with
```python
a_int = interval.get_iarray(_a, a_)
b_int = interval.get_iarray(_b, b_)
stlpy.STL.LinearPredicate(a_int, b_int)
```
where `_a`, `a_`, `_b`, and `b_` are numpy arrays for the lower and upper endpoints of \([a]\) and \([b]\), and `get_iarray` returns the numpy array of the interval data type.
More generally, given a parameterized predicate function of the form \(\mu(x,p)\) where \(p\in\mathbb{R}^{m}\) is an unknown parameter vector known to be within the interval \([p]\), we take as an interval extension the interval predicate function \(\mathcal{M}([x])=[\mu]([x],[p])\) where \([\mu]\) is any inclusion function for \(\mu\).
For example, given a parameterized Python function mu_p : \(\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) with fixed \(\mathrm{p}\in\mathbb{R}^{m}\),
```python
mu = lambda x: mu_p(x, p=p)
stlpy.STL.NonlinearPredicate(mu, n)
```
instantiates a nonlinear predicate parameterized by numpy array p. Comparatively, the code
```python
p_int = interval.get_iarray(_p, p_)
M = lambda x: mu_p(x, p=p_int)
stlpy.STL.NonlinearPredicate(M, n)
```
instantiates a nonlinear interval predicate obtained from the natural inclusion of the parameterized predicate function `mu_p` evaluated with an uncertain parameter in the interval `p_int` \(:=[\underline{p},\overline{p}]\in\mathbb{IR}^{m}\). Note that npinterval automatically builds a natural inclusion function for `mu_p` when arrays of the interval data type are passed into the function.
We demonstrate this construction and its application in the examples in Section 5.2. We also illustrate how I-STL can be used for enforcing safety specifications due to the construction from inclusion functions.
## 5 Examples
In this section, we provide two example use cases of I-STL. First, we demonstrate monitoring on a signal measured from an experiment with a miniature blimp. We consider both linear and nonlinear uncertain predicates and measurement uncertainty. Then, we show how I-STL can be used in conjunction with theory from [20] for control synthesis of a linear system. All simulations were performed on a 2022 Dell Precision 5570 running Ubuntu 22.04.3 LTS1.
Footnote 1: The code for these examples is available at [https://github.com/qfactslab/Baird_LCSS2024](https://github.com/qfactslab/Baird_LCSS2024).
Because our implementation for monitoring and control synthesis builds on stlpy[8], we convert STL formulas in code into positive normal form (PNF), where negation \(\neg\) is only applied to predicates without loss of generality [21].
### Interval Monitoring on a Real Blimp
We illustrate monitoring of a signal taken from an experiment with the GT-MAB miniature blimp hardware platform [22]. Let \([v_{x},v_{y},v_{z},\omega_{x},\omega_{y},\omega_{z},x,y,z,\theta,\phi,\psi]\in \mathbb{R}^{12}\) be the blimp's state consisting of its velocity, angular rates, position, and orientation, and let \(S\subseteq\mathbb{R}^{2}\) be a square in the center of a room. We wish to monitor the conjunction of the following two specifications:
* The blimp's \(x\)-\(y\) position may only be within the square for up to three seconds, and must then remain outside the square for at least two seconds before reentering.
* The blimp must slow down to a speed of less than \(2m/s\) every three seconds.
We write these as two STL specifications in PNF,
\[\begin{split}\varphi&=([x,y]^{\top}\notin S)\vee\Diamond_{[0,3]}\Box_{[0,2]}([x,y]^{\top}\notin S)\\ \gamma&=\Diamond_{[0,3]}\big(-\|[v_{x},v_{y},v_{z}]^{\top}\|_{2}+2\geq 0\big)\end{split}\]
where the expression \([x,y]^{\top}\not\in S\) is written as \((x\geq d)\vee(x\leq-d)\vee(y\geq d)\vee(y\leq-d)\) where \(d=1.41m\) is half of the width of the square plus the radius of the blimp, assuming the origin is at the center of the room. The signal is generated from a PD controller with four waypoints placed at the coordinates \((0,1.51)\), \((1.51,0)\), \((0,-1.51)\) and \((-1.51,0)\) in the \(xy\)-plane. The PD controller switches target coordinates after reaching each one in order. Due to measurement uncertainty, we add an interval of \(\pm 0.075m/s\) to each of the velocity states and an interval of \(\pm 0.020m\) to each of the position states. We use a natural inclusion function to handle the nonlinear predicate.
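For reference, here is a sketch of how \(\varphi\) could be assembled in stlpy, assuming its convention (as above) that `LinearPredicate(a, b)` encodes \(a^{\top}y\geq b\), its `|` overload for disjunction, and its `always`/`eventually` methods over integer time steps; the component indices of \(x\) and \(y\) in the 12-dimensional state and the sampling period `dt` are illustrative assumptions.

```python
import numpy as np
from stlpy.STL import LinearPredicate

d, dt = 1.41, 0.1  # half-width plus blimp radius; assumed sampling period
ex, ey = np.zeros(12), np.zeros(12)
ex[6], ey[7] = 1.0, 1.0  # assumed positions of x, y in the state vector

# outside-the-square region as a disjunction of four affine predicates:
outside = (LinearPredicate(ex, d) | LinearPredicate(-ex, d)
           | LinearPredicate(ey, d) | LinearPredicate(-ey, d))
# outside now, or within 3 s begin staying outside for 2 s:
phi = outside | outside.always(0, int(2 / dt)).eventually(0, int(3 / dt))
```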
The results of monitoring for \(\varphi\wedge\gamma\) along with a plot of the \(x\)-\(y\) state trajectory and a top-down picture of the square with the blimp is shown in Figure 1. Note that I-STL adds minimal overhead beyond what is equivalent to monitoring two signals instead of one due to the use of the npinterval Python package [17]. Standard STL robustness computations without uncertainty took \(2.450s\) while I-STL computations with uncertainties took \(5.06s\), which is \(6.5\%\) more than twice a standard STL robustness computation.
### Robust Control Synthesis for a Linear System
Consider the following specification adapted from [20, 5]
\[\phi=(0.7\leq y\leq 1.3)\vee\lozenge_{[0,2]}\square_{[0,2]}(0.7\leq y\leq 1.3), \tag{12}\]
and the discrete-time double integrator with bounded additive disturbance
\[x(t+1)=\underbrace{\begin{bmatrix}1&\Delta t\\ 0&1\end{bmatrix}}_{A}x(t)+\underbrace{\begin{bmatrix}0\\ \Delta t\end{bmatrix}}_{B}u(t)+w(t),\qquad y(t)=\underbrace{\begin{bmatrix}1&0\end{bmatrix}}_{C}x(t), \tag{13}\]
with \(x(t)\in\mathbb{R}^{2}\) for all \(t\in\mathbb{N}\), \(u\in[-1,1]\), and \(w\in[-0.001,0.001]^{2}\). Set \(\Delta t=0.25\). The horizon of an STL formula \(\phi\), denoted \(\|\phi\|\), is defined as the number of future time steps of a signal necessary to evaluate an STL formula. Its computation is given in [1], yielding \(\|\phi\|=4/0.25=16\) time steps for \(\phi\) in (12).
The requirement \(0.7\leq y\leq 1.3\) may be written as the conjunction of two linear predicate functions
\[\alpha y-\beta_{1}\geq 0,\text{ and }-\alpha y-\beta_{2}\geq 0,\]
where \(\alpha=1\), \(\beta_{1}=0.7\), and \(\beta_{2}=-1.3\). Suppose, however, that there is uncertainty in the linear predicates captured with the interval bounds \([\underline{\alpha},\overline{\alpha}]=[0.95,1.05]\), \([\underline{\beta}_{1},\overline{\beta}_{1}]=[0.68,0.72]\), and \([\underline{\beta}_{2},\overline{\beta}_{2}]=[-1.32,-1.28]\) for \(\alpha\), \(\beta_{1}\), and \(\beta_{2}\). We wish to minimize the deviation from a nominal control strategy such that the robustness is non-negative for all possible disturbances and all possible realizations of the interval predicates.
Using Theorem 1 with the I-STL specification induced by (12), our control objective is achieved by requiring that the lower bound on the interval robustness be non-negative. We use the formulation from [20, Algorithm 1], with slight modifications to accommodate I-STL. In particular, we replace the original dynamics constraints with a new embedding system giving lower and upper bounds \(\underline{x}\) and \(\overline{x}\) on the state trajectory, which is guaranteed to over-approximate the true behavior of the system, i.e., for all possible disturbances, \(x(t)\in[\underline{x}(t),\overline{x}(t)]\). Therefore, from [20, Equation (8)], using instead the interval robustness, we obtain the optimization problem
\[\begin{split}\min_{\mu=\{\mu(t),\ldots,\mu(t+N-1)\}}&\ \|\hat{u}(t)-\mu(t)\|\\ \text{s.t.}\quad&\begin{bmatrix}\underline{x}(\tau+1)\\ \overline{x}(\tau+1)\end{bmatrix}=\begin{bmatrix}A&0\\ 0&A\end{bmatrix}\begin{bmatrix}\underline{x}(\tau)\\ \overline{x}(\tau)\end{bmatrix}+\begin{bmatrix}B\\ B\end{bmatrix}u(\tau)+\begin{bmatrix}\underline{w}\\ \overline{w}\end{bmatrix}\\ &\begin{bmatrix}\underline{y}(\tau)\\ \overline{y}(\tau)\end{bmatrix}=\begin{bmatrix}C&0\\ 0&C\end{bmatrix}\begin{bmatrix}\underline{x}(\tau)\\ \overline{x}(\tau)\end{bmatrix},\quad t-\|\phi\|\leq\tau\leq t+N\\ &\underline{\rho}^{\phi}([y],\tau)\geq 0,\quad\max\{t-\|\phi\|,0\}\leq\tau\leq t+N-b,\end{split} \tag{14}\]
where \(A\), \(B\), and \(C\) are the matrices from (13). We select \(N=16\), \(b=1\) and solve in a receding horizon fashion as a mixed integer program using Gurobi. The resulting control strategy is plotted in Figure 2 along with the interval robustness in Figure 3.
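The embedding-system constraint in (14) is straightforward to implement; below is a minimal numpy sketch of one step of the bound propagation, valid in this example because \(A\) and \(B\) are entrywise nonnegative.

```python
import numpy as np

dt = 0.25
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.0, dt])
w_lo, w_hi = -0.001 * np.ones(2), 0.001 * np.ones(2)

def embedding_step(x_lo, x_hi, u):
    # One step of the embedding dynamics in (14). Since A and B are
    # entrywise nonnegative here, lower and upper bounds decouple; a general
    # A would require splitting it into positive and negative parts.
    return (A @ x_lo + B * u + w_lo,
            A @ x_hi + B * u + w_hi)

# bounds after one step from a point initial condition with u = 0.5:
print(embedding_step(np.array([1.0, 0.0]), np.array([1.0, 0.0]), 0.5))
```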
The I-STL implementation doubles the state dimension and output dimension due to the embedding system, yielding double the dynamics equality constraints. Enforcing predicates in the I-STL constraint introduces extra binary variables. When applying the mixed-integer encoding from [8] with affine interval predicates, the expression
\[\alpha^{\top}y(t)-b+M(1-z)\geq\rho(t)\]
is modified by using the minimal inclusion function for \([\alpha]^{\top}[y]\)[23] (where \(p\) is the dimension of the output) to
\[\sum_{j=1}^{p}\min\{\underline{\alpha}_{j}\underline{y}_{j},\underline{\alpha}_{j}\overline{y}_{j},\overline{\alpha}_{j}\underline{y}_{j},\overline{\alpha}_{j}\overline{y}_{j}\}-\overline{b}+M(1-z)\geq\underline{\rho}(t),\]
Figure 1: A top-down view of the blimp (upper-left), a plot of the blimp trajectory in the \(\boldsymbol{x}\)-\(\boldsymbol{y}\) plane (upper-right), and the offline computed robustness of \(\boldsymbol{\varphi}\wedge\boldsymbol{\gamma}\) (bottom). The trajectory is generated from a waypoint following PD controller that regularly violates the specification, suggesting the need for controller redesign, for example.
which introduces extra binary variables. Otherwise, the number of constraints used to encode I-STL robustness remains the same. Over a \(30s\) trajectory in simulation, the I-STL implementation takes \(11.8s\) to compute, while the STL implementation without disturbances and uncertain predicates takes \(5.33s\).
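For concreteness, here is a small numpy sketch of the lower bound used above, i.e., the minimal inclusion function for \([\alpha]^{\top}[y]-[b]\) evaluated per coordinate over the four corner products; the numbers are illustrative only.

```python
import numpy as np

def affine_lower(a_lo, a_hi, y_lo, y_hi, b_hi):
    # Lower bound of [a]^T [y] - [b]: per-coordinate minimum over the four
    # corner products, minus the upper endpoint of [b].
    corners = np.stack([a_lo * y_lo, a_lo * y_hi, a_hi * y_lo, a_hi * y_hi])
    return corners.min(axis=0).sum() - b_hi

# the uncertain predicate alpha * y - beta_1 from the example, with y in [0.9, 1.1]:
print(affine_lower(np.array([0.95]), np.array([1.05]),
                   np.array([0.9]), np.array([1.1]), 0.72))  # 0.135
```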
## 6 Conclusion
In this letter we presented an interval extension of STL that uses inclusion functions to give sound interval overestimates of STL robustness. Thanks to the npinterval package, I-STL can be efficiently used for robust monitoring or control synthesis. Depending on the construction of existing code, it may require minimal engineering effort to adapt code to use interval robustness, giving robust guarantees while introducing computation time equivalent to monitoring two signals instead of one. Future work will include incorporating I-STL into a control synthesis problem on the GT-MAB hardware platform.
We propose an interval extension of Signal Temporal Logic (STL) called Interval Signal Temporal Logic (I-STL). Given an STL formula, we consider an interval inclusion function for each of its predicates. Then, we use minimal inclusion functions for the min and max functions to recursively build an interval robustness that is a natural inclusion function for the robustness of the original STL formula. The resulting interval semantics accommodate, for example, uncertain signals modeled as a signal of intervals and uncertain predicates modeled with appropriate inclusion functions. In many cases, verification or synthesis algorithms developed for STL apply to I-STL with minimal theoretic and algorithmic changes, and existing code can be readily extended using interval arithmetic packages at negligible computational expense. To demonstrate I-STL, we present an example of offline monitoring from an uncertain signal trace obtained from a hardware experiment and an example of robust online control synthesis enforcing an STL formula with uncertain predicates.
2306.17778 | Look, Remember and Reason: Grounded reasoning in videos with language
models | Multi-modal language models (LM) have recently shown promising performance in
high-level reasoning tasks on videos. However, existing methods still fall
short in tasks like causal or compositional spatiotemporal reasoning over
actions, in which model predictions need to be grounded in fine-grained
low-level details, such as object motions and object interactions. In this
work, we propose training an LM end-to-end on low-level surrogate tasks,
including object detection, re-identification, and tracking, to endow the model
with the required low-level visual capabilities. We show that a two-stream
video encoder with spatiotemporal attention is effective at capturing the
required static and motion-based cues in the video. By leveraging the LM's
ability to perform the low-level surrogate tasks, we can cast reasoning in
videos as the three-step process of Look, Remember, Reason wherein visual
information is extracted using low-level visual skills step-by-step and then
integrated to arrive at a final answer. We demonstrate the effectiveness of our
framework on diverse visual reasoning tasks from the ACRE, CATER,
Something-Else and STAR datasets. Our approach is trainable end-to-end and
surpasses state-of-the-art task-specific methods across these tasks by a large
margin. | Apratim Bhattacharyya, Sunny Panchal, Mingu Lee, Reza Pourreza, Pulkit Madan, Roland Memisevic | 2023-06-30T16:31:14 | http://arxiv.org/abs/2306.17778v3 | # Look, Remember and Reason:
###### Abstract
Large language models have recently shown human level performance on a variety of reasoning tasks. However, the ability of these models to perform complex visual reasoning has not been studied in detail yet. A key challenge in many visual reasoning tasks is that the visual information needs to be tightly integrated in the reasoning process. We propose to address this challenge by drawing inspiration from human visual problem solving which depends on a variety of low-level visual capabilities. It can often be cast as the three step-process of "Look, Remember, Reason": visual information is incrementally extracted using low-level visual routines in a step-by-step fashion until a final answer is reached. We follow the same paradigm to enable existing large language models, with minimal changes to the architecture, to solve visual reasoning problems. To this end, we introduce rationales over the visual input that allow us to integrate low-level visual capabilities, such as object recognition and tracking, as surrogate tasks. We show competitive performance on diverse visual reasoning tasks from the CLEVR, CATER, and ACRE datasets over state-of-the-art models designed specifically for these tasks.
Machine Learning, ICML
## 1 Introduction
Autoregressive large language models (LLMs) have shown impressive results on various reasoning tasks such as grade school math problems (Cobbe et al., 2021) and even the LSAT (OpenAI, 2023). Language models designed for these problems process only textual data to reason and solve the target task. Many real-world scenarios, however, require humans to reason in complex domains that engage various heterogeneous sensory inputs, _e.g._, perceptual cues and language. Motivated by this, multimodal LLMs (Alayrac et al., 2022; Koh et al., 2023; Zhang et al., 2023b) have gained traction, which model information both from the textual and the visual domains. While these models perform well on the tasks that rely on global visual-textual relationships, _e.g._, captioning or dialogue (Alayrac et al., 2022; Driess et al., 2023; Koh et al., 2023), the ability of multimodal LLMs to understand spatio-temporal relationships and causal structures in visual data is rather under-explored. It is unclear to what degree the recent success of such models on visual reasoning is due to the models' encyclopedic common sense knowledge or their ability to perform complex visual perception. Disentangling high-level reasoning and background knowledge from perceptual capabilities is crucial to better understand and improve the visual reasoning capabilities of these models. Therefore, in this work, we focus on visual reasoning problems (Girdhar and Ramanan, 2020; Johnson et al., 2017; Zhang et al., 2021) which do not require extensive common sense or background knowledge and are without implicit biases over the scene and the object structure.
Figure 1: Our “Look, Remember, Reason” (LRR) model solves complex visual reasoning problems by generating grounded rationales. Crucially, we train the model on surrogate tasks, _e.g._, object re-identification, to enable necessary low-level visual capabilities. Our model “looks” at the visual input to extract relevant low-level information step-by-step, and it “remembers” results of intermediate steps. In the above example, this allows our LRR model to “reason” whether the query objects could activate the “Blicket” machine.
Consider a visual reasoning problem such as the one in Figure 1 from ACRE (Zhang et al., 2021), where the objective is to correctly answer whether the query objects (bottom left) would activate the "Blicket" machine. Humans can solve this problem through a multi-step reasoning process where we attend to and extract visual information step by step using our low-level visual capabilities, such as object recognition and re-identification. For example, one strategy that humans may follow to solve this problem is: read the question; inspect the scene to create an overview of the present objects as well as any relevant low-level visual information; memorize the relevant information along the way; finally, state the answer based on the extracted information. Such a reasoning process is crucial to deal with both the complexity of the task and the need to filter the rich visual data for relevant information. In short, such a reasoning process can be thought of as consisting of the three intermediate sub-tasks "Look, Remember, Reason": looking for relevant visual cues, remembering the relevant cues along the way, and finally aggregating the collected information to arrive at the final answer. In this work, we enable uni-modal, text-only large language models to perform general-purpose multi-modal visual reasoning by augmenting them with _low-level visual capabilities_.
Our key contributions are:
1. We equip an off-the-shelf language model with low-level visual capabilities to solve a diverse range of visual reasoning tasks. This is accomplished by training the LLM indirectly using surrogate tasks expressed in natural language, requiring the generation of relevant rationales that follow the paradigm of "Look, Remember, Reason" and are grounded in the visual input.
2. We show that it is crucial in these tasks to let high-level concepts modulate the perceptual pathway, and we present an adapter module that accomplishes this through top-down attention controlled by the LLM.
3. Our general-purpose LRR model can perform varied visual reasoning tasks, including spatial reasoning (CLEVR; Johnson et al., 2017), temporal reasoning (CATER; Girdhar and Ramanan, 2020), and causal visual reasoning (ACRE; Zhang et al., 2021). Our approach outperforms prior state-of-the-art methods designed specifically for each of these tasks by a large margin.
## 2 Related Work
**Large language models and reasoning.** Large language models have shown strong performance on a variety of natural language processing tasks, _e.g._, question answering, translation, summarization (Brown et al., 2020; Raffel et al., 2019). This progress has been enabled by scaling model size (Chowdhery et al., 2022; Kaplan et al., 2020; Rae et al., 2021), which has enhanced models' ability to learn in-context and unlock "emergent abilities" such as the ability to perform well on very challenging reasoning tasks (d'Avila Garcez and Lamb, 2020; Marcus, 2018, 2020) like commonsense reasoning (Wei et al., 2021; Sanh et al., 2022), symbolic reasoning (Ichter et al., 2022; Yao et al., 2022) and mathematical reasoning (Lewkowycz et al., 2022). The progress in reasoning abilities has been fueled by "chain-of-thought" or "rationale" based methods which aim to mimic human reasoning processes. This has been successfully applied to improve performance on arithmetic problems (Ling et al., 2017) and commonsense reasoning (Rajani et al., 2019), among others. LLMs can learn multi-step tasks like long division (Recchia, 2021) through rationales and manipulation of an external environment in the form of a "scratch paper". Similarly, Nye et al. (2021) output intermediate steps to improve performance on computational problems. More recently, by utilizing models with a strong ability to learn in context, the possibility of generating rationales through prompting has been demonstrated in Kojima et al. (2022); Wei et al. (2022). Further, producing multiple chains of thought and selecting the final answer by majority vote has shown promise in Wang et al. (2022); Wei et al. (2022). In this work, we aim to leverage the abilities of LLMs to reason in natural language by generating rationales and performing visual reasoning tasks.
**Multi-modal language models.** Analogous to the large-scale models for text, there have been breakthroughs in the development of large multimodal approaches which can deal with multi-modal, specifically visual, inputs in addition to the text. Pix2seq (Chen et al., 2022) utilizes auto-regressive language models to extract low-level visual information from images. ViperGPT (Suris et al., 2023), VisProg (Gupta and Kembhavi, 2022) and Chameleon (Lu et al., 2023) use language-based LLMs with vision sub-modules for multimodal tasks. Other approaches focus on joint modeling of visual and textual data. Such models include CLIP (Radford et al., 2021) and BLIP (Li et al., 2022), which utilize natural language instead of image-level class labels. Flamingo (Alayrac et al., 2022) introduces a family of language and vision models which are pre-trained on diverse vision and language tasks with a large amount of vision and text data available from the web. Recent approaches such as CM3 (Aghajanyan et al., 2022) train a multimodal LLM on a large HTML corpus for image and text generation. Other approaches (Eichenberg et al., 2022; Li et al., 2023; Liu et al., 2023; Manas et al., 2023; Tsimpoukelli et al., 2021), instead of training vision and language models from scratch on the multimodal data, incorporate pretrained LLMs as language priors. Methods like Frozen (Tsimpoukelli et al., 2021) leverage pretrained LLMs and train a vision encoder to encode images as a sequence of tokens which can be presented to the transformer in the same form as the text.
PaLM-E (Driess et al., 2023) provides images and text as interleaved multimodal latent vectors, allowing the model to process multiple images within any part of a sentence, which serve as input to the LLM where the model is trained end-to-end. LLaMA-Adapter (Zhang et al., 2023) introduces an adapter layer with zero-init attention to enable multimodal inputs with the LLaMA model (Touvron et al., 2023). LLaVA (Liu et al., 2023) finetunes the LLaMA model on conversational data, presenting it with the output of a vision encoder. A low-rank adaptation to finetune a LLaMA model in a setting similar to LLaVA is further explored in mPLUG-owl (Ye et al., 2023). FROMAGE (Koh et al., 2023), on the other hand, freezes the language model and fine-tunes the input and output linear layers to encode multimodal interactions. In our work, we systematically study the role of low-level visual skills for visual reasoning and introduce rationales with corresponding surrogate tasks.
**Attention-based models and visual reasoning.** Attention-based models have been studied extensively for visual reasoning (Ding et al., 2021; Hu et al., 2017; Hudson and Manning, 2018; Kamath et al., 2021; Mahajan and Roth, 2020; Santoro et al., 2017). Recent advances include an object-centric encoder and a transformer reasoning module to solve RPM-like benchmarks (Mondal et al., 2023), multi-hop feature modulation (Strub et al., 2018) and cascaded modulation networks (Yao et al., 2018) that use a multi-step comprehension process, neural interpreters (Rahaman et al., 2021) that factorize inference in a self-attention network and ALANS learner (Zhang et al., 2022) that combines abstract algebra and representation theory. Calibrating concepts and operations (Li et al., 2021) enables neural symbolic models to capture underlying data characteristics and perform hierarchical inference. In contrast to these approaches with task-specific architectures, we focus on using off-the-shelf LLMs with spatial features from a CNN for visual reasoning. We instill the ability to extract object-centric information in the network by using rationales, instead of resorting to specialized object detection modules.
## 3 Look, Remember, Reason
To allow visual reasoning by exploiting the highly expressive large language models, we propose a novel _"Look, Remember, Reason"_ framework. Our LRR model is based on a pre-trained LLM backbone, with additional cross attention layers (Dou et al., 2022; Rahman et al., 2023; Vaswani et al., 2017) to enable multi-modal inputs. To address the challenges presented by visual reasoning problems, we propose rationales obtained from multimodal signals. Unlike prior work (Zhang et al., 2023; 20), our rationales additionally include low-level visual surrogate tasks expressed in natural language crucial for visual reasoning tasks. These are supported by a top-down attention mechanism that allows high-level concepts to modulate the perceptual pathway. In the following, we first describe our LRR approach, followed by our rationales.
### Auto-regressive Pipeline
Inspired by the success of auto-regressive models in reasoning tasks (Cobbe et al., 2021), we formalize our LRR model in the auto-regressive framework. Our LRR model (depicted in Figure 2) with parameters \(\theta\) receives an interleaved stream of visual input, \(\mathbf{I}=(\mathbf{v}_{1},\dots,\mathbf{v}_{t_{v}})\), _e.g._, an image or a sequence of video frames of length \(t_{v}\), along with (tokenized) text \(\mathbf{S}=(\mathbf{s}_{1},\dots,\mathbf{s}_{t_{s}})\) of length \(t_{s}\). The tokenized text includes the rationales and answers to visual reasoning problems. We train the model by maximizing log-likelihood of the next token given the interleaved sequence of previous tokens and images,
\[\log p_{\theta}(\mathbf{S})=\sum_{t_{s}'}\log p_{\theta}(\mathbf{s}_{t_{s}'}|\mathbf{s}_{1},\dots,\mathbf{s}_{t_{s}'-1},\mathbf{v}_{1},\dots,\mathbf{v}_{t_{v}'}) \tag{1}\]
where, \((\mathbf{v}_{1},\dots,\mathbf{v}_{t_{v}^{\prime}})\) is the interleaved visual input sequence upto the text token \(\mathbf{s}_{t_{s}^{\prime}}\). The backbone of our model consists of an off-the-shelf LLM. We use models from the OPT family (Zhang et al., 2022), but verified that similar performance can be achieved using other pre-trained models (Gao et al., 2021; Scao et al., 2022). The parameters are initialized from pre-trained LLMs, which allows us to exploit their existing reasoning capabilities. While the LLMs we use as backbone are trained on text only, visual reasoning relies on the extraction of visual information about spatial and temporal relationships between objects in the scene. Therefore, in our multi-modal setup, visual information \(\mathbf{I}\) needs to be mapped to the text-based representation space of the LLM. The key challenge here is that in comparison to text tokens, images are highly information dense - reflected in the popular adage "An image is worth a thousand words".
State-of-the-art multi-modal LLMs (Alayrac et al., 2022; Koh et al., 2023) map visual information to the textual domain using specially trained visual encoders such as Perceiver (Jaegle et al., 2021) or CLIP (Radford et al., 2021). As such visual encoders are trained to capture high-level global semantics, they are not well suited to capture low-level visual information underlying complex reasoning tasks. Models like PaLM-E (Driess et al., 2023) map visual information from image patches directly to the input token space of an LLM using a ViT or OSRT model (Dosovitskiy et al., 2021; Sajjadi et al., 2022). It is challenging to capture the relevant visual information with the prior approaches due to the aforementioned discrepancy between the information density of images and text tokens. Therefore, we propose to use low-level grid-level visual features from an off-the-shelf CNN which preserves low-level visual information, coupled with a top-down attention mechanism that allows the LLM to directly extract low-level visual information.
### Top-down Cross Attention
Our top-down attention mechanism exploits the rich hierarchical representation encoded in the hidden states \(\mathbf{h}=\{\mathbf{h}^{1},\dots,\mathbf{h}^{m}\}\) of the LLM, where \(m\) is the number of self-attention layers in the LLM and \(\mathbf{h}^{i}\in\mathbb{R}^{t\times q}\). Here, \(t\) is the sequence length \(t=t_{v}+t_{s}\) and \(q\) is the dimensionality of the embedding space. The first embedding layer of the LLM encodes tokens, whereas subsequent layers contain progressively richer and more information-dense representations that encode increasingly global information. Therefore, we propose to use the embedding layers higher in the hierarchy in our top-down attention mechanism to guide the information extraction process from visual inputs.
Our LRR model, as shown in Figure 2, employs grid-level visual features obtained from a ResNet (He et al., 2016) based CNN, which allows us to preserve spatial information crucial for visual reasoning tasks. The adoption of a simple CNN ensures that our model is applicable across a variety of visual reasoning problems. In our approach, the CNN encodes the input image sequence \(\mathbf{I}=(\mathbf{v}_{1},\dots,\mathbf{v}_{t_{v}})\) into \(\tilde{\mathbf{I}}=(\tilde{\mathbf{v}}_{1},\dots,\tilde{\mathbf{v}}_{t_{v}})\), where \(\tilde{\mathbf{v}}_{i}=\text{CNN}(\mathbf{v}_{i})\) and \(\tilde{\mathbf{v}}_{i}\in\mathbb{R}^{g\times q^{\prime}}\). Here, \(g\) is the size of the grid and \(q^{\prime}\) the dimensionality of the CNN embedding space.
To integrate and "look" for the visual information from the CNN in our LLM pipeline, we employ cross attention (Cross-Attn) layers at higher levels \(\{k,\dots,m\}\) of the hierarchical LLM representation space, in addition to the self-attention (Self-Attn) layers present in the backbone LLM (_c.f._ Figure 2). The grid level features \(\tilde{\mathbf{I}}\) are first transformed using a multi-layer perceptron (MLP) for every cross-attention layer. For example, we learn a mapping \(\text{MLP}_{k}:\mathbb{R}^{q^{\prime}}\rightarrow\mathbb{R}^{q}\) to transform the grid level features \(\tilde{\mathbf{v}}_{i}\) as \(\bar{\mathbf{v}}_{i}^{k}=\text{MLP}_{k}(\tilde{\mathbf{v}}_{i})\), for use as input to the first top-down cross attention layer. Furthermore, to preserve spatial information, we concatenate positional embeddings to each grid element \(\bar{\mathbf{v}}_{i}^{k}\). The grid level image features \(\bar{\mathbf{v}}_{i}^{k}\) fused with positional embeddings allow for top-down attention where the LLM guides the information extraction process using the representation \(\mathbf{h}^{k}\). We use the representation \(\hat{\mathbf{h}}^{k}\) after the application of the self-attention layer to guide the visual feature extraction process in the cross-attention layer. From Figure 2, the hidden representation \(\hat{\mathbf{h}}^{k}\) is transformed by a linear projection to serve as the query vector (\(Q_{s}\)) and the visual features \(\bar{\mathbf{v}}_{i}^{k}\) are linearly transformed to the keys and values (\(K_{v},V_{v}\)) of the cross attention layer, respectively,
\[\begin{split}\hat{\mathbf{h}}^{k}&=\textsc{Self-Attn}(\mathbf{h}^{k})\\ \hat{\mathbf{v}}_{i}^{k}&=\textsc{Cross-Attn}(\hat{\mathbf{h}}^{k},\bar{\mathbf{v}}_{i}^{k})\\ \tilde{\mathbf{h}}^{k+1}&=\mathbf{h}^{k}+\hat{\mathbf{h}}^{k}+\hat{\mathbf{v}}_{i}^{k}\\ \mathbf{h}^{k+1}&=\text{FFN}(\tilde{\mathbf{h}}^{k+1})+\tilde{\mathbf{h}}^{k+1}\end{split} \tag{2}\]
where FFN denotes a feedforward layer defined the same way as in (Vaswani et al., 2017). We use the hidden state \(\hat{\mathbf{h}}^{k}\) as a query vector that encodes global semantics in the cross-attention layers, with the spatial grid features \(\bar{\mathbf{v}}_{i}^{k}\) as keys and values. This allows the LLM to extract information relevant to solving visual reasoning problems, including object locations and their spatial relationships, into \(\hat{\mathbf{v}}_{i}^{k}\). The hidden representation at level \(k+1\) now includes information from both the textual (\(\hat{\mathbf{h}}^{k}\)) and visual (\(\hat{\mathbf{v}}_{i}^{k}\)) domains and is thus multi-modal and includes low-level visual information. This is instrumental in generating rationales for visual reasoning tasks, as discussed in the following.
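A simplified PyTorch sketch of one such adapter layer is given below; layer norms, attention masking, and the positional embeddings on the grid features are omitted for brevity, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class TopDownCrossAttn(nn.Module):
    """Sketch of one adapter layer following Eq. (2)."""
    def __init__(self, q_llm, q_cnn, heads=8):
        super().__init__()
        self.mlp = nn.Linear(q_cnn, q_llm)   # stands in for MLP_k
        self.attn = nn.MultiheadAttention(q_llm, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(q_llm, 4 * q_llm), nn.GELU(),
                                 nn.Linear(4 * q_llm, q_llm))

    def forward(self, h, h_hat, grid):
        # h: hidden states h^k (B, t, q); h_hat: self-attention output;
        # grid: CNN grid features (B, g, q_cnn).
        v = self.mlp(grid)                   # keys/values from the CNN grid
        v_hat, _ = self.attn(h_hat, v, v)    # LLM states query the grid
        out = h + h_hat + v_hat              # residual fusion as in Eq. (2)
        return out + self.ffn(out)
```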
### Rationales with Surrogate Tasks
Many complex reasoning tasks, including visual reasoning, have serial step-by-step solutions, _i.e._, rationales, expressible in natural language (_c.f._ Figure 1). This property can be exploited in pure text-based reasoning tasks by placing exemplary rationales ("chain-of-thought") as a prompt to the LLM along with the statement of the task to be solved (_e.g._, Wei et al., 2022). However, this method of prompting with exemplary rationales cannot be used directly for visual reasoning, as many of these tasks additionally rely on a number of generic, low-level visual skills. Examples of such visual skills include the ability to detect, describe or enumerate objects present in the scene, to track or re-identify objects in the case of video, or to understand spatial relationships between multiple objects in the scene. Current state-of-the-art LLMs do not possess the ability to parse low-level visual information. A possible solution to circumvent the lack of low-level visual skills in LLMs is to combine them with low-level vision modules that can perform detection or tracking (Ding et al., 2021; Gupta and Kembhavi, 2022; Suris et al., 2023). However, such an approach limits scalability across tasks due to the dependence on task-specific low-level vision modules. In this work, we instead explore an alternative solution, based on the generic vision features produced by a CNN like ResNet. This design choice yields a model that is trainable end-to-end, and low-level visual skills can be instilled by fine-tuning it on appropriate _surrogate tasks_ expressed in natural language.
Figure 2: The architecture of our LRR model, highlighting the use of interleaved top-down cross-attention layers in between self-attention layers higher up in the hierarchy.
In our LRR model, we leverage the flexibility of LLMs to express diverse low-level visual tasks through language in a generalized setup. Consider the visual reasoning problem from CLEVR in Figure 1, which requires low-level skills of object recognition and spatial reasoning. In this case, we design rationales with surrogate tasks as follows: For the low-level skill of object recognition, we introduce the surrogate task of explicitly listing all objects in the scene. Similarly, for spatial reasoning skills, we design rationales where we introduce the surrogate task of explicitly listing all objects left/right/front/behind a certain target object. This enables the model to understand the spatial relation of the target object to other objects in the scene. Moreover, for visual reasoning problems that require tracking or re-identification, we introduce the surrogate task of predicting the positions or identifying the target objects across video frames in the rationale. Including low-level visual tasks in the rationale has the additional benefit that the solutions to these tasks remain within the context window of the LLM so that they are in fact "remembered" by the LLM and can be exploited to "reason" and solve subsequent tasks. The experiments section provides practical details on rationale construction.
## 4 Experiments
We now evaluate our model on visual reasoning tasks from: ACRE (Zhang et al., 2021), which focuses on the discovery of causal structures; CLEVR (Johnson et al., 2017), which focuses on spatial reasoning; and CATER (Girdhar and Ramanan, 2020), which focuses on temporal reasoning. We first evaluate dataset-specific fine-tuned variants of our LRR model for fair comparison to prior work, followed by a variant jointly trained on ACRE, CLEVR and CATER.
**Models and training details.** We focus on the OPT family of LLMs (Zhang et al., 2022b), particularly OPT-125M and OPT-1.3B. We train our LRR model on a single Nvidia A100 GPU. We use a ResNet-101 (He et al., 2016) as the vision backbone across all tasks (see Appendix).
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{5}{c|}{Compositional} & \multicolumn{5}{c}{Systematic} \\ & All & D.R. & I.D. & S.O. & B.B. & All & D.R. & I.D. & S.O. & B.B. \\ \hline CNN-BERT (Ding et al., 2021) & 43.7 & 54.0 & 46.8 & 40.5 & 28.7 & 39.9 & 55.9 & 68.2 & 0.0 & 45.5 \\ NS-OPT (Zhang et al., 2021) & 69.0 & 92.5 & 76.0 & 88.3 & 13.4 & 67.4 & 94.7 & 88.3 & 82.7 & 16.0 \\ ALOE (Ding et al., 2021) & 91.7 & 97.1 & 90.8 & 96.8 & 78.8 & 93.9 & 97.1 & 71.2 & 98.9 & 94.4 \\ \hline OPT-125M+CLIP & 83.6 & 95.7 & 70.5 & 87.8 & 67.4 & 83.8 & 95.0 & 68.1 & 87.1 & 74.6 \\ OPT-125M+ViT & 96.9 & 99.4 & 95.0 & 97.3 & 93.5 & 96.7 & 99.1 & 95.0 & 98.3 & 93.3 \\ LRR (w/o Surrogate Re-ID) & 89.7 & 97.6 & 68.3 & 85.4 & 92.3 & 90.2 & 97.5 & 74.7 & 84.3 & 94.2 \\ LRR (Ours) & **99.3** & **99.8** & **98.5** & **99.5** & **98.7** & **99.0** & **99.8** & **98.4** & **99.8** & **97.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation on the ACRE dataset, where, D.R. – Direct evidence, I.D. – Indirect evidence, S.O. – Screened-off and B.B. – Backward Blocked subsets.
### Acre
The ACRE dataset (Zhang et al., 2021) focuses on evaluating the performance of vision systems on the problem of causal induction. Specifically, the dataset focuses on the problem of causal discovery using "blicket" detection experiments, originally administered to children. Such experiments involve a blicket detector which activates when a blicket object is put on it. The experiment involves a series of context trials, in which various (combinations of) objects are placed on the blicket detector, and subjects are shown whether the detector is activated. They are then asked which objects or (novel) combinations of objects would activate the machine. The examples in the ACRE dataset follow the same experimental design: each example contains 6 context trials and 4 questions on whether a certain combination of objects would activate the blicket machine.
**Rationale construction.** The key low-level visual challenge in the ACRE dataset is to identify objects in the context trials and to detect whether the blicket machine is activated. Therefore, we design the rationale with the surrogate tasks of object recognition and re-identification across the context trials. The rationale for each context trial describes the objects present and also assigns a unique integer ID to allow for re-identification. Additionally, the rationale identifies the state of the blicket machine (on/off), _e.g_., from Table 1: "Trial 1 with objects: 1(medium gray rubber cylinder) causes blicket machine to go: off." Finally, the rationale re-identifies the objects in the query image, _e.g_., from Table 1: "Will the query with objects: 1(medium gray rubber cylinder), 5(medium blue metal sphere) activate the blicket?"
This allows our LRR model to exploit the ("remembered") previous steps in the rationale to infer which context trials involved the objects in the query as well as the state of the blicket machine in the relevant trials. The model can then aggregate the information in the rationale to reason and arrive at the final answer.
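To make the rationale format concrete, the following hypothetical snippet serialises context trials and queries into strings of the kind quoted above; the exact phrasing in the training data may differ:

```python
# Hypothetical sketch of the ACRE rationale format; field names and phrasing
# loosely follow the examples quoted above and are not the exact training text.
def trial_rationale(idx, objects, machine_on):
    objs = ", ".join(f"{oid}({desc})" for oid, desc in objects)
    state = "on" if machine_on else "off"
    return f"Trial {idx} with objects: {objs} causes blicket machine to go: {state}."

def query_rationale(objects):
    objs = ", ".join(f"{oid}({desc})" for oid, desc in objects)
    return f"Will the query with objects: {objs} activate the blicket?"

print(trial_rationale(1, [(1, "medium gray rubber cylinder")], False))
print(query_rationale([(1, "medium gray rubber cylinder"),
                       (5, "medium blue metal sphere")]))
```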
**Baselines and evaluation.** We base our LRR models on the OPT-125M and ResNet-101 backbones. We also compare to several baselines in Table 2. To highlight the importance of our rationale generation process, we consider a baseline without the surrogate re-identification task: LRR (w/o Surrogate Re-ID). To highlight the importance of spatial grid-based features along with top-down attention, we consider (with surrogate Re-ID): 1. an OPT-125M model with visual input at the first OPT (token embedding) layer using (global) CLIP (Radford et al., 2021) embeddings, as in FROMAGe (Koh et al., 2023); 2. an OPT-125M model with visual input at the first OPT (token embedding) layer, as in PaLM-E (Driess et al., 2023). The visual input is patch-based, from ViT (Dosovitskiy et al., 2021).
We see that without the surrogate Re-ID task, our LRR model shows weak performance. This highlights the importance of the "Look, Remember, Reason" paradigm, where we explicitly solve the crucial Re-ID task and "remember" the results for each context trial. The importance of spatial grid features is illustrated by the weak performance of the OPT-125M+CLIP model, which is unable to effectively capture low-level visual cues due to the pooling introduced by the CLIP model. Although the OPT-125M+ViT model uses spatial grid features, its performance is limited by the lack of top-down attention guided by the rich representations of the LLM. Furthermore, our LRR model outperforms the state-of-the-art ALOE (Ding et al., 2021) model by a large margin on both the compositional (where the training and test sets contain different visual features) and systematic splits (different numbers of activated machines in the context trials) of the ACRE dataset. The gain in performance is especially significant in the backward blocked subset (B.B.), where the blicketness cannot be inferred from correlation alone due to the presence of confounding objects, and the indirect subset (I.D.), where information needs to be integrated from multiple context trials. This performance advantage is due to the step-by-step reasoning enabled by our rationales, which allows the model to aggregate visual information across multiple context trials.
### Clevr
The CLEVR dataset contains 700k examples consisting of a query image, a question, and a functional program.
\begin{table}
\begin{tabular}{l|c|c c c c c c} \hline \hline & FT & & & & Compare & Query & Compare \\ Method & vision & Overall & Count & Exist & Numbers & Attribute & Attribute \\ \hline CNN+LSTM+RN (Malinowski et al., 2018) & ✓ & 95.5 & 90.1 & 97.8 & 93.6 & 97.9 & 97.1 \\ CNN+LSTM+FILM (Perez et al., 2018) & ✓ & 97.6 & 94.3 & 99.3 & 93.4 & 99.3 & 99.3 \\ HAN+RN (Santoro et al., 2017) & ✓ & 98.8 & 97.2 & 99.6 & 96.9 & 99.6 & 99.6 \\ OCCAM (Wang et al., 2021) & ✓ & 99.4 & 98.1 & 99.8 & 99.0 & 99.9 & 99.9 \\ NS-VQA (Yi et al., 2018) & ✓ & **99.7** & **99.9** & **99.9** & **99.8** & 99.8 & 99.8 \\ MDETR (Kamath et al., 2021) & ✓ & **99.7** & 99.3 & **99.9** & 99.4 & **99.9** & **99.9** \\ \hline CNN+LSTM+SAN (Johnson et al., 2017) & ✗ & 68.5 & 52.2 & 71.1 & 73.5 & 85.3 & 52.3 \\ LRR (w/o Surrogate Spatial Reasoning) & ✗ & 51.4 & 40.2 & 59.1 & 55.8 & 53.1 & 61.6 \\ LRR (Ours) & ✗ & **97.9** & **95.6** & **98.7** & **98.7** & **98.5** & **98.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation on the CLEVR dataset, comparing to state-of-the-art.
The images consist of synthetically rendered 3D objects of various sizes, shapes, materials, and colors. The questions are designed to require multiple reasoning steps and are compositional in nature.
**Rationale construction.** The functional programs in CLEVR consist of simpler sub-routines that follow a tree-like execution structure. These sub-routines decompose questions into simpler low-level object recognition and spatial reasoning tasks, such as object counting and searching for objects based on spatial positions or materials; these are the operations necessary to solve the visual reasoning problem. We convert these sub-routines into rationales with surrogate tasks (details in Appendix C).
**Baselines and evaluation.** We begin with a comparison to state-of-the-art models in Table 3. Additionally, to highlight the importance of rationales with surrogate object recognition and spatial reasoning tasks, we consider an LRR (w/o Surrogate Spatial Reasoning) baseline.
From the results in Table 3, we see that non-LLM-based methods such as MDETR (Kamath et al., 2021) or NS-VQA (Yi et al., 2018) perform the best. However, such methods use a fine-tuned vision backbone (FT vision), based on DETR (Carion et al., 2020) or Mask-RCNN (He et al., 2017). The advantage of a fine-tuned vision backbone on CLEVR is mainly due to improved object detection performance in the presence of occlusions. However, the use of fine-tuned vision backbones like DETR or Mask-RCNN makes it more challenging to apply the same model architecture across diverse visual reasoning tasks, _e.g_., the moving camera split of CATER (_c.f_. Table 5). Furthermore, DETR or Mask-RCNN type backbones require bounding box annotations, which are not always available. Even without a fine-tuned vision backbone, our LRR model outperforms FiLM (Perez et al., 2018), which employs a fine-tuned ResNet-101.
Finally, our LRR model without rationales, LRR (w/o Surrogate Spatial Reasoning), performs significantly worse. This shows the importance of our surrogate object recognition and spatial reasoning tasks. We illustrate example rationales and further detailed ablations in Appendix C.
### Cater
The CATER (Compositional Actions and TEmporal Reasoning) dataset is designed to test the ability to recognize compositions of object movements that require long-term temporal reasoning. Like CLEVR, the CATER dataset also consists exclusively of synthetically rendered 3D objects of various sizes, shapes, materials, and colors. The synthetic nature of the CATER dataset ensures the lack of implicit biases over scene and object structure, and thus the focus of the dataset is exclusively on the temporal structure of object movements. This makes the CATER dataset ideal to measure the temporal reasoning abilities of the current state-of-the-art LLMs. Similar to Ding et al. (2021), we focus on the hardest task from the CATER dataset, _i.e_., adversarial target tracking under occlusion and containment. This task amounts to predicting the position of a special object, referred to as the "snitch", at the end of each video sequence. This is challenging as the snitch can be occluded or (recursively) contained within other objects at the end of the sequence. This task is posed as a classification problem over a \(6\times 6\) grid. There are two splits of the dataset: static camera and moving camera. The moving camera split is more challenging, as the grid position of a certain object becomes much harder to discern from a single frame and long-term spatio-temporal correlations need to be captured.
**Rationale construction.** Building on the insights from the CLEVR dataset, we decompose the final grid classification problem into a sequence of simpler problems, using rationales with multi-target tracking as a surrogate low-level visual task. The rationale contains the grid positions of the snitch at every video frame. Following the paradigm of "Look, Remember, Reason", we include the surrogate task of tracking the medium and large cones in the scene, as these objects can occlude the snitch. With our rationale, the predicted intermediate grid positions of the objects of interest, _e.g_., the snitch and cones, are "remembered" by the LLM and can be used to reason about the final position of the snitch in case of recursive containment by the cones.
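A small illustrative sketch of this rationale format is shown below; the mapping from scene coordinates to the \(6\times 6\) grid and the per-frame wording are our assumptions:

```python
# Hedged sketch of the CATER surrogate-tracking rationale: scene coordinates
# are mapped to the 6x6 answer grid and listed per frame for the snitch and
# cones. The coordinate range and the wording are our assumptions.
def to_grid(x, y, lo=-3.0, hi=3.0, n=6):
    """Map scene (x, y) in [lo, hi) to a cell index on an n x n grid."""
    col = min(int((x - lo) / (hi - lo) * n), n - 1)
    row = min(int((y - lo) / (hi - lo) * n), n - 1)
    return row * n + col

def frame_rationale(frame_idx, tracked):
    cells = ", ".join(f"{name}: cell {to_grid(x, y)}"
                      for name, (x, y) in tracked.items())
    return f"Frame {frame_idx}: {cells}."

print(frame_rationale(12, {"snitch": (0.4, -1.2), "cone_1": (2.1, 2.0)}))
```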
Baselines and evaluation.To highlight the importance of our rationale generation process, we consider a baseline
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Static Camera} & \multicolumn{3}{c}{Moving Camera} \\ & Top-1(\(\uparrow\)) & Top-5(\(\uparrow\)) & L1(grid;\(\downarrow\)) & Top-1(\(\uparrow\)) & Top-5(\(\uparrow\)) & L1(grid;\(\downarrow\)) \\ \hline R3D LSTM (Girdhar and Ramanan, 2020) & 60.2 & 81.8 & 1.2 & 28.6 & 63.3 & 1.7 \\ R3D + NL LSTM (Girdhar and Ramanan, 2020) & 46.2 & 69.9 & 1.5 & 38.6 & 70.2 & 1.5 \\ ALOE (Ding et al., 2021) & 74.0 & 94.0 & 0.44 & 59.7 & 90.1 & 0.69 \\ \hline OPNet\({}^{\dagger}\) (Shamsian et al., 2020) & 74.8 & - & 0.54 & - & - & - \\ Hopper\({}^{\dagger}\) (Zhou et al., 2021) & 73.2 & 93.8 & 0.85 & - & - & - \\ TFC V3D Depthwise\({}^{\dagger}\) (Zhang, 2022) & 79.7 & 95.5 & 0.47 & - & - & - \\ \hline LRR (w/o Surrogate Tracking) & 61.7 & 82.4 & 0.73 & 49.3 & 65.8 & 1.23 \\ LRR (Ours) & **85.1** & **96.2** & **0.23** & **75.1** & **91.9** & **0.48** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation on the CATER dataset (\({}^{\dagger}\)results reported only for static camera).
where the rationale consists only of the grid positions of the snitch at every video frame, without the surrogate task of tracking the cones, LRR (w/o Surrogate Tracking). Both our LRR model and baselines are based on the OPT-125M backbone. Our LRR model is trained jointly on both the static and moving camera splits, similar to ALOE (Ding et al., 2021). The results are presented along with a comparison to state-of-the-art models in Table 4. Note that OPNet (Shamsian et al., 2020), Hopper (Zhou et al., 2021), TFC V3D Depthwise (Zhang, 2022) and Loci (Traub et al., 2023) report results only on the static camera split. Loci (Traub et al., 2023) reports an impressive 90.7% accuracy on the static camera split, but it is not applicable to the moving camera split due to its static background and camera model.
Our LRR model outperforms TFC V3D Depthwise (Zhang, 2022) model on the static camera and ALOE (Ding et al., 2021) on the challenging moving camera split by a large margin. The large performance gain over the LRR (w/o Surrogate Tracking) baseline shows the advantage of using surrogate tracking tasks in the rationale. This shows that our rationales help capture long-term spatio-temporal correlations by "remembering" the intermediate positions of the objects of interest within the context window of the LLM. Without the multi-target tracking surrogate task, the model does not learn to track the cones and thus fails in cases of containment. We report qualitative examples in Table 5 and in Appendix D, which illustrates that our model is able to successfully track objects in cases of recursive containment and moving cameras.
### Multi-dataset Training and Evaluation
Finally, we train our LRR model with OPT-1.3B and ResNet-101 backbones jointly on all three datasets: ACRE, CLEVR and CATER in Table 6. Note that this is highly challenging due to the diverse nature of these tasks, which calls for different low- and high-level visual and reasoning skills. Despite these challenges, our jointly trained LRR model for the first time shows performance comparable to the dataset-specific fine-tuned variants on such diverse visual reasoning tasks. This shows the ability of our LRR model to adapt to diverse visual reasoning tasks. A promising direction of future research would be joint training on an even larger set of visual reasoning datasets, potentially using larger LLM backbones and generalization to novel visual reasoning problems through in-context learning (Brown et al., 2020).
## 5 Conclusion
We show that off-the-shelf LLMs can solve complex visual reasoning tasks when supervised with rationales with surrogate visual tasks and equipped with top-down visual attention. We exploit the flexibility of LLMs in language modeling, which allows us to express diverse low-level visual tasks, _e.g_., recognition, tracking, and re-identification, in the form of language. The use of off-the-shelf LLM and vision backbones allows our model to be readily applicable across diverse tasks. It outperforms the state-of-the-art by 7.6% and 5.1% on the compositional and systematic splits of the ACRE dataset, and by 5.4% Top-1 accuracy on static camera and 15.4% Top-1 accuracy on moving camera splits of the CATER dataset. Further, the performance of our LRR model is comparable to the state-of-the-art with task-specific architectures on CLEVR. While we obtain the best results through dataset-specific fine-tuning, our LRR model jointly trained on ACRE, CLEVR and CATER performs favorably even though these datasets are highly diverse.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{7}{c}{Datasets} \\ \cline{2-8} & CLEVR & \multicolumn{2}{c}{CATER: Static Camera} & \multicolumn{2}{c}{CATER: Moving Camera} & ACRE: Comp & ACRE: Sys \\ \cline{2-8} Method & Acc\(\uparrow\) & Top 1\(\uparrow\) & Top 5\(\uparrow\) & Top 1\(\uparrow\) & Top 5\(\uparrow\) & Acc\(\uparrow\) & Acc\(\uparrow\) \\ \hline LRR (Fine-tuned) & **97.9** & **85.1** & 96.2 & 75.1 & 91.9 & **99.3** & **99.0** \\ LRR (Joint) & 97.3 & 83.7 & **96.5** & **75.2** & **92.8** & 98.9 & 98.7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluation of our LRR model trained jointly on CLEVR, CATER and ACRE (_c.f_. Tables 2 to 4).
Table 5: Example rationales generated by our LRR model on CATER. The rationales contain the surrogate task of multi-target tracking. We show the predicted grid locations of the cones and the snitch below. | Multimodal language models (LMs) have recently shown promising performance on high-level reasoning tasks in video. However, for tasks involving causality and compositional spatio-temporal reasoning, existing methods require model predictions to be grounded in fine-grained, low-level detail such as object motion and object interactions. In this work, we seek to endow LMs with the required low-level visual capabilities by training them end-to-end on low-level surrogate tasks such as object detection, re-identification, and tracking. We show that a two-stream video encoder with spatio-temporal attention is effective at capturing the required static and motion-based visual cues in video. By exploiting the LM's ability to perform the low-level surrogate tasks, we cast reasoning in video as a three-step process of "Look, Remember, Reason"
2309.10692 | Collisional heating of icy planetesimals. I. Catastrophic collisions | Planetesimals in the primordial disc may have experienced a collisional cascade. If so, the comet nuclei later placed in the Kuiper belt, scattered disc, and Oort Cloud would primarily be fragments and collisional rubble piles from that cascade. However, the heating associated with the collisions cannot have been strong enough to remove the hypervolatiles that are trapped within more durable ices, because comet nuclei are rich in hypervolatiles. This places constraints on the diameter of the largest bodies allowed to participate in collisional cascades, and limits the primordial disc lifetime or population size. In this paper, the thermophysical code NIMBUS is used to study the thermal evolution of planetesimals before, during, and after catastrophic collisions. The loss of CO during segregation of $\mathrm{CO_2:CO}$ mixtures and during crystallisation of amorphous $\mathrm{H_2O}$ is calculated, as well as mobilisation and internal relocation of $\mathrm{CO_2}$. If an amorphous $\mathrm{H_2O}$ host existed, and was protected by a $\mathrm{CO_2:CO}$ heat sink, only diameter $D<20\,\mathrm{km}$ (inner disc) and $D<64\,\mathrm{km}$ (outer disc) bodies could have been involved in a collisional cascade. If $\mathrm{CO_2}$ was the only CO host, the critical diameters drop to $D<20$-$32\,\mathrm{km}$. Avoiding disruption of larger bodies requires a primordial disc lifetime of $<9\,\mathrm{Myr}$ at $15\,\mathrm{au}$ and $<50$-$70\,\mathrm{Myr}$ at $30\,\mathrm{au}$. Alternatively, if a $450\,\mathrm{Myr}$ disc lifetime is required to associate the primordial disc disruption with the Late Heavy Bombardment, the disc population size must have been 6-60 times below current estimates. | Björn J. R. Davidsson | 2023-09-19T15:27:23 | http://arxiv.org/abs/2309.10692v1 | # Collisional heating of icy planetesimals. I. Catastrophic collisions
###### Abstract
Planetesimals in the primordial disc may have experienced a collisional cascade. If so, the comet nuclei later placed in the Kuiper belt, scattered disc, and Oort Cloud would primarily be fragments and collisional rubble piles from that cascade. However, the heating associated with the collisions cannot have been strong enough to remove the hypervolatiles that are trapped within more durable ices, because comet nuclei are rich in hypervolatiles. This places constraints on the diameter of the largest bodies allowed to participate in collisional cascades, and limits the primordial disc lifetime or population size. In this paper, the thermophysical code nimbus is used to study the thermal evolution of planetesimals before, during, and after catastrophic collisions. The loss of CO during segregation of CO\({}_{2}:\) CO mixtures and during crystallisation of amorphous H\({}_{2}\)O is calculated, as well as mobilisation and internal relocation of CO\({}_{2}\). If an amorphous H\({}_{2}\)O host existed, and was protected by a CO\({}_{2}:\) CO heat sink, only diameter \(D<20\,\)km (inner disc) and \(D<64\,\)km (outer disc) bodies could have been involved in a collisional cascade. If CO\({}_{2}\) was the only CO host, the critical diameters drop to \(D<20\)-\(32\,\)km. Avoiding disruption of larger bodies requires a primordial disc lifetime of \(<\)9 Myr at 15 au and \(<\)50-70 Myr at 30 au. Alternatively, if a 450 Myr disc lifetime is required to associate the primordial disc disruption with the Late Heavy Bombardment, the disc population size must have been 6-60 times below current estimates.
keywords: methods: numerical - comets: general - Kuiper belt: general - Oort Cloud - protoplanetary discs
## 1 Introduction
Current-day comets, Centaurs, trans-Neptunian objects (TNOs), Oort Cloud objects, and/or their planetesimal ancestors, were once located in the primordial disc. This ancient structure is thought to have stretched from \(\sim 15\,\)au to \(\sim 30\,\)au from the Sun, being located exterior to an initially compact giant-planet orbital configuration (Gomes et al., 2004). The primordial disc was disrupted by a gravitational instability among the giant planets. This led to the formation of the present-day reservoirs of small icy bodies: the dynamically hot Kuiper belt (superimposed on top of, and beyond, a small pre-existing population of dynamically cold objects at 42-47 au), the scattered disc, and the Oort Cloud (Gomes et al., 2005; Morbidelli et al., 2005; Tsiganis et al., 2005; Levison et al., 2008). Objects in the last two populations are prone to dynamical evolution that allows some to enter the inner Solar system as Centaurs (some of which evolve dynamically to become Jupiter Family comets, or JFCs), Halley Type comets, or dynamically new comets (Fernandez, 1980; Duncan and Levison, 1997; Brasser and Morbidelli, 2013).
The thermal processing of primordial disc objects prior to their dispersal was investigated by Davidsson (2021). He found that objects in the \(D=4\)-\(200\,\)km diameter range lose all their pure CO ice on time-scales ranging 70 kyr-\(13\,\)Myr (depending on size) due to protosolar and radiogenic heating by long-lived isotopes. These time-scales are shorter than most estimates of the primordial disc lifetime (15-\(450\,\)Myr; see below). Therefore, pure CO (and other hypervolatiles such as N\({}_{2}\), O\({}_{2}\), CH\({}_{4}\), C\({}_{2}\)H\({}_{6}\), and noble gases) are not expected in small objects at the time of their dispersal towards colder regions. Choi et al. (2002) also found complete CO loss from a \(D=200\,\)km model object at heliocentric distance \(r_{\rm h}=30\,\)au on a \(\sim 10\,\)Myr timescale, though they only modelled the heat conduction process and not the gas diffusion process. Significant CO loss at \(r_{\rm h}=41\)-\(45\,\)au was reported by De Sanctis et al. (2001) in concurrent heat and gas diffusion modelling similar to that of Davidsson (2021), though they did not follow the process until completion. Steckloff et al. (2021) and Lisse et al. (2021, 2022) used a surface-ice sublimation model, and the time-scale of the thermal wave to reach the core, to argue that Arrokoth and other small to medium-sized Kuiper belt objects ought to be hypervolatile-free. Prialnik (2021) reports complete loss of CO and CH\({}_{4}\) from an Arrokoth analogue body at \(r_{\rm h}=44\,\)au in \(100\,\)Myr (but survival at \(r_{\rm h}\geq 200\,\)au), though computational details in her conference abstract are sparse. Furthermore, the activation distance of most comets excludes hypervolatiles stored as clean ice (Jewitt, 2009).
However, CO outgassing from comets is ubiquitous in the inner Solar system (abundance ratios range \(0.002\leq{\rm CO/H_{2}O}\leq 0.294\) and have a mean of \(\langle{\rm CO/H_{2}O}\rangle=0.063\) for a sample of 21 comets; A'Hearn et al., 2012). Because pure CO ice cannot have survived the primordial disc stage in most small objects, a fraction of the CO must have been stored within a less volatile medium, such as amorphous H\({}_{2}\)O (e. g., Prialnik and Bar-Nun, 1987, 1988; Prialnik et al., 2004, 2008; Jewitt, 2009) or CO\({}_{2}\)(Gasc et al., 2017; Davidsson, 2021).
These hypervolatile reservoirs are fragile and sensitive to further disturbances. The ambient planetesimal core temperature in the outer half of the primordial disc would typically have been near \(\sim 50\,\)K. In order to initiate CO\({}_{2}:\) CO segregation (\(\sim 63\,\)K) and amorphous H\({}_{2}\)O crystallisation (\(\sim 85\,\)K), global long-term temperature elevations of merely \(\sim 10\)-\(35\,\)K are sufficient. One potential source of such
heating is catastrophic collisions among icy planetesimals, a process that Davidsson (2021) did not consider. It is clear that a potential collisional cascade cannot have been sufficiently energetic to set off wide-spread segregation and crystallisation. If that happened, the last reservoirs of CO would be lost, and the hypervolatile-free/poor fragments (and re-accumulated rubble piles thereof) would not be good comet analogues.
The current paper is therefore devoted to the problem of investigating the degree of CO loss during catastrophic collisions (taken as a representative of the loss of all hypervolatiles). Bodies that are large enough that their energetic catastrophic disruptions lead to massive CO loss could not have participated in a collisional cascade. Thermophysical simulations thereby become powerful tools for identifying the largest body (with diameter \(D_{\star}\)) that could have been commonly involved in collisional cascades, because of the necessity of preserving significant deposits of CO. Such investigations have the potential of placing novel constraints on the lifetime and number density of the primordial disc (i. e., the disc must have disrupted sufficiently early, and/or the number density must have been sufficiently low, to prevent collisions that lead to massive hypervolatile loss). In the following, I argue that such constraints are urgently needed.
_The primordial disc lifetime._ A lifetime consistent with the Late Heavy Bombardment at \(\sim 450\) Myr (Morbidelli et al., 2012; Marchi et al., 2013) is dynamically possible if the giant planets emerged on resonant orbits when the stabilising gas disc evaporated, and the gap between the outermost giant planet and the inner disc edge was sufficiently large (Gomes et al., 2005; Morbidelli et al., 2007; Levison et al., 2011). However, the giant-planet migration associated with such instabilities risks causing an unacceptable level of terrestrial-planet orbital excitation through secular resonant coupling (Agnor & Lin, 2012). One way to overcome such difficulties is the 'jumping Jupiter' scenario, where a late instability remains possible if the orbital separation of Jupiter and Saturn increases abruptly as they gravitationally scatter one or several ice giants (Morbidelli et al., 2009; Brasser et al., 2009). A second possibility is that the gravitational instability is unrelated to the Late Heavy Bombardment, and that it took place prior to terrestrial planet formation, so that the excited embryos re-establish orbits with low eccentricity and inclination through dynamical friction against remaining planetesimals before growing further (Agnor & Lin, 2012). The average time needed to grow terrestrial embryos to half their final size is about 15-25 Myr according to O'Brien et al. (2006), suggesting that the instability may have occurred as early as that. Various attempts to constrain the timing of the instability have been made by considering statistically large samples of models, and searching for cases that best reproduce various Solar system constraints. Nesvorny & Morbidelli (2012) studied cases with up to six giant planets (of which 1-2 ice giants are ejected from the Solar system), and found satisfactory solutions for instability times ranging 3.2-34 Myr. de Sousa et al. (2020) found mean instability times ranging 37-62 Myr. Although giant planet migration in the gas disc often places the planets on stable resonant orbits (e. g., Morbidelli & Crida, 2007; Morbidelli et al., 2007), de Sousa et al. (2020) studied cases where the giant planets emerge on unstable orbits after gas disc evaporation. In such cases, they found mean instability times as short as 4 Myr. I note that Nesvorny (2018), in his comprehensive review of the problem, favours entry of Neptune into the primordial disc 'a few tens of millions of years after the dispersal of the protosolar nebula'.
However, even if the instability occurred very early, the disruption of the primordial disc is not instantaneous. In order to reproduce the inclination distribution of Kuiper belt objects, Nesvorny (2015) found that the passage of Neptune through the disc needs to proceed for at least 10 Myr. All in all, it is therefore likely that the primordial disc lifetime ranged 15-450 Myr.
_The primordial disc population size._ Estimating the number of objects that populated the disc is difficult. This problem stems from the difficulty of determining the fraction of the objects ending up in the scattered disc and Oort Cloud, evaluating the losses in those populations during the Solar system lifetime, estimating the fractional rates of injection towards the inner Solar system, and determining the current sizes of comet reservoirs through the influx of dynamically new comets (Oort Cloud), or through the influx of JFCs and deep observational surveys (scattered disc), while having to deal with low-number statistics and various forms of observational bias (for a brief review, see e. g., section 4.5 in Davidsson et al., 2016). Estimates of the primordial disc content of \(D\geq 2\) km bodies range from \(3\cdot 10^{9}\)-\(2\cdot 10^{10}\)(Bottke et al., 2012) to (1.9-5) \(\cdot 10^{11}\)(Morbidelli et al., 2009a; Brasser & Morbidelli, 2013). Similar two-orders-of-magnitude discrepancies of the scattered disc population size exist amongst various predictions and attempts to determine it observationally (Volk & Malhotra, 2008).
If the substantial ranges on primordial disc mass and lifetime could be narrowed down, by finding thermophysical constraints in addition to the dynamical and observational ones, our understanding of the early Solar system evolution would increase substantially. It would have implications on how and when the giant planets migrated, the time available for the growth of Pluto- and Eris-sized bodies, and the timing of water and organics injection into the inner Solar system relative the growth sequence of terrestrial planets and the emergence of Life.
Finally, primordial disc mass and lifetime affect the way we view the scientific contributions of cometary studies. If the highest estimates of the primordial disc population size are accurate, comets like the _Rosetta_ target 67P/Churyumov-Gerasimenko (hereafter, 67P) would necessarily have to be considered collisional fragments or rubble piles (Rickman et al., 2015; Morbidelli & Rickman, 2015), particularly if the primordial disc lifetime was long. However, if the population size was small and/or the lifetime was short, there might not have been many destructive collisions and most comets could be primordial (Massironi et al., 2015; Davidsson et al., 2016). Resolving this issue would tell us whether the physical properties of comets, revealed by spacecraft missions, primarily inform about early solar nebula accretion processes, or about later secondary processing through destructive collisions. Because the potential collisional processing would have taken place before the dislocation of objects to the Oort Cloud, this question is also highly relevant for _Comet Interceptor_ and similar missions targeting the presumably primitive dynamically new comets.
This paper is structured as follows: section 2 summarises the thermophysical model used for this work, and section 3 describes some necessary preparatory work. In particular, section 3.1 identifies the largest potential parent body to be considered in this paper, section 3.2 describes the collisional environment, section 3.3 discusses the generation of waste heat in collisions, and section 3.4 summarises the methodology for the thermophysical simulations. The main results are presented in section 4, they are discussed in section 5, and the conclusions are summarised in section 6. Furthermore, Appendix A discusses the problem of heating in continuum mechanics collision codes.
## 2 The Thermophysical Model
The modelling work is here made with the "Numerical Icy Minor Body evolUtion Simulator", or nimbus. The code is described in full detail by Davidsson (2021), therefore only a brief summary is made here, focusing on the applied model parameters. nimbus has also been used to model different aspects of Comet 67P (Davidsson et al., 2021, 2022a, 2022b), and of sporadically active Asteroid (3200) Phaethon (Masiero et al., 2021).
nimbus considers a spherical body and allows for any temporally changing orbit and spin state. Here, circular orbits in the ecliptic plane with semimajor axes {15, 23, 30} au are considered, assuming a fixed {\(\lambda\), \(\beta\)} = {0\({}^{\circ}\), 45\({}^{\circ}\)} spin axis orientation (ecliptic longitude and latitude) in fast-rotator mode. The model bodies are resolved by 18 angular-equidistant latitudinal slabs, and radially by 87-147 cells, with widths growing from 5 m at the surface to 0.4-1.5 km at the core, depending on body size (diameters \(D=\) {16, 20.2, 25.4, 32, 40.3, 50.8, 64} km are considered). See section 3.1 for further discussion on body sizes. The luminosity time evolution of the protosun follows that of a 1 M\({}_{\odot}\) star (Palla & Stahler, 1993), but illumination begins only after an assumed Solar nebula clearing time of \(t_{\rm c}=3\) Myr, which limits the luminosity experienced by the bodies to \(\leq 1\) times the current one. The thermal processing of the model bodies is primarily determined by the collisional energy injection, therefore parameters that fine-tune solar heating (shape, orbital eccentricity and inclination, spin-axis orientation, fast-rotator assumption) have negligible influences on the \(D_{\star}\) estimate. Shape does influence cooling times (by providing a different surface area available for radiative cooling, compared to the volume of collisionally heated material, than a sphere). However, shape has no influence on the core temperature maximum following collisional flash-heating, which determines whether the segregation or crystallisation thresholds are reached (and whether CO is massively lost).
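For illustration, a radial grid with the stated properties (5 m cells at the surface, growing to km-sized cells at the core) can be constructed as follows; the geometric growth law and the bisection solver are our assumptions and not a description of the nimbus source code:

```python
# Illustrative construction of a geometrically stretched radial grid with the
# stated properties (5 m cells at the surface, km-sized cells at the core);
# the growth law and bisection solver are our assumptions, not nimbus code.
import numpy as np

def radial_grid(R, dr_surf=5.0, n_cells=100):
    # Solve dr_surf * (f**n - 1)/(f - 1) = R for the growth factor f.
    lo, hi = 1.0 + 1e-9, 2.0
    for _ in range(100):
        f = 0.5 * (lo + hi)
        total = dr_surf * (f ** n_cells - 1.0) / (f - 1.0)
        lo, hi = (f, hi) if total < R else (lo, f)
    widths = dr_surf * f ** np.arange(n_cells)              # surface -> core
    return R - np.concatenate(([0.0], np.cumsum(widths)))   # cell edges [m]

edges = radial_grid(R=32e3)                      # D = 64 km body
print(edges[0], edges[-1], edges[0] - edges[1])  # R, ~0, and the 5 m surface cell
```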
The initial absolute abundances (kg m\({}^{-3}\)) are defined by the refractories-to-water ice mass ratio \(\mu\), the molar abundances of CO (\(\nu_{\rm CO}\)) and CO\({}_{2}\) (\(\nu_{\rm CO_{2}}\)) relative to H\({}_{2}\)O, and porosity as function of depth. The porosity variation with depth is calculated from hydrostatic equilibrium, as detailed in section 3.3. Whether the water ice starts off as amorphous or crystalline, and the partition of CO between pure condensate and CO\({}_{2}\) and/or H\({}_{2}\)O hosts, are specified in section 4 for the individual simulations. When CO is trapped in amorphous H\({}_{2}\)O it is here assumed to be fully released upon crystallisation (i. e., nothing is transferred to the cubic water ice). Note that if some CO would survive crystallisation by being trapped in cubic ice, contrary to this assumption, it would only be released if the temperature reaches 160-175 K (Bar-Nun et al., 1985). If cubic ice would be the only surviving CO host, it could not possibly explain the observed CO release from comets and Centaurs near and beyond 10 au (Jewitt, 2009) that never reach such temperatures. Therefore, the assumed full release of CO upon crystallisation is not a crucial factor in the current discussion. The nominal energy release during crystallisation, and its reduction based on the latent heat of released CO, are done as in Davidsson (2021).
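As a sketch of how these composition parameters could translate into absolute abundances, consider the following; the compacted component densities and the single porosity value are illustrative assumptions:

```python
# Sketch of turning (mu, nu_CO, nu_CO2, porosity) into absolute abundances;
# the component densities and the single porosity value are illustrative.
M_H2O, M_CO2, M_CO = 18.0, 44.0, 28.0     # molar masses [g mol^-1]
mu, nu_CO, nu_CO2 = 4.0, 0.04, 0.05       # dust/ice ratio, molar abundances
psi = 0.7                                 # porosity (depth dependent in nimbus)

w_dust = mu                               # mass per unit mass of H2O ice
w_CO2 = nu_CO2 * M_CO2 / M_H2O
w_CO = nu_CO * M_CO / M_H2O
w_tot = 1.0 + w_dust + w_CO2 + w_CO

rho_c = {"H2O": 917.0, "dust": 3000.0, "CO2": 1600.0, "CO": 800.0}  # compacted
rho_solid = w_tot / (1.0 / rho_c["H2O"] + w_dust / rho_c["dust"]
                     + w_CO2 / rho_c["CO2"] + w_CO / rho_c["CO"])
rho_bulk = (1.0 - psi) * rho_solid        # bulk density [kg m^-3]
rho_CO2 = rho_bulk * w_CO2 / w_tot        # absolute CO2 abundance [kg m^-3]
print(f"rho_bulk ~ {rho_bulk:.0f} kg m^-3, rho_CO2 ~ {rho_CO2:.1f} kg m^-3")
```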
The appropriate range of \(\mu\) for medium-sized TNOs is unknown. The large bodies Pluto and Charon have \(\mu=0.655\pm 0.005\) and \(\mu=0.590\pm 0.015\), respectively (McKinnon et al., 2017). Davidsson et al. (2022a) demonstrated that Comet 67P needs \(\mu\approx 1\)-2 in order to reproduce the pre- and post-perihelion water production rate curve. However, other methods have resulted in a wide range of estimates for 67P (\(0.2\leq\mu<\infty\), as reviewed by Choukroun et al. 2020). I here nominally use \(\mu=4\), based on 67P estimates early during the _Rosetta_ mission (Rotundi et al., 2015). Generally, the choice of \(\mu\) primarily regulates the level of radiogenic heating within the refractory component and heating during crystallisation if the water is amorphous. In the current work, radiogenic heating is ignored (except for a few models, as specified). The rather large \(\mu\)-value therefore primarily acts to keep crystallisation heating relatively low. This is made intentionally, in order to primarily understand the level of processing caused by collisional heating. Secondary model-dependencies on \(\mu\) are discussed below.
The applied nominal abundances \(\nu_{\rm CO}=0.04\)-0.06 (2 per cent as pure ice that is lost before the collisions, and another 2 per cent each in CO\({}_{2}\) and/or H\({}_{2}\)O) and \(\nu_{\rm CO_{2}}=0.05\) were based on perihelion coma abundances measured _in situ_ at Comet 67P by Hansen et al. (2016). Those choices were made prior to the demonstration by Davidsson et al. (2022a) that \(\nu_{\rm CO_{2}}\approx 0.3\) is needed within the _nucleus_ in order to simultaneously fit the observed H\({}_{2}\)O and CO\({}_{2}\) production rate curves (including the low perihelion coma CO\({}_{2}\)/H\({}_{2}\)O abundance ratio). That CO\({}_{2}\) abundance is close to the average for 50 low-mass protostars (Pontoppidan et al., 2008), and if comets have CO abundances similar to such protostars as well, it would suggest \(\nu_{\rm CO}=0.13\)-0.26. In section 5, I discuss the error in \(D_{\star}\) (i. e., the estimated diameter of the largest admissible collisional cascade participant) introduced by ignoring radiogenic heating and potentially underestimating the CO and CO\({}_{2}\) abundances.
nimbus calculates how such an initial setup of abundances and porosities evolves over time due to transport of heat (by solid-state conduction, radiative conduction, and advection) and transport of mass (by gas diffusion, driven by sublimation, segregation, crystallisation, and accounting for recondensation processes) in two spatial dimensions (radially and latitudinally). The system of coupled differential equations describing such transport, as well as all auxiliary functions, are described by Davidsson (2021).
The temperature-dependent specific heat capacities and heat conductivities of compacted forsterite dust, H\({}_{2}\)O, CO\({}_{2}\), and CO used in nimbus are all taken from laboratory measurements. Conductivity is corrected for porosity as described by Shoshany et al. (2002), assuming \(r_{\rm p}=1\) mm pore radii (determining the radiative contribution to heat transport, which is negligible at the low temperatures considered here). Laboratory measurements are applied for saturation pressures and latent heats of all volatile species, and the equation of state used to calculate porosities under hydrostatic equilibrium. Though these functions must be considered accurate, the bulk specific heat capacity \(c(T)\) is somewhat sensitive to the assumed \(\mu\)-value (the water ice specific heat capacity is 11 times higher than that of dust at 50 K, and 4 times higher at 100 K). Changes to \(\mu\) (for fixed CO\(/\)H\({}_{2}\)O and CO\({}_{2}\)/H\({}_{2}\)O ratios) also modify the total mass of CO and CO\({}_{2}\), thus the amount of energy required to remove or relocate those species. The effective solid-state heat conductivity \(\kappa_{\rm s}(T,\,\psi)\) is strongly dependent on the method used to correct the compacted heat conductivity for porosity \(\psi\). The bulk thermal inertia \(\Gamma=\sqrt{\rho_{\rm bulk}\,c(T)\,\kappa_{\rm s}(T,\,\psi)}\) incorporates the combined uncertainties in \(c(T)\) and \(\kappa_{\rm s}(T,\,\psi)\), and is observationally constrained. The assumptions used in this paper result in \(90\leq\Gamma\leq 220\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\) (amorphous water) or \(170\leq\Gamma\leq 270\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\) (crystalline water) at \(40\leq T\leq 80\) K (the temperature interval that includes ambient temperature initial conditions at \(r_{\rm h}=15\)-30 au, and the segregation and crystallisation threshold temperatures). The _Philae_ lander on Comet 67P has performed the only direct _in situ_ surface temperature measurements on an outer Solar system minor body, and the resulting initial estimate \(\Gamma=85\pm 35\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\) (Spohn et al., 2015) has since been revised to \(\Gamma\geq 120\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\) (Groussin et al., 2019). Analyses of remote-sensing irradiation measurements from orbit around Comet 67P have resulted in thermal inertia
estimates ranging \(30\leq\Gamma\leq 160\,\mathrm{J\,m^{-2}\,K^{-1}\,s^{-1/2}}\) (e. g., Schloerb et al., 2015; Marshall et al., 2018; Davidsson et al., 2022). This suggests that the \(\Gamma\)-values applied in this work potentially may be on the high side (primarily because the effective heat conductivity may be larger than for at least some real bodies). It should also be kept in mind that the bulk interior of medium-sized TNOs may have effective heat conductivities that are different from that of the near-surface region on a single comet nucleus. Tests of how model results change with \(c(T)\), \(\kappa_{\rm s}(T,\,\psi)\), and \(\mu\) will be presented in section 4 and discussed in section 5.
The gas diffusivity is calculated assuming that pores have lengths \(L_{\rm p}=10\,\)mm, radii \(r_{\rm p}=1\,\)mm, and unity tortuosity. The gas diffusivity parameters were selected to represent a medium of cm-sized porous pebbles, assuming the parent bodies were formed by gravitational pebble-swarm collapse in a streaming instability scenario (Youdin and Goodman, 2005; Johansen et al., 2007; Nesvorny et al., 2010). Note that different physically reasonable assumptions about diffusivity will affect the time it takes for vapour to flow from the centre to the surface, but these time-scales are so short (years) compared to other physical processes (kyr-Myr) that it has no practical significance for the level of thermal processing. A zero diffusivity (unrealistic for the highly porous bodies considered here) would prevent CO from leaving the body altogether, but its internal release and net energy consumption would still take place (i. e., temperature solutions would not change). Because CO could never recondense as pure ice at \(r_{\rm h}\leq 30\,\)au, the vapour would remain within the body until the next major collision event, and then be lost to space. CO\({}_{2}\) relocation would be inhibited, though.
The current simulations fully account for CO diffusion in the pre-collision simulations that include CO ice. However, for the post-collision simulations where CO is released only through segregation and/or crystallisation, immediate escape of CO vapour is assumed. This is because nimbus slows down quite significantly when considering CO diffusion with such sources, and this simplification has no effect on the model because it is too warm for CO recondensation. In contrast, full account of CO\({}_{2}\) diffusion is made at all times, because CO\({}_{2}\) recondensation near the cool surface is common. The current simulations assume that the outgassing is so gentle that no dust erosion takes place, i. e., the body radii are not updated over time (for justifications, see Davidsson, 2021).
## 3 Preparations
### Largest potential parent in a collisional cascade
The largest parent that may have contributed substantially to the population of \(\sim 1\,\mathrm{km}\)-sized comets in a collisional cascade must fulfill two criteria: 1) it should not be so large and refractory-rich that long-lived radiogenic heating causes CO\({}_{2}\) : CO segregation and amorphous H\({}_{2}\)O crystallisation prior to its disruption; 2) the collisional heating itself should not trigger such segregation and crystallisation. Unless both criteria are fulfilled, the resulting \(\sim 1\,\mathrm{km}\)-sized rubble will have little to no CO and be poor comet analogues.
Davidsson (2021) demonstrated that a Hale-Bopp model analogue with \(D=74\,\mathrm{km}\), \(\mu=4\), and \(r_{\mathrm{h}}=23\,\mathrm{au}\) first loses all pure CO ice at \(\sim 10\,\mathrm{Myr}\), then completes core segregation and crystallisation \(\sim 18\,\mathrm{Myr}\) and \(\sim 30\,\mathrm{Myr}\) after formation, respectively. Unsegregated CO\({}_{2}\) : CO mixtures and amorphous water ice survive in thin near-surface zones where radiative cooling renders radiogenic and solar heating insufficient to cause phase transitions. Compared to original abundances, 13 per cent CO\({}_{2}\) : CO and 33 per cent amorphous water ice survive. Such deposits would be capable of providing the observed CO outgassing when the body enters the inner Solar system, if the body has remained intact since its thermal processing in the primordial disc. However, if such a body would participate in a collisional cascade, the resulting rubble would predominantly lack CO\({}_{2}\) : CO mixtures and amorphous water ice. This is because it starts out poor in such substances, and presumably would suffer additional losses because of collisional heating. Such rubble would be CO-poor, as opposed to most observed comets.
It is therefore likely that the largest participant in a potential collisional cascade in the primordial disc should be smaller and/or contain less radionuclide-carrying refractory material than the Hale-Bopp analogue. A suitable body is identified in the following.
In this paper, it is assumed that Comet 67P with \(D_{67\mathrm{P}}=4\,\mathrm{km}\) is the result of a collisional cascade that, in each catastrophic collision, reduced the mass of the largest daughter by a factor 2 with respect to the parent. The diameters \(D_{n}\) and generation numbers \(n\) of the ancestors are related by
\[D_{n}=2^{n/3}D_{67\mathrm{P}}. \tag{1}\]
For example, the immediate parent of Comet 67P (ancestor \(n=1\)) would be a \(D_{1}=5\,\mathrm{km}\) body, and ancestor \(n=7\) would be a \(D_{7}=20.2\,\mathrm{km}\) body. The largest potential ancestor that still is smaller than the Hale-Bopp analogue is \(n=12\) with \(D_{12}=64\,\mathrm{km}\). I first study how a smaller and less dusty body (than the Hale-Bopp analogue) evolves due to heating by long-lived radionuclides (here, \({}^{40}\)K, \({}^{232}\)Th, \({}^{235}\)U, and \({}^{238}\)U at chondritic abundances). Therefore, a \(D=64\,\mathrm{km}\) body with \(\mu=1\) (all the water initially being amorphous) that contained 5 per cent freely condensed CO\({}_{2}\), and 4 per cent CO (divided equally between the CO\({}_{2}\) and amorphous H\({}_{2}\)O hosts) was considered. Note that the CO and CO\({}_{2}\) abundances are molar with respect to water. The body was placed on a circular orbit in the ecliptic at \(r_{\mathrm{h}}=15\,\mathrm{au}\), with a spin axis having longitude \(\lambda=0^{\circ}\) and latitude \(\beta=45^{\circ}\) in the ecliptic system, and was modelled from an assumed solar nebula clearing at \(t_{\mathrm{c}}=3\,\mathrm{Myr}\) to \(t=43\,\mathrm{Myr}\).
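A quick numerical check of equation (1) reproduces the generation diameters quoted above:

```python
# Quick numerical check of Eq. (1): ancestor diameters of 67P, doubling in
# mass (a factor 2**(1/3) in diameter) per generation.
D_67P = 4.0  # km
for n in (1, 7, 12):
    print(n, round(2 ** (n / 3) * D_67P, 1))  # -> 5.0, 20.2, 64.0 km
```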
Figure 1 (upper left) shows the time evolution of the core temperature. The temperature rises from an assumed initial temperature of \(T_{0}=20\,\)K to \(T=63\,\)K at \(t=10\,\)Myr. The heating rate then slows down because energy is being used to segregate CO out from CO\({}_{2}\). The global and core fractions of remaining CO\({}_{2}\) : CO mixtures are shown to the upper right. Figure 1 also shows the internal distributions of CO\({}_{2}\) : CO mixture abundance (lower left) and temperature (lower right) at \(t=15.3\,\)Myr. That is the point in time when 10 per cent of the original population of \(D_{12}=64\,\)km bodies would have suffered catastrophic disruption (see Sec. 3.2). The CO\({}_{2}\) : CO-abundance is heavily depleted except for the polar regions (29 per cent of the original CO\({}_{2}\)-bound CO remains), and all CO\({}_{2}\) : CO in the core has segregated by \(t=20.1\,\)Myr. During the first 15 Myr, the hypervolatile CO outgassing is steady at a few times \(10^{25}\,\mathrm{molec\,s^{-1}}\), while the production rate of the significantly more stable supervolatile CO\({}_{2}\) is orders of magnitude lower.
After the core segregation is completed, the temperature keeps rising above the surface average because of radiogenic heating. The core temperature reaches a steady-state value near 74 K at \(t=43\,\mathrm{Myr}\) when the radiative loss rate at the surface balances the radiogenic heat production rate. This is merely \(\sim 6\,\mathrm{K}\) below the temperature at which the Hale-Bopp analogue experienced wide-spread crystallisation. At this time, only 2 per cent of the CO\({}_{2}\)-stored CO remains, showing that segregation runs to completion at \(r_{\mathrm{h}}=15\,\mathrm{au}\) in the long run.
It therefore seems like a body with \(D_{12}=64\,\mathrm{km}\) and \(\mu=1\) barely would manage to hold on to its most resilient CO host, the amorphous water ice, even in the absence of collisional heating. A
larger body size, a substantially smaller heat conductivity, and/or a higher abundance of refractories (\(\mu>1\)) would likely lead to eventual large-scale crystallisation.
I therefore settle for \(D_{12}=64\,\)km as the largest parent to be studied in the following. Such an ultimate parent in a collisional cascade stands a fair chance of producing CO-rich rubble, particularly if: 1) it is as dust-poor as assumed here; 2) it experiences the collision relatively early, when radiogenic heat has not had time to accumulate; 3) there are still abundant CO\({}_{2}\) : CO mixtures that can absorb collisional energy during segregation and help preserve CO stored in amorphous water ice. However, larger bodies could not avoid losing all their CO through a combination of radiogenic and collisional heating (unless they are so massive that self-gravity prevents escape). They are prevented from participating in a collisional cascade, because they would produce a population of CO-free comet-sized bodies that are not present in the observational record.
### Collisional environment
In order to describe the collisional environment, I consider the primordial disc at heliocentric distances \(15\leq r_{\rm h}\leq 30\,\)au. It contains bodies with diameter \(0.02\leq D\leq 1000\,\)km, having a differential size-frequency distribution power-law index \(q=-3\), i. e., the number of bodies with diameter \(D\) is \(\mathcal{N}(D)\propto D^{q}\). It is assumed that the primordial disc contains \(N_{D>D_{\rm lim}}=2\cdot 10^{11}\) bodies with diameters \(D\geq D_{\rm lim}=2.3\,\)km (Brasser & Morbidelli, 2013; Morbidelli & Rickman, 2015; Rickman et al., 2015). The surface density is assumed to go as \(\propto r_{\rm h}^{-1}\). The primordial disc is divided into Zone #1 (\(15\leq r_{\rm h}<20\,\)au), Zone #2 (\(20\leq r_{\rm h}<25\,\)au), and Zone #3 (\(25\leq r_{\rm h}\leq 30\,\)au), each containing a fraction \(f_{\rm z}=1/3\) of the bodies, consistent with the considered surface density. The number of impacts \([{\rm yr}^{-1}]\) on a target body of diameter \(D_{\rm p}\) within Zone #\(i\), by projectiles with diameters \(d_{\rm min}\leq d_{\rm proj}\leq d_{\rm max}\) originating from Zone #\(j\), is given by
\[C_{ij}=\frac{1}{4}P_{ij}\int_{d_{\rm min}}^{d_{\rm max}}\left(D_{\rm p}+d_{ \rm proj}\right)^{2}\,f_{\rm z}\mathcal{N}(d_{\rm proj})\;dd_{\rm proj}. \tag{2}\]
Here, \(P_{ij}\) is the mean intrinsic collision probability for targets in Zone #\(i\) and projectiles originating from Zone #\(j\), with numerical values from Morbidelli & Rickman (2015) in Table 1. For targets in Zone #1, 83 per cent of the projectiles originate from within the same zone. For Zone #3, the corresponding number is 90 per cent. These fractions are rather high, because they each have a single neighbouring zone. For Zone #2 targets, 66 per cent of the projectiles come from the same zone, while 28 per cent come from Zone #1 and 5 per cent from Zone #3. For simplicity, only same-zone targets and projectiles are considered here (i. e., \(i=j\)), in order to assign a
Figure 1: A \(D=64\,\)km body at \(r_{\rm h}=15\,\)au, consisting of equal masses of amorphous water ice and refractories with chondritic abundances of long–lived radionuclides \({}^{40}\)K, \({}^{232}\)Th, \({}^{235}\)U, and \({}^{238}\)U, with 5 per cent condensed CO\({}_{2}\) and 4 per cent CO trapped in equal amounts within the H\({}_{2}\)O and CO\({}_{2}\). The long–term simulation of such a body shows that CO\({}_{2}\) : CO segregation eventually completes, but that CO within amorphous H\({}_{2}\)O survives if the body is only subjected to protosolar and radiogenic heating (i. e., no collisional heating). _Upper left:_ core temperature as function of time. _Upper right:_ the fraction of CO trapped in CO\({}_{2}\) (globally and at the core), versus time. _Lower left:_ the spatial distribution of CO–bearing CO\({}_{2}\) near the expected catastrophic collision time \(t_{10}=15.2\,\)Myr at this diameter and heliocentric distance (see equation 5). _Lower right:_ the internal temperature distribution at \(t_{10}\).
single typical projectile size and velocity that causes a catastrophic disruption of a given target body in a given zone. However, one should bear in mind that the real lifetimes are somewhat shorter than calculated here, and that some targets will be destroyed by unusually small (and fast) projectiles originating from other zones.
In equation 2, \(d_{\rm min}\) is the diameter of the smallest projectile capable of causing a catastrophic collision. It is calculated by first considering the critical specific catastrophic collisional energy \(Q_{\rm D}^{*}\) (\(J\,{\rm kg}^{-1}\)), for which the largest surviving fragment after the collision carries half the mass of the parent,
\[Q_{\rm D}^{*}=a_{\rm coll}\left(D_{\rm p}/2\right)^{3\mu_{\rm coll}}V_{ij}^{2- 3\mu_{\rm coll}} \tag{3}\]
with \(a_{\rm coll}=4\cdot 10^{-4}\) and coupling parameter \(\mu_{\rm coll}=0.42\) for weak, highly porous bodies according to Jutzi et al. (2017), and with impact velocity \(V_{ij}\) from Morbidelli & Rickman (2015) given in Table 1. By equating the total energy needed for the catastrophic disruption of the target with the kinetic energy of the projectile, one can solve for \(d_{\rm min}\) (assuming the two bodies have the same density),
\[d_{\rm min}=\frac{(2Q_{\rm D}^{*})^{1/3}D_{\rm p}}{V_{ij}^{2/3}}. \tag{4}\]
In order to define an upper limit for the integral in equation (2), projectiles carrying up to \(2Q_{\rm D}^{*}\) are considered; by equation (4), this corresponds to \(d_{\rm max}=2^{1/3}d_{\rm min}\).
The probability that an object of diameter \(D_{\rm p}\) is intact after a time \(t\) is given by \(p_{\rm int}=\exp(-C_{ij}t)\) according to Morbidelli & Rickman (2015). In order to define a timescale for substantial destruction of \(D_{\rm p}\)-type parents, I apply \(p_{\rm int}=0.9\) and
\[t_{10}=-\frac{\ln(0.9)}{C_{ij}}, \tag{5}\]
i. e., \(t_{10}\) is the time it would take to destroy 10 per cent of the \(D_{\rm p}\) population. This time-scale is admittedly arbitrary, but is meant to measure the time it would take for bodies of a given size to have started to contribute significantly to the catastrophic cascade. Solutions to equations (1)-(2) for Zones #1-#3 are shown in Tables 2-4, respectively. For example, a \(D_{\rm p}=64\) km body in Zone #2 would wait for \(t_{10}=44.2\) Myr to get destroyed by a \(d_{\rm proj}=38.2\) km projectile in a collision with \(Q_{\rm D}^{*}=1.72\cdot 10^{4}\) J kg\({}^{-1}\) that would create a \(D_{\rm d}=50.8\) km daughter.
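For illustration, equations (2)–(5) can be chained in a few lines of code. The sketch below is a minimal Python implementation; since Table 1 (the \(P_{ij}\) and \(V_{ij}\) values from Morbidelli & Rickman 2015) is not reproduced here, the same-zone parameters `P_ii` and `V_ii` are placeholders tuned to roughly reproduce the \(n=12\) row of Table 2, while everything else follows the text.

```python
import numpy as np

# Differential size-frequency distribution N(D) = k * D**q [km^-1],
# normalised so that 2e11 bodies have D >= 2.3 km (Sec. 3.2).
q, D_lim, D_max = -3.0, 2.3, 1000.0
k = 2e11 * (q + 1) / (D_max**(q + 1) - D_lim**(q + 1))
f_z = 1.0 / 3.0                        # fraction of disc bodies per zone

# Placeholder same-zone collision parameters (the paper takes P_ij and
# V_ij from Morbidelli & Rickman 2015; these are illustrative only).
P_ii = 1.8e-20                         # intrinsic probability [km^-2 yr^-1]
V_ii = 880.0                           # impact velocity [m s^-1]

def Q_D_star(D_p, V, a_coll=4e-4, mu_coll=0.42):
    """Critical specific disruption energy [J kg^-1], eq. (3); D_p in km."""
    return a_coll * (500.0 * D_p)**(3 * mu_coll) * V**(2 - 3 * mu_coll)

def d_of_Q(D_p, Q, V):
    """Projectile diameter [km] carrying specific energy Q, eq. (4).
    The ratio d/D_p is dimensionless, so D_p can stay in km."""
    return (2 * Q)**(1.0 / 3.0) * D_p / V**(2.0 / 3.0)

def t10(D_p, P, V):
    """Time [Myr] until 10 per cent of D_p targets are destroyed, eq. (5)."""
    Q = Q_D_star(D_p, V)
    d = np.linspace(d_of_Q(D_p, Q, V), d_of_Q(D_p, 2 * Q, V), 1000)
    y = (D_p + d)**2 * f_z * k * d**q              # integrand of eq. (2)
    C_ii = 0.25 * P * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(d))
    return -np.log(0.9) / C_ii / 1e6

D_p = 64.0
Q = Q_D_star(D_p, V_ii)
print(f"Q_D* = {Q/1e3:.1f} kJ/kg, d_min = {d_of_Q(D_p, Q, V_ii):.1f} km, "
      f"t10 = {t10(D_p, P_ii, V_ii):.1f} Myr")
# roughly matches Table 2, n = 12: 26.2 kJ/kg, 26.0 km, 15.2 Myr
```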
### Bulk density and waste heat
The next step is to calculate pre-collision bulk densities and bulk porosities for the targets. That is necessary in order to evaluate the expected degree of compaction taking place during catastrophic collisions, which in turn determines what fraction of \(Q_{\rm D}^{*}\) should go into waste heat. To this end, the hydrostatic equilibrium configurations of the parent bodies are calculated, for which the sum of gravitational pressure and dynamic pressure due to accretion, at any given depth, is balanced by the compressive strength of the material. The gravitational pressure is given by (e.g. Henke et al., 2012)
\[\mathcal{P}_{\rm g}(r)=4\pi G\int_{r}^{R}\frac{\rho(r^{\prime})}{(r^{\prime})^{2}}\left(\int_{0}^{r^{\prime}}\rho(r^{\prime\prime})(r^{\prime\prime})^{2}\,dr^{\prime\prime}\right)\,dr^{\prime}, \tag{6}\]
and the dynamic pressure is given by
\[\mathcal{P}_{\rm d}=\frac{1}{2}\rho_{\rm imp}V_{\rm imp}^{2} \tag{7}\]
where \(\rho_{\rm imp}\) and \(V_{\rm imp}\) are the mean density and velocity of the material that came together to form the body during primordial accretion. I require that these compressive pressures be balanced (\(\mathcal{P}_{\rm g}+\mathcal{P}_{\rm d}=\mathcal{P}_{\rm m}\)) by the compressive strength \(\mathcal{P}_{\rm m}\) at which a granular material resists further compression below the local porosity \(\psi=\psi(r)\). The function \(\mathcal{P}_{\rm m}=\mathcal{P}_{\rm m}(\psi)\) is calculated as a weighted average of the compressive strengths measured for silica particles (representing refractories) and water ice. For refractories, the omnidirectional version of equation (10) in Güttler et al. (2009) is applied. For ice, the measurements by Lorek et al. (2016) are applied for pressures \(<10^{5}\) Pa, but above this threshold I use data from Yasui & Arakawa (2009) obtained for a ref : ice = \(29:71\) mixture at \(T=206\) K (see their Fig. 2). I assume densities \(\rho_{\rm ref}=3000\) kg m\({}^{-3}\) and \(\rho_{\rm ice}=960\) kg m\({}^{-3}\) for refractories and ices, respectively. The mass fractions are taken as \(f_{\rm ref}=0.53\) and \(f_{\rm ice}=0.47\), which implies a volumetric fraction of ices \(f_{\rm V,ice}=0.63\) that is used as a weighting factor for the compressive strengths.
The compact density of this mixture of refractories and ices is \(\rho_{\rm solid}=1300\) kg m\({}^{-3}\). In order to evaluate \(\mathcal{P}_{\rm d}\) I assume \(\rho_{\rm imp}=300\) kg m\({}^{-3}\) and I require that a cloud with the mass of Comet 67P should compress into a \(D_{\rm 67P}=4\) km body with a bulk density near \(\rho_{\rm bulk}=535\) kg m\({}^{-3}\) (Jorda et al., 2016; Preusker et al., 2015; Pätzold et al., 2019). The hydrostatic equilibrium calculations show that this happens if \(V_{\rm imp}=31\) m s\({}^{-1}\). The same \(\mathcal{P}_{\rm d}\)-value is applied in all calculations. Tables 2-4 show the resulting bulk densities \(\rho_{\rm bulk}\) as function of body size. Self-gravity is rather weak at these body sizes, e. g., the \(D_{\rm p}=64\) km body compresses to \(\rho_{\rm bulk}=556\) kg m\({}^{-3}\), marginally higher than that of Comet 67P. The porosity grows from \(\psi=0.55\) at the centre to \(\psi=0.58\) near the surface, giving a mean porosity of \(\psi=0.57\) (compared to the modelled value \(\psi_{\rm 67P}=0.59\) for Comet 67P), and a rather homogeneous interior (this level of variation is too small to have any practical influence).
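The hydrostatic balance described above lends itself to a simple fixed-point iteration: guess a porosity profile, integrate equation (6) for \(\mathcal{P}_{\rm g}\), add \(\mathcal{P}_{\rm d}\) from equation (7), and update the porosity from the strength curve until convergence. The sketch below does exactly this, with one caveat: the measured compressive-strength relations (Güttler et al. 2009; Lorek et al. 2016; Yasui & Arakawa 2009) are not reproduced here, so `psi_of_P` is an illustrative log-linear stand-in tuned to land near the quoted porosities.

```python
import numpy as np

G = 6.674e-11                        # m^3 kg^-1 s^-2
rho_solid = 1300.0                   # compact ref:ice density [kg m^-3]
P_d = 0.5 * 300.0 * 31.0**2          # dynamic pressure, eq. (7) [Pa]

def psi_of_P(P):
    """Porosity left after compression to pressure P [Pa]. Illustrative
    stand-in for the weighted strength curve P_m(psi), NOT the measured
    Guttler/Lorek/Yasui relations used in the paper."""
    return np.clip(0.95 - 0.072 * np.log10(np.maximum(P, 1.0)), 0.10, 0.87)

def equilibrium(D_km, n=400, n_iter=40):
    R = 500.0 * D_km                 # radius [m]
    r = np.linspace(0.0, R, n)
    psi = np.full(n, 0.6)            # initial guess
    for _ in range(n_iter):
        rho = rho_solid * (1.0 - psi)
        # enclosed mass M(r): cumulative trapezoid of 4*pi*rho*r^2
        seg = 0.5 * (rho[1:] * r[1:]**2 + rho[:-1] * r[:-1]**2) * np.diff(r)
        M = 4 * np.pi * np.concatenate(([0.0], np.cumsum(seg)))
        # gravitational pressure, eq. (6): P_g(r) = int_r^R G rho M / r'^2 dr'
        dP = G * rho * M / np.maximum(r, r[1])**2
        pseg = 0.5 * (dP[1:] + dP[:-1]) * np.diff(r)
        P_g = np.append(np.cumsum(pseg[::-1])[::-1], 0.0)
        psi = psi_of_P(P_g + P_d)    # balance against compressive strength
    rho = rho_solid * (1.0 - psi)
    seg = 0.5 * (rho[1:] * r[1:]**2 + rho[:-1] * r[:-1]**2) * np.diff(r)
    return psi, np.sum(seg) / (R**3 / 3.0)

psi, rho_bulk = equilibrium(64.0)
print(f"psi: {psi[0]:.2f} (core) -> {psi[-1]:.2f} (surface), "
      f"rho_bulk = {rho_bulk:.0f} kg m^-3")    # cf. 0.55-0.58 and 556
```

Because \(\mathcal{P}_{\rm d}\approx 1.4\times 10^{5}\) Pa exceeds the central gravitational pressure even for the \(D=64\) km body, the porosity profile comes out nearly homogeneous, in line with the statement above.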
disruption of parents with \(D_{\rm p}\leq 25.4\,\)km at \(r_{\rm h}=15\,\)au, and Table 4 shows that the limit is changed to \(D_{\rm p}\leq 50.8\,\)km at \(r_{\rm h}=30\,\)au.
The integral in equation (8) can be used to evaluate the volume of a \(1\,\)kg mass element resulting from using an energy \(Q_{\rm D}^{*}\) for compression. The corresponding densities \(\rho_{\rm block}\) of the partially compressed fragments that will build the daughter bodies are listed in Tables 2-4 as well. When \(\rho_{\rm block}<\rho_{\rm solid}\) the blocks will have some level of micro-porosity. For simplicity, it is here assumed that the daughters built by gravitational reaccumulation of the partially compressed blocks with \(\rho_{\rm block}\) will have bulk densities that are equivalent to the ones caused by primordial formation (i. e., \(\rho_{\rm bulk}\)). That explains the values for macro-porosity \(\psi_{\rm macro}\) for the daughter bodies in Tables 2-4. According to this formalism, the bulk density of blocks within Comet 67P, had it been formed as the result of a catastrophic disruption of a \(D_{\rm p}=5\,\)km parent at 20-25 au, would have been \(\rho_{\rm block}\approx 990\,\)kg m\({}^{-3}\).
In case \(Q_{\rm D}^{*}>Q_{\rm max}\), the parent body material is assumed to reach full compression, \(\rho_{\rm block}=\rho_{\rm solid}\). For a brief moment, the material will be shocked to densities higher than \(\rho_{\rm solid}\), and it is here assumed that the work performed to compress above \(\rho_{\rm solid}\) is twice as large as that performed by the release wave when expanding the material back to \(\rho_{\rm solid}\). This estimate is based on measurements of the shock and particle velocities in water ice IV by Stewart & Ahrens (2005), applied to the method of calculating waste heat by Sharp & DeCarli (2006), indicating that about half of the internal energy increase is converted to waste heat in water ice at impact pressures \(\leq 3\,\)GPa. Consequently, the total waste heat is taken as
\[Q_{\rm waste}=\frac{1}{2}\left(Q_{\rm D}^{*}-Q_{\rm max}\right)+Q_{\rm max}. \tag{10}\]
The corresponding \(f_{\rm waste}=Q_{\rm waste}/Q_{\rm D}^{*}\) values are listed in Tables 2-4. Note that daughters formed from, e. g., \(D_{\rm p}\geq 32\,\)km at \(r_{\rm h}=15\,\)au parents therefore have \(\rho_{\rm block}=\rho_{\rm solid}=1300\,\)kg m\({}^{-3}\). As previously mentioned, it is assumed that the reaccumulated daughters of such parents have bulk densities identical to the \(\rho_{\rm bulk}\)-values expected from primordial formation (the \(\psi_{\rm macro}\)-values are calculated accordingly). For further discussion of waste heat during compression, see Appendix A.
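The bookkeeping of equation (10) reduces to a one-line helper. In the snippet below, the value of \(Q_{\rm max}\) is inferred by inverting the tabulated \(f_{\rm waste}\) values (it is not quoted explicitly in the tables), so treat it as an illustrative fit rather than a model input.

```python
def f_waste(Q_D, Q_max):
    """Fraction of Q_D* converted to waste heat: all of it while the
    material is still compacting (Q_D <= Q_max), but only half of the
    excess shock energy once full compression is reached, eq. (10)."""
    if Q_D <= Q_max:
        return 1.0
    return (0.5 * (Q_D - Q_max) + Q_max) / Q_D

# Q_max ~ 9.4 kJ/kg (inferred) reproduces the Zone #1 entries of Table 2:
for Q in (26.2e3, 19.6e3, 10.9e3):
    print(f"Q_D* = {Q/1e3:5.1f} kJ/kg -> f_waste = {f_waste(Q, 9.4e3):.2f}")
# -> 0.68, 0.74, 0.93 (Table 2 lists 0.68, 0.74, 0.94)
```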
### Methodology
The degree of thermal processing during a catastrophic impact event depends on the initial composition of the body and on the thermal evolution that took place prior to the collision. If condensed CO remains at the time of impact, it can act as a heat sink that protects potential deposits of CO within CO\({}_{2}\) and/or amorphous H\({}_{2}\)O. Similarly, segregation can delay or prevent the onset of CO\({}_{2}\) redistribution or loss, and H\({}_{2}\)O crystallisation. Therefore, a number of pre-collision simulations are performed with the purpose of: 1) finding the loss time-scale of freely condensed CO; 2) investigating the stability of CO\({}_{2}\) : CO mixtures; 3) investigating the stability of CO\({}_{2}\) ice. These models consider \(7\leq n\leq 12\) parents with diameters in the range \(20.2\leq D\leq 64\,\)km. The lower cut-off is placed at sizes for which substantial volatile loss is not expected, to be verified during the simulations. These bodies are all considered to be the first parent (i. e., the largest body) participating in the collisional cascade, i. e., they are not themselves formed as the result of a catastrophic collision. This is done in order to attempt to define the approximate starting-point of a potential collisional cascade, i. e., the largest body size that may have contributed substantially to the current population of \(\sim 1\,\)km-class cometary collisional rubble. Furthermore, potential radiogenic heating by short-lived radionuclides (\({}^{26}\)Al and \({}^{60}\)Fe) is ignored, assuming that those effects in any case must have been small enough to leave the CO\({}_{2}\) : CO mixtures and amorphous H\({}_{2}\)O intact.
Ideally, such models should be run from solar nebula clearing (here taken as \(t_{\rm c}=3\,\)Myr) to the time of impact at \(t_{10}\). However, performing simulations for such long time intervals is computationally prohibitive due to the large number of cases considered in this paper. Therefore, a first set of models is run up to the point when free CO ice is lost. Some models are continued further to various degrees, in order to explore the stability of CO\({}_{2}\) : CO mixtures and CO\({}_{2}\) ice and to estimate their loss rates. Beyond that, however, it is necessary to jump ahead in time.
Therefore, a second set of models is then initiated briefly before \(t_{10}\). They consider the post-collision daughter nuclei of the previously studied parents, i. e., focus on \(6\leq n\leq 11\) bodies with diameters in the range \(16\leq D\leq 50.8\,\)km. These are run for a period prior to impact to allow for global thermal relaxation to the protosolar luminosity conditions prevailing at the time. At \(t_{10}\), an amount of energy \(f_{\rm waste}Q_{\rm D}^{*}\) is injected homogeneously, in order to simulate a newly formed rubble pile, consisting of heated debris from the parent. Leinhardt & Stewart (2009) demonstrated that the interior of collisional rubble piles consists of moderately shocked (thus moderately heated) material, while the surface is a mixture of weakly and strongly shocked (thus coldest or warmest) material. The assumption of homogeneous energy injection is therefore equivalent to assuming that the internal energy density carried by the coldest or warmest material near the surface averages out to a level similar to the internal energy of core material. For technical reasons, this energy is injected gradually during a 100 yr period, which is very short compared to the post-collision cooling timescale back to ambient temperature conditions.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{10}\) [Myr] & \(d_{\rm proj}\) [km] & \(Q_{\rm D}^{*}\) [kJ\(\,\)kg\({}^{-1}\)] & \(D_{\rm d}\) [km] & \(\rho_{\rm block}\) [kg m\({}^{-3}\)] & \(\psi_{\rm macro}\) & \(\rho_{\rm bulk}\) [kg m\({}^{-3}\)] & \(f_{\rm waste}\) \\ \hline
12 & 64.0 & 15.2 & 26.0 & 26.2 & **50.8** & 1300 & 0.57 & 556 & 0.68 \\
11 & 50.8 & 13.4 & 18.7 & 19.6 & **40.3** & 1300 & 0.58 & 548 & 0.74 \\
10 & 40.3 & 11.6 & 13.5 & 14.6 & **32.0** & 1300 & 0.58 & 544 & 0.83 \\
9 & 32.0 & 10.2 & 9.7 & 10.9 & **25.4** & 1300 & 0.58 & 541 & 0.94 \\
8 & 25.4 & 8.8 & 7.0 & 8.18 & **20.2** & 1264 & 0.57 & 539 & 1.00 \\
7 & 20.2 & 7.5 & 5.1 & 6.13 & **16.0** & 1219 & 0.56 & 538 & 1.00 \\ \hline
1 & 5.0 & 2.8 & 0.7 & 1.06 & **4.0** & 1033 & 0.48 & 537 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Zone #1 at 15–20 au: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) waits a time \(t_{10}\) for a projectile of diameter \(d_{\rm proj}\) to impact with a specific energy \(Q_{\rm D}^{*}\), thereby creating a daughter with diameter \(D_{\rm d}\). The parent material has compressed to blocks of density \(\rho_{\rm block}\) that reassemble with macro porosity \(\psi_{\rm macro}\) to give the daughter a bulk density \(\rho_{\rm bulk}\). A fraction \(f_{\rm waste}\) of \(Q_{\rm D}^{*}\) goes into heating the daughter.
Changes to the abundances of CO\({}_{2}\) : CO mixtures, amorphous H\({}_{2}\)O ice, and CO\({}_{2}\) ice are monitored during cooling.
Skipping millions of years of evolution between the two sets of simulations means that it is not possible to account for heating by long-lived radionuclides. Comparisons between the model in Sec. 3.1 with similar simulations performed without radiogenic heating show that the temperature elevation caused by radioactive decay is \(\sim 10\) K for \(D=64\) km bodies. The discrepancy falls rapidly with decreasing body diameter. As argued in section 5, this effect could be partially or fully cancelled if the CO abundance is higher than currently assumed.
The post-collision simulations are treated differently in the three zones. At \(15\leq r_{\rm h}<20\) au (Zone #1) it is necessary to include amorphous water ice, because CO\({}_{2}:\) CO mixtures are unstable at such distances, and amorphous H\({}_{2}\)O provides the only stable refuge for hypervolatiles in the absence of collisions. The effect of collisional heating on this reservoir needs to be investigated, both with and without the heat sink provided by residual CO\({}_{2}:\) CO that potentially is present at the time of collision. However, we do not know for certain that amorphous water ice exists. If it does not exist, we may have to accept that: 1) small objects that originated from Zone #1 no longer carry hypervolatiles; 2) CO\({}_{2}\) is the sole current reservoir of hypervolatiles in comets. At \(20\leq r_{\rm h}<25\) au (Zone #2) I assume this is the case, i. e., the stability of CO\({}_{2}:\) CO against collisional processing is investigated, without the inclusion of amorphous ice. One reason for doing this is to study the effect of collisions on the mobilisation and internal redistribution of CO\({}_{2}\) ice, without the energy release associated with crystallisation. At \(25\leq r_{\rm h}\leq 30\) au (Zone #3), amorphous water ice is included anew (in order to test its resilience against collisional heating at both small and large distances).
In the following, the results of numerous simulations are presented. The model bodies are designed to obey specific physical relations (e. g., the saturation pressure of CO\({}_{2}\) vapour as function of temperature), and are assigned specific values of the free model parameters (e. g., the length, width, and tortuosity of tubes that define gas diffusivity). As far as possible, all necessary physical relations are based on laboratory measurements of materials believed to be common in minor outer Solar System bodies. Free parameter values are educated guesses, based on theoretical predictions and fits to observed behaviour. A model body of a given size is thereby designed to behave in one specific way when subjected to given internal and external stimuli (collisional and solar energy). However, the real Solar System bodies of that size most likely have a range of properties depending on individual formation and evolution conditions. Thus, they are not identical to one another and will behave differently when subjected to the same stimuli. It will be assumed that the model bodies behave quantitatively similarly to a sub-class of real bodies with the same size. The model simulations will be used to determine whether or not a real body of a given size experiences sufficiently small abundance changes in a catastrophic disruption to be considered a suitable comet nucleus ancestor. From the discussion above it is clear that such statements may only be correct for a fraction of the bodies of a given size. This uncertainty should be kept in mind when a limit \(D_{\star}\) is placed on the largest acceptable ancestor in a collisional cascade: it is a best-effort assessment that problems with retaining CO have started for bodies twice as massive, but it is impossible to certify whether those problems are limited or affect the entire size class. All that can be said is that problems ought to be significantly smaller for most bodies having half the mass of a \(D_{\star}\) body, and significantly larger for most bodies with twice its mass and above.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{\rm 10}\) [Myr] & \(d_{\rm proj}\) [km] & \(Q_{\rm D}^{\star}\) [kJ kg\({}^{-1}\)] & \(D_{\rm d}\) [km] & \(\rho_{\rm block}\) [kg m\({}^{-3}\)] & \(\psi_{\rm macro}\) & \(\rho_{\rm bulk}\) [kg m\({}^{-3}\)] & \(f_{\rm waste}\) \\ \hline
12 & 64.0 & 70.8 & 46.4 & 11.0 & **50.8** & 1300 & 0.57 & 556 & 0.94 \\
11 & 50.8 & 63.4 & 33.4 & 8.19 & **40.3** & 1264 & 0.57 & 548 & 1.00 \\
10 & 40.3 & 56.5 & 24.0 & 6.12 & **32.0** & 1219 & 0.55 & 544 & 1.00 \\
9 & 32.0 & 50.3 & 17.3 & 4.58 & **25.4** & 1181 & 0.54 & 541 & 1.00 \\
8 & 25.4 & 44.5 & 12.5 & 3.42 & **20.2** & 1148 & 0.53 & 539 & 1.00 \\
7 & 20.2 & 38.9 & 9.0 & 2.56 & **16.0** & 1119 & 0.52 & 538 & 1.00 \\ \hline
1 & 5.0 & 17.0 & 1.2 & 0.44 & **4.0** & 933 & 0.42 & 537 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zone #3 at 25–30 au: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) waits a time \(t_{\rm 10}\) for a projectile of diameter \(d_{\rm proj}\) to impact with a specific energy \(Q_{\rm D}^{\star}\), thereby creating a daughter with diameter \(D_{\rm d}\). The parent material has compressed to blocks of density \(\rho_{\rm block}\) that reassemble with macro porosity \(\psi_{\rm macro}\) to give the daughter a bulk density \(\rho_{\rm bulk}\). A fraction \(f_{\rm waste}\) of \(Q_{\rm D}^{\star}\) goes into heating the daughter.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{\rm 10}\) [Myr] & \(d_{\rm proj}\) [km] & \(Q_{\rm D}^{\star}\) [kJ kg\({}^{-1}\)] & \(D_{\rm d}\) [km] & \(\rho_{\rm block}\) [kg m\({}^{-3}\)] & \(\psi_{\rm macro}\) & \(\rho_{\rm bulk}\) [kg m\({}^{-3}\)] & \(f_{\rm waste}\) \\ \hline
12 & 64.0 & 44.2 & 38.2 & 17.2 & **50.8** & 1300 & 0.57 & 556 & 0.78 \\
11 & 50.8 & 39.3 & 27.5 & 12.8 & **40.3** & 1300 & 0.58 & 548 & 0.87 \\
10 & 40.3 & 34.6 & 19.8 & 9.59 & **32.0** & 1292 & 0.58 & 544 & 1.00 \\
9 & 32.0 & 30.3 & 14.3 & 7.17 & **25.4** & 1243 & 0.56 & 541 & 1.00 \\
8 & 25.4 & 26.7 & 10.3 & 5.36 & **20.2** & 1201 & 0.55 & 539 & 1.00 \\
7 & 20.2 & 23.1 & 7.4 & 4.01 & **16.0** & 1166 & 0.54 & 538 & 1.00 \\ \hline
1 & 5.0 & 9.8 & 1.0 & 0.69 & **4.0** & 987 & 0.46 & 537 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zone #2 at 20–25 au: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) waits a time \(t_{\rm 10}\) for a projectile of diameter \(d_{\rm proj}\) to impact with a specific energy \(Q_{\rm D}^{\star}\), thereby creating a daughter with diameter \(D_{\rm d}\). The parent material has compressed to blocks of density \(\rho_{\rm block}\) that reassemble with macro porosity \(\psi_{\rm macro}\) to give the daughter a bulk density \(\rho_{\rm bulk}\). A fraction \(f_{\rm waste}\) of \(Q_{\rm D}^{\star}\) goes into heating the daughter.
## 4 Results
### Inner disc thermal evolution
#### 4.1.1 Pre-collision evolution at \(r_{\rm h}=15\) au
For the pre-collision models, it is assumed that newly formed planetesimals have \(\mu=4\), a CO\({}_{2}\) molar abundance of 5 per cent relative water, and a 4 per cent molar abundance of CO relative water that is evenly distributed between freely condensed CO and CO entrapped within CO\({}_{2}\). As described in Sec. 3.4, these simulations are performed without radiogenic heating. Additionally, the water ice is here considered pure and crystalline. The bodies are exposed to protosolar illumination at a solar nebula clearing time, assumed to be \(t_{\rm c}=3\) Myr after CAIs.
Table 5 shows that it takes \(t_{\rm CO}-3=0.3\) Myr for the smallest (\(D=20.2\) km) body to lose its condensed CO ice, while the \(\sim 32\) times more voluminous body (\(D=64\) km) needs \(t_{\rm CO}-3=2.3\) Myr to complete the loss. In all cases, the loss time-scales are shorter than the collision time-scales (see Table 2). Naively, one would expect that the loss time scales linearly with initial abundance for a body of a given size. That is because a shell at some given distance from the core is being fed with energy at the same rate regardless of the CO abundance in the shell, giving rise to the same loss rate as function of CO front depth. Therefore, if the initial CO abundance was 6 per cent instead of 2 per cent, emptying the shell should take three times longer. However, when this hypothesis was tested in an actual simulation, it turned out that a \(D=20.2\) km body with 6 per cent CO did not require \(0.82\) Myr for the loss as expected, but merely \(0.55\) Myr. The reduction to \(\sim 2/3\) of the loss time expected from simple linear extrapolation can be understood as follows. Initially, the loss from superficial shells of a given volume indeed takes three times longer. But because of this delay, more energy has time to be conducted past the CO front to heat the core. At the points in time when exactly half of the CO remains, the core of the body with 2 per cent CO has heated to \(35\) K but the same-sized body with 6 per cent CO has had time to heat to \(43\) K. Because of the higher temperature of the latter body, its peak CO gas pressure reached \(4.8\) Pa, whereas the former body only reached \(4.0\) Pa. The higher gas pressure speeds up the gas diffusion rate and cuts the total loss time short of the extrapolated time. Therefore, a loss time obtained by scaling linearly with abundance constitutes an _upper limit_.
With this in mind, one can ask how much condensed CO a body should have had at the time of solar nebula clearing in order to still have some left at the time \(t_{10}\) of the catastrophic collision. If parts of the waste energy could be consumed by the sublimation of remaining condensed CO, that would help preserve CO deposits trapped within CO\({}_{2}\) or amorphous H\({}_{2}\)O. Simple linear extrapolation suggests that the \(D=20.2\) km body would need \(\sim 33\) per cent CO, and the \(D=64\) km body would need \(\sim 11\) per cent CO. In reality, the required abundances are even higher, because these amounts would run out before \(t_{10}\), as previously explained. Because the total CO budget likely would constitute 13-26 per cent at most (of which a substantial fraction needs to be stored in CO\({}_{2}\) and/or H\({}_{2}\)O), as discussed in section 2, not much CO ice should remain at \(t_{10}\). This suggests that cooling by condensed CO at the time of collision may not be efficient.
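The linear upper-limit estimates quoted above are easy to verify directly from the tabulated time scales (a minimal check, using \(t_{\rm CO}\) from Table 5 and \(t_{10}\) from Table 2):

```python
# Upper limit on the initial condensed-CO abundance (molar, relative water)
# needed for free CO ice to survive until the collision at t10, scaling the
# simulated loss time of a 2 per cent abundance linearly with abundance.
# As argued above, this underestimates the true requirement, because linear
# scaling overestimates the survival time.
t_c = 3.0                                  # Myr, solar nebula clearing
cases = {20.2: (3.273, 7.5),               # D [km]: (t_CO, t10) [Myr]
         64.0: (5.290, 15.2)}
for D, (t_CO, t10) in cases.items():
    needed = 2.0 * (t10 - t_c) / (t_CO - t_c)
    print(f"D = {D:4.1f} km: >~ {needed:.0f} per cent CO required")
# -> ~33 per cent (20.2 km) and ~11 per cent (64 km), as quoted above
```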
The simulations show that CO\({}_{2}:\) CO mixtures are not stable at \(r_{\rm h}=15\) au, at least not with the currently assumed activation energy and pre-exponential factor (Davidsson, 2021). Comets formed near \(r_{\rm h}=15\) au (either primordially, or through catastrophic collisions) would therefore not be able to withstand segregation in the long run. In order for such objects to maintain CO throughout their primordial disc lifetime, and to be able to release CO when entering the inner Solar System after deep-freeze storage in the scattered disc beyond the Kuiper Belt, or in the Oort Cloud, they need to rely exclusively on the presence of CO-laden amorphous water ice. But importantly, if CO\({}_{2}:\) CO mixtures remain at the time of collision, they may help preserve CO-laden amorphous H\({}_{2}\)O ice by absorbing a fraction of the collision energy.
The simulations performed up to \(t_{10}\) (completed for the \(D=20.2\)-\(32\) km bodies) show that 17-26 per cent of the original amount of CO\({}_{2}\)-stored CO is still present at the time of collision. These calculations, which proceed for several million years in the simulated world, are very time-consuming and increasingly so with growing diameter. Therefore, simulations were stopped at \(t_{\rm CO}\) for the \(D=40.3\)-\(64\) km bodies. Estimates of \(f_{\rm rem}\) (i. e., the fraction of CO\({}_{2}:\) CO mixtures remaining at \(t_{10}\)) for \(D\geq 40.3\) km bodies can be obtained by assuming that \(f_{\rm rem}\) varies linearly with \(A/V\). A smaller body has a relatively large surface area \(A\) (capable of absorbing solar radiation and of outgassing CO) compared to its volume \(V\) (which is proportional to the initial amount of CO), and is hence less capable of preserving its CO\({}_{2}:\) CO (yielding a lower \(f_{\rm rem}\) value) than a bigger body. Figure 2 illustrates this approximately linear relation at \(D=20.2\)-\(32\) km and the fitted line is used to extrapolate \(f_{\rm rem}\) values at \(D=40.3\)-\(64\) km, given in Table 5.
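The extrapolation used for the larger bodies amounts to a two-parameter least-squares fit of \(f_{\rm rem}\) against \(A/V=6/D\), which is easily reproduced (points from Table 5):

```python
import numpy as np

D_sim = np.array([20.2, 25.4, 32.0])       # km, directly simulated
f_sim = np.array([0.171, 0.201, 0.255])    # f_rem at t10, Table 5
slope, intercept = np.polyfit(6.0 / D_sim, f_sim, 1)

for D in (40.3, 50.8, 64.0):
    print(f"D = {D:4.1f} km: f_rem ~ {slope * 6.0 / D + intercept:.2f}")
# -> ~0.28, ~0.30, ~0.32, matching the estimates in Table 5
```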
Figure 2: Zone #1 at 15–20 au, prior to catastrophic collisions: the remaining fraction \(f_{\rm rem}\) of CO\({}_{2}:\) CO mixtures at the time of impact \(t_{10}\), as function of the surface area to volume ratio \(A/V\). The \(f_{\rm rem}\) factors for the three largest \(A/V\) values are properly evaluated through nimbus simulations (asterisks). Those are used to define a linear least-squares fit that extrapolates to lower \(A/V\) values (bigger body diameters). The squares correspond to the \(f_{\rm rem}\) estimates for \(40.3\leq D\leq 64\) km in Table 5.
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{\rm CO}\) [Myr] & \(f_{\rm rem}\) \\ \hline
12 & 64.0 & 5.290 & \(\sim 0.32\) \\
11 & 50.8 & 4.552 & \(\sim 0.30\) \\
10 & 40.3 & 4.024 & \(\sim 0.28\) \\
9 & 32.0 & 3.668 & 0.255 \\
8 & 25.4 & 3.427 & 0.201 \\
7 & 20.2 & 3.273 & 0.171 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zone #1 at 15–20 au, prior to catastrophic collisions: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) loses its 2 per cent (molar, relative water) of condensed CO at \(t_{\rm CO}\) (note that simulations are initiated at \(t=3\) Myr). The fraction of the CO\({}_{2}:\) CO mixture still remaining at the time of collision \(t_{10}\) is denoted by \(f_{\rm rem}\) (if preceded by ‘\(\sim\)’ then estimated, otherwise properly calculated).
#### 4.1.2 Post-collision evolution at \(r_{\rm h}=15\) au
I start with the least energetic collision that formed a \(D_{\rm d}=16\) km daughter from a \(D_{\rm p}=20.2\) km parent (details of the collision are in Table 2). A first case (R09_001C, see Table 6) had no free CO, clean CO\({}_{2}\), and all CO trapped in amorphous water ice. The purpose of this simulation was to see if the sole CO host would survive the collision.
The waste heat released in the collision increased the core temperature from the ambient 63 K to 87 K, or by 24 K, see Fig. 3 (left panel). This initiated a slow partial crystallisation in the equatorial plane close to the surface, where the temperature was highest through a combination of protosolar and collisional heating. During the following 43 kyr, the epicentre of partial crystallisation moved in the equatorial plane towards the centre, while releasing heat, which accelerated the crystallisation rate further. At that point, runaway crystallisation and strong energy release started. An equatorial torus-like region, located about 1-6 km from the core, became completely crystallised. The geometry of this region is illustrated in Fig. 3 (right panel) via the spatial distribution of the pressure due to CO\({}_{2}\) vapour, released by vigorous CO\({}_{2}\) ice sublimation.
In the following 2 kyr, the fully crystallised region spread equatorially to the core and to the surface, and expanded in latitude. The core temperature peaked at \(\sim 127\) K (see Fig. 3, left panel, and Table 6). The CO\({}_{2}\) pressure reached at most 41 Pa. The body then cooled gradually, and the core temperature fell below 80 K about 1.3 Myr after the collision that formed the body. At that point, merely 4 per cent of the original amount of amorphous ice remained, which means that practically all of the CO of the body had been lost to space.
The CO\({}_{2}\) vapour released by the crystallisation-induced sublimation gradually diffused towards the surface during the extensive cooling period of the body. However, because the near-surface region cooled radiatively very quickly after the collision, it became an impenetrable cold trap for the CO\({}_{2}\) vapour. Large amounts of CO\({}_{2}\) re-froze near the surface, so that the body globally lost less than one per mille of its CO\({}_{2}\) ice to space. Figure 4 shows the substantial degree of internal CO\({}_{2}\) ice spatial re-distribution. The core abundance is down to 32 per cent of the original amount. In a \(\sim 1\) km thick near-surface shell, the CO\({}_{2}\) ice abundance has more than doubled with respect to initial values.
It is therefore clear that even a relatively small early-generation ancestor of 67P may have difficulties in preserving its hypervolatiles in a collisional cascade, at least at \(r_{\rm h}=15\) au. This is because the most resilient CO host, amorphous water ice, may crystallise globally. However, there is a possibility that some condensed CO may remain at the time of collision (if the original abundance exceeded 33 per cent, see Sec. 4.1.1). More realistically, there may be CO\({}_{2}:\) CO mixtures that did not yet have time to segregate (provided that such ices exist and were present). If so, such CO may act as a heat sink and aid the preservation of CO-laden amorphous ice by reducing or preventing its crystallisation.
To test this idea, model R09_001B was considered, which is similar to R09_001C except that the CO\({}_{2}\) ice now contains 2 per cent CO
Figure 4: The final spatial distribution of CO\({}_{2}\) (the initial abundance scaled to 100 per cent) for the same body as shown in Fig. 3. The CO\({}_{2}\) vapour, released through CO\({}_{2}\) ice sublimation triggered by collisional heating and energy release during amorphous water ice crystallisation, has diffused away from the core and towards the surface. The low near–surface temperatures led to massive CO\({}_{2}\) vapour recondensation. The CO\({}_{2}\) loss to space is negligible, but the internal redistribution of CO\({}_{2}\) ice is substantial.
Figure 3: A \(D=16\) km body at \(r_{\rm h}=15\) au, with \(\mu=4\) refractories/water-ice mass ratio, 5 per cent CO\({}_{2}\) and 2 per cent CO (all trapped in the water ice that initially is amorphous). Radionuclides are ignored. It formed at \(t_{10}=7.5\) Myr as the result of a catastrophic disruption of a \(D=20.2\) km parent that deposited \(Q_{\rm D}^{*}=6.13\) kJ kg\({}^{-1}\) of waste heat. The collisional heating led to widespread crystallisation, loss of \(\sim 96\) per cent of the CO, and to a significant internal displacement of CO\({}_{2}\) ice (though virtually no CO\({}_{2}\) was lost to space). _Left_: core temperature as function of time. _Right_: the spatial distribution of the CO\({}_{2}\) vapour pressure near the onset of runaway crystallisation. The shape of the peak–pressure region is irregular because of the finite spatial resolution of the model (primarily in the latitudinal dimension).
(molar, relative water). Note that this would have required \(\sim 12\) per cent trapped CO in CO\({}_{2}\) at the assumed solar nebula clearing at \(t_{\rm c}=3\) Myr, because of the loss reported in Table 5 prior to the collision. Storing CO at a CO\({}_{2}:\) CO \(=1:2.4\) ratio is unrealistic, but a more satisfactory mixing ratio could be achieved by simply assuming a larger abundance of CO\({}_{2}\) ice. The only important property is the absolute abundance of CO\({}_{2}\)-trapped CO (in terms of bulk kg m\({}^{-3}\)), because the latent heat consumed during segregation may partially or fully absorb the energy released by crystallisation of amorphous water ice.
As seen in Table 6, the core temperature of model R09_001B peaks at \(\sim 84\) K. It means that the energy consumption of CO\({}_{2}:\) CO segregation has suppressed the net temperature increase of the collisional heating from 24 K to 21 K. This is sufficient to prevent the runaway crystallisation process seen in R09_001C. Once the body has cooled, \(>99\) per cent of the global amorphous water ice (and the CO it contains) is still intact. The CO\({}_{2}\) ice has not been mobilised, with almost the entire original abundance remaining at the core. Practically all CO\({}_{2}:\) CO is consumed, so the only surviving CO-host of this body is amorphous water ice. This shows that reasonable amounts of CO\({}_{2}:\) CO mixtures may be capable of preserving CO-laden amorphous water ice in the \(n=7\to 6\) collision.
Next, the \(n=8\to 7\) collision (destruction of a \(D_{\rm p}=25.4\) km parent and birth of a \(D_{\rm d}=20.2\) km daughter) is considered. In this case, a 2 per cent CO abundance within CO\({}_{2}\) (model R09_002A) is not sufficient to prevent crystallisation. This is because the specific waste heat is 1.33 times higher than for the \(n=7\to 6\) collision. Linear scaling of waste heats and CO abundances (\(1.33\times 2\approx 2.7\)) suggests that 3 per cent CO in CO\({}_{2}\) should be sufficient to prevent crystallisation. Model R09_002B shows that this is not entirely correct. Global crystallisation is indeed prevented, but the peak temperature is 88 K (or \(\sim 5\) K above R09_001B) and the amorphous ice abundance at the very core is down to 88 per cent of the initial value. Heat dissipation by conduction is slower for the larger body, which means that the interior of the body is kept above a given temperature for a longer time. The degree of crystallisation is not only given by the temperature reached, but also by the duration by which that temperature is maintained. This suggests that it becomes progressively more difficult for segregation to prevent crystallisation.
This is further illustrated by the \(n=9\to 8\) collision (destruction of a \(D_{\rm p}=32.0\) km parent and birth of a \(D_{\rm d}=25.4\) km daughter). With the waste heat increased by another factor 1.33, one could naively expect that \(1.33\times 3\approx 4\) per cent CO in CO\({}_{2}\) would be sufficient to prevent widespread crystallisation. However, Table 6 shows that this is not the case. Model R09_003A experiences almost complete crystallisation and CO loss. With 75 per cent of the CO\({}_{2}:\) CO being segregated at the time of collision (Table 5), it would imply that the initial CO abundance must exceed \(\sim 16\) per cent in order to prevent crystallisation. With the largest mixing ratio of CO\({}_{2}:\) CO \(=5:1\) seen in laboratory experiments (Simon et al., 2019), it implies a CO\({}_{2}\) abundance of at least \(\sim 80\) per cent relative water. That is too high compared to the CO\({}_{2}\)/H\({}_{2}\)O \(=0.32\pm 0.02\) ratio observed in low-mass protostars (Pontoppidan et al., 2008), suggesting that there is not enough CO to prevent crystallisation in this type of collisions. In conclusion, a collisional cascade at 15 au is only allowed for \(D_{\star}<16\) km bodies if amorphous H\({}_{2}\)O is the only CO host, but could include \(D_{\star}\leq 20\)-25.4 km bodies if sufficiently abundant CO\({}_{2}:\) CO is present as well.
The daughters in the range \(32\leq D\leq 50.8\) km form sufficiently hot to lose all amorphous water ice and associated CO, except for surface layers that are merely a few metres thick. The internal redistribution of CO\({}_{2}\) is substantial. At depths \(\leq 1.5\) km, the CO\({}_{2}\) abundance exceeds the initial values (and everywhere below, the abundance has decreased). The \(D_{\rm d}=32\) km and \(D_{\rm d}=40.3\) km bodies still have 6 and 2 per cent of the original CO\({}_{2}\) at their cores, respectively (and the abundances rise gradually towards the initial value at \(\sim 1.5\) km depth). However, the \(D_{\rm d}=50.8\) km body interior has been almost completely deprived of CO\({}_{2}\): less than 100 ppm of the original amount of the supervolatile is still present at \(>2\) km depth. The near-surface enhancement factors of CO\({}_{2}\) (going from the smallest to the largest \(D=32\)-50.8 km daughter) are 9, 13, and 24 times, respectively. These peaks are located at very shallow depths, ranging from 10 to 16 m. I note that this redistribution of CO\({}_{2}\) does not lead to any _net_ sublimation, and hence no net consumption of energy; it is therefore not capable of acting as a heat sink, or of preventing the amorphous water ice from crystallising. However, the upward diffusion of CO\({}_{2}\) plays an important role for the transport of heat toward the surface, through its advection. I also note that, if the \(D=50.8\) km body were to disintegrate in a second catastrophic collision, most daughter rubble piles would form from its CO\({}_{2}\)-free interior. Such daughters, and all smaller bodies down to comet size,
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & \(n\) & \(D_{\rm d}\) & H\({}_{2}\)O : CO & CO\({}_{2}:\) CO & \(T_{\rm max}\) & \(t_{\rm cool}\) & End tot. a–H\({}_{2}\)O & End core a–H\({}_{2}\)O & End CO\({}_{2}:\) CO & End core CO\({}_{2}\) \\ & & [km] & [Per cent] & [Per cent] & [K] & [Myr] & [Frac] & [Frac] & [Frac] & [Frac] \\ \hline R09\_006A & 11 & 50.8 & 2 & 2 & 145.3 & 12.2 & 0.001 & 0.000 & 0.000 & 0.000 \\ R09\_005A & 10 & 40.3 & 2 & 2 & 139.5 & 7.15 & 0.001 & 0.000 & 0.000 & 0.016 \\ R09\_004A & 9 & 32.0 & 2 & 2 & 135.2 & 4.56 & 0.002 & 0.000 & 0.000 & 0.064 \\ R09\_003A & 8 & 25.4 & 2 & 4 & 128.2 & 2.77 & 0.036 & 0.000 & 0.003 & 0.297 \\ R09\_002A & 7 & 20.2 & 2 & 2 & 127.8 & 1.83 & 0.043 & 0.000 & 0.004 & 0.305 \\ R09\_002B & 7 & 20.2 & 2 & 3 & 88.2 & 0.28 & 0.990 & 0.881 & 0.007 & 1.000 \\ R09\_001C & 6 & 16.0 & 2 & 0 & 126.5 & 1.26 & 0.038 & 0.000 & – & 0.320 \\ R09\_001B & 6 & 16.0 & 2 & 2 & 83.5 & 0.13 & 0.999 & 0.997 & 0.007 & 0.997 \\ \hline \end{tabular}
\end{table}
Table 6: Zone #1 at 15–20 au: properties of daughter nuclei born in the aftermath of a catastrophic collision. All bodies have \(\mu=4\), the water ice is initially amorphous, the CO\({}_{2}\) molar abundance relative water is 5 per cent, and radiogenic heating is omitted. The ambient steady–state core temperature is near 63 K. Columns 1–3 are body identifiers (model tag, generation number \(n\), and daughter diameter \(D_{\rm d}\)). Columns 4–5 are initial conditions (abundance of CO trapped in amorphous water ice, and abundance of CO trapped in CO\({}_{2}\), in both cases molar relative water). Columns 6–7 are post–collision core peak temperature, and time needed for the core to cool back to 80 K when collision–related changes have ceased. Columns 8–11 are the surviving fractions of the initial phases and species: amorphous water ice (a–H\({}_{2}\)O) in total and in the core, unsegregated CO\({}_{2}:\) CO mixtures, and the CO\({}_{2}\) ice remaining at the core (essentially no CO\({}_{2}\) is lost to space).
formed as the result of a collisional cascade, would lack both hyper- and supervolatiles.
### Mid-disc thermal evolution
#### 4.2.1 Pre-collision evolution at \(r_{\rm h}=23\) au
The pre-collision models were run under the same conditions as the \(r_{\rm h}=15\) au models in Sec. 4.1.1 (\(\mu=4\), crystalline water ice, 5 per cent CO\({}_{2}\), 2 per cent condensed CO, 2 per cent CO in CO\({}_{2}\), and solar nebula clearing at \(t_{\rm c}=3\) Myr), except that the heliocentric distance was set to \(r_{\rm h}=23\) au. The loss time scales of condensed CO are reported in Table 7, and range from 0.4 Myr for the \(D_{\rm p}=20.2\) km parent, to 3.5 Myr for the \(D_{\rm p}=64.0\) km parent. Compared to the conditions at \(r_{\rm h}=15\) au (Table 5), these time scales are 43-51 per cent longer, while the solar flux is 57 per cent lower. However, the catastrophic collisions are not expected to occur until \(23\leq t_{\rm 10}\leq 44\) Myr, which means that there will be no condensed CO remaining at the time of collisions, for reasonable CO abundances.
Because of the larger heliocentric distance, the temperature does not become high enough to cause spontaneous CO\({}_{2}\) : CO segregation. Therefore, CO\({}_{2}\) is a stable and viable candidate for hypervolatile storage at \(r_{\rm h}\geq 23\) au.
#### 4.2.2 Post-collision evolution at \(r_{\rm h}=23\) au
For the Zone #2 post-collision simulations, I assume crystalline H\({}_{2}\)O, nominally with \(\mu=4\) (though two tests with \(\mu=1\) were performed), a 5 per cent abundance of CO\({}_{2}\), and a 2 per cent abundance of CO that initially is trapped fully within CO\({}_{2}\). The outcomes of these simulations are summarised in Table 8.
The \(n=7\to 6\) and \(n=8\to 7\) collisions that form \(D_{\rm d}=16\)-\(20.2\) km daughters are not sufficiently energetic to segregate all CO\({}_{2}\) : CO. After cooling down, these bodies retain 40-90 per cent of their original content of CO. However, the \(n=9\to 8\) and \(n=10\to 9\) collisions that form \(D_{\rm d}=25.4\)-\(32\) km daughters result in severe segregation, where merely 2-6 per cent of the CO\({}_{2}\) : CO survives. Based on these simulations, the largest parent body in a mid-disc collisional cascade would have \(D_{\star}\approx 25\) km.
None of the \(D\leq 40.3\) km daughters suffer any mobilisation or internal redistribution of CO\({}_{2}\) ice. However, the first signs of dislocation of CO\({}_{2}\) from the core to more shallow depths are seen in the \(n=12\to 11\) collision that formed the \(D=50.8\) km body.
These results apply when \(\mu=4\). However, some comet nuclei may be more ice-rich (e. g., the dust/water-ice mass ratio may be as low as \(\mu\approx 1\) for Comet 67P according to Davidsson et al. 2022a). Two models (R06_006B and R06_008B) were run to test the sensitivity to the \(\mu\)-value. As mentioned in section 2, a lowered \(\mu\) results in: 1) a larger specific heat capacity \(c\), thus a smaller temperature change \(\Delta T\approx\Delta E/c\) for a given energy change \(\Delta E\); 2) a larger bulk density of hypervolatiles for a given CO/H\({}_{2}\)O ratio due to a higher concentration of water ice. The first effect is rather modest: Table 8 shows that \(D_{\rm d}=20.2\) km and \(32\) km daughters become 9 K and 13 K cooler, respectively, when \(\mu=1\) instead of \(\mu=4\). The second effect is more prominent because there is more CO to get rid of (i. e., more latent energy is required to evacuate all CO). Additionally, the segregation process runs slower at lower temperature. For \(D_{\rm d}=20.2\) km the amount of surviving CO\({}_{2}\) : CO increases from 40 to 90 per cent. For \(D_{\rm d}=32\) km the number goes up from 2 to 12 per cent (perhaps qualifying it as a comet ancestor). Combined, these effects would adjust the tentative limiting diameter in a collisional cascade from \(D_{\star}\approx 25\) km to \(D_{\star}\approx 32\) km. Thus, a drastic reduction of \(\mu\) means that the \(D_{\star}\)-estimate increases by at most one size class (on the currently considered grid).
### Outer disc thermal evolution
#### 4.3.1 Pre-collision evolution at \(r_{\rm h}=30\) au
The pre-collision models at \(r_{\rm h}=30\) au were run under the same conditions as previously (sections 4.1.1 and 4.2.1), i. e., \(\mu=4\), crystalline water ice, 5 per cent CO\({}_{2}\), 2 per cent condensed CO, 2 per cent CO in CO\({}_{2}\), and solar nebula clearing at \(t_{\rm c}=3\) Myr. The loss time scales of condensed CO are reported in Table 9, and grow from 0.6 Myr for the \(D_{\rm p}=20.2\) km parent, to 6.0 Myr for the \(D_{\rm p}=64.0\) km parent. Compared to the conditions at \(r_{\rm h}=15\) au (Table 5), these time scales are 2.1-2.6 times longer. With catastrophic collisions statistically expected to take place at \(39\leq t_{\rm 10}\leq 71\) Myr (see Table 4), there will be no condensed CO remaining at that time. There is no illumination-driven CO\({}_{2}\) : CO segregation at these distances (at least not when the clearing time is as late as \(t_{\rm c}=3\) Myr).
#### 4.3.2 Post-collision evolution at \(r_{\rm h}=30\) au
For the Zone #3 post-collision simulations, I assume amorphous H\({}_{2}\)O, a 5 per cent abundance of CO\({}_{2}\), and a 4 per cent abundance of CO, divided equally between CO\({}_{2}\) and H\({}_{2}\)O (some models only had 2 per cent CO in H\({}_{2}\)O).
Because of the stability of CO\({}_{2}\) : CO mixtures at these heliocentric distances, the entire initial abundance of CO-laden CO\({}_{2}\) is available to absorb waste heat, thereby shielding potential CO deposits in the even more resilient amorphous water ice. The outcomes of these simulations are summarised in Table 10.
The capability of \(D_{\rm d}=16\)-\(25.4\) km daughters to retain CO\({}_{2}\) : CO increases with respect to \(r_{\rm h}=23\) au because of the lower ambient temperature at \(r_{\rm h}=30\) au, pushing the surviving fraction from \(\geq 38\) to \(\geq 61\) per cent. The \(D_{\rm d}=32\) km daughter experienced full segregation. A test showed that potential CO deposits within amorphous H\({}_{2}\)O would be fully retained in that type of collision, even in the case of having no CO\({}_{2}\) : CO heat sink.
For a \(D_{\rm d}=40.3\) km daughter, having 2 per cent CO in the CO\({}_{2}\) ice also protected against crystallisation. But without such protection, substantial crystallisation was obtained, though 15 per cent amorphous water ice survived in the cooler crust. If such a nucleus disrupted, its crystalline and (potentially CO-laden) amorphous ices would likely mix in the resulting daughter nuclei, constituting a small but not negligible reservoir of CO. The largest considered daughter, with \(D_{\rm d}=50.8\) km experienced \(\sim 90\) per cent crystallisation even
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{\rm CO}\) [Myr] & \(f_{\rm rem}\) \\ \hline
12 & 64.0 & 6.450 & 1.000 \\
11 & 50.8 & 5.283 & 1.000 \\
10 & 40.3 & 4.486 & 1.000 \\
9 & 32.0 & 3.963 & 1.000 \\
8 & 25.4 & 3.613 & 1.000 \\
7 & 20.2 & 3.391 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 7: Zone #2 at 20–25 au, prior to catastrophic collisions: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) loses its 2 per cent (molar, relative water) of condensed CO at \(t_{\rm CO}\) (note that simulations are initiated at \(t=3\) Myr). The fraction of the CO\({}_{2}\) : CO mixture still remaining at the time of collision \(t_{\rm 10}\) is denoted by \(f_{\rm rem}\).
in the presence of a CO\({}_{2}\) : CO heat sink, and would likely lose all amorphous ice and associated CO if no CO\({}_{2}\) : CO was present. It is also noteworthy that both \(D_{\rm d}=40.3\)-50.8 km nuclei experienced substantial internal CO\({}_{2}\) ice re-distribution if not protected by hosted CO, though the core CO\({}_{2}\) abundance remained just over 40 per cent.
Based on these simulations, the largest parent body in a collisional cascade in the outer primordial disc would have \(D_{\star}\approx 50\)-64 km, if containing CO-laden amorphous water ice, with the exact value depending on whether a CO\({}_{2}\) : CO buffer exists. However, if the sole carrier of CO is CO\({}_{2}\) ice, the largest potential parent body has \(D_{\star}\approx 30\) km.
The effective heat conductivities applied in this work (obtained by correcting laboratory-measured heat conductivities of compacted materials for porosity, using the method of Shoshany et al. 2002) result in a thermal inertia that may be high compared to comet material (see section 2). To test the importance of that assumption, model R11_005C considered a 20 times lower Hertz factor, which resulted in a thermal inertia ranging \(13\leq\Gamma\leq 49\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\) for the coldest and hottest regions of the model body (for most of the interior \(\Gamma\approx 30\) J m\({}^{-2}\) K\({}^{-1}\) s\({}^{-1/2}\)). This was done for the daughter of a \(D_{\rm p}=40.3\) km parent, to see if it could be pushed into crystallisation and complete CO loss, despite a 2 per cent CO\({}_{2}\) : CO buffer. Because impact heating is practically instantaneous, the effective heat conductivity has no effect on the temperature reached just after the collision. There is, however, a strong effect on the cooling time scale. Model R11_005C was heated to 79.1 K by the impact (identical to model R11_005A). Over the following 5.68 Myr the core temperature increased another 0.3 K because of (an extremely low level of) crystallisation. At that point the core temperature started to fall, because the outer regions of the body had cooled down sufficiently to allow for net energy dissipation from the core. The simulation was stopped 590 kyr later, when the temperature had dropped by 0.015 K. In model R11_005A (having the nominal heat conductivity) the same level of cooling was completed in 56 kyr. Model R11_005C spent \(\sim 100\) times longer at peak temperature than model R11_005A, yet the only consequence was a 0.2 per cent larger loss of amorphous water ice. For this reason, bodies that are pushed close to the edge of global crystallisation may tip over if the effective heat conductivity is sufficiently small. However, that would be relevant only in a rather small \(T_{\rm max}\) interval, separating bodies that are too cold to crystallise from the ones for which crystallisation is unavoidable, regardless of heat conductivity. It does not seem that the choice of heat conductivity has a major influence on the \(D_{\star}\) estimate.
## 5 Discussion
We know that small (\(D\la 10\) km) comet nuclei contain substantial amounts of CO and measurable quantities of other hypervolatiles. There is also mounting evidence that clean CO ice is evacuated on short (\(<1\) Myr) time-scales from primordial disc objects of that size, prior to relocation to the current distant and cooler comet reservoirs (see section 1). However, we do not know if the remaining CO is stored in CO\({}_{2}\) ice, or in the more resilient amorphous H\({}_{2}\)O, or in both. If a cascade of catastrophic collisions took place in the primordial disc, we know that such a cascade could not have involved targets heated to the point of substantial segregation and/or crystallisation, because then the last CO would be lost. Hypervolatiles that may survive (within CO\({}_{2}\) and/or amorphous H\({}_{2}\)O) near the rapidly cooling surfaces of the first daughter would be mixed into the interior during subsequent catastrophic collisions and eventually be lost as well. In this paper, I have attempted to determine the critical target diameter \(D_{\star}\) (the starting point of admissible collisional cascades) and how it changes with heliocentric distance. I first discuss nominal results, and then comment on their sensitivity to composition and physical parameters.
The most restrictive scenario is that amorphous water ice does not exist, and that all hypervolatiles therefore necessarily are locked within CO\({}_{2}\) ice until segregation takes place. According to nimbus
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Model & \(n\) & \(D_{\rm d}\) & H\({}_{2}\)O : CO & CO\({}_{2}\) : CO & \(T_{\rm max}\) & \(t_{\rm cool}\) & End tot. a–H\({}_{2}\)O & End core a–H\({}_{2}\)O & End CO\({}_{2}\) : CO & End core CO\({}_{2}\) \\ & & [km] & [Per cent] & [Per cent] & [K] & [Myr] & [Frac] & [Frac] & [Frac] & [Frac] \\ \hline R06\_010A & 11 & 50.8 & 0 & 2 & 104.1 & 1.77 & – & – & 0.005 & 0.994 \\ R06\_009A & 10 & 40.3 & 0 & 2 & 93.5 & 0.89 & – & – & 0.009 & 1.000 \\ R06\_008A & 9 & 32.0 & 0 & 2 & 88.5 & 0.48 & – & – & 0.016 & 1.000 \\ R06\_008B\({}^{*}\) & 9 & 32.0 & 0 & 2 & 75.0 & – & – & – & 0.117 & 1.000 \\ R06\_007A & 8 & 25.4 & 0 & 2 & 80.8 & 0.11 & – & – & 0.057 & 1.000 \\ R06\_006A & 7 & 20.2 & 0 & 2 & 77.6 & – & – & – & 0.378 & 1.000 \\ R06\_006B\({}^{*}\) & 7 & 20.2 & 0 & 2 & 68.5 & – & – & – & 0.906 & 1.000 \\ R06\_005A & 6 & 16.0 & 0 & 2 & 72.5 & – & – & – & 0.907 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 8: Zone #2 at 20–25 au, post–collision: properties of daughter nuclei born in the aftermath of a catastrophic collision. All bodies have \(\mu=4\) (except for models R06_006B and R06_008B, highlighted with asterisks, that considered \(\mu=1\)), the water ice is crystalline, the CO\({}_{2}\) molar abundance relative water is 5 per cent, and radiogenic heating is omitted. The ambient steady–state core temperature is near 52 K. Columns 1–3 are body identifiers (model tag, generation number \(n\), and daughter diameter \(D_{\rm d}\)). Columns 4–5 are initial conditions (abundance of CO trapped in amorphous water ice, and abundance of CO trapped in CO\({}_{2}\), in both cases molar relative water). Columns 6–7 are post–collision core peak temperature, and time needed for the core to cool back to 80 K when collision–related changes have ceased. Columns 8–11 are the surviving fractions of the initial phases and species: amorphous water ice (a–H\({}_{2}\)O) in total and in the core, unsegregated CO\({}_{2}\) : CO mixtures, and the CO\({}_{2}\) ice remaining at the core.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(n\) & \(D_{\rm p}\) [km] & \(t_{\rm CO}\) [Myr] & \(f_{\rm rem}\) \\ \hline
12 & 64.0 & 9.025 & 1.000 \\
11 & 50.8 & 6.649 & 1.000 \\
10 & 40.3 & 5.282 & 1.000 \\
9 & 32.0 & 4.452 & 1.000 \\
8 & 25.4 & 3.913 & 1.000 \\
7 & 20.2 & 3.578 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 9: Zone #3 at 25–30 au, prior to catastrophic collisions: the \(n^{\rm th}\) generation parent of 67P with diameter \(D_{\rm p}\) loses its 2 per cent (molar, relative water) of condensed CO at \(t_{\rm CO}\) (note that simulations are initiated at \(t=3\) Myr). The fraction of the CO\({}_{2}\) : CO mixture still remaining at the time of collision \(t_{\rm 10}\) is denoted by \(f_{\rm rem}\).
simulations, segregation is spontaneous at \(r_{\rm h}=15\) au, and CO\({}_{2}\) : CO mixtures only survive near the cold poles at \(<1\) per cent levels compared to initial abundances. With polar axis orientations likely changing over time, especially if there is supposed to be some collisional activity, that last CO would soon be gone as well. However, CO\({}_{2}\) : CO is completely stable at \(r_{\rm h}=23\) au, suggesting that there would be a transition zone somewhere between \(15\la r_{\rm h}\la 23\) au, where the CO abundance increases from zero to high values. Unless we accept the idea that some highly active comets are CO-free (yet to be identified observationally), it means that the inner edge of the primordial disc should have been located farther from the Sun than 15 au, beyond the 'segregation line'. If so, such a relatively distant primordial disc might favour a late giant planet dynamical instability (see section 1). In that scenario, relatively CO-poor comets (such as 8P/Tuttle, 73P/Schwassmann-Wachmann 3, 103P/Hartley 2, C/1999 S4 LINEAR, and C/2000 WM1 LINEAR, with 0.2-0.7 per cent CO relative water; A'Hearn et al. 2012) may have originated close to that inner edge. Farther from the Sun, more CO-rich bodies would form (such as 22P/Kopff, 88P/Howell, and C/2008 Q3 Garradd, with \(\geq 20\) per cent CO relative water; A'Hearn et al. 2012). The survival of such deposits would place constraints on the largest possible parent body that could participate in a collisional cascade: \(D_{\star}\la 25\) km at \(r_{\rm h}=23\) au, growing to \(D_{\star}\la 32\) km at \(r_{\rm h}=30\) au.
A second possibility is that CO-laden CO\({}_{2}\) does not exist, which means that the CO carrier necessarily has to be amorphous water ice. Such ice has long-term stability at \(r_{\rm h}=15\) au (when only considering protosolar heating). In the inner regions of the primordial disc, the largest acceptable parent in a collisional cascade has \(D_{\star}\la 16\) km. At \(r_{\rm h}=30\) au that threshold has increased to \(D_{\star}\la 58\) km. The crystallisation of amorphous water ice releases additional heat. Combined with collisional heating, it mobilises CO\({}_{2}\) that diffuses from the core closer to the surface, where it recondenses. In the current work, CO\({}_{2}\) therefore does not escape to space and does not consume energy through net sublimation, though it participates in cooling the bodies through its advection. The current paper ignores the presence of frequent cratering, shape-changing, and sub-catastrophic collisions that may cause temporary (or steady-state) heating of the near-surface layers. Such heating may prevent CO\({}_{2}\) recondensation, and allow for loss to space. Such collision-driven CO\({}_{2}\) loss will be the topic of a forthcoming paper.
If CO is carried both by CO\({}_{2}\) and by amorphous water ice, the former acts as a heat sink buffer that may protect the latter. This pushes the upper size of parents commonly participating in collisional cascades to \(D_{\star}\la 20\)-25 km at \(r_{\rm h}=15\) au, and to \(D_{\star}\la 50\)-64 km at \(r_{\rm h}=30\) au.
At \(r_{\rm h}=15\) au (assuming the primordial disc extended that close to the protosun), it is therefore necessary to limit substantial collisional destruction of bodies to those with \(D_{\star}\leq 16\)-25 km, depending on composition. For the primordial disc population size used in section 3.2, this means that the lifetime of the inner primordial disc cannot have been much more than 7-9 Myr. At \(r_{\rm h}=30\) au, the largest parents would have had \(D_{\star}\la 32\)-64 km, again depending on composition. That would be possible if the lifetime of the primordial disc did not exceed 50-70 Myr at such distances. If the lifetime in reality was longer, the population size needs to be proportionately smaller. For example, a \(\sim 450\) Myr lifetime, consistent with the Late Heavy Bombardment, would still be possible as long as the population size is reduced by a factor 6-60 relative to current estimates (Brasser & Morbidelli 2013; Morbidelli & Rickman 2015; Rickman et al. 2015).
It should also be pointed out that if the primordial disc lifetime was \(\sim 10\) Myr or shorter, at most 10 per cent of the \(D_{\rm p}=5\) km (\(n=1\)) nuclei would be collisionally disrupted in the outer half (23-30 au) of the primordial disc (Tables 3 and 4). If that is the case, a majority of the current 67P-type nuclei would be primordially formed bodies, and not collisional fragments.
According to observations, the cumulative size-frequency distribution of TNOs has a turnover from a relatively steep slope at \(D\ga 100\) km, to a more shallow slope at smaller sizes (Bernstein et al. 2004; Fraser et al. 2010, 2014). Some (e. g., Fraser 2009) have interpreted the break as a result of collisional processing, so that \(D\ga 100\) km bodies are mostly primordial, while \(D\la 100\) km bodies are primarily collisional fragments and rubble piles. Others (e. g., Campo Bagatin & Benavidez 2012) have suggested that the kink is primordial and related to the original formation mechanism of the planetesimals. For example, gravitational collapse of pebble-swarms formed by streaming instabilities leads to a cumulative size-frequency distribution with a turnover near \(D\approx 100\) km under some circumstances (Li et al. 2019) that seem to be regulated by turbulent diffusion (Klahr & Schreiber 2021). The current work shows that even for the most resilient bodies (CO is trapped in amorphous H\({}_{2}\)O and a CO\({}_{2}\) : CO heat sink is available), collisional depletion of bodies larger than \(D_{\star}=25\)-\(64\) km (depending on distance, \(15\leq r_{\rm h}\leq 30\) au) is excluded.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & \(n\) & \(D_{\rm d}\) & H\({}_{2}\)O : CO & CO\({}_{2}\) : CO & \(T_{\rm max}\) & \(t_{\rm cool}\) & End tot. a–H\({}_{2}\)O & End core a–H\({}_{2}\)O & End CO\({}_{2}\) : CO & End core CO\({}_{2}\) \\ & & [km] & [Per cent] & [Per cent] & [K] & [Myr] & [Frac] & [Frac] & [Frac] & [Frac] \\ \hline R11\_006A & 11 & 50.8 & 2 & 2 & 126.5 & 7.67 & 0.086 & 0.000 & 0.007 & 0.421 \\ R11\_005A & 10 & 40.3 & 2 & 2 & 79.1 & – & 1.000 & 1.000 & 0.002 & 1.000 \\ R11\_005B & 10 & 40.3 & 2 & 0 & 127.0 & 5.29 & 0.146 & 0.000 & – & 0.411 \\ R11\_005C\({}^{*}\) & 10 & 40.3 & 2 & 2 & 79.4 & – & 0.998 & 0.004 & 1.000 & 1.000 \\ R11\_004A & 9 & 32.0 & 2 & 2 & 74.0 & – & 1.000 & 1.000 & 0.010 & 1.000 \\ R11\_004B & 9 & 32.0 & 2 & 0 & 77.9 & – & 1.000 & 1.000 & – & 1.000 \\ R11\_003A & 8 & 25.4 & 2 & 2 & 70.8 & – & 1.000 & 1.000 & 0.607 & 1.000 \\ R11\_002A & 7 & 20.2 & 2 & 2 & 65.9 & – & 1.000 & 1.000 & 0.962 & 1.000 \\ R11\_001A & 6 & 16.0 & 2 & 2 & 64.4 & – & 1.000 & 1.000 & 1.000 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 10: Zone #3 at 25–30 au: properties of daughter nuclei born in the aftermath of a catastrophic collision. All bodies have \(\mu=4\), the water ice is initially amorphous, the CO\({}_{2}\) molar abundance relative water is 5 per cent, and radiogenic heating is omitted. Model R11_005C (highlighted with an asterisk) was run with a 20 times lower heat conductivity than other models. The ambient steady–state core temperature is near 47 K. Columns 1–3 are body identifiers (model tag, generation number \(n\), and daughter diameter \(D_{\rm d}\)). Columns 4–5 are initial conditions (abundance of CO trapped in amorphous water ice, and abundance of CO trapped in CO\({}_{2}\), in both cases molar relative water). Columns 6–7 are post-collision core peak temperature, and time needed for the core to cool back to 80 K when collision–related changes have ceased. Columns 8–11 are the surviving fractions of the initial phases and species: amorphous water ice (a–H\({}_{2}\)O) in total and in the core, unsegregated CO\({}_{2}\) : CO mixtures, and the CO\({}_{2}\) ice remaining at the core.
If the only CO-bearer is CO\({}_{2}\), those limits are shifted to \(D_{\star}=25\)-\(32\) km. The thermophysical analysis therefore supports a primordial origin of the observed turnover in the TNO size-frequency distribution.
As mentioned in section 2, long-lived radiogenic heating has been ignored, the CO and CO\({}_{2}\) abundances may have been underestimated, while \(\mu\) and \(\kappa_{\rm s}(T,\,\psi)\) may be lower in some objects than considered here. In the following, the consequences for the \(D_{\star}\) estimates are discussed. The error in \(D_{\star}\) introduced by ignoring radiogenic heating can be estimated as follows. The specific energy released in comet material by long-lived radionuclides (Table 12 in Davidsson, 2021) integrated during \(t_{\rm cool}\) increases from \(\sim 10^{3}\) J kg\({}^{-1}\) t\({}_{\rm cool}^{-1}\) at small sizes to \(\sim 10^{4}\) J kg\({}^{-1}\) t\({}_{\rm cool}^{-1}\) at large ones. This roughly equals the difference in \(Q_{\rm D}^{\star}\) when going from one size class to another (Tables 2-4). This suggests that \(D_{\star}\) would be nudged at most one step downwards (in the list of discrete \(D_{\rm p}\)-values considered here), if long-term radioactivity was accounted for.
The CO abundance may be as high as 0.13-0.26 relative to water (Pontoppidan et al., 2008). This does not necessarily mean that the assumed 2 per cent abundance of CO trapped in CO\({}_{2}\) or H\({}_{2}\)O is an underestimate1, because we do not know the fraction of all CO that initially was in the form of pure ice. However, laboratory experiments indicate a maximum mixing ratio of CO\({}_{2}:\) CO \(=5:1\)(Simon et al., 2019), suggesting the CO\({}_{2}\) could support at most \(\sim 6\) per cent CO (molar, relative to water). Other experiments show a maximum mixing ratio of H\({}_{2}\)O : CO \(=100:15\)(Bar-Nun et al., 2007), suggesting the trapping efficiency of CO in H\({}_{2}\)O is nearly twice as high as in CO\({}_{2}\). If both substances trapped CO at maximum capacity, the initial abundance of pure CO would have been at most 5 per cent (section 4 shows that such CO would have been lost before the catastrophic disruption). If the abundance of trapped CO indeed is underestimated (by factors \(\la 3\) in CO\({}_{2}\) and \(\la 7\) in H\({}_{2}\)O) it has two consequences: 1) the CO\({}_{2}\) : CO segregation heat sink that delays or prevents CO-release from amorphous H\({}_{2}\)O may have been more efficient than currently assumed; 2) the energy release during crystallisation may have been lower than currently assumed, because a larger fraction of the crystallisation energy is consumed by CO during its release (absorbing 41 instead of 5 per cent of that energy). Firstly, the majority of the collisional specific energy is consumed by raising the temperature from the ambient level to the point of segregation onset. Increasing the amount of CO by a factor \(\sim 3\) therefore does not have a drastic effect on the capability of the body to absorb collisional energy - that capability increases by just \(\sim 27\) per cent. That is similar to the increase in \(Q_{\rm D}^{\star}\) when going from one size class to another, which typically is \(\sim 33\) per cent. This suggests that \(D_{\star}\) would be nudged at most one step upwards, if a higher CO-abundance was accounted for. A reduction of \(\mu=4\) to \(\mu=1\) increases the total mass of CO by a factor 2.5 (for a fixed CO/H\({}_{2}\)O ratio), thus having a similar effect.
Footnote 1: At \(r_{\rm h}=15\) au there is a substantial loss of CO\({}_{2}:\) CO due to protosolar-driven segregation between the time of formation and \(t_{10}\). I emphasise that the usage of 2 per cent CO in CO\({}_{2}\) in the post–collision simulations represents what remains at \(t_{10}\), i. e., the initial abundance would have been higher. However, at 23 au and 30 au, where CO\({}_{2}:\) CO nominally is stable, the 2 per cent indeed represents an assumed initial condition.
Secondly, the reduction of the effective crystallisation energy does not prevent the crystallisation process itself, though it may slow it down. However, because the time period near peak temperature is long, crystallisation still has time to complete, and \(D_{\star}\) remains nearly the same. The main effect is that the post-crystallisation peak temperature is lower. That does not affect the CO loss, but may slightly reduce the level of internal CO\({}_{2}\) displacement. Because CO\({}_{2}\) does not cause a net energy consumption, the degree of CO\({}_{2}\) displacement and the initial \(\nu_{6}\)-value have no effect on \(D_{\star}\). Finally, changes to the heat conductivity do not affect the temperature reached right after the impact, and hence do not determine whether or not the body enters the modes of segregation or crystallisation. The nominal simulations used a relatively high heat conductivity (short cooling time). Bodies that were disqualified based on their large loss of CO would be even less suitable comet ancestors if cooling times were extended by lowering the nominal theoretical thermal inertia towards measured values. The question is then if a lowered thermal inertia would significantly push the \(D_{\star}\) estimate to smaller sizes. For the body sizes and heat conductivities considered here, \(t_{\rm cool}\) is a few times \(10\) kyr in cases where crystallisation is not triggered, but could be extended to a few times \(1\) Myr if substantially lowering the heat conductivity (see section 4.3.2). According to Schmitt et al. (1989) a body crystallises within \(\sim 50\) kyr if reaching \(T_{\rm max}\approx 89\) K. If the time available at high temperature is extended to \(\sim 5\) Myr, full crystallisation can be achieved if \(T_{\rm max}\approx 85\) K. A significant reduction of heat conductivity therefore only extends the realm of significant CO loss from bodies heated to \(\sim 89\) K in collisions, to somewhat smaller bodies that are heated to \(\sim 85\) K. In summary, the current \(D_{\star}\) estimates are expected to be pushed at most one step down (long-term radioactivity) or one step up (ice abundances), on the considered grid of body diameters, if other model parameters had been applied. For objects that simultaneously have elevated hypervolatile abundances and experience long-lived radiogenic heating, the effects would partially cancel.
The current work has focused on the thermophysical evolution of bodies taking place after collisional disruption and gravitational re-accumulation. Simple methods have been used to estimate the amount of energy released during a catastrophic collision. It is important to compare those estimates with more rigorous ones from continuum mechanics simulations. In recent years, the capabilities to model continuum mechanics numerically have evolved to the point where the disruption of highly porous and icy planetesimals can be studied in a sophisticated manner. The parameter space is being explored gradually, and studies have thus far not been devoted primarily to the catastrophic disruption of intermediate-sized (\(20\leq D\leq 64\) km) bodies considered here. Notably, investigations have focused on sub-catastrophic (Jutzi et al., 2017; Jutzi and Benz, 2017) or catastrophic (Schwartz et al., 2018) disruption of small (\(D\leq 4\) km) targets, or alternatively, the cratering, sub-catastrophic, or super-catastrophic impacts onto large (\(40\leq D\leq 400\) km) bodies (Jutzi and Michel, 2020; Golabek and Jutzi, 2021). This makes it difficult to directly compare the levels of energy release in collision codes with those applied in the current work.
The published cases most similar to the current ones are probably the hyper-catastrophic disruption (at \(2.76Q_{\rm D}^{\star}\)) of a \(D=50\) km target discussed by Jutzi and Michel (2020), and the sub-catastrophic disruption (at \(0.44Q_{\rm D}^{\star}\)) of a \(D=40\) km target discussed by Golabek and Jutzi (2021). They present the results in terms of mass fractions (of the material bound to the daughter, and for the unbound ejecta) that reaches a given temperature. Jutzi and Michel (2020) focus on conditions in which water may sublimate (taken as a temperature increase \(\Delta T\geq 80\) K). They found that \(0.01\) per cent of the bound material, and \(40\) per cent of the unbound ejecta, would reach such temperatures. However, the major concern in the current work is not water but the loss of the hypervolatile CO, which happens at \(\Delta T\approx 10\) K if it primarily is stored within CO\({}_{2}\), or at \(\Delta T\approx 35\) K if the main host is amorphous water ice. For the sub-catastrophic collision onto a \(D=40\) km target, Golabek and Jutzi (2021) find \(\Delta T\approx 10\) K for 5 per cent of the bound material, and for 50 per cent of the unbound ejecta.
Note, however, that the impactor would need \(\sim 2.3\) times more energy to cause a catastrophic disruption, leading to additional heating. This can be compared to model R06_008A in Table 8, considering the daughter formed when disrupting a \(D_{\rm P}=40.3\) km parent. Here, a global \(\Delta T=37\) K was obtained. Taken at face value, this heating is more substantial than achieved in the simulation by Golabek & Jutzi (2021).
First, I caution that the two cases may not be directly comparable. Although specific impact energies are similar, Golabek & Jutzi (2021) consider a \(d_{\rm proj}=7.2\) km projectile, hitting at velocity \(3\) km s\({}^{-1}\) with a \(45^{\circ}\) impact angle. Model R06_008A instead considers a \(d_{\rm proj}=19.8\) km projectile, hitting head-on at \(0.44\) km s\({}^{-1}\). The first case has projectile-to-target mass ratio \(m_{\rm d}/M_{\rm P}=0.006\), but model R06_008A has \(m_{\rm d}/M_{\rm P}=0.12\). Davison et al. (2010) studied the level of heating in 50 per cent porosity bodies when the total mass (corresponding to a \(12.6\) km diameter body) and impact velocity (\(5\) km s\({}^{-1}\)) were held constant, for different \(m_{\rm d}/M_{\rm P}\) values. They were interested in the mass fraction reaching the dunite solidus (initiation of rock melting at \(1373\) K; Davison 2010). They found insignificant melting at \(m_{\rm d}/M_{\rm P}=0.001\) but 15 per cent rock melting at \(m_{\rm d}/M_{\rm P}=0.1\). This is not only due to differences in specific collision energies: a small and fast projectile causes a less wide-spread heating than a large and slow projectile that carries the same kinetic energy. Davison et al. (2010) explain the reason: heating of the target stops when the release wave catches up with the shock wave. Upon impact, shock waves travel both into the target and into the projectile. When the shock wave in the projectile reaches its antipodal collision point, this triggers a release wave that goes back through the projectile, enters the target, and catches up with the shock wave of the target. The smaller the projectile, the shorter the time needed for stopping the shock wave, and the smaller the fraction of the target that is compacted and heated. It is therefore expected that a R06_008A-type collision indeed would lead to substantially more heating than the seemingly similar case studied by Golabek & Jutzi (2021).
Second, the model parameters in most models (Jutzi & Asphaug, 2015; Jutzi et al., 2017; Jutzi & Benz, 2017; Schwartz et al., 2018; Jutzi & Michel, 2020; Golabek & Jutzi, 2021) are not necessarily appropriate for icy planetesimals (judging from laboratory measurements of analogue materials), and currently seem biased towards very low levels of heating. These problems are discussed further in Appendix A. This is not a criticism of these works, but a recognition of the fact that a relatively small part of the parameter space in collision modelling has been explored thus far. It would be unfortunate if the scientific community drew general conclusions regarding the level of heating experienced by colliding, porous, and icy planetesimals before a larger range of options has been considered. That is because of the consequences that premature conclusions may have on the interpretation of spacecraft observations of comets (e. g., by _Rosetta_ and eventually by _Comet Interceptor_), and the implications for the field of cometary science. Understanding if comets are collisional products or primordial bodies is crucial: it determines whether observed properties inform about original formation or subsequent processing, and has implications for the primordial disc mass and lifetime. A critical step is to determine whether heavy collisional processing is compatible with the observed presence of abundant hypervolatiles and supervolatiles in comet nuclei. Further studies of low-velocity collisions amongst similarly-sized planetesimals with high porosity, for a wider variety of parameters (see Appendix A), are urgently needed.
## 6 Conclusions
This paper models the thermophysical evolution of porous icy planetesimals in the \(16\leq D\leq 64\) km diameter range before and after catastrophic collisional disruption. The focus is on the survival of CO, as representative of all hypervolatiles (e. g., N\({}_{2}\), O\({}_{2}\), CH\({}_{4}\), C\({}_{2}\)H\({}_{6}\), and noble gases). They are assumed to be stored within CO\({}_{2}\) and/or amorphous H\({}_{2}\)O, which means that a potential collisional cascade must avoid segregation and/or crystallisation, otherwise hypervolatiles would not have been abundant in comet nuclei. That places constraints on the starting point of the cascade (i. e., the diameter \(D_{\star}\) of the largest body being frequently disrupted). This, in turn, places constraints on the lifetime of the primordial disc and/or its population size. The main conclusions based on the nominal models are summarised below.
1. Bodies with \(D\geq 64\) km cannot avoid crystallisation due to long-lived radiogenic heating, even when the dust-to-water ice mass ratio is as low as \(\mu=1\). Because the most resilient CO host is lost, such bodies cannot have participated in a collisional cascade; it would produce hypervolatile-free comet nuclei.
2. At \(r_{\rm h}=15\) au it takes \(0.3\)-\(2.3\) Myr for \(D=20.2\)-\(64\) km bodies to lose 2 per cent pure CO ice (relative water, when \(\mu=4\)). If the CO abundance is increased, the loss time-scale is _shorter_ than expected from linear extrapolation, because a longer time means heating to higher core temperatures, which accelerates vapour evacuation. The corresponding loss times at 23 au and 30 au are \(0.4\)-\(3.5\) Myr and \(0.6\)-\(6.0\) Myr, respectively.
3. CO\({}_{2}\) : CO mixtures are not stable at \(r_{\rm h}=15\) au, at least not for the currently applied activation energy. However, 17-32 per cent is expected to remain at \(t_{10}\) (the point in time when 10 per cent of the bodies of a given size have been collisionally disrupted), helping to protect CO-laden amorphous H\({}_{2}\)O. The CO\({}_{2}\) : CO mixtures are stable at \(r_{\rm h}\geq 23\) au if the protosun is the only heat source.
4. The most resilient nuclei would store CO in amorphous water ice, and additionally have a heat sink in the form of CO\({}_{2}\) : CO mixtures. With such nuclei, a collisional cascade could start at \(D_{\star}\leq 20\)-\(25\) km in the inner (15 au) part of the primordial disc, and at \(D_{\star}\leq 50\)-\(64\) km in its outer (30 au) part.
5. If there is no CO\({}_{2}\) : CO heat sink, crystallisation of amorphous H\({}_{2}\)O and CO loss can only be avoided in collisional cascades starting at \(D_{\star}\leq 16\) km at 15 au and \(D_{\star}\leq 50\) km at 30 au.
6. If CO\({}_{2}\) is the sole carrier of hypervolatiles, such nuclei would only be stable at \(r_{\rm h}\geq 23\) au. A potential collisional cascade must have started at \(D_{\star}\leq 25\) km at 23 au and at \(D_{\star}\leq 32\) km at 30 au.
7. Nuclei that crystallise not only lose CO, but the CO\({}_{2}\) is redistributed from the core towards the surface. \(D_{\rm d}=32\)-\(50.8\) km daughters formed at \(r_{\rm h}=15\) au would have factor 9-24 CO\({}_{2}\) elevations in the top \(\sim 1.5\) km (peaking at depths 10-16 m). The partially or fully CO\({}_{2}\)-free cores are not suitable comet building-block material.
8. The \(D\simeq 100\) km break in size-frequency distribution slopes of TNO populations does not seem consistent with the starting-point of a collisional cascade, based on conclusions 1 and 4-6. Because \(D_{\star}\) does not change significantly with body composition and conductivity, this conclusion is not biased by the considered nominal model assumptions.
9. In order to prevent collisional cascades involving targets larger than the limits defined above, assuming nominal Nice-model population levels, the primordial disc lifetime could have been at most \(\sim 7\)-\(9\) Myr at 15 au, and at most \(\sim 50\)-\(70\) Myr at 30 au. Alternatively, if the primordial disc lifetime was 450 Myr (as required if invoking association with the Late Heavy Bombardment), the population levels need to be 6-60 times lower than currently assumed.
## Acknowledgements
This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The author acknowledges funding from NASA grant 106994 / 811073.02.33.02.90 awarded by the Emerging Worlds program.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
| primordial disc, collisional cascade, planetesimals, comet nuclei, Kuiper belt, scattered disc, Oort Cloud, hypervolatiles, heating, collisions, diameter of the largest bodies, primordial disc lifetime, population size, NIMBUS, thermophysical code, thermal evolution, CO, crystallisation, amorphous H2O, internal relocation, CO2 heat sink, critical diameters, disc population size, Late Heavy Bombardment, disc disruption |
2306.00204 | Toward Understanding Why Adam Converges Faster Than SGD for Transformers | While stochastic gradient descent (SGD) is still the most popular
optimization algorithm in deep learning, adaptive algorithms such as Adam have
established empirical advantages over SGD in some deep learning applications
such as training transformers. However, it remains an open question why Adam
converges significantly faster than SGD in these scenarios. In this paper, we
propose one explanation of why Adam converges faster than SGD using a new
concept, directional sharpness. We argue that the performance of optimization
algorithms is closely related to the directional sharpness of the update steps,
and show SGD has much worse directional sharpness compared to adaptive
algorithms. We further observe that only a small fraction of the coordinates
causes the bad sharpness and slow convergence of SGD, and propose to use
coordinate-wise clipping as a solution to SGD and other optimization
algorithms. We demonstrate the effect of coordinate-wise clipping on sharpness
reduction and speeding up the convergence of optimization algorithms under
various settings. We show that coordinate-wise clipping improves the local loss
reduction when only a small fraction of the coordinates has bad sharpness. We
conclude that the sharpness reduction effect of adaptive coordinate-wise
scaling is the reason for Adam's success in practice and suggest the use of
coordinate-wise clipping as a universal technique to speed up deep learning
optimization. | Yan Pan, Yuanzhi Li | 2023-05-31T21:49:44 | http://arxiv.org/abs/2306.00204v1 | # Toward Understanding Why Adam Converges Faster Than SGD for Transformers
###### Abstract
While stochastic gradient descent (SGD) is still the most popular optimization algorithm in deep learning, adaptive algorithms such as Adam have established empirical advantages over SGD in some deep learning applications such as training transformers. However, it remains an open question why Adam converges significantly faster than SGD in these scenarios. In this paper, we propose one explanation of why Adam converges faster than SGD using a new concept, _directional sharpness_. We argue that the performance of optimization algorithms is closely related to the directional sharpness of the update steps, and show SGD has much worse directional sharpness compared to adaptive algorithms. We further observe that only a small fraction of the coordinates causes the bad sharpness and slow convergence of SGD, and propose to use coordinate-wise clipping as a solution to SGD and other optimization algorithms. We demonstrate the effect of coordinate-wise clipping on sharpness reduction and speeding up the convergence of optimization algorithms under various settings. We show that coordinate-wise clipping improves the local loss reduction when only a small fraction of the coordinates has bad sharpness. We conclude that the sharpness reduction effect of adaptive coordinate-wise scaling is the reason for Adam's success in practice and suggest the use of coordinate-wise clipping as a universal technique to speed up deep learning optimization.
## 1 Introduction
Stochastic gradient descent (SGD) [42; 5] is one of the most widely used optimization algorithms for deep learning, due to its simplicity and efficiency on various large-scale neural networks. However, in some tasks, such as training transformers [47; 14], which are powerful models for natural language processing and other domains, SGD often performs poorly compared to adaptive variants of stochastic gradient methods. Adaptive algorithms, such as Adagrad [15], Adam [25], and AMSGrad [40], adjust the learning rate for each parameter based on the magnitude and history of the gradients, which can help them exploit the local geometry of the objective function and escape from saddle points or plateaus. While adaptive algorithms have shown empirical advantages over SGD in many applications [21; 51; 16], the theoretical understanding of their superior performance in these tasks is limited [51; 11]. The best known non-convex convergence rate for AMSGrad [40] only matches the best convergence rate of SGD but does not improve upon it [52; 20]. While pursuing a better general convergence rate for adaptive algorithms is a possible but challenging direction, a more realistic and relevant question is what makes Adam so effective and SGD so ineffective on certain architectures and tasks, such as transformers on language tasks. We aim to identify some properties of transformers that give rise to this phenomenon, and to find some quantities that can indicate the performance of different optimization algorithms in practice. Such insights could then be used to guide the selection and design of faster and more robust optimization algorithms for deep learning.
In this paper, we propose one possible explanation for why Adam converges faster than SGD in practice, especially for transformers. We begin by revisiting a classic simple example. Consider
minimizing the diagonal quadratic function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) given by \(f(x)=x^{\top}Ax\), where \(A_{11}=100\) and \(A_{ii}=1\) for all \(i>1\). The gradient is given by \(\nabla f(x)=(200x_{1},2x_{2},\dots,2x_{d})\) and the Hessian has spectral norm \(200\). If we run gradient descent, then by standard convex optimization analysis, we can choose a learning rate at most \(\frac{1}{100}\) for any initial point, which will result in slow convergence. However, if we run adaptive algorithms, signSGD, or simply clip the first coordinate, we can use a much larger learning rate and converge in a few steps. Although this example is much simpler than practical applications of adaptive algorithms, it illustrates the key idea that coordinate-wise scaling can help adaptive algorithms to adjust their step size on different coordinates and exploit the curvature of the function. We wonder if there are similar phenomena in real-world neural networks.
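As an illustration, the following minimal NumPy sketch (our own, not taken from any referenced implementation) contrasts gradient descent near its stability limit with signSGD on this quadratic; the dimension, initial point, and step-size schedule are arbitrary choices made for the demo.

```python
import numpy as np

d = 10
diag = np.ones(d)
diag[0] = 100.0                        # A_11 = 100, A_ii = 1 otherwise
f = lambda x: np.sum(diag * x**2)      # f(x) = x^T A x for diagonal A
grad = lambda x: 2.0 * diag * x

x0 = np.linspace(0.5, 1.5, d)

# Gradient descent: the Hessian 2A has spectral norm 200, so stability
# requires eta < 2/200 = 1/100; the flat coordinates then decay slowly.
x = x0.copy()
for _ in range(100):
    x -= 0.009 * grad(x)               # just below the 1/100 limit
print("GD after 100 steps:     f =", f(x))

# signSGD with a halving step size: the steep first coordinate no longer
# dictates the step size, and each coordinate is located by bisection.
x = x0.copy()
eta = 1.0
for _ in range(20):
    x -= eta * np.sign(grad(x))
    eta *= 0.5
print("signSGD after 20 steps: f =", f(x))
```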
Inspired by this example, we study the local geometry of transformers in Section 3. Instead of analyzing the global convergence and trajectory of optimization algorithms, we focus on the simpler question of finding a good update direction in a fixed local geometry. We decompose the goal of locally minimizing the objective function into two components: _gradient correlation_, which measures the alignment of the update direction with the negative gradient, and _directional sharpness_, which measures the curvature of the function along the update direction. We argue that the directional sharpness of the update direction is a more useful indicator of the performance of optimization algorithms, as high sharpness usually implies low performance. We observe empirically that the update directions of SGD have much higher directional sharpness compared to adaptive algorithms. By studying more algorithms, we observe that in general, algorithms with high directional sharpness converge much slower than adaptive algorithms, which typically have low directional sharpness. We also visualize the corresponding landscape along the update directions, and our results show that algorithms with low directional sharpness can generally achieve a better local loss reduction if optimal step sizes are chosen.
We investigate the cause of SGD's high directional sharpness and find that it is mainly due to the imbalanced distribution of the gradient across coordinates. We observe that only a small fraction of the coordinates account for most of SGD's directional sharpness and we infer that it is because of the positive correlation between the Hessian and gradient coordinates. To address this issue, we propose to use coordinate-wise clipping as a simple and effective technique to improve the convergence and directional sharpness of optimization algorithms. The intuition behind clipping is that when a few coordinates have large gradients and bad smoothness, clipping prevents them from dominating the update direction and inflating the directional sharpness. Theoretically, we show that clipping improves the worst-case directional sharpness and enables a better local loss reduction with a larger step size. Empirically, we show that clipping can consistently reduce the directional sharpness, which often leads to a better local function reduction and improves the convergence speed of various optimization algorithms, including adaptive algorithms. We demonstrate our findings through two experiments under different settings and show that our observations are robust across different tasks, models, and iterations. Based on the experiments, we argue that the landscape of optimization algorithms in local geometry is a useful proxy for the global convergence speed. We conclude that the **adaptive coordinate-wise scaling** of Adam can effectively balance the trade-off between optimizing gradient correlation and directional sharpness, and that this ability is the key to Adam's fast convergence in deep learning training.
Our main contributions can be summarized as follows:
1. We identify directional sharpness as a key indicator of the performance of optimization algorithms in local geometry, and show that adaptive algorithms have low directional sharpness compared to SGD, especially when training transformers.
2. We propose coordinate-wise clipping as a simple, effective and **universal** technique to improve the directional sharpness and convergence speed of various optimization algorithms, and provide theoretical and empirical support for its benefits.
## 2 Related Work
**General Convergence Rates of Adaptive Algorithms.** Adaptive algorithms have long been studied and applied in deep learning [1; 38; 15; 25; 46; 40]. Several previous works have proved convex and non-convex convergence rates for Adagrad [15; 29; 13; 48] and Adam or AMSGrad [12; 40; 17; 8; 52; 36; 53]. The best known non-convex convergence rate for Adagrad is \(O(\frac{\log T}{\sqrt{T}})\) [29; 13] and \(O(\frac{1}{\sqrt{T}})\) for AMSGrad [52]. While the result by [52] matches the non-convex convergence rate \(O(\frac{1}{\sqrt{T}})\) of
SGD [20], there is no theoretical proof that Adam can converge asymptotically faster than SGD for general functions [11]. Therefore, there is still a significant gap between the theoretical understanding of Adam and its fast empirical performance.
**Faster Convergence Rates Under Certain Settings.** Another line of work focused on specific settings where Adam might work better than SGD. Adaptive algorithms can work asymptotically better when the stochastic gradients are sparse [15; 52] or when there is a sparse set of noise [2]. [51] proved that global clipping methods outperform SGD when the stochastic gradients have heavy-tailed noise, argued that Adam can also deal with heavy-tailed noise effectively, and designed a new algorithm based on coordinate-wise clipping.
**Coordinate-Wise Clipping.** Both global clipping [35; 51] and coordinate-wise clipping [21] are commonly used in practice with SGD. While global norm clipping and normalization have been studied both theoretically and empirically [35; 28; 22; 51], there has been very little research on coordinate-wise clipping methods. The most relevant work is [51], where the authors use coordinate-wise clipping to propose the algorithms CClip and ACClip, which work well on transformers in practice. They use adaptive thresholds updated as momentum parameters and clip the coordinates to the corresponding thresholds. [51] shows that ACClip can perform empirically better than Adam on various transformers.
The coordinate-wise properties of the gradient and Hessian are often used in coordinate descent methods [49; 44; 41]. Recently, due to its ability to deal with heavy-tailed noise [51], coordinate-wise clipping has been applied in differentially private coordinate descent methods, as it adapts to the coordinate-wise imbalance of the objective [31; 32; 34; 37]. In particular, [34] designs a strategy to choose an adaptive clipping threshold based on the mean of the gradients, while we use the distribution of the gradients to select a threshold that clips exactly a constant fraction of the gradients.
Our work is inspired by the use of coordinate-wise clipping in algorithm design in [51], but we propose different explanations of the effectiveness of coordinate-wise clipping with new empirical evidence. We highlight important differences between our work and the analysis of the CClip and ACClip algorithms in [51]. First, we propose different explanations for the performance of clipping. [51] claims that clipping can deal with heavy-tailed noise in transformers, while we discover directional sharpness as a quantitative metric that directly relates to loss minimization and whose properties can be verified easily. Second, while CClip and typical coordinate-wise clipping methods choose thresholds independent of the gradient, we choose an adaptive clipping threshold based on the distribution of the gradient. Most importantly, while [51] focuses on designing a new algorithm that can outperform Adam, we aim to propose coordinate-wise clipping as a meta algorithm that any optimization algorithm can adopt to improve its performance. In this way, every algorithm can improve upon itself when clipping is added as a component, similar to the role of momentum in deep learning.
## 3 Directional Sharpness of Optimization Algorithms
In this section, we introduce a new measure, **directional sharpness**, that indicates the performance of optimization algorithms. We show that minimizing this term is essential for the fast convergence of optimization algorithms and argue that it is closely related to the slow convergence of SGD.
### From Quadratic Taylor Expansion to Directional Sharpness
In convex and non-convex optimization, a typical proof strategy is to consider the quadratic Taylor expansion of the objective function
\[f(x_{t+1})=f(x_{t})+\underbrace{\nabla f(x_{t})^{\top}(x_{t+1}-x_{t})}_{ \text{gradient correlation}}+\frac{1}{2}\underbrace{(x_{t+1}-x_{t})^{\top} \nabla^{2}f(x_{t})(x_{t+1}-x_{t})}_{\text{directional sharpness}}+O(\eta^{3}) \tag{1}\]
where \(x_{t+1}-x_{t}\) is the update step of the optimization algorithm and \(\eta\) is the step size. In order to get \(f(x_{t+1})\leq f(x_{t})\) in expectation, the optimization algorithm should minimize the two terms that depend on the update step, which we respectively call _gradient correlation_, which measures the alignment of the update direction with the negative gradient, and _directional sharpness_, which measures the curvature of the function along the update direction. To bound the second-order term, the default method in convex and non-convex optimization is to assume that the objective function is
\(L\)-smooth, which equivalently says \(\|\nabla^{2}f(x)\|_{2}\leq L\) for every \(x\) [6], where \(\|\cdot\|_{2}\) is the spectral norm. The local Hessian spectral norm is often called the _sharpness_ of the function in deep learning [9]. If we have \(L\) as the global upper bound on the spectral norm of the Hessian, we would have
\[\frac{1}{2}(x_{t+1}-x_{t})^{\top}\nabla^{2}f(x_{t})(x_{t+1}-x_{t})\leq\frac{1} {2}\|\nabla^{2}f(x_{t})\|_{2}\|x_{t+1}-x_{t}\|_{2}^{2}\leq\frac{L}{2}\|x_{t+1} -x_{t}\|_{2}^{2}. \tag{2}\]
Then we have the following inequality, which is one of the most frequently used lemmas in optimization proofs [6; 20; 40; 52]
\[f(x_{t+1})\leq f(x_{t})+\nabla f(x_{t})^{\top}(x_{t+1}-x_{t})+\frac{L}{2}\|x_{ t+1}-x_{t}\|_{2}^{2}. \tag{3}\]
If the function is \(L\)-smooth, the loss can decrease when the first-order term is negative and the norm of the update step is sufficiently small, since the second-order term is quadratic in the step size while the first-order term is linear. This can be guaranteed by using a small learning rate, and this leads to the convergence proofs of many optimization algorithms. However, the smoothness assumption has disadvantages in theoretical proofs. For example, the Hessian can adapt to the geometry of the trajectory and can vary significantly for different algorithms [9; 10], so using a global upper bound in the convergence proof might not be fair for some algorithms. Furthermore, even if the local geometry and Hessian are fixed, the update direction \(x_{t+1}-x_{t}\) is also extremely important for minimizing the second-order term. The current bound assumes that we are choosing the worst direction possible, but in practice optimization algorithms may find better directions with high probability. If a good direction is chosen, the second-order term can be far below the global upper bound, so the bound need not be tight.
Motivated by the definition of sharpness and the above observations, we define the _directional sharpness_ of a function \(f\) at \(x\) in the direction \(v\in\mathbb{R}^{d}\), \(\|v\|_{2}=1\) as \(v^{\top}\nabla^{2}f(x)v\). The directional sharpness at \(x_{t}\) in the update direction is extremely important to minimizing \(f(x_{t+1})\). Since directional sharpness is quadratic in the step size \(\eta\) and gradient correlation is linear, if we consider Equation (1) as a quadratic function of \(\eta\), a lower directional sharpness implies the potential to take a larger step size and possibly lead to a larger local reduction of the objective function. In contrast, if the directional sharpness is large, we have no choice but to take a tiny step, as otherwise the loss would blow up due to the second-order term. This implies that having a low directional sharpness can sometimes be a more desirable property for update directions than having a high gradient correlation.
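For concreteness, both quantities in Equation (1) can be estimated without forming the Hessian, using a Hessian-vector product (double backward). The following PyTorch sketch is our own illustration on the toy quadratic; the paper's measurements are taken along the update directions of actual optimizers.

```python
import torch

def correlation_and_sharpness(loss_fn, x, direction):
    """Return (grad f(x)^T v, v^T Hess f(x) v) for the unit vector v
    obtained by normalizing `direction`."""
    v = direction / direction.norm()
    loss = loss_fn(x)
    (g,) = torch.autograd.grad(loss, x, create_graph=True)
    (hv,) = torch.autograd.grad(g @ v, x)        # Hessian-vector product
    return (g.detach() @ v).item(), (v @ hv).item()

# Sanity check on the diagonal quadratic: along e_1 the directional
# sharpness is 2 * A_11 = 200; along e_2 it would be 2 * A_22 = 2.
d = 5
diag = torch.ones(d); diag[0] = 100.0
quad = lambda x: x @ (diag * x)
x = torch.randn(d, requires_grad=True)
e1 = torch.zeros(d); e1[0] = 1.0
print(correlation_and_sharpness(quad, x, e1))
```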
Although our definition is motivated by the sharpness definition in deep learning, we highlight important differences between them. Sharpness describes the **worst-case directional sharpness** and is the supremum of directional sharpness over all directions. In contrast, directional sharpness considers the sharpness along the specific **update direction** of an iterative optimization algorithm, and can be much lower than the sharpness if the direction is "good". The concept of sharpness is typically associated with the landscape and generalization of neural networks, such as in Sharpness-Aware Minimization [18] and Edge of Stability [9; 10]. Here, we are only interested in optimization of the objective function in the empirical risk minimization problem, i.e., the loss on the training set.
### Directional Sharpness and Update Directions
We study the update step of different optimization algorithms under the same trajectory and local geometry, using pseudo-update steps to compute the momentum in order to rule out the impact of the trajectory. We compute the directional sharpness of different optimization algorithms and visualize the optimization landscape along the update direction of a variety of optimization algorithms in Figures 2 to 4 and Table 1. The details of the experiments are described in Section 5 and Appendix B. Empirically, we observe that there can be a significant gap between the directional sharpness of the update directions of different optimization algorithms. In particular, the directional sharpness is **much lower for adaptive algorithms** than for SGD.
Based on this observation, we argue that minimizing the directional sharpness is more important for fast convergence of optimization algorithms than minimizing the gradient correlation. The update step of SGD has the best correlation with the actual gradient, so the loss decreases faster when the step size is very small, since in this case the linear term dominates the quadratic term in Equation (1). However, because of the large directional sharpness, as the step size increases the quadratic term grows faster than the linear term, so the loss reaches its local minimum along the direction at a very small step size. For adaptive algorithms, the directional sharpness is much lower than
SGD, so they have the potential to use a much larger step size and the optimal step could give a much lower loss compared to SGD.
In order to explain the sharpness reduction effect of adaptive algorithms, and since the strategy of adaptive algorithms is to find a coordinate-wise scaling of the gradient, we investigate the distribution of the gradient norm across different coordinates. We visualize a histogram of the absolute values of the SGD momentum coordinates in Figure 1. We observe that the gradients are distributed unevenly across the coordinates: half of the coordinates have absolute values between \(10^{-12}\) and \(10^{-6}\), while a non-negligible fraction of coordinates can be as large as \(10^{-4}\) to \(10^{-2}\) and contributes most of the gradient norm. The histogram suggests that the gradients are concentrated on a small fraction of the coordinates, and this small fraction of coordinates can contribute a large portion of the sharpness, making optimization hard. For adaptive algorithms, since they already apply some form of scaling, the imbalanced gradient distribution is not as severe as for SGD. As a result, they achieve better convergence rates.
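A simple way to quantify this imbalance, in the spirit of Figure 1, is to measure how much of the squared gradient norm the largest coordinates carry. The sketch below is illustrative only: it uses a synthetic heavy-tailed vector as a stand-in for a real momentum buffer.

```python
import torch

def norm_concentration(g, frac=0.01):
    """Fraction of ||g||_2^2 carried by the top `frac` of coordinates
    ranked by absolute value."""
    a = g.flatten().abs()
    k = max(1, int(frac * a.numel()))
    top = torch.topk(a, k).values
    return (top.pow(2).sum() / a.pow(2).sum()).item()

# Heavy-tailed synthetic "gradient": a handful of coordinates dominates.
g = torch.distributions.StudentT(df=1.5).sample((100_000,))
print(norm_concentration(g, frac=0.01))   # typically close to 1
```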
In Appendix E, we present a simple experiment with ResNet [23] on image classification which suggests that this property is related to the transformer architecture. In particular, the directional sharpness of adaptive algorithms can be worse than that of SGD for ResNets. This is consistent with empirical observations of the performance of adaptive algorithms in vision tasks, where they are often slower than SGD.
## 4 Coordinate-wise Clipping
### Coordinate-wise Clipping Improves Directional Sharpness
We propose to use _coordinate-wise clipping_ as a solution to the aforementioned imbalanced gradient distribution, based on our experimental findings. We observe that the sharpness is also concentrated in the large coordinates of the gradient, and clipping those coordinates can significantly decrease the directional sharpness. Although clipping can decrease gradient correlation, since the dependence on
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Algorithm** & **Sharpness** & **Ratio to SGD** \\ \hline SGD & \(8.674583\) & 1 \\ \hline SGD Clip 10\% & \(0.527104\) & \(0.060764\) \\ \hline Adam & \(0.252707\) & \(0.029131\) \\ \hline Adam Clip 50\% & \(0.000574\) & \(6.617\times 10^{-5}\) \\ \hline Adafactor & \(5.999\times 10^{-5}\) & \(6.916\times 10^{-6}\) \\ \hline Adafactor Clip 50\% & \(2.051\times 10^{-7}\) & \(2.364\times 10^{-8}\) \\ \hline Lion & \(0.118202\) & \(0.013626\) \\ \hline Normalized SGD & \(0.722253\) & \(0.083261\) \\ \hline Normalized SGD Clipping & \(0.179141\) & \(0.020651\) \\ \hline \end{tabular}
\end{table}
Table 1: The directional sharpness of different optimization algorithms when training on machine translation, in the same experiment and iteration as Figure 2. The directional sharpness of different optimization algorithms varies significantly. For example, the directional sharpness of SGD can be more than \(10^{7}\) times that of Adafactor with clipping. Furthermore, clipping almost always improves the directional sharpness of optimization algorithms.
Figure 1: Histogram of update step distribution over coordinates for SGD, Adam, and Adafactor on machine translation.
the clipped entry is quadratic for the second-order term and linear for the first-order term, it might not be beneficial to use these coordinates. The use of clipping in optimization algorithms is a trade-off between improving gradient correlation and reducing directional sharpness. By clipping the top coordinates in the gradient, although gradient correlation decreases, the directional sharpness can decrease even more to make up the loss.
We consider using clipping on a variety of optimization algorithms, including SGD, normalized SGD, signSGD, Adam [25], and Adafactor [43]. We demonstrate that coordinate-wise clipping significantly reduces the sharpness of adaptive algorithms and speeds up the optimization process. Specifically, at every iteration \(t\), we compute the threshold \(\tau_{t}\) for the top \(k\%\) of gradient coordinates in terms of absolute value, and clip the gradient coordinates \(g_{t,i}\) to \(\hat{g}_{t,i}=\mathrm{sgn}(g_{t,i})\min\{|g_{t,i}|,\tau_{t}\}\), preserving their sign. Then, the clipped gradient \(\hat{g}_{t}\) is used to update the momentum term. For adaptive algorithms, we make a slight modification: we only update the momentum in the numerator, which is proportional to the update step, using the clipped gradient \(\hat{g}_{t}\). The momentum in the denominator is still updated using the original gradient \(g_{t}\). This is because if we updated both terms with the clipped gradient, the normalization effect of adaptive algorithms would cancel out with the clipping of the denominator, so the scaling of the update step would be insufficient. Examples of SGD momentum and Adam with coordinate-wise clipping are shown in Figure 5, and a code sketch is given below. We also considered clipping the update step for adaptive algorithms, but since the update steps are already scaled based on the gradient, clipping the update step does not appear to be beneficial.
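The clipping rule just described, and its Adam variant from Figure 5 (clipped gradient in the first moment only), can be sketched as follows. This is a minimal illustration with assumed hyperparameter defaults, not the exact settings of the paper's experiments.

```python
import torch

def clip_topk(g, clip_frac=0.1):
    """Clip the top `clip_frac` of coordinates of g by absolute value:
    tau_t is the (1 - clip_frac)-quantile of |g|, and each coordinate
    becomes sign(g_i) * min(|g_i|, tau_t)."""
    tau = torch.quantile(g.abs().flatten(), 1.0 - clip_frac)
    return torch.sign(g) * torch.minimum(g.abs(), tau)

def clipped_adam_step(x, g, m, v, t, lr=1e-3, betas=(0.9, 0.999),
                      eps=1e-8, clip_frac=0.5):
    """One Adam step where only the first moment (numerator) sees the
    clipped gradient; the second moment (denominator) keeps the raw g."""
    g_hat = clip_topk(g, clip_frac)
    m = betas[0] * m + (1 - betas[0]) * g_hat   # numerator: clipped
    v = betas[1] * v + (1 - betas[1]) * g * g   # denominator: unclipped
    m_hat = m / (1 - betas[0] ** t)             # bias correction
    v_hat = v / (1 - betas[1] ** t)
    return x - lr * m_hat / (v_hat.sqrt() + eps), m, v
```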
For the clipping threshold, we use a small clipping fraction of 10% for SGD and normalized SGD, since they do not have coordinate-wise scaling in their algorithms; hence, we can observe a significant improvement with a small clipping fraction. For Adam and Adafactor, since they already perform coordinate-wise scaling in the original algorithm, we use a large clipping fraction of 50%. From Table 1, we can see that after clipping the top coordinates, the directional sharpness decreases significantly. Since we
Figure 3: The loss landscape in different update directions on machine translation in Adam geometry.
Figure 2: The loss landscape in different update directions on machine translation in SGD geometry. The step size is the learning rate normalized by the update step \(\ell_{2}\) norm. The plots of clipped and unclipped variants of the same algorithm have the same color with different opacity.
normalize the update step when we compute the directional sharpness, the sharpness reduction effect of coordinate-wise clipping is not due to a significant reduction of the norm of the update step, but to the improved flatness of the direction. The landscape visualization in Figure 2 gives a consistent message: clipped algorithms can find a direction that has better local reduction of the loss in the local geometry.
```
Require: initial point \(x_{0}\), learning rate \(\eta\), momentum parameter \(\beta\)
for \(t\gets 1,\dots,T\) do
    \(g_{t}\leftarrow\nabla f(x_{t-1})\) or a stochastic gradient
    \(\hat{g}_{t}\leftarrow\textbf{clip}(g_{t})\)
    \(m_{t}\leftarrow\beta m_{t-1}+(1-\beta)\hat{g}_{t}\)
    \(x_{t}\leftarrow x_{t-1}-\eta m_{t}\)
end for
```
**Algorithm 1** SGD momentum with clipping
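A direct Python transcription of Algorithm 1, reusing the `clip_topk` helper sketched above; the quadratic objective and the hyperparameters are illustrative choices only.

```python
import torch

def sgd_momentum_clip(grad_fn, x0, lr=0.05, beta=0.9,
                      clip_frac=0.1, steps=200):
    """Algorithm 1: the gradient is coordinate-wise clipped before it
    enters the momentum buffer."""
    x, m = x0.clone(), torch.zeros_like(x0)
    for _ in range(steps):
        g_hat = clip_topk(grad_fn(x), clip_frac)   # helper from above
        m = beta * m + (1 - beta) * g_hat
        x = x - lr * m
    return x

# Usage on the diagonal quadratic from Section 1.
diag = torch.ones(10); diag[0] = 100.0
x = sgd_momentum_clip(lambda x: 2.0 * diag * x, torch.ones(10))
```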
Finally, we demonstrate that clipping algorithms can converge faster than their original counterparts by directly training transformers with the clipping algorithms, with the loss curves shown in Figure 6. According to the results, clipping algorithms can speed up training significantly. For coordinate-wise scaling algorithms such as Adam, it is possible to consider larger clipping fractions to improve the convergence of the algorithms. Our results suggest that clipping can be used as a universal technique in any algorithm without coordinate-wise scaling to speed up training. This finding can provide insight into the design of new optimization algorithms.
### Connection with Coordinate-wise Smoothness
Based on our experimental findings, we conjecture that there is a **positive correlation** between the absolute values of the Hessian coordinates and the gradient coordinates. A positive correlation is also mentioned in [50], but their proposed correlation is between the norm of the Hessian and the norm of the gradient. We further suggest that there is a positive correlation between the **coordinates** of the gradient and Hessian, and that the success of Adam is due to its ability to scale down the bad coordinates and reduce the sharpness through coordinate-wise scaling of the gradient.
We revisit the example given in Section 1, with \(f(x)=x^{\top}Ax\) and \(A_{11}=100\), \(A_{ii}=1\) for all \(i>1\). For SGD, the convergence depends on the worst coordinate with smoothness 100, and the gradient is also large in the first coordinate at most points, since its first coordinate is \(200x_{1}\). This gives a bad sharpness on the first coordinate. But if we use clipping, the gradient cannot be too large in the first coordinate, so we can choose a larger learning rate even though the Hessian is unchanged.
Figure 4: The loss landscape in different update directions on autoregressive language modeling in SGD geometry.
Figure 5: Example of optimization algorithms with coordinate-wise clipping. Note that for Adam, the clipped gradient is only used in the first order momentum.
A closely related concept in optimization is the coordinate-wise version of the \(L\)-smooth assumption in convex and non-convex optimization, typically used in the analysis of coordinate descent methods [49, 44, 41, 2, 30]. Instead of bounding the Hessian with a constant \(L\), each coordinate is bounded by a different constant \(L_{1},\ldots,L_{d}\) such that \(L_{i}\leq L\) and \(\max L_{i}=L\). If the gradient has a balanced distribution, the convergence depends on the **average** of the constants. Hence, the bound could be better since most of \(L_{1},\ldots,L_{d}\) could be much less than \(L\). However, if the gradient has an imbalanced distribution, where the gradient is concentrated in a small fraction of the coordinates, then the convergence mostly depends on the smoothness of that fraction of coordinates. Then, clipping works well since it removes the imbalanced distribution of the gradients, ensuring "uniformity" of the gradient coordinates. When only an \(\varepsilon\)-fraction of coordinates have bad smoothness \(L\), with clipping threshold \(c_{t}\), the norm of the clipped gradient on the \(\varepsilon\)-fraction of coordinates is at most \(\sqrt{\varepsilon d}\,c_{t}\), so the dependence on \(L\) is at most \(O(\sqrt{\varepsilon}L)\). Similarly, adaptive algorithms enforce the same constraint on the gradients, removing the correlation between the Hessian and gradient.
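To make the norm bound explicit (our own one-line derivation, consistent with the statement above): let \(S\) be the set of at most \(\varepsilon d\) coordinates with bad smoothness; since every clipped entry satisfies \(|\hat{g}_{t,i}|\leq c_{t}\),

\[\|\hat{g}_{t,S}\|_{2}=\Big(\sum_{i\in S}\hat{g}_{t,i}^{2}\Big)^{1/2}\leq\big(\varepsilon d\,c_{t}^{2}\big)^{1/2}=\sqrt{\varepsilon d}\;c_{t},\]

which is the quantity behind the \(O(\sqrt{\varepsilon}L)\) dependence on \(L\) quoted above.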
In Appendix D, we provide an additional simple experiment suggesting that only a small fraction of the coordinates has large smoothness. We approximate the Hessian of the neural network with the Gauss-Newton matrix [33, 4, 9] and study the smoothness of the Hessian when a small fraction of the coordinates is removed. The result shows that by removing \(\leq 4\%\) of the coordinates, the smoothness of the neural network improves by a constant factor. This provides intuition into why coordinate-wise clipping improves the directional sharpness. Then, under the assumption that we can remove a small fraction of coordinates and achieve a better smoothness, we can formally study the local loss reduction of SGD with clipping, as described by the following informal theorem.
**Theorem 1** (informal).: _Suppose \(f\) is non-convex and \(L\)-smooth, and there exists \(0<\varepsilon<1\) and \(\ell\ll L\) such that for every \(x\), after removing \(\varepsilon\)-fraction of the coordinates, the remaining Hessian has spectral norm at most \(\ell\). Then, in the worst case, if we run SGD clipping with some optimal step size \(\eta\geq\frac{2}{L}\), it achieves better loss reduction than SGD with step size \(\eta\leq\frac{2}{L}\). In particular, the upper bound on the directional sharpness is at most \(O(\sqrt{\varepsilon}L+\ell)\ll L\) compared to \(L\) of SGD._
The formal statement and proof are given in Appendix A. The theorem sheds light on how gradient clipping can improve the loss locally. A better understanding of this phenomenon could be essential for proving convergence rates for Adam or clipping algorithms that are faster than SGD.
## 5 Experiment Setups
In this section, we describe the setting of our full experiments. We demonstrate our findings with two types of experiments, as described in Sections 3 and 4. We explore several different tasks and settings and show that our results hold in various settings. Further discussion of the results is given in Appendix C.
**Optimization algorithms.** We select a variety of optimization algorithms. All algorithms use momentum in their update steps for a fair comparison. The baseline algorithm is SGD momentum, against which we compare the sharpness of the other algorithms. For the class of adaptive algorithms, we choose Adam [25], Adafactor [43], and Lion [7]. Adam is the most popular adaptive algorithm, and Adafactor and Lion both claim to be the state-of-the-art optimization algorithm on some specific
Figure 6: Clipped optimization algorithms generally converge faster than the original algorithms. Furthermore, the result is consistent with the landscape analysis in Figures 2 to 4 and Appendix C, that performance in local geometry is a good indicator of global convergence speed.
tasks [43; 7]. We also include signSGD due to its similarity with the Lion optimizer [7] and because it has probably the simplest form among adaptive algorithms. Note that signSGD is just SGD with a clipping threshold of 100%. To show that the improvement in directional sharpness and convergence speed is more related to coordinate-wise scaling than weight-matrix-wise scaling, we also design an algorithm which we call normalized SGD, which normalizes the squared Frobenius norm of each weight matrix to be proportional to the size of the matrix. By comparing normalized SGD with SGD clipping, we can see the importance of **coordinate-wise** scaling in adaptive algorithms and clipping.
**Tasks.** We run our experiments on two tasks: machine translation and autoregressive language modeling, two popular tasks in language processing that can be solved efficiently with transformers. For machine translation, we train a small T5 [39] model on the OPUS Books English-French dataset [45]. For autoregressive language modeling, we train a GPT-Neo [3; 19] model on The Stack dataset [26] for Python code generation. The code generation task is slightly different from natural language tasks such as machine translation, since it deals with programming languages. We will show that most of our results still hold in this setting, suggesting that the observations are more related to properties of the transformer architecture.
**Directional Sharpness and Landscape.** We compute the directional sharpness of a variety of optimization algorithms, including SGD, normalized SGD, signSGD, Adam [25], Adafactor [43], and Lion [7], and visualize the loss landscape along the corresponding update directions, under different local geometries. We show that SGD has bad sharpness in all of the settings, regardless of the task, model, or local geometry. In addition, we demonstrate that **clipping can always improve the directional sharpness of optimization algorithms**, often resulting in better local loss reduction in the update direction.
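Directional sharpness along an update direction can be computed without forming the Hessian, using a Hessian-vector product. The following PyTorch sketch is ours, assuming the definition \(u^{\top}\nabla^{2}f(x)\,u/\|u\|^{2}\) of directional sharpness used earlier in the paper:

```python
import torch

def directional_sharpness(loss, params, update):
    """u^T (H) u / ||u||^2 along the update direction, via a Hessian-vector
    product (double backprop). `loss` is a freshly computed scalar loss;
    `update` is a list of (detached) tensors matching `params`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    gu = sum((g * u).sum() for g, u in zip(grads, update))
    hvp = torch.autograd.grad(gu, params)          # H u
    uHu = sum((h * u).sum() for h, u in zip(hvp, update))
    return (uHu / sum((u * u).sum() for u in update)).item()

# Illustrative usage on a tiny model, evaluating sharpness along the SGD direction:
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
params = list(model.parameters())
loss = torch.nn.functional.mse_loss(model(x), y)
u = [g.detach() for g in torch.autograd.grad(loss, params)]
loss = torch.nn.functional.mse_loss(model(x), y)   # fresh graph for the HVP
print(directional_sharpness(loss, params, u))
```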
**Global Convergence.** We also implement clipped algorithms and use them to train different models, demonstrating that clipped algorithms converge faster in practice. The result matches the quality of the update direction as measured by the landscape visualization and directional sharpness: algorithms with better directional sharpness and better local loss reduction in the update direction in the SGD geometry generally converge faster. We conclude that the **performance of optimization algorithms in local geometry can be a good indicator of the speed of global convergence**.
## 6 Conclusion
In summary, our work provides new insight into why Adam converges faster than SGD in practice. In contrast to assumptions on properties of the gradient, we propose to study directional sharpness as an important indicator of the performance of optimization algorithms in deep learning. We show that adaptive algorithms and clipped optimization algorithms can generally achieve significantly better directional sharpness than SGD. We argue that the slow convergence of SGD is related to its high directional sharpness, caused by a positive coordinate-wise gradient-Hessian correlation. We propose coordinate-wise clipping as a solution to the problem of high sharpness. We demonstrate the sharpness-reduction effect of coordinate-wise clipping and show that it is possible to step to a lower loss in the update direction of clipped algorithms compared to the original algorithms. We further demonstrate the effectiveness of coordinate-wise clipping in a wide range of optimization algorithms without coordinate-wise scaling, including SGD, normalized SGD, and Adafactor. We suggest coordinate-wise clipping as a universal technique to speed up deep learning optimization algorithms. Our work provides useful explanations and conjectures about the superior performance of Adam, and further exploration of these results could be useful for a theoretical understanding of the empirical advantage of Adam over SGD.
| Stochastic gradient descent (SGD) is the most popular optimization algorithm in deep learning, but adaptive algorithms such as Adam show empirical advantages over SGD in some deep learning applications such as transformer training. However, the question remains why Adam converges significantly faster than SGD in these scenarios. In this paper, we explain why Adam converges faster than SGD using directional sharpness, a new concept. It is shown that the performance of an optimization algorithm is closely related to the directional sharpness of its update steps, and that SGD has worse directional sharpness than adaptive algorithms. Furthermore, only a very small fraction of the coordinates is responsible for the slow convergence of SGD, and applying coordinate-wise clipping to SGD and other
2309.03188 | Leo T Dissected with the MUSE-Faint Survey | Leo T is the lowest mass galaxy known to contain neutral gas and to show
signs of recent star formation, which makes it a valuable laboratory for
studying the nature of gas and star formation at the limits of where galaxies
are found to have rejuvenating episodes of star formation. Here we discuss a
novel study of Leo T that uses data from the MUSE integral field spectrograph
and photometric data from HST. The high sensitivity of MUSE allowed us to
increase the number of Leo T stars observed spectroscopically from 19 to 75. We
studied the age and metallicity of these stars and identified two populations,
all consistent with similar metallicity of [Fe/H] $\sim$ -1.5 dex, suggesting
that a large fraction of metals were ejected. Within the young population, we
discovered three emission line Be stars, supporting the conclusion that rapidly
rotating massive stars are common in metal-poor environments. We find
differences in the dynamics of young and old stars, with the young population
having a velocity dispersion consistent with the kinematics of the cold
component of the neutral gas. This finding directly links the recent star
formation in Leo T with the cold component of the neutral gas. | Daniel Vaz, Jarle Brinchmann, The MUSE Collaboration | 2023-09-06T17:51:47 | http://arxiv.org/abs/2309.03188v1 | # Leo T Dissected with the MUSE-Faint Survey
###### Abstract
Leo T is the lowest mass galaxy known to contain neutral gas and to show signs of recent star formation, which makes it a valuable laboratory for studying the nature of gas and star formation at the limits of where galaxies are found to have rejuvenating episodes of star formation.
Here we discuss a novel study of Leo T that uses data from the MUSE integral field spectrograph and photometric data from HST. The high sensitivity of MUSE allowed us to increase the number of Leo T stars observed spectroscopically from 19 to 75. We studied the age and metallicity of these stars and identified two populations, all consistent with similar metallicity of [Fe/H] \(\sim\) -1.5 dex, suggesting that a large fraction of metals were ejected. Within the young population, we discovered three emission line Be stars, supporting the conclusion that rapidly rotating massive stars are common in metal-poor environments. We find differences in the dynamics of young and old stars, with the young population having a velocity dispersion consistent with the kinematics of the cold component of the neutral gas. This finding directly links the recent star formation in Leo T with the cold component of the neutral gas.
Spectroscopy, Galaxies, Leo T, Stars, Kinematics, Star Formation, Be Stars
## 1 Introduction
Ultra-Faint Dwarf galaxies (UFDs) represent a fascinating enigma in the study of the Universe. These elusive objects are characterised by their extremely low metallicities, simple assembly histories, and dominant dark matter content (Simon 2019), making them an essential piece of the puzzle in understanding galaxy formation and evolution.
Among the faint and ultra-faint dwarf sample, Leo T stands out as a particularly intriguing object that has received significant attention from astronomers. Leo T is the faintest and least massive dwarf galaxy known to contain neutral gas and exhibit signs of recent star formation. This unique set of characteristics makes Leo T a valuable testing ground for galaxy formation models, as they present a challenge to current theories that have struggled to reproduce similar galaxies. Further observations of Leo T will enable astronomers to refine their models and determine whether they are on the right track towards a comprehensive and predictive theory of galaxy formation.
Leo T was discovered using SDSS imaging by Irwin _et al._ (2007). The stellar mass of Leo T is estimated to be \(M\,=\,1.3\,\times 10^{5}\) M\({}_{\odot}\) (McConnachie 2012). The star formation history (SFH) of Leo T has been extensively studied (Irwin _et al._ 2007; de Jong _et al._ 2008; Weisz _et al._ 2012; Clementini _et al._ 2012). These studies show that 50% of the total stellar mass was formed prior to 7.6 Gyr ago, with star formation beginning over 10 Gyr ago and continuing until recent times. They also show evidence of a quenching of star formation in Leo T about 25 Myr ago. None of the studies found evidence of an evolution in the isochronal metallicity: over the course of its lifetime, Leo T is consistent with a constant value of \([M/H]\sim-1.6\).
The only previous spectroscopic observations of Leo T are those of Simon & Geha (2007). They derive a mean radial velocity of \(v_{rad}\,\)=\(\,38.1\,\pm 2\) km s\({}^{-1}\), and velocity dispersion of \(\sigma_{v_{rad}}=7.5\,\,\pm 1.6\) km s\({}^{-1}\), which corresponds to a total dynamical mass of \(8.2\,\,\times 10^{6}\) M\({}_{\odot}\).
Ryan-Weber _et al._ (2008) and Adams & Oosterloo (2018) concluded that Leo T contains neutral gas. The H I mass is almost four times the stellar mass, with \(M_{\rm HI}=4.1\times 10^{5}\) M\({}_{\odot}\) (Adams & Oosterloo, 2018). They show that the gas is present in a Cold Neutral Medium (CNM) (T \(\sim\) 800 K) and a Warm Neutral Medium (WNM) (T \(\sim\) 6000 K), with the CNM corresponding to almost 10% of the total H I mass. Relevantly, the CNM was found to have a lower velocity dispersion (\(\sigma_{\rm CNM}=2.5~{}\pm 0.1\) km s\({}^{-1}\)) than the WNM (\(\sigma_{\rm WNM}=7.1~{}\pm 0.4\) km s\({}^{-1}\)) (Adams & Oosterloo, 2018). The presence of this large cold component raises the question of whether it is related to the recent star formation observed in Leo T.
To further investigate Leo T, we obtained spectroscopic observations with the Multi-Unit Spectroscopic Explorer (MUSE, Bacon _et al._, 2010) integral field spectrograph (IFS). Succinctly, we densely map the stellar content and use the stellar spectra to measure the stellar metallicity and kinematics. We find tracers of a young population, namely Be stars. The data and results presented here are discussed further in Zoutendijk _et al._ (2021) and Vaz _et al._ (submitted to \(A\&A\)).
## 2 Results and Discussion
The central region of Leo T was observed as part of MUSE-Faint (Zoutendijk _et al._, 2020), a MUSE GTO survey of UFDs (PI Brinchmann). MUSE (Bacon _et al._, 2010) is a large-field, medium-spectral-resolution integral field spectrograph installed at Unit Telescope 4 of the Very Large Telescope (VLT). We extracted spectra from the final data cube using PampelMuse (Kamann _et al._, 2013). As a general rule, the extracted spectra have a modest signal-to-noise ratio (S/N) and spectral resolution (R \(\sim\) 3000). We used the spexxy full-spectrum fitting code (Husser _et al._, 2016) together with the PHOENIX (Husser _et al._, 2013) model spectra to estimate the physical parameters, namely line-of-sight velocities and [Fe/H].
Figure 1: Color-magnitude diagram of the 58 Leo T stars detected with MUSE, plotted against PARSEC isochrones drawn for constant \(\rm[Fe/H]=-1.6\). Three representative isochrones are plotted, two in blue, with ages of 0.2 and 0.8 Gyr, and one in gray for age of 9 Gyr. The stars that were found consistent with the younger isochrones are shown as dark blue squares, with the emission line stars shown as blue squares. The stars consistent with the older isochrones are shown as red diamonds.
### Different Populations in Leo T
We were able to identify 58 stars as members of Leo T based on their kinematics. For 55 of these stars we were also able to obtain an estimate of [Fe/H]. The three stars without [Fe/H] estimates are emission line Be stars, which are discussed further below. By combining with the data from Simon & Geha (2007) we have measurements of the kinematics for 75 stars.
We complemented these data with HST ACS F606W and F814W photometry2. We fit PARSEC stellar isochrones (Bressan _et al._, 2012) to the resulting colour-magnitude diagram for the 58 stars (Figure 1). The best-fit [M/H] for the isochrones is [M/H]=-1.6, which is consistent with the value found in Weisz _et al._ (2012), and therefore we use this value to draw the representative isochrones shown in Figure 1. The first clear conclusion is that the sample covers a wide range of ages, with some stars consistent with ages as high as \(>10\) Gyr and others as low as 200 Myr. As such, we divided the stars into two populations: a young population of 20 stars consistent with ages \(<1\) Gyr, and an old population of 38 stars consistent with ages \(>5\) Gyr. Both populations are displayed using different colors in Figure 1. To assign each star to a population, we covered the color-magnitude space with isochrones of different ages and with [M/H]=-1.6, and assigned each star the age of the nearest isochrone. Because there is a degeneracy between the different isochrones in certain parts of the color-magnitude space, we repeated all analyses discussed below after reassigning the stars that fall in a degenerate region. We find that this does not affect our results.
Footnote 2: HST Proposals 12914, Principal Investigator Tuan Do and 14224, Principal Investigator Carmen Gallart
Within the young population we identified three emission line stars, which we tentatively identify as Be stars. Due to their peculiar spectra, spexxy failed to fit their metallicity and, therefore, these stars were not included in our metallicity analysis. Of relevance is the fact that they make up 15% of the young sample, which is comparable to Milky Way studies that report Be-star rates at the level of \(10-20\)% in stellar clusters (Mathew _et al._, 2007). More recent studies, such as Schootemeijer _et al._ (2022), show that OBe stars and, therefore, rapidly rotating massive stars, are common in metal-poor environments; the detection of Be stars in Leo T supports this conclusion and extends it to even lower metallicity.
Figure 2: The distribution of the metallicity ([Fe/H]) of 55 Leo T stars, estimated using spexxy. The younger population, consisting of 17 stars, is represented with a lighter color. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
### Chemical Evolution of Leo T
The histogram of the metallicity estimates for the 55 stars is shown in Figure 2. To quantify this distribution, we implemented an MCMC model, assuming that the underlying distribution is a Gaussian; we therefore fit the mean metallicity and the metallicity dispersion of the distribution, which are also shown in Figure 2. We obtained a metallicity of \(\rm[Fe/H]=-1.53\pm 0.05\), in good agreement with our photometric analysis. We repeated the analysis with resampling and with outliers removed (the outliers usually have low S/N), with no effect on the results. We find a metallicity dispersion of \(\sigma_{\rm[Fe/H]}=0.21\pm 0.06\), which is low, implying that all stars have similar metallicity and that Leo T underwent almost no metallicity evolution throughout its history. This, in conjunction with the extended star formation history of Leo T, suggests that a large fraction of metals has been ejected, keeping the metallicity constant. In fact, this is consistent with theoretical expectations for low-mass dwarf galaxies (Emerick _et al._, 2018).
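A minimal sketch of this kind of fit (ours, using emcee with synthetic stand-in data; the actual code, priors and error model used here may differ): a Gaussian mean and intrinsic dispersion are fit with per-star measurement errors folded into the likelihood.

```python
import numpy as np
import emcee

rng = np.random.default_rng(1)
# Synthetic stand-in for the 55 measured [Fe/H] values and their errors
# (hypothetical numbers; the real inputs are the spexxy estimates).
feh_err = rng.uniform(0.1, 0.4, size=55)
feh = rng.normal(-1.53, 0.21, size=55) + rng.normal(0.0, feh_err)

def log_prob(theta):
    mean, sigma = theta
    if not (-4.0 < mean < 0.0 and 0.0 < sigma < 2.0):   # flat priors (assumption)
        return -np.inf
    var = sigma**2 + feh_err**2     # intrinsic dispersion plus measurement error
    return -0.5 * np.sum((feh - mean) ** 2 / var + np.log(2 * np.pi * var))

nwalkers, ndim = 32, 2
p0 = np.array([-1.5, 0.2]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 4000)
mean_fit, sigma_fit = np.median(sampler.get_chain(discard=1000, flat=True), axis=0)
```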
We repeated this analysis separately for each population. Although the results for the two populations are consistent with each other, they are not conclusive because the sample of young stars is too small (consisting of only 17 stars). In addition, the distribution is asymmetric, especially for the young stars, with the low-S/N spectra preferring a lower metallicity, so that our constraints prefer a somewhat lower metallicity for the younger population. However, the uncertainties here are too large to draw any conclusions.
### Stellar Kinematics vs Neutral Gas Kinematics
The histogram of the radial velocity estimates for 75 stars is shown in Figure 3. To fit the distribution, we applied the same MCMC model as before to obtain the mean barycentric velocity and the velocity dispersion of the sample. The fit is also shown in Figure 3. We find a mean velocity \(v_{\rm los}=39.4^{+1.3}_{-1.3}\) km s\({}^{-1}\), and an intrinsic velocity dispersion of \(\sigma_{v}=7.1^{+1.3}_{-1.1}\) km s\({}^{-1}\), which is consistent with what was found by Simon & Geha (2007). We repeated this analysis for each population. In this case, we used the same young population as before, but now the old population consists of 55 stars, including 17 stars from Simon & Geha (2007). These distributions and the respective fits are shown in Figures 4 and 5 for the young and old population, respectively. It is worth noting that the best fit plotted does not include uncertainties.
Figure 3: The distribution of the line-of-sight velocity of 75 Leo T stars, estimated using spexxy. The younger population, consisting of 20 stars, is represented with a lighter color. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
For the younger population we obtain a mean velocity of \(v_{\rm los}=39.3^{+2.1}_{-2.1}\) km s\({}^{-1}\), and a velocity dispersion of \(\sigma_{v}=2.3^{+2.7}_{-1.7}\) km s\({}^{-1}\). For the older population we find a mean velocity of \(v_{\rm los}=39.7^{+1.6}_{-1.6}\) km s\({}^{-1}\), and a velocity dispersion of \(\sigma_{v}=8.2^{+1.7}_{-1.4}\) km s\({}^{-1}\). Notably, the two populations have different kinematics, with the younger population having a significantly smaller velocity dispersion than the older stars. This is comparable to what was found for the two components of the gas in Leo T, where the cold component has a smaller velocity dispersion than the warm component. We compare the differences in kinematics between young and old stars with the H I kinematics of the warm and cold neutral gas in Figure 6. We find a good match between the velocity dispersion of the young population and the cold component of the H I gas, and between the kinematics of the old Leo T stars and the warm component of the H I gas. The natural inference from these results is that the most recent star formation in Leo T is linked to the CNM. The results presented here, combined with the finding by Weisz _et al._ (2012) of no star formation in Leo T in the last \(\sim 25\) Myr, are consistent with recent models that suggest that star formation in low-mass
Figure 4: The distribution of the line-of-sight velocity of 20 young Leo T stars, estimated using spexxy. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
Figure 5: The distribution of the line-of-sight velocity of 55 old Leo T stars, estimated using spexxy. Also plotted is the result of an MCMC fit for the mean and standard deviation of the distribution.
galaxies should be bursty with short quiescent periods (Collins & Read, 2022): due to stellar feedback, star formation is momentarily quenched and metals are ejected from the environment; after a short quiescent period, the gas is allowed to cool down and re-ignite star formation.
| Leo T is a low-mass galaxy that contains neutral gas and shows signs of recent star formation, making it a valuable laboratory for studying the nature of gas and star formation at the limits at which galaxies are found. Here we discuss a new study of Leo T using data from the MUSE spectrograph and photometric measurements from HST. The high sensitivity of MUSE increased the number of Leo T stars observable spectroscopically from 19 to 75. We studied the ages and metallicities of these stars and identified two populations, all consistent with a similar metallicity of [Fe/H] $\sim$ -1.5 dex, suggesting that a large amount of metals was ejected. Within the young population, we discovered three emission-line Be stars, supporting the conclusion that rapidly rotating massive stars are common in metal-poor environments. We examined the differences in the dynamics of the young and old stars, with the young population
2309.15932 | Developing integrated rate laws of complex self-assembly reactions using
Lie symmetry: Kinetics of Abeta42, Abeta40 and Abeta38 co-aggregation | The development of solutions to the kinetics of homomolecular self-assembly
into amyloid fibrils using fixed-point methods, and their subsequent
application to the analysis of in vitro kinetic experiments, has led to
numerous advances in our understanding of the fundamental chemical mechanisms
behind amyloidogenic disorders such as Alzheimer's and Parkinson's diseases.
However, as our understanding becomes more detailed and new data become
available, kinetic models need to increase in complexity. The resulting rate
equations are no longer amenable to extant solution methods, hindering ongoing
efforts to elucidate the mechanistic determinants of aggregation in living
systems. Here, we demonstrate that most linear self-assembly reactions are
described by the same unusual class of singularly perturbed rate equations,
that cannot be solved by normal singular perturbation techniques such as
renormalization group. We instead develop a new method based on Lie symmetry
that can reliably solve this class of equations, and use it in conjunction with
experimental data to determine the kinetics of co-aggregation of the
Alzheimer's disease-associated Abeta42, Abeta40 and Abeta38 peptides. Our
method also rationalizes several successful earlier solutions for homomolecular
self-assembly kinetics whose mathematical justification was previously unclear.
Alongside its generality and mathematical clarity, its much greater accuracy
and simplicity compared to extant methods will enable its rapid and widespread
adoption by researchers modelling filamentous self-assembly kinetics. | Alexander J. Dear, Georg Meisl, Sara Linse, L. Mahadevan | 2023-09-27T18:14:39 | http://arxiv.org/abs/2309.15932v1 | Developing integrated rate laws of complex self-assembly reactions using Lie symmetry: Kinetics of A942, A940 and A938 co-aggregation
###### Abstract
The development of solutions to the kinetics of homomolecular self-assembly into amyloid fibrils using fixed-point methods, and their subsequent application to the analysis of _in vitro_ kinetic experiments, has led to numerous advances in our understanding of the fundamental chemical mechanisms behind amyloidogenic disorders such as Alzheimer's and Parkinson's diseases. However, as our understanding becomes more detailed and new data become available, kinetic models need to increase in complexity. The resulting rate equations are no longer amenable to extant solution methods, hindering ongoing efforts to elucidate the mechanistic determinants of aggregation in living systems. Here, we demonstrate that most linear self-assembly reactions are described by the same unusual class of singularly perturbed rate equations, which cannot be solved by normal singular perturbation techniques such as renormalization group. We instead develop a new method based on Lie symmetry that can reliably solve this class of equations, and use it in conjunction with experimental data to determine the kinetics of co-aggregation of the Alzheimer's disease-associated A\(\beta\)42, A\(\beta\)40 and A\(\beta\)38 peptides. Our method also rationalizes several successful earlier solutions for homomolecular self-assembly kinetics whose mathematical justification was previously unclear. Alongside its generality and mathematical clarity, its much greater accuracy and simplicity compared to extant methods will enable its rapid and widespread adoption by researchers modelling filamentous self-assembly kinetics.
## I Introduction
Self-assembly of proteins and peptides into amyloid fibrils has been intensively studied in the past 20 years due to its key role in a multitude of increasingly prevalent and incurable human pathologies, such as type-II diabetes, Alzheimer's and Parkinson's diseases [1; 2]. The kinetics of the self-assembly process have been found to be well-described by nonlinear ordinary differential equations that, although relatively simple, do not normally possess exact analytic solutions. Instead, great success has been had in developing accurate analytic solutions for several particularly important mechanisms of self-assembly [3; 4; 5; 6; 7; 8; 9]. These expressions have been widely fitted to experimental data in order to identify the constituent reaction steps and their associated rate constants for many different proteins under diverse conditions [10]. This has enabled fundamental discoveries about the chemical mechanism behind the formation of both pathological and functional amyloid [11], ranging from A\(\beta\) plaques in Alzheimer's disease [6; 9; 12] to functional yeast prions in _S. cerevisiae_[13]. Such solutions are also intensively used in the screening of candidate inhibitory drugs for the treatment of these diseases [14].
Now that some of the simplest systems have been characterized, researchers have become increasingly interested in less idealized and more realistic representations of the self-assembly process, described by more complex kinetic equations. For instance, interactions between different proteins, or different forms of a protein, during aggregation _in vivo_ are expected to be the norm rather than the exception, given that biological environments tend to be highly complex, containing multiple self-assembly-prone species as well as other molecular factors in close proximity. A notable example of this is the co-aggregation of different length-variants and post-translationally modified variants of the Alzheimer's disease-associated A\(\beta\) peptide [15; 16]. Several of these variants occur _in vivo_ at non-negligible concentrations, and have been shown or proposed to have differing effects on both the aggregation rate and the progression of the disease [15; 16; 17; 18; 19; 20; 21; 22]. A complete understanding of Alzheimer's disease will likely require a full understanding of the ways in which these proteins interact during aggregation into fibrils. Although these co-aggregation reactions have already been studied experimentally _in vitro_[23; 24], the most popular technique for investigating simpler systems, fixed-point theory [3; 4; 5; 6], is incapable of successfully modelling them, limiting the kinetic analysis that could be performed at the time.
In recent work [25], the authors posited that the theory of approximate Lie groups, when appropriately extended, might provide a unifying theoretical basis to a wide range of singular perturbation techniques, including but not limited to the method of multiple scales, and the perturbative renormalization group of Chen, Oono and Goldenfeld (CGO RG). This hypothesis was inspired by the little-known fact that most techniques for the exact solution of differential equations (DEs) rely implicitly on the identification and exploitation of exact continuous (Lie) symmetries [26]. This raises the questions: can protein aggregation kinetics be treated in a unified way by Lie symmetry, and can Lie symmetry be used to solve the kinetics of co-aggregation of A\(\beta\) length-variants?
We provide in Appendix A an ultra-brief review of those parts of the Lie group theory of DEs that are needed to understand our results; see ref. [25] for a more detailed review. For more background on Lie group theory for DEs in general, see refs. [26; 27; 28].
We first show that the rate equations for protein self-assembly admit perturbation series only for specific initial conditions, and that as a result most standard singular perturbation techniques including CGO RG _cannot_ be applied. We develop an alternative approach, based on asymptotic Lie symmetries, for regularizing such "local perturbation" series. Using it we obtain a highly accurate approximate solution to the kinetics of co-aggregation of the key A\(\beta\) length variants A\(\beta\)42, A\(\beta\)40, and A\(\beta\)38. (To aid the reader we provide a reference table of mathematical notation in Appendix F.) We successfully fit this model to an array of published data, revealing hitherto undiscovered features of the mechanisms of co-aggregation of these peptides. Additionally, previous highly successful approximate solutions to homogeneous protein self-assembly kinetics are derivable using the same methodology, putting them on a sound mathematical footing. Our method will find immediate application in the analysis of kinetic experiments on other more complex biochemical systems involving protein aggregation in model mixtures, in vivo or in body fluids, and in the search for drugs that can inhibit critical reaction steps in this process.
## II Methods
Sec. II.1 introduces terminology and the fundamental reaction steps in protein aggregation, and introduces dimensionless parameters. We develop our new technique for the solution of protein aggregation rate equations by Lie symmetry in the remaining Methods sections; these mathematically detailed sections may be skipped by readers interested solely in the results on the co-aggregation of A\(\beta\) variants.
### Highly generalized rate equations for protein fibril formation reactions
The kinetics of amyloid fibril self-assembly in a closed _in vitro_ system can generally be modelled by developing rate equations for the fibril number concentration \(P(t)\), and the monomer concentration \(m(t)\). Since amyloid fibrils typically contain a small number of monomers per plane, but a very large number of planes per fibril, their aggregation can unsurprisingly be accurately modelled as a linear self-assembly reaction. As will become apparent, in coaggregating systems it is better to instead use \(P(t)\) to denote the concentration of fibril ends, which in homomolecular aggregation reactions is just twice the fibril number concentration. New protein fibrils form from monomer in solution through a slow primary nucleation reaction step (often mediated by third-party interfaces such as the air-water interface [9]), and subsequently elongate rapidly (Fig. 1**a**). Elongation does not create or remove fibril ends and thus only affects \(m(t)\) (decreasing it with rate proportional to \(m(t)P(t)\)). Since nucleation is much slower than elongation, the monomer lost during nucleation can be ignored and to a good approximation primary nucleation increases only \(P(t)\) (with rate proportional to \(m(t)^{n_{c}}\)).
Most amyloid-forming systems also feature reaction steps whose rates are proportional to the fibril mass concentration \(M(t)=m_{\text{tot}}-m(t)\), sometimes summarised as multiplication processes or secondary processes. Such processes induce autocatalytic amplification in filamentous self-assembly. They include fibril fragmentation (rate \(k_{-}M(t)\)) as well as secondary nucleation of new fibrils on the surface of existing fibrils (Fig. 1**a**; rate proportional to \(m(t)^{n_{2}}M(t)\)). Putting this all together, and defining \(\mu(t)=m(t)/m_{\text{tot}}\), where \(m_{\text{tot}}\) is the total concentration of protein molecules in monomers and polymeric fibrils, we have:
\[\frac{dP}{dt} =\alpha_{1}(\mu)\mu(t)^{n_{c}}+\alpha_{2}(\mu)\mu(t)^{n_{2}}(1-\mu) \tag{1a}\] \[\frac{d\mu}{dt} =-\alpha_{e}(\mu)\mu(t)P(t), \tag{1b}\]
where \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{e}\) are rates of primary nucleation, secondary processes and elongation, that may be modified by additional effects such as catalytic saturation, co-aggregation or inhibition. Defining \(\kappa=\sqrt{\alpha_{e}(1)\alpha_{2}(1)}\), these may be nondimensionalized using \(\tau=\kappa t\) and \(\Pi(t)=\alpha_{e}(1)P(t)/\kappa\), yielding:
\[\frac{d\Pi}{d\tau} =2\varepsilon\frac{\alpha_{1}(\mu)}{\alpha_{1}(1)}\mu(\tau)^{n_ {c}}+\frac{\alpha_{2}(\mu)}{\alpha_{2}(1)}\mu(\tau)^{n_{2}}(1-\mu(\tau)) \tag{2a}\] \[\frac{d\mu}{d\tau} =-\frac{\alpha_{e}(\mu)}{\alpha_{e}(1)}\mu(\tau)\Pi(\tau), \tag{2b}\]
where \(\varepsilon=\alpha_{1}(1)/2\alpha_{2}(1)\), which can be interpreted as the relative importance of primary nucleation over secondary processes. Nondimensionalization is important for revealing the underlying structure of differential equations; upon nondimensionalization, the rate equations for many different kinds of protein aggregation reactions ultimately have the form of Eqs. (2).
### Kinetics of protein aggregation cannot be solved by traditional techniques
Eq. (2) admits a perturbation series in \(\varepsilon\) only for initial conditions \(\{\mu(0)=1,\ \Pi(0)=0\}\), as only then
does the term proportional to \(1-\mu\) linearize. This can be generalized to a perturbation series in \(\varepsilon,\ \delta\) and \(p\), where \(\delta\) and \(p\) enter only the initial conditions as \(\{\mu(0)=1-\delta,\ \Pi(0)=p\}\). Pre-multiplying these parameters by perturbation indexing parameter \(s\), to be later set to \(1\), yields the series:
\[\Pi(\tau) =s\bigg{[}\epsilon(e^{\tau}-e^{-\tau})+\frac{\delta}{2}(e^{\tau}-e ^{-\tau})+\frac{p}{2}(e^{\tau}+e^{-\tau})\bigg{]}\,, \tag{3a}\] \[\mu(\tau) =1-\,s\Big{[}\epsilon(e^{\tau}+e^{-\tau}-2)\] \[+\frac{\delta}{2}(e^{\tau}+e^{-\tau})+\frac{p}{2}(e^{\tau}-e^{- \tau})\bigg{]}\,. \tag{3b}\]
Like any singular perturbation series, Eq. (3) is valid only _asymptotically_ towards the phase point corresponding to the initial conditions (Fig. 1**b**). However, a typical singular perturbation series can be solved for arbitrary initial or boundary conditions, permitting this phase point to be moved arbitrarily. Eq. (3) is unusual because its region of validity is instead fixed around \(\{\mu(0)=1,\ \Pi(0)=0\}\). We refer to such singular perturbation series, that contain "local" perturbation parameters \(\delta_{i}\) originating from the initial or boundary conditions such that the latter may be written \(C_{j}(\delta_{i})\), as "local perturbation series".1
Footnote 1: Note that a local perturbation series is not the same as a perturbation series in the independent variables, which is usually referred to as local analysis [29].
Eq. (3) can often be regularized into a globally valid solution using the self-consistent method; however, as discussed above, this fails to yield accurate solutions to more complex protein aggregation reactions, such as those involving co-aggregation or inhibition. Aside from the self-consistent method, CGO RG is the most powerful technique for regularizing singular perturbation problems, and provides a unified theoretical basis for many of the most popular singular perturbation techniques including multiple scale analysis, matched asymptotics and reductive perturbation. However, in ref. [25] we showed that the mathematical basis for CGO RG depends critically on the presence of undetermined constants of integration in the singular perturbation series. Clearly, therefore, CGO RG and related methods cannot be applied to local perturbation series like Eq. (3), since they cannot be solved for arbitrary initial or boundary conditions and thus cannot possess such constants. Instead, a new method must be developed.
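The breakdown of the local series can be checked directly by numerical integration. Below is a minimal Python sketch (ours, not from the paper) reproducing the comparison of Fig. 1: Eqs. (2) are integrated with constant rates \(\alpha_{2}=\alpha_{e}=1\), as in the figure caption, and compared against the first-order series Eq. (3b) with \(\delta=p=0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the Fig. 1 caption: n2 = 3, nc = 2, eps = 0.01,
# Pi(0) = 0, mu(0) = 1, alpha_2 = alpha_e = 1 (the ratio prefactors drop out).
n2, nc, eps = 3, 2, 0.01

def rhs(tau, y):                        # Eqs. (2) with constant rates
    Pi, mu = y
    return [2 * eps * mu**nc + mu**n2 * (1 - mu), -mu * Pi]

tau = np.linspace(0, 12, 400)
sol = solve_ivp(rhs, (0, 12), [0.0, 1.0], t_eval=tau, rtol=1e-10, atol=1e-12)
mu_series = 1 - eps * (np.exp(tau) + np.exp(-tau) - 2)   # Eq. (3b), delta = p = 0
# mu_series diverges from sol.y[1] once e^tau ~ 1/eps, as in Fig. 1b.
```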
### Exact, approximate and asymptotic Lie symmetries in protein aggregation
Although the intention is to develop a method for application to more complex instances of Eqs (2) (or their higher dimensional equivalents), we illustrate our approach with the simplest possible instance, the kinetics of pure A\(\beta\)42 aggregation at pH 8.0, in which \(\alpha_{1}=2k_{n}m_{\text{tot}}^{n_{c}},\ \alpha_{2}=2k_{2}m_{\text{tot}}^{n_{2}}\) and \(\alpha_{\text{e}}=k_{+}m_{\text{tot}}\) are \(\mu\)-independent, yielding:
\[\frac{d\Pi}{d\tau} =2\varepsilon\mu(\tau)^{n_{c}}+\mu(\tau)^{n_{2}}(1-\mu(\tau)) \tag{4a}\] \[\frac{d\mu}{d\tau} =-\mu(\tau)\Pi(\tau). \tag{4b}\]
These (and many other instances of Eqs (2)) can be integrated once analytically, and subsequently reduced to quadrature [7]. However, the integration cannot be performed analytically. So, an exact analytic solution for \(\mu\) is not possible, and Eqs (4) should not possess any non-trivial exact symmetries other than those that yield this quadrature. This can be verified explicitly by computing the symmetries with a computer algebra system (CAS). The objective of a Lie symmetry approach must therefore instead be to derive an approximate analytic solution. However, explicit computation reveals that Eqs (4) have no non-trivial approximate symmetries (Fig. 2**a**) either.
Yet, these equations have several approximate analytical solutions, implying they possess some other kind of approximate symmetry property even if they do not possess formal approximate symmetries. Given that these approximate solutions all become more accurate in the limit \(\mu\to 1\), we consider the possibility of Lie symmetries that become exact only _asymptotically_ in a given region of phase space (Fig. 2**b**). The concept of exact "asymptotic symmetries" of DEs, involving dependent and independent variables only, has been investigated in at least two prior mathematical papers [30; 31]. However, a systematic method for their computation was not established, and instead they were computed by guesswork from the DE and its exact symmetries. Hereafter
Figure 1: Demonstration that Eq. (3b) is a singular perturbative solution for the kinetics of linear protein self-assembly. **a**: Key reaction steps involved. **b**: Parameters: \(n_{2}=3\), \(n_{c}=2\), \(\varepsilon=0.01\), \(\Pi(0)=0\), \(\mu(0)=1\), and \(\alpha_{2}=\alpha_{\text{e}}=1\). After a short initial time period the first- and second-order perturbation series diverge away from the exact numerical solution to Eqs. (2).
we adopt the name "asymptotic" proposed in these papers for this class of symmetries.
Now, we propose asymptotic symmetries of _solutions_ to DEs rather than of DEs themselves, and acting on all parameters in the problem, not just the dependent and independent variables. We also propose a systematic method for their computation. If a local approximation to the solution of a DE is available (such as a local perturbation series), then exact or approximate symmetries of this local approximation will be asymptotic symmetries of the solution to the DE. Since these approximations do not contain derivatives, computation of their Lie symmetries can easily be done by hand with no need for the usual computer algebra approaches.
Asymptotic symmetries computed from a local perturbation series are generally only valid near the initial or boundary conditions \(C_{j}(0)\). They are clearly also only valid to the same order in the perturbation parameter as their parent series. (In principle, _exact_ asymptotic solution symmetries can instead be calculated if local approximations are available that become exact approaching the phase point around which they were computed.) For example, solving Eqs (4) perturbatively to first order with boundary conditions \(\{\mu(0)=1-\delta,\ \Pi(0)=\delta+O(\delta^{2})\}\), and using again indexing parameter \(s\), yields the following local perturbation series for \(\mu\):
\[\mu(\tau)=\mu^{(0)}+s\mu^{(1)}=1-s\big{[}\varepsilon(e^{\tau}+e^{-\tau}-2)+ \delta e^{\tau}\big{]}\,. \tag{5}\]
We can then seek from this a zeroth-order approximate \(\mu\to 1\) asymptotic perturbation symmetry for the exact solution to Eqs. (4), acting solely on parameters \(\varepsilon\) and \(\delta\):
\[\mathbf{X}^{(0)}_{\varepsilon,\delta}=\xi^{(0)}_{\varepsilon}\,\frac{\partial}{ \partial\varepsilon}+\xi^{(0)}_{\delta}\,\frac{\partial}{\partial\delta} \tag{6}\]
Solving \(\mathbf{X}^{(0)}_{\varepsilon,\delta}\left(\mu^{(0)}+s\mu^{(1)}\right)=0\) yields the zeroth-order symmetry:
\[\mathbf{X}^{(0)}_{\varepsilon,\delta}=\xi^{(0)}\left(e^{\tau}\frac{\partial}{ \partial\varepsilon}-(e^{\tau}+e^{-\tau}-2)\frac{\partial}{\partial\delta} \right), \tag{7}\]
where \(\xi^{(0)}\) is an arbitrary function of \(\varepsilon\) and \(\delta\).
Finally, we propose that asymptotic perturbation symmetries may often remain approximately valid throughout the entire phase space of interest. If so, they may in principle be employed to find global approximate solutions. To evaluate whether a given such symmetry is indeed globally valid requires an examination of the bifurcations of the DEs. For protein aggregation the phase space structure is simple, featuring only an attractive fixed point at \(\mu=0\). So, the global dynamics are partitioned into two asymptotic limits: \(\mu\to 1\) and \(\mu\to 0\) (Fig. 2**c**-**d**). The boundary between these regions of phase space is marked by the vanishing of the rate of the secondary process, and the resultant plateauing of the fibril number concentration.
\(\mu\to 1\) asymptotic perturbation symmetries are then approximately valid globally under two circumstances. First, if the parameters transformed by the symmetry in response to an increase in the perturbation parameters drop out of the \(\mu\to 0\) kinetics at leading order. For example, Eqs. (4) lose memory of the initial conditions \(\{\mu(0)=1-\delta,\ \Pi(0)=\delta+O(\delta^{2})\}\) in the \(\mu\to 0\) asymptotic region, becoming independent of \(\delta\). Thus, although the \(\mu\to 1\) asymptotic symmetry Eq. (7) transforms \(\delta\) incorrectly here, it does not matter because the solution no longer depends on \(\delta\) in this limit, and so Eq. (7) is actually universally valid to zeroth order in \(\varepsilon\). The second circumstance is if the boundary between asymptotic regions is sufficiently close to \(\mu=0\), the second region may be neglected. We will see examples of this later.
### Regularizing local perturbation series using asymptotic symmetries
Globally valid perturbation symmetries can in principle be used to regularize a singular perturbation problem by transforming a known special solution. In Appendix B
Figure 2: Illustration of asymptotic symmetries, and asymptotic regions in the kinetics of linear protein self-assembly. **a**: Dodecagons are only approximately invariant under infinitesimal rotational transformations (to \(O(\varepsilon)\), where \(\varepsilon\sim z\cos\theta\), with \(\theta\) the external angle and \(z\) the side length), which are therefore an approximate Lie symmetry. **b**: \(f=x^{2}+\varepsilon\sin(\pi y)x^{5}\) is asymptotically invariant to an arbitrary \(y\)-translation in the limit \(x\to 0\); such a translation is thus an asymptotic Lie symmetry. **c**: Numerical solution for fibril end concentration \(P\) (rate equation Eq. (1a)); parameters are the same as in Fig. 1. **d**: Numerical solution for normalized fibril mass concentration \(1-\mu\) (rate equation Eq. (1b), black). The \(\mu\to 0\) asymptotic regime, dominated by simple exponential decay of \(\mu\), is entered once the fibril number concentration begins to plateau. The local perturbation series (red, Eq. (3b)) is no longer valid in this regime.
we compute such a solution, \(\mu_{0}\), for Eqs. (4) with boundary conditions \(\{\mu(0)=1-\delta,\ \Pi(0)=\delta\}\) when \(\varepsilon=0\) (Eq. (131)). In the limit that \(\delta\ll 1\) this reduces to:
\[\mu_{0}(\tau,c_{1},\delta) =\frac{1}{(1+\delta e^{\tau}/c_{1})^{c_{1}}}, \tag{8a}\] \[c_{1} =\frac{3}{2n_{2}+1}. \tag{8b}\]
Since \(c_{1}\) does not enter into the \(\mu\to 1\) asymptotic dynamics Eq. (5), a global solution to Eqs. (4) for \(\delta=0\) can be obtained simply by integrating the globally valid asymptotic perturbation symmetry Eq. (7) from \((0,\delta)\) to \((\varepsilon,0)\):
\[\frac{d\varepsilon}{ds} =e^{\tau},\ \frac{d\delta}{ds}=-(e^{\tau}+e^{-\tau}-2) \tag{9a}\] \[\varepsilon =se^{\tau},\ -\delta=-s(e^{\tau}+e^{-\tau}-2)\] (9b) \[\therefore\delta \rightarrow\varepsilon(e^{\tau}+e^{-\tau}-2)/e^{\tau}. \tag{9c}\]
Replacing \(\delta\) in Eq. (8) accordingly yields:
\[\mu(\tau)=\frac{1}{\left(1+\frac{\varepsilon}{c_{1}}(e^{\tau}+e^{-\tau}-2) \right)^{c_{1}}}, \tag{10}\]
with \(c_{1}\) defined as before.
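For concreteness, a short numerical sketch (ours, not from the paper) of Eq. (10): to first order in \(\varepsilon\) it reproduces the local series Eq. (3b) with \(\delta=p=0\), while decaying to zero at late times as required.

```python
import numpy as np

n2, eps = 3, 0.01
c1 = 3 / (2 * n2 + 1)

def mu_analytic(tau):                   # Eq. (10)
    return (1 + (eps / c1) * (np.exp(tau) + np.exp(-tau) - 2)) ** (-c1)

tau = np.linspace(0, 12, 400)
series = 1 - eps * (np.exp(tau) + np.exp(-tau) - 2)      # Eq. (3b), delta = p = 0
# mu_analytic tracks the series while e^tau << 1/eps, then decays
# smoothly to the mu -> 0 fixed point instead of diverging.
```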
The same special solution is often available for the more complicated Eqs. (2) with arbitrary initial conditions when \(\varepsilon=0\) and \(p=p_{0}\) (with \(p_{0}\) a function of \(\delta\) defined in Appendix B). This requires that \(\alpha_{1},\ \alpha_{2}\) and \(\alpha_{e}\) depend on a parameter \(d\) in such a way that \(d=0\) reduces them to finite constants. An asymptotic perturbation symmetry connecting \((c_{1},\delta)\) with \((d,\varepsilon,p)\) may then be used to transform the special solution Eq. (8) to a general solution to Eqs. (2).
Because this kind of symmetry does not transform the dependent and independent variables, a shortcut in this procedure may be taken: it is not necessary to explicitly compute the symmetry and its finite transformations. To see why, suppose such a symmetry connecting \((c_{1},\delta)\) with \((d,\varepsilon)\) has been found. From these, finite transformations taking \((\tilde{c}_{1},\tilde{\delta},0,0)\) to \((c_{1},\delta,d,\varepsilon)\) can be calculated. Whatever they may be, they can always be expressed in inverse form as \(\tilde{\delta}=g_{\delta}(\tau,c_{1},\delta,d,\varepsilon)\), \(\tilde{c}_{1}=g_{c_{1}}(\tau,c_{1},\delta,d,\varepsilon)\) where a tilde over a parameter signifies it is at its pre-transformation value. Our global solution is then \(\mu_{0}(\tau,\tilde{c}_{1},\tilde{\delta})\). Now, since transforming one asymptotic expansion must yield another, \(g_{\delta}\) and \(g_{c_{1}}\) must satisfy:
\[\mu_{0,\text{asy}}(\tau,\tilde{c}_{1},\tilde{\delta})\equiv\mu_{\text{asy}}( \tau,c_{1},\delta,d,\varepsilon), \tag{11}\]
where \(\mu_{0,\text{asy}}\) is the asymptotic expansion of the special solution \(\mu_{0}\) in this region of phase space, and \(\mu_{\text{asy}}(\tau,c_{1},\delta,d,\varepsilon)\) is the asymptotic limit of the full dynamics in the same region (e.g. Eq. (3), or a higher-order series). So, the finite transformations can be identified by inspection of \(\mu_{\text{asy}}\); a globally valid solution is then obtained by substituting these transformations into Eq. (8).
## III Results
### Modelling A\(\beta\)42 aggregation in the presence of A\(\beta\)40 and A\(\beta\)38
Monitoring by ThT fluorescence the co-aggregation of A\(\beta\)42 with A\(\beta\)40, A\(\beta\)37 or A\(\beta\)38 in 20 mM NaP and 0.2 mM EDTA at pH 7.4 in recent studies [23; 24] revealed that two separate sigmoidal transitions occur in the transformation of monomeric to fibrillar protein (Fig. 3**a**). Using separate stable isotopes in A\(\beta\)40 and A\(\beta\)42, and identification using mass spectrometry, the first transition was established to correspond to the formation of fibrillar A\(\beta\)42, and the second to the formation of fibrils consisting exclusively of the other peptide, implying no cross-elongation reaction steps occur [23]. Seeding experiments were furthermore used to rule out cross-secondary nucleation. Since the second sigmoid occurs earlier than that observed during the aggregation of its associated shorter peptide in isolation, but the first does not, it was deduced that cross-primary nucleation occurs (Fig. 3**c**), at a rate much faster than that of primary nucleation of the shorter peptide, but much slower than that of A\(\beta\)42 primary nucleation. It was also found that the shorter peptide slightly inhibits the aggregation of A\(\beta\)42 (Fig. 3**b**), but without bespoke kinetic models of inhibition it was not possible to ascertain whether secondary nucleation or elongation was inhibited, and therefore the nature of the co-aggregation interaction responsible [24].
To identify how the inhibition of A\(\beta\)42 aggregation by A\(\beta\)xx monomers occurs, we first build explicit kinetic models of A\(\beta\)42 aggregation in which the A\(\beta\)xx monomer inhibits one of the reaction steps. We use the subscripts \(a\) and \(b\) to signify concentrations of species consisting of A\(\beta\)42 and A\(\beta\)xx, respectively, and brackets (\(a\)) and (\(b\)) to denote the corresponding homomolecular rate constants. Cross-primary nucleation can be neglected since it is much slower than A\(\beta\)42 primary nucleation.
As in refs. [32; 33], we make the simplifying assumption that the binding of A\(\beta\)xx to A\(\beta\)42 can be modelled as pre-equilibrium, which is reasonable if the binding target has a low concentration, as is expected given the catalytic nature of the steps in protein aggregation [9]. We may then model inhibition of primary nucleation and elongation using perturbed rates [33]:
\[\alpha_{1,a} =2k_{n}(a)m_{\text{tot},a}^{n_{e}(a)}(1+m_{\text{tot},b}/K_{I,P})^{-1}, \tag{12}\] \[\alpha_{e,a} =k_{+}(a)m_{\text{tot},a}(1+m_{\text{tot},b}/K_{I,E})^{-1}, \tag{13}\]
where \(K_{I,P}\) and \(K_{I,E}\) are equilibrium constants for dissociation of type-\(b\) monomer from the catalytic sites for type-\(a\) fibril primary nucleation and elongation, respectively.
Modelling inhibition of secondary nucleation is more complicated, because A\(\beta\)42 secondary nucleation is at least partly saturated under the reaction conditions (meaning that monomeric protein binds faster to the fibril surface than surface-bound monomer can convert to
new fibrils [6]). Using \(\mu_{a}(t)=m_{a}(t)/m_{\text{tot},a}\), the rate of secondary nucleation is found (see Appendix C) to be:
\[\alpha_{2,a}(\mu_{a})=\frac{2k_{2}(a)m_{\text{tot},a}^{n_{2}(a)}\mu_{a}^{n_{2}(a )}}{1+\left(\mu_{a}/\mathcal{K}_{S}(a)\right)^{n_{2}(a)}+1/\mathcal{K}_{S}(ba)}, \tag{14}\]
where \(\mathcal{K}_{S}(a)=K_{S}(a)/m_{\text{tot},a}\) and \(\mathcal{K}_{S}(ba)=K_{S}(ba)/m_{\text{tot},b}\) are the dimensionless dissociation constants for types \(a\) and \(b\) monomers from type-\(a\) fibrils.
Thus, the dimensionless rate equations for protein aggregation Eqs. (2) become, using \(\Pi_{a}(t)=2k^{\prime}_{+}(a)P_{a}(t)/\kappa_{a}\) and \(\tau_{a}=\kappa_{a}t\), where \(\kappa_{a}=\sqrt{\alpha_{e,a}\alpha_{2,a}(1)}\):
\[\frac{d\Pi_{a}}{d\tau_{a}}=2\varepsilon_{a}\mu_{a}^{n_{e}(a)}+\] \[\mu_{a}^{n_{2}(a)}(1-\mu_{a})\frac{1+1/\mathcal{K}_{S}(a)^{n_{2} (a)}+1/\mathcal{K}_{S}(ba)}{1+\mu_{a}^{n_{2}(a)}/\mathcal{K}_{S}(a)^{n_{2}(a) }+1/\mathcal{K}_{S}(ba)}, \tag{15a}\] \[\frac{d\mu_{a}}{d\tau_{a}}=-\mu_{a}(\tau_{a})\Pi_{a}(\tau_{a}), \tag{15b}\]
where \(\varepsilon_{a}=\alpha_{1,a}/2\alpha_{2,a}(1)\).
### Solving the kinetics of A\(\beta\) coaggregation
One of the two conditions for applicability of the "asymptotic symmetry" method introduced in this paper for solving differential equations is that there exists a globally valid special solution for a certain choice of parameter values. Since Eqs. (15) are of the same structure as the generic protein aggregation rate equations (Eqs (2)), they must also possess the same special solution, i.e. Eq. (8). This is indeed a valid solution, for the parameter choice \(\varepsilon_{a}=\mathcal{K}_{S}(a)^{-1}=0\) (identifying \(\tau=\tau_{a}\) and \(n_{2}=n_{2}(a)\)).
The other condition for Eqs. (15) to be solvable by this new method has to do with its Lie symmetry basis (namely, that its \(\mu\to 1\) asymptotic symmetry be approximately valid globally); we demonstrate in Appendix D that this condition is indeed satisfied.
As discussed in Sec. II.4, although this method depends on Lie group theory, its actual implementation can be performed in a way that requires knowledge only of standard perturbation theory. We will take this approach here. For Eqs. (15), this Lie theory-independent implementation amounts to replacing \(\delta\) and \(c_{1}\) in Eq. (8) with functions \(\tilde{\delta}\) and \(\tilde{c}_{1}\) that ensure its Taylor series in \(\delta\) matches the perturbation series of Eqs. (15) in \(\varepsilon\).
Since \(c_{1}\) does not enter the Taylor series of Eq. (8) to first order, we should perform this matching to second order to ensure optimal accuracy. Calculating the latter by expanding \(\mu_{a}=1+\varepsilon\mu_{a}^{(1)}+\varepsilon^{2}\mu_{a}^{(2)}\) and \(\Pi_{a}=\varepsilon\Pi_{a}^{(1)}+\varepsilon^{2}\Pi_{a}^{(2)}\), and substituting these into Eqs. (15), yields:
\[\mu_{a}(\tau_{a}) =1-\varepsilon_{a}(e^{\tau_{a}}+e^{-\tau_{a}}-2)+\frac{2+n^{ \prime}_{2}(a)}{3}\varepsilon_{a}^{2}e^{2\tau_{a}}+\mathcal{R}, \tag{16}\] \[n^{\prime}_{2}(a) =n_{2}(a)\frac{\mathcal{K}_{S}(a)^{n_{2}(a)}+\mathcal{K}_{S}(a)^ {n_{2}(a)}/\mathcal{K}_{S}(ba)}{1+\mathcal{K}_{S}(a)^{n_{2}(a)}+\mathcal{K}_{ S}(a)^{n_{2}(a)}/\mathcal{K}_{S}(ba)}, \tag{17}\]
where \(\mathcal{R}\) denotes terms of \(O(\varepsilon_{a}^{3})\) or of \(\varepsilon_{a}^{2}\) that vanish in comparison to the \(e^{2\tau_{a}}\) term in the limit \(e^{\tau_{a}}\gg 1\).
Since this condition determines \(\tilde{\delta}\) and \(\tilde{c}_{1}\) only to \(O(\varepsilon^{2})\), we are free to require also that the asymptotic expansions in \(t\) match, for additional accuracy. Doing so yields \(c_{1}=3/(2n^{\prime}_{2}(a)+1)\) and \(c_{1}\left[(1-\delta)^{-1/c_{1}}-1\right]e^{\tau_{a}}=\varepsilon_{a}(e^{\tau_ {a}}+e^{-\tau_{a}}-2)\). The highly accurate general solution for \(M_{a}(t)\) (see Fig. 4**a**) is then:
\[\frac{M_{a}(t)}{m_{a}(0)} =1-\left[1+\frac{\varepsilon_{a}}{c_{a}}(e^{\kappa_{a}t}+e^{- \kappa_{a}t}-2)\right]^{-c_{a}}, \tag{18a}\] \[c_{a} =\frac{3}{2n^{\prime}_{2}(a)+1}. \tag{18b}\]
As \(\mathcal{K}_{S}(a)\) and \(\mathcal{K}_{S}(ba)\to\infty\) (i.e. when initial monomer concentration is far below the saturation concentration), single-step kinetics are recovered as required.
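For readers wishing to reproduce the fits below, Eqs. (17)-(18) are straightforward to transcribe into code. The following Python sketch is our own transcription (function and parameter names are ours), with the \(\mathcal{K}_S\) arguments denoting the dimensionless dissociation constants:

```python
import numpy as np

def n2_eff(n2, KS_a, KS_ba):
    """Effective reaction order n2'(a), Eq. (17); KS_a and KS_ba are the
    dimensionless dissociation constants (K_S divided by m_tot)."""
    s = KS_a ** n2
    return n2 * (s + s / KS_ba) / (1.0 + s + s / KS_ba)

def M_a(t, kappa_a, eps_a, n2, KS_a, KS_ba, m_a0=1.0):
    """Fibril mass concentration of the type-a peptide, Eqs. (18a)-(18b)."""
    c_a = 3.0 / (2.0 * n2_eff(n2, KS_a, KS_ba) + 1.0)
    x = (eps_a / c_a) * (np.exp(kappa_a * t) + np.exp(-kappa_a * t) - 2.0)
    return m_a0 * (1.0 - (1.0 + x) ** (-c_a))
```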
Figure 3: A\(\beta\)40 monomer inhibits A\(\beta\)42 fibril formation. The initial A\(\beta\)42 monomer concentration is always 3 \(\upmu\)M for the data presented here. The initial A\(\beta\)40 monomer concentrations are 1 (blue), 3 (green) and 5 (red) \(\upmu\)M. **a**: A\(\beta\)40 and A\(\beta\)42 co-aggregation monitored by ThT fluorescence shows separate sigmoids, with the first corresponding to pure A\(\beta\)42 fibril formation, and the second to pure A\(\beta\)40 fibril formation. (A\(\beta\)38 and A\(\beta\)42 co-aggregation data are qualitatively similar.) **b**: Truncating the data after the first sigmoid and normalizing reveals that monomeric A\(\beta\)40 has a clear inhibitory effect on A\(\beta\)42 fibril formation. In addition to the above A\(\beta\)40 monomer concentrations, included in black is a time series for 3 \(\upmu\)M A\(\beta\)42 aggregation in isolation (i.e. 0 \(\upmu\)M A\(\beta\)40). **c**: Earlier studies demonstrate that monomeric A\(\beta\)42 and A\(\beta\)xx cross-react solely during the primary nucleation reaction step.
### A\(\beta\)40 and A\(\beta\)38 monomers bind to A\(\beta\)42 fibril surfaces, inhibiting secondary nucleation
We tested Eq. (18) against the data for A\(\beta\)42-A\(\beta\)40 co-aggregation and for A\(\beta\)42-A\(\beta\)38 co-aggregation, both truncated after the first sigmoid. It is known that at pH 7.4 secondary nucleation of A\(\beta\)42 is saturated at all but the lowest monomer concentrations, with a dissociation constant of 1.1 \(\mu\)M [34]. To verify that this value applies here, and that \(n_{c}=n_{2}=2\), we fit, in the SI, data from homogeneous A\(\beta\)42 aggregation experiments conducted in the studies from which the co-aggregation data originate ([23; 24]). Since the other peptide is almost entirely unaggregated during aggregation of A\(\beta\)42 in the co-aggregation experiments, its concentration is well-approximated as constant.
Allowing inhibition only of primary nucleation by setting \(K_{I,E}^{-1}=\mathcal{K}_{S}(ba)^{-1}=0\) and fitting \(K_{I,P}\) (Fig. 5**a**), or only of elongation by setting \(K_{I,P}^{-1}=\mathcal{K}_{S}(ba)^{-1}=0\) and fitting \(K_{I,E}\) (Fig. 5**b**), yielded misfits. However, allowing inhibition only of secondary nucleation by setting \(K_{I,P}^{-1}=K_{I,E}^{-1}=0\) and fitting \(\mathcal{K}_{S}(ba)\) yielded good fits in both systems (Fig. 5**c**-**d**), providing strong evidence that A\(\beta\)xx monomers inhibit solely A\(\beta\)42 secondary nucleation, by binding to the surface of A\(\beta\)42 fibrils.
The dissociation constant \(K_{S}(ba)\) was found to be 0.39 \(\mu\)M for A\(\beta\)40 monomer on A\(\beta\)42 fibrils, and 0.86 \(\mu\)M for A\(\beta\)38 monomer on A\(\beta\)42 fibrils. A straightforward comparison of dissociation constants thus suggests that A\(\beta\)40 monomer has the highest affinity for A\(\beta\)42 fibril surfaces, followed by A\(\beta\)38 monomer, with A\(\beta\)42 monomer perhaps surprisingly in last place. However, the difference between the latter two may not be large enough to be significant relative to experimental error.
Eq. (18) reveals that the kinetics depend on secondary nucleation of A\(\beta\)42 only via its initial rate:
\[\alpha_{2,a}(1)=\frac{2k_{2}(a)m_{\text{tot},a}^{n_{2}(a)}}{1+\left(m_{\text{ tot},a}/K_{S}(a)\right)^{n_{2}(a)}+m_{\text{tot},b}/K_{S}(ba)}. \tag{19}\]
In the absence of saturation, \(m_{\text{tot},a}\ll K_{S}(a)\) and consequently \(K_{S}(ba)\) is also the concentration of type-\(b\) monomer required to inhibit type-\(a\) secondary nucleation by 50% (by binding 50% of catalytic sites). However, saturation breaks this equivalence. Now, inhibition of secondary nucleation depends not on the absolute affinity of type-\(b\) monomer for type-\(a\) catalytic sites, but on its affinity _relative_ to that of type-\(a\) monomers. When \(m_{\text{tot},a}\gg K_{S}(a)\), the initial secondary nucleation rate becomes:
\[\alpha_{2,a}(1)\rightarrow\frac{k_{2}(a)K_{S}(a)^{n_{2}(a)}}{1+\left(K_{S}(a )/m_{\text{tot},a}\right)^{n_{2}(a)}m_{\text{tot},b}/K_{S}(ba)}. \tag{20}\]
The 50% inhibition concentration is now \(m_{\text{tot},b,50\%}=K_{S}(ba)\left(m_{\text{tot},a}/K_{S}(a)\right)^{n_{2}(a)}\). So, the more saturated the kinetics, the higher the inhibitor concentration required to achieve the same inhibitory effect.
Since in the solution conditions used here (20 mM NaP, 0.2 mM EDTA, pH 7.4) the A\(\beta\)42 monomer concentration was 3 \(\upmu\)M, its secondary nucleation is fully saturated and Eq. (20) is a good model of its secondary nucleation in the presence of inhibitors. At the 3 \(\upmu\)M of A\(\beta\)42 monomer used in Fig. 5, the cross-dissociation constants \(K_{S}(40,42)\) and \(K_{S}(38,42)\) correspond to 50% inhibition concentrations of 2.9 and 6.4 \(\upmu\)M, respectively.
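As a quick arithmetic check, the quoted 50% inhibition concentrations follow directly from the formula above; a minimal Python sketch (function and variable names are ours, values taken from the text):

```python
def half_inhibition_conc(K_S_ba, m_tot_a, K_S_a, n2_a):
    """50% inhibition concentration in the saturated regime:
    m_b_50 = K_S(ba) * (m_tot_a / K_S(a))**n2(a)."""
    return K_S_ba * (m_tot_a / K_S_a)**n2_a

# K_S(a) = 1.1 uM, m_tot_a = 3 uM and n2(a) = 2, as stated above.
for label, K_S_ba in (("Abeta40 on Abeta42 fibrils", 0.39),
                      ("Abeta38 on Abeta42 fibrils", 0.86)):
    print(label, round(half_inhibition_conc(K_S_ba, 3.0, 1.1, 2), 1), "uM")
# -> 2.9 uM and 6.4 uM, matching the quoted values.
```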
Figure 4: Analytical solutions to the kinetics of co-aggregation (red, dashed) are highly accurate, tracking the numerical solutions to the rate equations (black) almost exactly. Rate constants are those subsequently determined by fitting experimental data for A\(\beta\)40-A\(\beta\)42 co-aggregation (see Table 2). Numerical solutions in the absence of cross-nucleation (gray) show a clear difference. **a**: The analytical solution to the kinetics of self-assembly of A\(\beta\)42 fibrils in the presence of A\(\beta\)40 monomers (Eqs. (18)) closely tracks the numerical solution to Eqs. (15). **b**: Kinetics of self-assembly of A\(\beta\)40 fibrils (rate equations Eqs. (22a)) are similarly well-described by the analytical solution Eqs. (25). **c**: Kinetics of self-assembly of all fibrils together are consequently modelled well by the combined solution Eq. (26).
Modelling co-aggregation kinetics of A\(\beta\)42, A\(\beta\)40 and A\(\beta\)38 over the full reaction time course
Having determined the kinetics of A\(\beta\)42 fibril formation in the presence of A\(\beta\)xx monomer, we now write down the rates governing the kinetics of A\(\beta\)xx fibril formation during an A\(\beta\)xx-A\(\beta\)42 co-aggregation reaction:
\[\alpha_{1,b} =2k_{n}(b)m_{\text{tot},b}^{n_{c}(b)} \tag{21a}\] \[\alpha_{1,ba} =2k_{n}(ba)m_{\text{tot},a}^{n_{c}(ba)}m_{\text{tot},b}^{n_{c}(bb)}\] (21b) \[\alpha_{e,b} =k_{+}(b)m_{\text{tot},b}\] (21c) \[\alpha_{2,b} =\frac{2k_{2}(b)m_{\text{tot},b}^{n_{2}(b)}}{1+(m_{b}(t)/K_{S}(b) )^{n_{2}(b)}}, \tag{21d}\]
Here, \(\alpha_{1,ba}\) is the rate of production of new type-\(b\) fibril ends via cross-primary nucleation, and secondary nucleation can saturate at the monomer concentrations investigated. The A\(\beta\)40/A\(\beta\)38 fibril proliferation rate via secondary nucleation is, as usual, \(\kappa_{b}=\sqrt{\alpha_{e,b}\alpha_{2,b}}\).
Using \(\tau_{b}=\kappa_{b}t\) and \(\mu_{b}=m_{b}/m_{\text{tot},b}\), the dimensionless rate equations governing aggregation of A\(\beta\)40/A\(\beta\)38 are given by combining Eqs. (21) with Eqs. (2), yielding:
\[\frac{d\Pi_{b}}{d\tau_{b}} =2\varepsilon_{b}\mu_{b}(\tau_{b})^{n_{c}(b)}+2\varepsilon_{ba} \mu_{a}(\tau_{a})^{n_{c}(ba)}\mu_{b}(\tau_{b})^{n_{c}(bb)}\] \[+\frac{1+\mathcal{K}_{S}(b)^{n_{2}(b)}}{\mu_{b}(\tau_{b})^{n_{2}( b)}+\mathcal{K}_{S}(b)^{n_{2}(b)}}\mu_{b}(\tau_{b})^{n_{2}(b)}\big{(}1-\mu_{b}( \tau_{b})\big{)}, \tag{22a}\] \[\frac{d\mu_{b}}{d\tau_{b}} =-\mu_{b}(\tau_{b})\Pi_{b}(\tau_{b}), \tag{22b}\]
where
\[\varepsilon_{ba}=\frac{\alpha_{1,ba}}{2\alpha_{2,b}(1)},\quad\varepsilon_{b}= \frac{\alpha_{1,b}}{2\alpha_{2,b}(1)}. \tag{23}\]
Once more, Eq. (8) is a special solution to Eq. (22a) with boundary conditions \(\mu_{b}(0)=1-\delta,\ \Pi_{b}(0)=p_{0}(\delta)\) when \(\{\varepsilon,\ \mathcal{K}_{S}(b)^{-1}\}=0\), defining \(\varepsilon=\varepsilon_{b}+\varepsilon_{ba}\). Because it is also of the same form as Eq. (15), we know that the method of solution by asymptotic symmetries will again apply. Since type-\(a\) aggregation is complete before type-\(b\), Eq. (18) may be substituted for \(m_{a}(t)\). Expanding \(\mu_{b}=\mu_{b}^{(0)}+\varepsilon\mu_{b}^{(1)}+\varepsilon^{2}\mu_{b}^{(2)}+O(\varepsilon^{3})\), and \(\Pi_{b}=\varepsilon\Pi_{b}^{(1)}+\varepsilon^{2}\Pi_{b}^{(2)}\), the perturbation series to second order can then be calculated to be (see Appendix E):
\[\mu_{b}(\tau_{b})=1-(\varepsilon_{b}+\varepsilon_{ba}f)\left(e^{ \tau_{b}}+e^{-\tau_{b}}-2\right)+\mathcal{R}\] \[+\frac{1}{3}\left(2+n_{2}(b)\frac{\mathcal{K}_{S}(b)^{n_{2}(b)}} {1+\mathcal{K}_{S}(b)^{n_{2}(b)}}\right)(\varepsilon_{b}+\varepsilon_{ba}f)^{ 2}\,e^{2\tau_{b}}, \tag{24}\]
Figure 5: Determining the origin of the inhibitory effect of A\(\beta\)40 monomers (top) and A\(\beta\)38 monomers (bottom) on aggregation of A\(\beta\)42 (3 \(\upmu\)M). Initial A\(\beta\)40 monomer concentrations are 0 (black), 1 (blue), 3 (green) and 5 (red) \(\upmu\)M. Initial A\(\beta\)38 monomer concentrations are 0 (black), 2.5 (blue), 5 (green), 10 (yellow) and 15 (red) \(\upmu\)M. **a**: Misfit of model in which A\(\beta\)xx inhibits primary nucleation (Eqs. (18) with \(K_{I,E}^{-1}=\mathcal{K}_{S}(ba)^{-1}=0\)). **b**: Misfit of model in which A\(\beta\)xx inhibits elongation (Eqs. (18) with \(K_{I,P}^{-1}=\mathcal{K}_{S}(ba)^{-1}=0\)). **c**: Fit of model in which A\(\beta\)xx inhibits secondary nucleation (Eqs. (18) with \(K_{I,E}^{-1}=K_{I,P}^{-1}=0\)). Fitted parameter values are summarized in Tables 2-3. **d**: Scaling time by the inhibited secondary nucleation rates collapses the data onto a single curve, confirming that secondary nucleation is the process inhibited.
where \(\mathcal{R}\) consists of either terms of \(\mathcal{O}(\varepsilon^{3})\), or terms that vanish in comparison to the dominant terms at each order in the limit \(e^{\tau}\gg 1\). Since before this limit all terms of \(\mathcal{O}(\varepsilon)\) and above can be neglected anyway, we may ignore \(\mathcal{R}\) for the time being.
Once more, the method can be implemented simply by replacing \(n_{2}\) and \(\delta\) in the special solution Eq. (8) with functions \(n_{2}(\varepsilon,\mathcal{K}_{S}(b)^{-1})\) and \(\delta(\varepsilon,\mathcal{K}_{S}(b)^{-1})\) that ensure its asymptotic expansion in \(\delta\) matches the above perturbative expansion to second order. Again additionally requiring that the early-time kinetics are recovered, this ultimately yields the following highly accurate approximate solution (Fig. 4**b**):
\[\frac{M_{b}(t)}{m_{b}(0)}=1-\left[1+\frac{\varepsilon_{b}+\varepsilon_{ba}f}{c _{b}}\left(e^{\kappa_{b}t}+e^{-\kappa_{b}t}-2\right)\right]^{-c_{b}} \tag{25a}\] \[c_{b}=\frac{3}{2n_{2}^{\prime}(b)+1},\quad n_{2}^{\prime}(b)=n_{2}(b)\frac{ \mathcal{K}_{S}(b)^{n_{2}(b)}}{1+\mathcal{K}_{S}(b)^{n_{2}(b)}}. \tag{25b}\]
We see immediately that the only effect of cross-nucleation is the addition of \(\varepsilon_{ba}f\) to \(\varepsilon_{b}\) in the solution, increasing the effective nucleation rate and translating the kinetic curve for type-\(b\) monomer to the left. Thus \(f<1\) accounts for the reduction in cross-nucleation rate caused by the depletion of A\(\beta\)42 monomer.
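For illustration, Eq. (25) is straightforward to evaluate numerically; the following minimal Python sketch uses illustrative placeholder parameters, not fitted values:

```python
import numpy as np

def M_b_frac(t, kappa_b, eps_b, eps_ba, f, n2_b, K_S_b):
    """Fibril mass fraction M_b(t)/m_b(0) from Eq. (25a).
    K_S_b is the dimensionless saturation constant entering Eq. (25b)."""
    n2_eff = n2_b * K_S_b**n2_b / (1.0 + K_S_b**n2_b)            # Eq. (25b)
    c_b = 3.0 / (2.0 * n2_eff + 1.0)
    tau = kappa_b * t
    core = 1.0 + (eps_b + eps_ba * f) / c_b * (np.exp(tau) + np.exp(-tau) - 2.0)
    return 1.0 - core**(-c_b)

# The curve rises sigmoidally from 0 to 1; increasing eps_ba * f shifts it
# to earlier times, as noted above.
t = np.linspace(0.0, 12.0, 7)
print(M_b_frac(t, kappa_b=1.0, eps_b=1e-4, eps_ba=5e-4, f=0.8, n2_b=2, K_S_b=1.0))
```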
The kinetics of the overall mixed system as reported upon by ThT fluorescence is finally given by:
\[M_{\text{ThT}}(t)=\phi_{a}M_{a}+(1-\phi_{a})M_{b}, \tag{26}\]
where \(\phi_{a}\) is related to the fluorescence per unit mass concentration of types \(a\) and \(b\) fibrils, \(\sigma_{a}\) and \(\sigma_{b}\), by:
\[\phi_{a}=\frac{\sigma_{a}}{\sigma_{a}m_{a}(0)+\sigma_{b}m_{b}(0)}. \tag{27}\]
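Implemented as written, Eqs. (26)-(27) amount to the following minimal sketch (argument names are ours):

```python
def tht_signal(M_a, M_b, m_a0, m_b0, sigma_a, sigma_b):
    """Combined ThT readout of the mixed system, following Eqs. (26)-(27)."""
    phi_a = sigma_a / (sigma_a * m_a0 + sigma_b * m_b0)   # Eq. (27)
    return phi_a * M_a + (1.0 - phi_a) * M_b              # Eq. (26)
```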
Testing this analytical solution against a numerical solution of the rate equations reveals it to be highly accurate; it thus captures the salient physico-chemical features of this co-aggregating system (Fig. 4**c**).
Due to inter-repeat variability in fluorescence coefficients, which are highly sensitive to environmental conditions compared to other parameters in the model, we determined \(\sigma_{a}\) individually for each repeat by inspection of the first normalized plateau height. We then fitted this model to the full double-sigmoidal A\(\beta\)42-A\(\beta\)40 dataset, using the parameters previously determined for \(k_{n}(a)k_{+}(a)\), \(k_{n}(a)k_{2}(a)\), \(n_{c}(a)\), \(n_{2}(a)\), \(n_{c}(b)\), \(n_{2}(b)\), \(K_{S}(a)\), and \(K_{S}(ba)\). Imposing \(k_{n}(ab)=0\) (Fig. 6**a**) yielded a clear misfit, verifying the importance of cross-nucleation. Allowing \(k_{n}(ab)\neq 0\) then yielded good fits to both the full A\(\beta\)42-A\(\beta\)40 dataset (Fig. 6**b**) and the full A\(\beta\)42-A\(\beta\)38 dataset (Fig. 6**c**), and the fitted rates of cross-nucleation confirmed the predictions of refs. [23; 24] that cross-nucleation produces new A\(\beta\)xx fibrils much faster than self-nucleation of A\(\beta\)xx.
## IV Discussion
Our technique may be applied to the majority of plausible rate equations for protein aggregation, provided that the kinetics of the system of interest are sigmoidal in character, and it provides extremely accurate results for A\(\beta\)xx co-aggregation. More generally, it can likely be applied to any DEs whose solutions are sigmoidal in character, and it may in future provide a systematic way to use analytical basis functions to capture the key features of the solutions to highly nonlinear DEs.
Our formalism also allows various earlier solutions of single-peptide systems to be put on a more rigorous footing. First, the solution presented in ref. [8] is revealed as Eq. (18) with \(\mathcal{K}_{S}(a)^{-1}=\mathcal{K}_{S}(ba)^{-1}=0\). Its derivation was claimed to be via CGO RG; however, in reality it was derived implicitly using its \(\mu\to 1\) asymptotic symmetry
Figure 6: Fitting the model (Eq. (26)) to the full datasets confirms the importance of cross-nucleation. **a**: Misfit to full dataset for A\(\beta\)42-A\(\beta\)40 coaggregation using model in which no cross-primary nucleation occurs. **b**: Fit to full dataset for A\(\beta\)42-A\(\beta\)40 coaggregation using model in which cross-primary nucleation occurs; fitted parameter values are summarized in Table 2. **e**: Fit to full dataset for A\(\beta\)42-A\(\beta\)38 coaggregation using model in which cross-primary nucleation occurs; fitted parameter values are summarized in Table 3.
properties. Second, the universal solutions in ref. [9] for the kinetics of protein aggregation in which any participating reaction step can undergo enzyme-like saturation were derived by matching second-order perturbative expansions in \(\varepsilon\) around \(\mu=1\) to that presented in ref. [8]. This is just the method for determining finite transformations of the \(\mu\to 1\) asymptotic symmetries presented above, and is valid for the same reasons.
Although the mathematical justification of the technique is challenging, being rooted in a newly invented sub-field of the specialized area of Lie symmetry analysis of DEs, its practical formulation is clearly very simple. The remarkably simple form of the solutions it produces permits easy analysis of the kinetics. Alongside the lack of alternatives for solving more complicated protein aggregation rate equations, we expect these factors to result in widespread adoption of this new method.
These results are consistent with experimental results showing A\(\beta\)42 fibrils being coated with A\(\beta\)40 monomers. For example, A\(\beta\)42 fibrils with added A\(\beta\)40 monomer are better dispersed and provide better contrast in cryo-TEM compared to pure A\(\beta\)42 fibrils [35]. Moreover, the results of SPR experiments show that A\(\beta\)40 monomers fail to elongate immobilized A\(\beta\)42 fibrils, yet a saturable binding curve is observed, suggesting the binding of A\(\beta\)40 monomers to the sides of A\(\beta\)42 fibrils [36].
###### Acknowledgements.
We acknowledge support from the Lindemann Trust Fellowship, English-Speaking Union (AJD), the Swedish Research Council (SL), the MacArthur Foundation (LM), the Simons Foundation (LM) and the Henri Seydoux Fund (LM). The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) through the ERC grants PhysProt (agreement no. 337969), MAMBA (agreement no. 340890) and NovoNordiskFonden (SL).
## Appendix A Introduction to Lie group theory of differential equations
The theory of Lie groups finds diverse application across theoretical physics. It was originally developed by Sophus Lie as a systematic method for exactly solving nonlinear differential equations (DEs) by exploiting their symmetry properties; however, this application is largely unknown today. Consequently, it is widely believed that nonlinear DEs can be solved only by a combination of guesswork and ad-hoc methods of individually narrow applicability. In fact, most such methods may be derived from the Lie group theory of DEs, which provides a unified and general platform for solving DEs of any kind. Here we give a brief summary of those parts of Lie group theory of DEs that are utilized in the paper; for a more in-depth treatment, refs. [26; 27] can be consulted.
### Continuous transformations
A point transformation maps the independent and dependent variables \(x\) and \(y\) of the object being acted upon to \(\tilde{x}\) and \(\tilde{y}\). Point transformations that are indexed by parameter \(s\) may be written \(\tilde{x}=\tilde{x}(x,y,s),\;\tilde{y}=\tilde{y}(x,y,s)\). When these are also invertible, contain the identity at \(s=0\), and obey associativity via \(\tilde{x}(\tilde{x}(x,y,s),\tilde{y}(x,y,s),t)=\tilde{x}(x,y,s+t)\), they form a one-parameter (or multi-parameter) group of point transformations. Because they are continuous, the infinitesimal transformation exists and can be accessed by expanding around \(s=0\):
\[\tilde{x}(x,y,s)=x+s\frac{\partial\tilde{x}}{\partial s}\bigg{|}_{s=0}+\cdots =x+s\mathbf{X}x+O(s^{2}) \tag{10}\]
\[\tilde{y}(x,y,s)=y+s\frac{\partial\tilde{y}}{\partial s}\bigg{|}_{s=0}+\cdots =y+s\mathbf{X}y+O(s^{2}), \tag{11}\]
where the operator \(\mathbf{X}\) is:
\[\mathbf{X}=\xi(x,y)\frac{\partial}{\partial x}+\eta(x,y)\frac{\partial}{\partial y}, \tag{12}\]
and the elements of the tangent vector (\(\xi(x,y),\eta(x,y)\)) are:
\[\xi(x,y)=\frac{\partial\tilde{x}}{\partial s}\bigg{|}_{s=0},\quad\eta(x,y)= \frac{\partial\tilde{y}}{\partial s}\bigg{|}_{s=0}. \tag{13}\]
The operator \(\mathbf{X}\) is the infinitesimal generator of the point transformation. Integrating the tangent vector over \(s\) will yield a finite transformation.
### What is a Lie symmetry?
A Lie symmetry of an object is a continuous transformation that leaves the object invariant. A rotational symmetry of a square is not a Lie symmetry, as it is discrete and can only be performed in multiples of \(\pi/2\) (Fig. 7**a**). However, a rotational symmetry of a circle can involve any angle, and is thus a Lie symmetry (Fig. 7**b**). A DE can be viewed as a geometrical object: a manifold consisting of the union of all possible solutions. Often such DEs possess Lie point symmetries: transformations of the dependent and independent variables that leave the overall manifold invariant. Applied to a particular solution (that spans a subspace of the DE manifold), a Lie symmetry of the DE transforms it into another solution (see Fig. 7**c**). By analogy, a rotational Lie symmetry maps a circle to itself but maps a point on the circle to another point.
The ability to express a Lie symmetry in infinitesimal form also makes it possible to calculate systematically the Lie point symmetries possessed by a given object. For DEs this procedure, although algorithmic, can be extremely long-winded because derivatives are not transformed in a straightforward way by Lie point symmetries. To avoid dozens or hundreds of pages of working, it is thus best implemented using computer algebra systems (CAS). On the other hand, for objects without derivatives the procedure is simple. For example, the circle in Fig. 7**b** may be expressed in polar coordinates as \(F=r-c=0\). In these coordinates the generator is \(\mathbf{X}=\xi_{r}\partial/\partial r+\xi_{\theta}\partial/\partial\theta\). Trivially, solving \(\mathbf{X}F=0\) yields \(\xi_{r}=0\) and arbitrary \(\xi_{\theta}\): a rotational symmetry. In cartesian coordinates \(F=x^{2}+y^{2}-c\), and solving \(\mathbf{X}F=0\) yields \(\eta\) in terms of \(\xi\), giving the generator as follows:
\[0=\mathbf{X}F=\left(\xi(x,y)\frac{\partial}{\partial x}+\eta(x,y) \frac{\partial}{\partial y}\right)(x^{2}+y^{2}-c) \tag{10}\] \[\therefore \mathbf{X}=\xi(x,y)\left(y\frac{\partial}{\partial x}-x\frac{ \partial}{\partial y}\right). \tag{11}\]
The arbitrary rotational transformation is recovered in cartesian coordinates as expected.
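This computation is easily reproduced symbolically; a minimal sympy sketch (variable names are ours):

```python
import sympy as sp

x, y, c, s = sp.symbols('x y c s', real=True)
F = x**2 + y**2 - c

# The generator derived above, with the free factor xi(x, y) set to 1:
# X = y*d/dx - x*d/dy. Check that it annihilates F:
xi, eta = y, -x
print(sp.simplify(xi * sp.diff(F, x) + eta * sp.diff(F, y)))  # -> 0

# Exponentiate the generator by integrating dX/ds = Y, dY/ds = -X.
X, Y = sp.Function('X'), sp.Function('Y')
sol = sp.dsolve([sp.Eq(X(s).diff(s), Y(s)), sp.Eq(Y(s).diff(s), -X(s))])
print(sol)  # sin/cos combinations; fixing the constants via X(0) = x, Y(0) = y
            # gives X = x*cos(s) + y*sin(s), Y = y*cos(s) - x*sin(s): a rotation.
```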
### Approximate symmetries
A more recent development in the field of Lie group analysis of DEs is the discovery that perturbed DEs can possess "approximate symmetries" [37]. These leave a perturbed DE invariant only to some finite order in the perturbation parameter \(\varepsilon\). They can be identified by solving:
\[(\mathbf{X}^{(0)}+\varepsilon\mathbf{X}^{(1)}+\cdots)(F_{0}+\varepsilon F_{1})|_{F_{0}+\varepsilon F_{1}=0}=0, \tag{12}\]
order-by-order [28]. They can often be used to find approximate solutions to perturbed DEs. However, approximate symmetries of DEs are more difficult to compute than exact symmetries, and there exist few if any CAS implementations of the procedure.
### Perturbation symmetries
Lie point symmetries of a DE are traditionally thought of as transformations acting on its dependent and independent variables. However, there is nothing to stop us pretending that the perturbation parameter \(\varepsilon\) in a perturbed DE is an independent variable, and searching for symmetries that act on \(\varepsilon\) as well [38]. Doing so can significantly extend the power of the Lie group approach. We have previously termed these "perturbation symmetries" (See ref. [25] for a detailed explanation of these symmetries and this choice of terminology).
Crucially, if a reference solution is known for the perturbation problem with \(\varepsilon=0\), this may be converted using a perturbation symmetry of the general solution into a solution valid for arbitrary \(\varepsilon\). This is because such a symmetry leaves the space of solutions for all possible \(\varepsilon\) unchanged. Thus, acting on a solution for a specific \(\varepsilon\) maps it to another solution with a different \(\varepsilon\).
Unfortunately, both exact and approximate perturbation symmetries are often extremely difficult or impossible to compute, due to the high dimensionality of the manifold, which defeats most or all CAS implementations. However, we recently developed a method (explained in detail in [25]) that can compute approximate perturbation symmetries of the _solution_ to a perturbed DE directly, with far greater ease than earlier methods.
## Appendix B Special solution for \(\varepsilon=d=0\)
When \(\alpha_{1},\ \alpha_{2}\) and \(\alpha_{e}\) are finite constants and \(\varepsilon=0\), Eqs. (2) reduce to:
\[\frac{d\Pi}{d\tau} =\mu(\tau)^{n_{2}}(1-\mu(\tau)) \tag{13}\] \[\frac{d\mu}{d\tau} =-\mu(\tau)\Pi(\tau). \tag{14}\]
Integrating once, with boundary conditions \(\mu(0)=1-\delta,\ \Pi(0)=p\) yields for \(n_{2}>0\):
\[\Pi(\tau)=\left(p^{2}+2\frac{(1-\delta)^{n_{2}}-\mu(\tau)^{n_{2}}} {n_{2}}\right.\\ \left.-2\frac{(1-\delta)^{n_{2}+1}-\mu(\tau)^{n_{2}+1}}{n_{2}+1} \right)^{1/2}. \tag{15}\]
\(n_{2}=0\) is also possible and indicates fibril fragmentation rather than secondary nucleation. In this case, we instead obtain:
\[\Pi(\tau)=\left(p^{2}-2\ln\frac{\mu}{1-\delta}-2\left((1-\delta)-\mu(\tau) \right)\right)^{1/2}. \tag{16}\]
Figure 7: An overview of Lie symmetries. **a**: Squares have discrete rotational symmetries. These cannot be reduced to infinitesimal form; therefore, they are not Lie symmetries. **b**: Circles can be rotated by any amount; rotation is thus a Lie symmetry of the circle. **c**: In general, symmetries of DEs map solutions to other solutions with different boundary conditions. An arbitrary translation on the \(y\) axis is a Lie symmetry of the DE \(\dot{y}=2t\), because this is solved by \(y=t^{2}+c\), and the translation just changes the value of \(c\), giving the solution to the DE for new boundary conditions.
At this point, the problem is reduced to quadrature, with:
\[\tau=-\int_{1-\delta}^{\mu}\frac{d\mu}{\mu\Pi(\mu)}. \tag{101}\]
If we choose \(p=p_{0}(\delta)=\delta+O(\delta^{2})\), where:
\[p_{0}=\sqrt{2\frac{1-(1-\delta)^{n_{2}}}{n_{2}}-2\frac{1-(1-\delta)^{n_{2}+1}} {n_{2}+1}}, \tag{102}\]
then Eq. (101) reduces to:
\[\tau=-\int_{1-\delta}^{\mu}\frac{d\mu}{\mu\left(2\frac{1-\mu^{n_{2}}}{n_{2}}\ -2\frac{1-\mu^{n_{2}+1}}{n_{2}+1}\right)^{1/2}}, \tag{103}\]
with the first term in the square root replaced by \(-2\ln\mu\) if \(n_{2}=0\). To evaluate this integral, it is necessary to find an accurate approximate expression \(g(\mu)\) for the denominator \(f(\mu)=\mu\left(2\frac{1-\mu^{n_{2}}}{n_{2}}-2\frac{1-\mu^{n_{2}+1}}{n_{2}+1}\right)^{1/2}\). We start by investigating \(f(\mu)\) in the interval \([0,1]\) containing all possible values of \(\mu\). We find the following basic properties:
\[f(0) =f(1)=0 \tag{104}\] \[f(\mu) >0,\quad 0<\mu<1\] (105) \[f^{\prime}(0) =c,\quad f^{\prime}(1)=-1\] (106) \[f^{\prime\prime}(\mu) \leq 0,\quad 0\leq\mu\leq 1, \tag{107}\]
where \(c=\sqrt{2/(n_{2}(n_{2}+1))}\).
If we instead restrict our attention to the interval \([0,1-\delta]\), with small positive \(\delta\), we find furthermore that:
\[f(1-\delta)=\delta+O(\delta^{2}),\quad f^{\prime}(1-\delta)=-1+\frac{2n_{2}+4} {3}\delta+O(\delta^{2}). \tag{108}\]
Also, there is a single turning point (a maximum) in this interval. When \(n_{2}=1\) the maximum value is \(f_{\rm max}=1/4\), occurring at \(\mu_{\rm max}=1/2\). As \(n_{2}\to\infty\), \(f_{\rm max}\to c\), and occurs at \(\mu_{\rm max}\to 1\). Taken together, these results indicate that \(f\) is a low hill, rising from \(0\) at either end of the interval \([0,1]\) to a value \(\leq 1/4\). Thus neither \(f\) nor \(f^{\prime}\) has poles.
Such simple behaviour should be adequately captured by the simple functional form:
\[g(\mu)=c_{1}\mu^{p_{1}}+c_{2}\mu^{p_{2}}+c_{3},\quad p_{2}>p_{1}\geq 1. \tag{109}\]
This is fortunate, because more complicated polynomials in \(\mu\) are unlikely to lead to an integrable \(g^{-1}\). Now we constrain the parameters in \(g\) by matching to the properties of \(f\). First imposing \(g(0)=f(0)=0\) requires \(c_{3}=0\). Imposing \(g(1-\delta)=f(1-\delta)=\delta+O(\delta^{2})\) then leads to \(c_{2}=-c_{1}\) and \(p_{2}-p_{1}=1/c_{1}>0\), so \(g\) has the form:
\[g(\mu)=c_{1}\mu^{p_{1}}\left(1-\mu^{1/c_{1}}\right). \tag{110}\]
To inherit the property that \(f^{\prime}(0)>0\) requires \(p_{1}=1\). This is also fortunate, since otherwise \(g^{-1}\) would not be integrable. With this form of \(g\) we can already evaluate (and invert) \(\tau=-\int_{1-\delta}^{\mu}g^{-1}\,d\mu\), yielding:
\[\mu(\tau)=\frac{1}{\left(1+e^{\tau}\left[(1-\delta)^{-1/c_{1}}-1\right]\right)^{c_{1}}}. \tag{111}\]
Our asymptotic symmetry transformation method requires that our special solution have the correct \(\mu\to 1\) asymptotic dynamics. Therefore, to choose \(c_{1}\), we match \(g^{\prime}(1-\delta)=f^{\prime}(1-\delta)\) (\(g^{\prime}(1)\) already equals \(f^{\prime}(1)=-1\)), yielding finally \(c_{1}=3/(2n_{2}+1)\).
(If we had instead matched \(g^{\prime}(0)=f^{\prime}(0)\), we would have obtained \(c_{1}=\sqrt{2/(n_{2}(n_{2}+1))}\). This would give a slightly more accurate solution for \(n_{2}>1\), because for larger values of \(n_{2}\) secondary nucleation decreases significantly at a larger value of \(\mu\), and the \(\mu\to 0\) region is more important to the overall dynamics. However, there is not a great difference between these choices for \(c_{1}\), with the maximum difference of \(6\%\) attained as \(n_{2}\to\infty\).)
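The accuracy of this closed form can be checked directly against the exact quadrature; a minimal Python sketch, assuming \(n_{2}=2\) and \(\delta=0.01\) as illustrative values:

```python
import numpy as np
from scipy.integrate import quad

n2, delta = 2, 1e-2
c1 = 3.0 / (2 * n2 + 1)

def f(mu):
    """Exact denominator of the integrand in the quadrature for tau(mu)."""
    return mu * np.sqrt(2 * (1 - mu**n2) / n2 - 2 * (1 - mu**(n2 + 1)) / (n2 + 1))

def tau_exact(mu):
    val, _ = quad(lambda m: 1.0 / f(m), mu, 1 - delta)  # tau = -int_{1-delta}^{mu}
    return val

def mu_approx(tau):
    """Closed-form sigmoid obtained from g(mu), with c1 = 3/(2*n2 + 1)."""
    return (1 + np.exp(tau) * ((1 - delta)**(-1 / c1) - 1))**(-c1)

for mu in (0.9, 0.5, 0.1):
    tau = tau_exact(mu)
    print(f"mu = {mu:.1f}   tau = {tau:6.3f}   mu_approx(tau) = {mu_approx(tau):.3f}")
```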
Appendix C Kinetic model for amyloid fibril formation with saturating and inhibited secondary nucleation
If type-\(b\) monomers can compete with type-\(a\) monomers for binding to secondary nucleation sites on type-\(a\) fibrils, the total mass concentration of A\(\beta\)42 fibrils is:
\[M_{a}=M_{a}^{f}+M_{a}^{a}+M_{a}^{b}, \tag{112}\]
where \(M_{a}^{f}\) is the free (unbound) fibril mass concentration, and \(M_{a}^{a}\) and \(M_{a}^{b}\) are the mass concentrations of type-\(a\) fibrils bound by types \(a\) and \(b\) monomers, respectively. If, as was done here, the simplifying assumption is made that pre-equilibrium is achieved between bound and unbound states, then we may write:
\[\frac{m_{a}^{n_{2}(a)}M_{a}^{f}}{M_{a}^{a}}=K_{S}(a)^{n_{2}(a)},\qquad\frac{m_ {b}^{n_{2}(ba)}M_{a}^{f}}{M_{a}^{b}}=K_{S}(ba)^{n_{2}(ba)}, \tag{113}\]
where \(K_{S}(a)^{n_{2}(a)}\) and \(K_{S}(ba)^{n_{2}(ba)}\) are the equilibrium constants for the unbinding of types \(a\) and \(b\) monomers respectively from type-\(a\) fibrils. Combining these equations allows us to express the total type-\(a\) fibril mass concentration as:
\[M_{a}=M_{a}^{f}\left(1+m_{a}^{n_{2}(a)}/K_{S}(a)^{n_{2}(a)}+(m_{b}/K_{S}(ba))^{ n_{2}(ba)}\right). \tag{114}\]
Considering that the rate of generation of new type-\(a\) fibrils by secondary nucleation is:
\[r_{S}=2k_{c}M_{a}^{a}, \tag{115}\]
where \(k_{c}\) is some conversion rate constant, this ultimately yields:
\[r_{S}=\frac{2k_{2}(a)m_{a}(t)^{n_{2}(a)}M_{a}(t)}{1+(m_{a}(t)/K_{S}(a))^{n_{2} (a)}+(m_{b}(0)/K_{S}(ba))^{n_{2}(ba)}}, \tag{116}\]
where \(k_{2}=k_{c}/K_{S}(a)^{n_{2}(a)}\). Note that with our A\(\beta\)xx-A\(\beta\)42 system it has been shown that secondary nucleation of A\(\beta\)xx fibrils does not occur on A\(\beta\)42 fibrils, so clusters of type-\(b\) monomers almost certainly do not form on type-\(a\) fibrils, and we expect \(n_{2}(ba)=1\). Indeed, the kinetics of A\(\beta\)42 aggregation inhibition by A\(\beta\)xx monomers are shown in the main text (Fig. 5**c**) to be well-described by a model where \(n_{2}(ba)=1\), i.e. only one A\(\beta\)xx monomer can bind to a given secondary nucleation site on an A\(\beta\)42 fibril. We provide in the SI a more detailed derivation of the above. Noting that \(r_{S}=\alpha_{2,a}M_{a}(t)\), nondimensionalization yields finally Eq. (14) in the main text.
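For reference, the final rate expression above translates into a one-line function; a minimal sketch (argument names are ours, with the reaction orders \(n_{2}(a)=2\) and \(n_{2}(ba)=1\) used in the text as defaults):

```python
def r_S(m_a, m_b0, M_a, k2_a, K_S_a, K_S_ba, n2_a=2, n2_ba=1):
    """Secondary nucleation rate of type-a fibrils with saturation and
    competitive binding of type-b monomer to the catalytic sites."""
    return (2.0 * k2_a * m_a**n2_a * M_a
            / (1.0 + (m_a / K_S_a)**n2_a + (m_b0 / K_S_ba)**n2_ba))
```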
Appendix D Near invariance of \(\mu_{a}\to 0\) dynamics under \(\mu_{a}\to 1\) asymptotic symmetry for small \(\mathcal{K}_{S}(a)^{-1},\ \varepsilon_{a}\)
Asymptotic symmetries involving \(\mathcal{K}_{S}(a)^{-1}\) and \(\varepsilon_{a}\) computed from the local perturbation series of Eq. (15) around \(\mu_{a}=1-\delta,\ \Pi_{a}=p_{0}(\delta)\) are valid globally, provided \(\varepsilon_{a}\) is small (as is the case in unseeded A\(\beta\) kinetics, and indeed in most protein aggregation reactions hitherto studied [11]).
For large values of \(\mathcal{K}_{S}(a)^{-1}\), this is because secondary nucleation does not now reduce significantly until \(\mu_{a}\ll 1\). As a consequence, the \(\mu_{a}\to 0\) asymptotic limit is visited too late during saturating aggregation for its perturbation by the introduction of non-zero \(\mathcal{K}_{S}(a)^{-1}\) and \(\varepsilon\) to be important for the overall kinetics.
For small values of \(\mathcal{K}_{S}(a)^{-1}\) this is because \(\varepsilon_{a}\) and \(\mathcal{K}_{S}(a)^{-1}\) then drop out of the \(\mu\to 0\) kinetics at leading order, and such symmetries therefore have no effect in this regime. This may be seen as follows. Integrating Eqs. (15) once with \(\Pi(\mu=1)=0\) yields \(\Pi\) as a function of \(\mu\). Taking the limit \(\mu\to 0\) then yields \(\Pi(\infty)\):
\[\Pi_{a}(\infty)=\left(\frac{2(A+B)}{Bn_{2}(a)}\ln\left[1+\frac{B} {A}\right]+4\frac{\varepsilon_{a}}{n_{c}}\right.\] \[\left.-\frac{2(A+B)}{A(1+n_{2}(a))}{}_{2}F_{1}\!\left[1,1+\frac{ 1}{n_{2}(a)},2+\frac{1}{n_{2}(a)},-\frac{B}{A}\right]\right)^{1/2}, \tag{16}\]
where \(A=1+1/\mathcal{K}_{S}(ba)\), and \(B=1/\mathcal{K}_{S}(a)^{n_{2}(a)}\). In the limit of small \(\mathcal{K}_{S}(a)^{-1}\), and noting that the first-order Taylor series around \(z=0\) of \({}_{2}F_{1}[a,b,c,z]\) is \(1+abz/c\), the hypergeometric becomes:
\[{}_{2}F_{1}\!\left[1,\frac{n_{2}(a)+1}{n_{2}(a)},\frac{2n_{2}(a)+ 1}{n_{2}(a)},-\frac{B}{A}\right]\\ \to 1-\frac{n_{2}(a)+1}{2n_{2}(a)+1}\frac{B}{A}+O(\mathcal{K}_{S}(a )^{-2n_{2}(a)}), \tag{17}\]
and \(\Pi_{a}(\infty)\) reduces to:
\[\Pi_{a}(\infty)=\sqrt{\frac{2}{n_{2}(a)}-\frac{2}{n_{2}(a)+1}}+O(\mathcal{K} _{S}(a)^{-n_{2}(a)},\varepsilon_{a}). \tag{18}\]
Thus, to leading order, \(\mu_{a}\to 1\) asymptotic symmetries in \(\mathcal{K}_{S}(a)^{-n_{2}(a)},\varepsilon_{a}\) have no effect on the \(\mu_{a}\to 0\) dynamics.
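This limit is easy to verify numerically; a minimal sketch using mpmath, with illustrative parameter values:

```python
import mpmath as mp

def Pi_inf_sq(n2, K_S_a_inv, K_S_ba_inv, eps_a, n_c=2):
    """Square of Pi_a(infinity) as given above (n_c illustrative)."""
    A = 1 + K_S_ba_inv
    B = K_S_a_inv**n2
    return (2 * (A + B) / (B * n2) * mp.log(1 + B / A)
            + 4 * eps_a / n_c
            - 2 * (A + B) / (A * (1 + n2))
              * mp.hyp2f1(1, 1 + 1 / n2, 2 + 1 / n2, -B / A))

n2 = 2
print(Pi_inf_sq(n2, 1e-4, 0.5, 1e-6))  # ~ 1/3 for small K_S(a)^-1 and eps_a
print(2 / n2 - 2 / (n2 + 1))           # leading-order limit: 1/3
```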
## Appendix E Perturbation series for \(\mu_{b}\)
The differential equations to be solved are Eqs. (22):
\[\frac{d\Pi_{b}}{d\tau_{b}}=2\varepsilon_{b}\mu_{b}(\tau_{b})^{n_ {c}(b)}+2\varepsilon_{ba}\mu_{a}(\tau_{a})^{n_{c}(ba)}\mu_{b}(\tau_{b})^{n_{c }(bb)}+\frac{1+\mathcal{K}_{S}(b)^{n_{2}(b)}}{\mu_{b}(\tau_{b})^{n_{2}(b)}+ \mathcal{K}_{S}(b)^{n_{2}(b)}}\mu_{b}(\tau_{b})^{n_{2}(b)}\big{[}1-\mu_{b}( \tau_{b})\big{]}, \tag{19a}\] \[\frac{d\mu_{b}}{d\tau_{b}}=-\mu_{b}(\tau_{b})\Pi_{b}(\tau_{b}). \tag{19b}\]
In the limit \(e^{\kappa_{a}t}\gg 1\), such that \(\mu_{a}\to(1+\varepsilon_{a}e^{\kappa_{a}t}/c_{a})^{-c_{a}}\), the first-order term is calculated as:
\[\mu_{b}^{(1)}(t)=-\frac{\varepsilon_{ba}}{\varepsilon}\left(e^{ \kappa_{b}t}{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1- \frac{\kappa_{b}}{\kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}}\right]-{}_{2}F_{1} \!\left[-\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1-\frac{\kappa_{b}}{ \kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right]\\ +e^{-\kappa_{b}t}{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1+\frac{\kappa_{b}}{\kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}} \right]-{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1+\frac {\kappa_{b}}{\kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right] \right)-\frac{\varepsilon_{b}}{\varepsilon}\left(e^{\kappa_{b}t}+e^{-\kappa_{b }t}-2\right), \tag{20}\]
where \({}_{2}F_{1}[a,b,c,z]\) is the Gaussian hypergeometric function. Bearing in mind the following identity:
\[{}_{2}F_{1}[a,b,c,z]\equiv\frac{1}{(1-z)^{a}}\,{}_{2}F_{1}\!\left[a,c-b,c, \frac{z}{z-1}\right], \tag{21}\]
and since \(\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\gg 1\) by the time the type-\(b\) sigmoid is reached, we may write the second and fourth hypergeometric functions as:
\[{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1- \frac{\kappa_{b}}{\kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right] \equiv\left(1+\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right)^ {\frac{\kappa_{b}}{\kappa_{a}}}{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a }},1-\frac{\kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1-\frac{\kappa_{b}}{\kappa_{a }},\frac{1}{1+\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}}\right] \tag{54}\] \[\simeq e^{\kappa_{b}t}\left(\frac{\varepsilon_{a}}{c_{a}}\right)^{ \kappa_{b}/\kappa_{a}}{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a}},1- \frac{\kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1-\frac{\kappa_{b}}{\kappa_{a}},1 \right]\] (55) \[{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{a}},c_{a}n_{c}(ba),1 +\frac{\kappa_{b}}{\kappa_{a}},-\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right] \equiv\left(1+\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}\right)^ {-\frac{\kappa_{b}}{\kappa_{a}}}{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{a }},1+\frac{\kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1+\frac{\kappa_{b}}{\kappa_{ a}},\frac{\frac{\varepsilon_{a}}{c_{a}}e^{\kappa_{a}t}}{1+\frac{\varepsilon_{a}}{c_{a}}e^{ \kappa_{a}t}}\right]\] (56) \[\simeq e^{-\kappa_{b}t}\left(\frac{\varepsilon_{a}}{c_{a}}\right)^ {-\kappa_{b}/\kappa_{a}}{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{a}},1+ \frac{\kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1+\frac{\kappa_{b}}{\kappa_{a}},1 \right]. \tag{57}\]
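The Pfaff transformation used here can be sanity-checked numerically; a minimal sketch:

```python
import mpmath as mp

a, b, c, z = 0.7, 1.3, 2.1, -5.0  # arbitrary test values with z < 0
lhs = mp.hyp2f1(a, b, c, z)
rhs = (1 - z)**(-a) * mp.hyp2f1(a, c - b, c, z / (z - 1))
print(lhs, rhs)  # the two values agree to machine precision
```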
In typical secondary nucleating systems such as A\(\beta\), \(\varepsilon_{a}\ll 1\). Bearing in mind that \({}_{2}F_{1}[a,b,c,z]\to 1\) as \(z\to 0\), we may simplify the first order perturbation solution further to:
\[\mu_{b}^{(1)}(\tau_{b})\simeq-\frac{\varepsilon_{b}}{\varepsilon} \left(e^{\tau_{b}}+e^{-\tau_{b}}-2\right)-\frac{\varepsilon_{ba}}{\varepsilon} \left(e^{\tau_{b}}\left(1-\left(\frac{\varepsilon_{a}}{c_{a}}\right)^{\kappa_ {b}/\kappa_{a}}{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a}},1-\frac{ \kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1-\frac{\kappa_{b}}{\kappa_{a}},1\right]\right)\right.\] \[\left.+e^{-\tau_{b}}\left(1-\left(\frac{\varepsilon_{a}}{c_{a}} \right)^{-\kappa_{b}/\kappa_{a}}{}_{2}F_{1}\!\left[\frac{\kappa_{b}}{\kappa_{ a}},1+\frac{\kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1+\frac{\kappa_{b}}{\kappa_{a}},1 \right]\right)\right). \tag{58}\]
The simplifications to the second and fourth hypergeometric functions mean this no longer satisfies the initial condition \(\mu_{b}^{(1)}(0)=0\). To a very good approximation we can restore this limiting behaviour by writing:
\[\mu_{b}^{(1)}(\tau_{b})\simeq-\left(\frac{\varepsilon_{b}}{ \varepsilon}+\frac{\varepsilon_{ba}}{\varepsilon}f\right)\left(e^{\tau_{b}}+e ^{-\tau_{b}}-2\right), \tag{59a}\] \[f=1-\left(\frac{\varepsilon_{a}}{c_{a}}\right)^{\frac{\kappa_{b} }{\kappa_{a}}}{}_{2}F_{1}\!\left[-\frac{\kappa_{b}}{\kappa_{a}},1-\frac{ \kappa_{b}}{\kappa_{a}}-c_{a}n_{c}(ba),1-\frac{\kappa_{b}}{\kappa_{a}},1\right]\!. \tag{59b}\]
From this, the second order perturbation expansion Eq. (24) can now be derived.
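The factor \(f\) defined above is directly computable; a minimal sketch with illustrative placeholder values (not fitted parameters):

```python
import mpmath as mp

def cross_nucleation_factor(eps_a, c_a, kappa_ratio, n_c_ba):
    """f from the equation above, with kappa_ratio = kappa_b / kappa_a.
    The hypergeometric at unit argument converges since its parameter
    excess kappa_ratio + c_a * n_c_ba is positive."""
    r = kappa_ratio
    return 1 - (eps_a / c_a)**r * mp.hyp2f1(-r, 1 - r - c_a * n_c_ba, 1 - r, 1)

# For eps_a << 1, as in typical secondary-nucleating systems, f is close to 1.
print(cross_nucleation_factor(eps_a=1e-3, c_a=0.6, kappa_ratio=0.5, n_c_ba=2))
```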
## Appendix F Summary of parameters
2310.20632 | Constrained Planarity in Practice -- Engineering the Synchronized
Planarity Algorithm | In the constrained planarity setting, we ask whether a graph admits a planar
drawing that additionally satisfies a given set of constraints. These
constraints are often derived from very natural problems; prominent examples
are Level Planarity, where vertices have to lie on given horizontal lines
indicating a hierarchy, and Clustered Planarity, where we additionally draw the
boundaries of clusters which recursively group the vertices in a crossing-free
manner. Despite receiving significant amount of attention and substantial
theoretical progress on these problems, only very few of the found solutions
have been put into practice and evaluated experimentally.
In this paper, we describe our implementation of the recent quadratic-time
algorithm by Bläsius et al. [TALG Vol 19, No 4] for solving the problem
Synchronized Planarity, which can be seen as a common generalization of several
constrained planarity problems, including the aforementioned ones. Our
experimental evaluation on an existing benchmark set shows that even our
baseline implementation outperforms all competitors by at least an order of
magnitude. We systematically investigate the degrees of freedom in the
implementation of the Synchronized Planarity algorithm for larger instances and
propose several modifications that further improve the performance. Altogether,
this allows us to solve instances with up to 100 vertices in milliseconds and
instances with up to 100 000 vertices within a few minutes. | Simon D. Fink, Ignaz Rutter | 2023-10-31T17:01:32 | http://arxiv.org/abs/2310.20632v1 | # Constrained Planarity in Practice
###### Abstract.
In the constrained planarity setting, we ask whether a graph admits a planar drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. Despite receiving significant amount of attention and substantial theoretical progress on these problems, only very few of the found solutions have been put into practice and evaluated experimentally.
In this paper, we describe our implementation of the recent quadratic-time algorithm by Blasius et al. (Blasius et al., 2017) for solving the problem Synchronized Planarity, which can be seen as a common generalization of several constrained planarity problems, including the aforementioned ones. Our experimental evaluation on an existing benchmark set shows that even our baseline implementation outperforms all competitors by at least an order of magnitude. We systematically investigate the degrees of freedom in the implementation of the Synchronized Planarity algorithm for larger instances and propose several modifications that further improve the performance. Altogether, this allows us to solve instances with up to 100 vertices in milliseconds and instances with up to 100 000 vertices within a few minutes.
## 1. Introduction
In many practical graph drawing applications we not only seek any drawing that maximizes legibility, but also want to encode additional information via certain aspects of the underlying layout. Examples are _hierarchical_ drawings like organizational charts, where we encode a hierarchy among vertices by placing them on predefined levels, _clustered_ drawings, where we group vertices by enclosing them in a common region, and _animated_ drawings, where changes to a graph are shown in steps while keeping a static part fixed. In practice, clustered drawings are for example UML diagrams, where classes are grouped according to the package they are contained in, computer networks, where devices are grouped according to their subnetwork, and integrated circuits, where certain components should be placed close to each other. As crossings negatively affect the readability of drawings (Steintein and Steintein, 1998; Steintein and Stein, 1998), we preferably seek planar, i.e. crossing-free, drawings. The combination of these concepts leads to the field of constrained planarity problems, where we ask whether a graph admits a planar drawing that satisfies a given set of constraints. This includes the problems Level Planarity(Steintein and Steintein, 1998; Steintein and Steintein, 1998), Clustered Planarity(Stein and Steintein, 1998; Steintein and Steintein, 1998), and Simultaneous Embedding with Fixed Edges (SEFE) (Steintein and Steintein, 1998; Steintein and Steintein, 1998; Stein and Steintein, 1998), which respectively model the aforementioned applications; see Figure 1. Formally, these problems are defined as follows.
Figure 1. Examples of constrained planarity problems: Level Planarity (a), Clustered Planarity (b), SEFE (c).
The remainder of this paper is structured as follows. In Section 2, we give an overview of related constrained planarity problems and summarize the Synchronized Planarity algorithm. In Section 3, we discuss existing practical
approaches to constrained planarity. In Section 4 we describe our implementation of Synchronized Planarity and evaluate its performance in comparison with the two other available Clustered Planarity implementations. We tune the running time of our implementation to make it practical even on large instances in Section 5. We analyze the effects of our engineering in greater detail in Section 6.
### Preliminaries.
We rely on some well-known concepts from the fields of graph drawing and planar graphs. We only briefly define the most important terms here and refer to the theoretical description of the implemented algorithm (Brandt, 1997) for more comprehensive definitions. A more gentle introduction to the concepts can also be found in Chapter 1 of the Handbook of Graph Drawing and Visualization (Krishnan, 2001). We consider two planar (i.e., crossing-free) drawings equivalent if they define the same _rotation system_, which specifies for each vertex its _rotation_, i.e., the cyclic order of the edges around the vertex. An _embedding_ is an equivalence class of planar drawings induced by this relation. An _embedding tree_(Brandt, 1997) is a PQ-tree (Krishnan, 2001) that describes all possible rotations of a vertex in a planar graph; see Figure 2d. Its leaves correspond to the incident edges, while its inner nodes are either Q-nodes, which dictate a fixed ordering of their incident subtrees that can only be reversed, or are P-nodes, which allow arbitrary permutation. A BC-tree describes the decomposition of a connected graph into its _biconnected_ components, which cannot be disconnected by the removal of a so-called _cut-vertex_. Each node of a BC-tree represents either a cut-vertex or a maximal biconnected _block_. We refer to a vertex that is not a cut-vertex as _block-vertex_. An SPQR-tree (Krishnan, 2001) describes the decomposition of a biconnected graph into its _triconnected_ components, which cannot be disconnected by the removal of a so-called _split-pair_ of two vertices. Each inner node represents a _skeleton_, which is either a triconnected _'rigid'_ minor whose planar embedding can only be mirrored, a split-pair of two _pole_ vertices connected by multiple _'parallel'_ subgraphs that can be permuted arbitrarily, or a cycle formed by split-pairs separating a _'series'_ of subgraphs; see Figure 2c. All three kinds of trees can be computed in time linear in the size of the given graph (Krishnan, 2001; Krishnan, 2001; Krishnan, 2001).
## 2. Constrained Planarity
Schaefer (Schaefer, 1997, Figure 2) introduced a hierarchy on the various variants of constrained planarity that have been studied in the past. Figure 3 shows a subset of this hierarchy, incorporating updates up to 2015 by Da Lozzo (Da Lozzo, 2015, Figure 0.1). Arrows indicate that the target problem either generalizes the source problem or solves it via a reduction. In the version of Da Lozzo, the problems Strip, Clustered and Synchronized Planarity as well as (Connected) SEFE still formed a frontier of problems with unknown complexity, separating efficiently solvable problems from those that are NP-hard. Since then many of these problems were settled in P, especially due to the Clustered
Figure 2. A planar graph (a), its SPQR-tree (b) and the corresponding skeletons (c). Rigids are highlighted in red, parallels in green, and series in blue. The embedding tree of the vertex marked in blue (d). Small black disks are P-nodes, larger white disks are Q-nodes.
Planarity solution from 2019 by Fulek and Toth [31]. The only problem from this hierarchy that remains with an unknown complexity is SEFE. In this section, we want to give a short summary of the history of Clustered Planarity and SEFE, which we see central to the field of constrained planarity and which also serve as a motivation for Synchronized Planarity. Afterwards, we will give a short summary of the algorithm we implement for solving the latter problem. We point the interested reader to the original description [8] for full details.
Recall that in SEFE, we are given two graphs that share some common part and we want to embed both graphs individually such that their common parts are embedded the same way [9; 15; 46]. More general SEFE variants are often NP-complete, e.g., the case with three given graphs [32], even if all share the same common part [6; 47]. In contrast, more restricted variants are often efficiently solvable, e.g., when the shared graph is biconnected, a star, a set of cycles, or has a fixed embedding [2; 3; 10]. The case where the shared graph is connected, which is called Connected SEFE, was shown to be equivalent to the so-called Partitioned \(\mathcal{T}\)-coherent 2-page Book Embedding problem [3] and to be reducible to Clustered Planarity[5], all of which were recently shown to be efficiently solvable [31]. In contrast to these results, the complexity of the general SEFE problem with two graphs sharing an arbitrary common graph is still unknown.
Recall that in Clustered Planarity, the embedding has to respect a laminar family of clusters, that is, every vertex is assigned to some (hierarchically nested) cluster and an edge may only cross the border of a cluster's region if it connects a vertex from the inside with one from the outside [11; 41]; see Figure 4 for an example. Lengauer [41] studied and solved this problem as early as 1989 in the setting where the clusters are connected. Feng et al. [25], who coined the term Clustered Planarity, rediscovered this algorithm and asked the general question where disconnected clusters are allowed. This question remained open for 25 years. In that time, polynomial-time algorithms were found for many special cases [4; 20; 29; 34] before Fulek and Toth [31] found an \(O((n+d)^{8})\) solution in 2019, where \(d\) is the number of crossings between a cluster-border and an edge leaving the cluster. Shortly thereafter, Blasius et al. [8] gave a solution with running time in \(O((n+d)^{2})\) that works via a linear-time reduction to Synchronized Planarity.
In Synchronized Planarity, we are given a graph together with a set of _pipes_, each of which pairs up two distinct vertices of the graph. Each pipe synchronizes the rotation of its two paired-up vertices (its _endpoints_) in the following sense: We seek a planar embedding of the graph where for each pipe \(\rho\), the rotations of its endpoints line up under the bijection \(\varphi_{\rho}\) associated with \(\rho\)[8]. Formally, this problem is defined as follows.
Figure 3: Constrained planarity variants related to Synchronized Planarity, updated selection from [21]. Problems and reductions marked in blue are used for generating test instances.
**Problem** Synchronized Planarity\({}^{a}\)
**Given**: graph \(G\) and a set \(\mathcal{P}\), where each _pipe_\(\rho\in\mathcal{P}\) consists of two distinct vertices \(v_{1},v_{2}\in V(G)\) and a bijection \(\varphi_{\rho}\) between the edges incident to \(v_{1}\) and those incident to \(v_{2}\), and each vertex is part of at most one pipe
**Question**: Is there a drawing of \(G\) where for each pipe \(\rho=(v_{1},v_{2},\varphi_{\rho})\), the cyclic order of edges incident to \(v_{1}\) lines up with the order of edges incident to \(v_{2}\) under the bijection \(\varphi_{\rho}\)?
* Note that we disregard the originally included Q-vertices here, as they can also be modeled using pipes [8, Section 5].
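To make the pipe constraint concrete, the following minimal Python sketch (a hypothetical data model of our own, not the OGDF-based implementation described later) checks whether a given rotation system satisfies a pipe; edge identifiers and the representation of rotations are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class Pipe:
    v1: int
    v2: int
    phi: dict  # bijection: edge id incident to v1 -> edge id incident to v2

def pipe_satisfied(rotation, pipe):
    """rotation[v] is the cyclic order of edge ids around vertex v.
    The pipe is satisfied if v1's rotation, mapped through phi,
    equals v2's rotation up to a cyclic shift."""
    r1 = [pipe.phi[e] for e in rotation[pipe.v1]]
    r2 = rotation[pipe.v2]
    if len(r1) != len(r2):
        return False
    doubled = r2 + r2
    return any(doubled[i:i + len(r1)] == r1 for i in range(len(r2)))

# Toy example: two degree-3 vertices whose rotations line up under phi.
rot = {0: [1, 2, 3], 1: [4, 5, 6]}
print(pipe_satisfied(rot, Pipe(0, 1, {1: 5, 2: 6, 3: 4})))  # True
```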
The motivation for this "synchronization" can best be seen by considering the reduction from Clustered to Synchronized Planarity. At each cluster boundary, we split the graph into two halves: one where we contract the inside of the cluster into a single vertex and one where we contract the outside into a single vertex. In a clustered planar embedding, the order of the edges "leaving" one cluster (i.e. the rotation of its contracted vertex in one half) needs to match the order in which they "enter" the parent cluster (i.e. the rotation of the corresponding contracted vertex in the other half). The graph resulting from separately contracting each side of a cluster boundary is called CD-tree [11]; see Figure 4 and [8, Figure 6] for an example. Using this graph, the synchronization of rotations can easily be modeled via Synchronized Planarity by pairing the two contracted vertices corresponding to the same cluster boundary with a pipe.
In the quadratic algorithm for solving Synchronized Planarity, a pipe is _feasible_ if one of the three following operations can be applied to remove it.
**EncapsulateAndJoin**: If both endpoints of the pipe are cut-vertices, they are "encapsulated" by collapsing each incident block to a single vertex to obtain two stars with paired-up centers. Additionally, we split the original components at the two cut-vertices, so that each of their incident blocks is retained as a separate component with its own copy of the cut-vertex. These copies are synchronized with the respective vertex incident to the original cut-vertex representing the collapsed block. Now the cut-vertices can be removed by "joining" both stars at their centers, i.e., by identifying their incident edges according to the given bijection; see the top row of Figure 5.
**PropagatePQ**: If one endpoint of the pipe is a block-vertex and has an embedding tree that does not consist of only a single P-node (i.e., it is _non-trivial_), a copy of this embedding tree is inserted ("propagated") in place of each respective pipe endpoint. The inner nodes of the embedding trees are synchronized by pairing corresponding vertices with a pipe; see the middle row of Figure 5. Note that, as Q-nodes only have a binary embedding decision, they can more easily be synchronized via a 2-SAT formula instead of using pipes.
Figure 4: A Clustered Planarity instance (a), its cluster tree (b), and its CD-tree representation (c).
**SimplifyMatching**: In the remaining case, at least one of the endpoints of the pipe is a block-vertex but has a trivial embedding tree. If the vertex (or, more precisely, the parallel in the SPQR-tree that completely defines its rotation) can respect arbitrary rotations, we can simply remove the pipe. When the other pole of the parallel is also paired-up and has a trivial embedding tree, we "short-circuit" the pipe across the parallel; see the bottom row of Figure 5. One exception is if the pipe matches the poles of the same parallel, where we can again remove the pipe without replacement.
The algorithm then works by simply applying a suitable operation to an _arbitrary_ feasible pipe in each step. Moreover, it can be shown that if a pipe is not feasible, then this is directly caused by a close-by pipe with endpoints of higher degree [8]. In particular, this means that maximum-degree pipes are always feasible.
Each of the three operations runs in time linear in the degree of the removed pipe once the embedding trees it depends on have been computed. This is dominated by the time spent on computing the embedding tree, which is linear in the size of the considered biconnected component. Every applied operation removes a pipe, but potentially introduces new pipes of smaller degree. Blasius et al. [8] show that the progress made by the removal of a pipe always dominates the overhead of the newly-introduced pipes and that the number of operations needed to remove all pipes is limited by the total degree of all paired-up vertices. Furthermore, the resulting instance without pipes can be solved and embedded in linear time. An embedding of the input graph can then be obtained by undoing all changes made to the graph in reverse order while maintaining the found embedding. The algorithm thus runs in the following three simple phases (a schematic sketch follows the list):
1. While pipes are left, choose and remove an arbitrary feasible pipe by applying an operation.
2. Solve and embed the resulting pipe-free (_reduced_) instance.
3. Undo all applied operations while maintaining the embedding.
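The following Python sketch outlines this control flow; every helper function and attribute used here is a hypothetical placeholder for the operations and checks described above, not the actual implementation discussed in Section 4.

```python
import heapq, itertools

def solve_synchronized_planarity(instance):
    """Schematic sketch of the three phases of the algorithm."""
    # Phase 1: repeatedly remove a feasible pipe. A max-heap (negated keys)
    # always yields a pipe of maximum degree, which is guaranteed feasible.
    tie = itertools.count()  # tie-breaker so pipes themselves are never compared
    heap = [(-p.degree, next(tie), p) for p in instance.pipes]
    heapq.heapify(heap)
    undo_stack = []
    while heap:
        _, _, pipe = heapq.heappop(heap)
        if pipe.removed:
            continue  # pipe was already replaced by an earlier operation
        if both_endpoints_are_cut_vertices(pipe):
            op = encapsulate_and_join(instance, pipe)
        elif has_nontrivial_embedding_tree(pipe):
            op = propagate_pq(instance, pipe)
        else:
            op = simplify_matching(instance, pipe)
        undo_stack.append(op)
        for new_pipe in op.new_pipes:  # any new pipes have smaller degree
            heapq.heappush(heap, (-new_pipe.degree, next(tie), new_pipe))
    # Phase 2: solve and embed the reduced, pipe-free instance in linear time.
    embedding = embed_reduced_instance(instance)
    if embedding is None:
        return None  # the instance is not synchronized-planar
    # Phase 3: undo all operations in reverse order, maintaining the embedding.
    for op in reversed(undo_stack):
        op.undo(embedding)
    return embedding
```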
Figure 5. The operations for solving Synchronized Planarity[8]. Pipes are indicated by orange dashed lines, their endpoints are shown as larger disks. **Top:** Two cut-vertices paired-up by a pipe (left), the result of encapsulating their incident blocks (middle) and the bipartite graph resulting from joining both cut-vertices (right). **Middle:** A block-vertex pipe endpoint (left) that has a non-trivial embedding tree (middle) that is propagated to replace both the vertex and its partner (right). **Bottom:** Three different cases of paired-up vertices with trivial embedding trees (blue) and how their pipes can be removed or replaced (red).
## 3. Related Work
Surprisingly, in contrast to their intense theoretical consideration, constrained planarity problems have only received little practical attention so far. Of all variants, practical approaches to Clustered Planarity were studied the most, although all implementations predate the first fully-correct polynomial-time solution and thus either have an exponential worst-case running time or cannot solve all instances. Chimani et al. (Chimani et al., 2017) studied the problem of finding maximal cluster planar subgraphs in practice using an Integer Linear Program (ILP) together with a branch-and-cut algorithm. A later work (Gutwenger et al., 2018) strengthened the ILP for the special case of testing Clustered Planarity, further improving the practical running time. The work by Gutwenger et al. (Gutwenger et al., 2018) takes a different approach by using a Hanani-Tutte-style formulation of the problem based on the work by Schaefer (Schaefer, 2018). Unfortunately, their polynomial-time testing algorithm cannot solve all instances and declines to make a decision for some instances. The Hanani-Tutte approach solves instances with up to 60 vertices and 8 clusters in up to half a minute, while the ILP approach only solves roughly 90 % of these instances within 10 minutes (Gutwenger et al., 2018).
The only other constrained planarity variant for which we could find experimental results is Partitioned 2-page Book Embedding. Angelini et al. (Angelini et al., 2017) describe an implementation of the SPQR-tree-based linear-time algorithm by Hong and Nagamochi (Hong and Nagamochi, 2018), which solves instances with up to 100 000 vertices and two clusters in up to 40 seconds. Unfortunately, their implementation is not publicly available. For (Radial) Level Planarity, prototypical implementations were described in the dissertations by Leipert (Leipert, 2017) and Bachmaier (Bachmaier, 2018), although in both cases neither further information, experimental results, nor source code is available. The lack of an accessible and correct linear-time implementation may be due to the high complexity of the linear-time algorithms (Chimani et al., 2017). Simpler algorithms with a super-linear running time have been proposed (Gutwenger et al., 2018; Gutwenger et al., 2018; Schaefer et al., 2018). For these, we could only find an implementation by Estrella-Balderrama et al. (Estrella-Balderrama et al., 2018) for the quadratic algorithm by Harrigan and Healy (Harrigan and Healy, 2018). Unfortunately, this implementation has not been evaluated experimentally and we were also unable to make it usable independently of its Microsoft Foundation Classes GUI, with which it is tightly intertwined.
We are not aware of further practical approaches for constrained planarity variants. Note that while the problems Partitioned 2-page Book Embedding and Level Planarity have linear-time solutions, they are much more restricted than Synchronized Planarity (see Figure 3) and have no usable implementations available. We thus focus our comparison on solutions to the Clustered Planarity problem which, besides being a common generalization of both other problems, fortunately also has all relevant implementations available.
Table 1. Statistics for our different datasets; values in parentheses are averages. Column # shows the number of instances, while column \(d\) shows the total number of cluster–border edge crossings or the total degree of all pipes, depending on the underlying instances.

| Dataset | # | Vertices | Density | Components | Clusters/Pipes | \(d\) |
| --- | --- | --- | --- | --- | --- | --- |
| C-OLD | 1643 | \(\leq\)59 (17.2) | 0.9–2.2 (1.4) | =1 | \(\leq\)19 (4.2) | \(\leq\)256 (34.0) |
| C-NCP | 13834 | \(\leq\)500 (236.8) | 0.6–2.9 (1.9) | \(\leq\)48 (21.7) | \(\leq\)50 (16.8) | \(\leq\)5390 (783.3) |
| C-MED | 5171 | \(\leq\)10\({}^{3}\) (311.6) | 0.9–2.9 (2.3) | \(\leq\)10 (5.1) | \(\leq\)53 (16.1) | \(\leq\)7221 (831.8) |
| C-LRG | 5096 | \(\leq\)10\({}^{5}\) (15 214.1) | 0.5–3.0 (2.4) | \(\leq\)100 (29.8) | \(\leq\)989 (98.8) | \(\leq\)2 380 013 (44 788.7) |
| SEFE-LRG | 1008 | \(\leq\)10\({}^{4}\) (3800.0) | 1.1–2.4 (1.7) | =1 | \(\leq\)20 000 (7600.0) | \(\leq\)113 608 (34 762.4) |
| SP-LRG | 1587 | \(\leq\)10\({}^{5}\) (25 496.6) | 1.3–2.5 (2.0) | \(\leq\)100 (34.5) | \(\leq\)20 000 (1467.4) | \(\leq\)139 883 (9627.5) |
## 4. Clustered Planarity in Practice
In this section, we briefly describe our C++ implementation of the Synchronized Planarity algorithm by Blasius et al. (Blasius et al., 2017) and compare its running time and results on instances derived from Clustered Planarity with those of the two existing implementations by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) and by Gutwenger et al. (Gutwenger et al., 2019). We base our implementation on the graph data structures provided by the OGDF (Gutwenger et al., 2018) and, as the only other dependency, use the PC-tree implementation by Fink et al. (Fink et al., 2019) for the embedding trees. The PC-tree is a data structure that is conceptually equivalent to the PQ-tree we use as embedding tree, but is faster in practice (Fink et al., 2019).
The algorithm for Synchronized Planarity makes no restriction on how the next feasible pipe should be chosen. For now, we use a heap to always process a pipe of maximum degree next, as this ensures that the pipe is feasible. The operations used for solving Synchronized Planarity heavily rely on (bi-)connectivity information while also making changes to the graph that may affect this information. As recomputing the information before each step would pose a high overhead, we maintain this information in the form of a BC-forest (i.e. a collection of BC-trees). To generate the embedding trees needed by the PropagatePQ and SimplifyMatching operations, we implement the Booth-Lueker algorithm for testing planarity (Blasius et al., 2017; Fink et al., 2019) using PC-trees. We exploit the fact that, after processing all vertices of a biconnected component, the resulting PC-tree corresponds to the embedding tree of the vertex that was processed last.
### Evaluation Set-Up
We compare our implementation of Synchronized Planarity with the Clustered Planarity implementations ILP by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) and HT by Gutwenger et al. (Gutwenger et al., 2019). Both are written in C++ and are part of the OGDF. The ILP implementation by Chimani et al. (Chimani et al., 2017; Gutwenger et al., 2018) uses the ABACUS ILP solver (Blasius et al., 2017) provided with the OGDF. We refer to our Synchronized Planarity implementation processing pipes in descending order of their degree as SP[d]. We use the embedding it generates for yes-instances as certificate to validate all positive answers. For the Hanani-Tutte algorithm, we give the running times for the modes with embedding generation and verification (HT) and the one without (HT-f) separately. Note that HT-f only checks an important necessary, but not sufficient condition and thus may falsely classify negative instances as positive; see (Gutwenger et al., 2019, Figure 3) and (Gutwenger et al., 2018, Figure 16) for examples where this is the case. Variant HT tries to verify a positive answer by generating an embedding, which works by incrementally fixing parts of a partial embedding and subsequently re-running the test. This process may fail at any point, in which case the algorithm can make no statement about whether the instance is positive or negative (Gutwenger et al., 2019, Section 3.3). We note that, in any of our datasets, we neither found a case of HT-f yielding a false-positive result nor a case of a HT verification failing. The asymptotic running time of HT-f is bounded by \(O(n^{6})\) and the additional verification of HT adds a further factor of \(n\) (Gutwenger et al., 2019).
We combine the Clustered Planarity datasets that were previously used for evaluations on HT and ILP to form the set C-OLD (Chimani et al., 2017; Gutwenger et al., 2018; Gutwenger et al., 2019). We apply the preprocessing rules of Gutwenger et al. (Gutwenger et al., 2019) to all instances and discard instances that become trivial, non-planar or cluster-connected, since the latter are easy to solve (Gutwenger et al., 2018). This leaves 1643 instances; see Table 1. To create the larger dataset C-NCP, we used existing methods from the OGDF to generate instances with up to 500 vertices and up to 50 clusters. This yields 15 750 instances, 13 834 out of which are non-trivial after preprocessing. As this dataset turned out to contain only 10 % yes-instances, we implemented a new clustered-planar instance generator that is guaranteed to yield yes-instances. We use it on random planar graphs with up to 1000 vertices to generate 6300 clustered-planar instances with up to 50 clusters. Out of these, 5171 are non-trivial after preprocessing and make up our dataset C-MED. We provide full details on the generation of our dataset at the end of this section.
We run our experiments on Intel Xeon E5-2690v2 CPUs (3.00 GHz, 10 Cores, 25 MB Cache) with a memory usage limit of 6 GB. As all implementations are single-threaded, we run multiple experiments in parallel using one core per experiment. This allows us to test more instances while causing a small overhead which affects all implementations in the same way. The machines run Debian 11 with a 5.10 Linux Kernel. All binaries are compiled statically using gcc 10.2.1 with flags -O3 -march=native and link-time optimization enabled. We link against slightly modified versions of OGDF 2022.02 and the PC-tree implementation by Fink et al. (Fink et al., 2019). The source code of our implementation and all modifications are available at github.com/N-Coder/syncplan,1 while our dataset is on Zenodo with DOI 10.5281/zenodo.7896021.
Footnote 1: It is also archived at Software Heritage with ID swh:1:snp:0date4960cc1303cc3575cf04924e19d664f8ad87.
**Details on Dataset Generation.** The dataset C-OLD is comprised of the datasets P-Small, P-Medium, P-Large by Chimani and Klein [19] together with PlanarSmallR (a version of PlanarSmall [17] with preprocessing applied), PlanarMediumR and PlanarLargeR by Gutwenger et al. [36]. The preprocessing reduced the dataset of Chimani and Klein [19] to 64 non-trivial instances, leading to dataset C-OLD containing 1643 instances in total.
The OGDF library can generate an entirely random clustering by selecting random subsets of vertices. It can also generate a random clustered-planar and cluster-connected clustering on a given graph by running a depth-first search that is stopped at random vertices, forming new clusters out of the discovered trees. To generate non-cluster-connected but clustered-planar instances, we temporarily add the edges necessary to make a disconnected input graph connected. For the underlying graphs of C-NCP, we use the OGDF to generate three instances for each combination of \(n\in\{100,200,300,400,500\}\) nodes, \(m\in\{n,1.5n,2n,2.5n,3n-6\}\) edges, and \(d\in\{10,20,30,40,50\}\) distinct connected components. For each input graph, we generate six different clusterings, three entirely random and three random clustered-planar, with \(c\in\{3,5,10,20,30,40,50\}\) clusters. This yields 15 750 instances, 13 834 out of which are non-trivial after preprocessing.
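To make the parameter grid for C-NCP concrete, the following sketch enumerates the \((n,m,d)\) combinations of underlying graphs described above; the actual OGDF generator calls are omitted and the helper name is our own.

```python
import itertools

# Sketch of the C-NCP parameter grid for the underlying graphs; the actual
# graph generation via the OGDF is not shown here.
def cncp_parameter_grid():
    for n, d in itertools.product([100, 200, 300, 400, 500],
                                  [10, 20, 30, 40, 50]):
        for m in (n, int(1.5 * n), 2 * n, int(2.5 * n), 3 * n - 6):
            for _repetition in range(3):
                yield n, m, d  # three underlying graphs per combination
```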
It turns out that roughly 90 % of these instances are not clustered-planar (see Table 2), even though half of them are generated by a method claiming to only generate clustered-planar instances. This is because the random DFS-subtree used for clusters by the OGDF only ensures that the generated cluster itself, but not its complement, is connected. Thus, if the subgraph induced by the selected vertices contains a cycle, this cycle may separate the outside of the cluster; see Figure 6(a). To reliably generate yes-instances, we implemented a third method for generating random clusterings. We first add temporary edges to connect and triangulate the given input graph. Afterwards, we also generate a random subtree and contract it into a cluster. Each visited vertex is added to the tree with a probability set according to the desired number of vertices per cluster. To ensure the non-tree vertices remain connected, we only add vertices to the tree whose contraction leaves the graph triangulated, i.e., that have at most two neighbors that are already selected for the tree. We convert the selected random subtrees into clusters and contract them for the next iterations until all vertices have been added to a cluster. A minimal sketch of this subtree-growing step is given below.

Figure 6. **(a)** Converting the subtree \(\{a,b,c,d\}\) with root \(a\) (shown in orange) into a cluster will separate vertices \(u\) and \(v\), as the edge \(bd\) (dashed) will also be part of the cluster. **(b)** A clustered-planar graph with two clusters (in addition to the root cluster) that HT classifies as “nonCPlanarVerified”.
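The sketch below (pure Python, with our own helper name and an adjacency-dict representation, assuming the graph was already connected and triangulated) illustrates growing one such subtree under the stated triangulation condition.

```python
import random

# Simplified sketch of the subtree-growing step of our yes-instance
# generator; `adj` maps each vertex to the set of its neighbors, and `q` is
# the per-vertex acceptance probability controlling the cluster size.
def grow_cluster_subtree(adj, q):
    root = random.choice(list(adj))
    tree = {root}
    frontier = list(adj[root])
    while frontier:
        v = frontier.pop(random.randrange(len(frontier)))
        if v in tree:
            continue
        # Only add vertices with at most two neighbors already in the tree,
        # so that contracting the tree leaves the graph triangulated.
        if sum(1 for u in adj[v] if u in tree) <= 2 and random.random() < q:
            tree.add(v)
            frontier.extend(adj[v] - tree)
    return tree
```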
As our Clustered Planarity instance generator does not need multiple connected components to ensure that the instance is not cluster-connected, we used fewer steps for the corresponding parameter, but extended the number of nodes up to 1000 for C-MED. The underlying graphs are thus comprised of three instances for each combination of \(n\in\{100,200,\ldots,1000\}\) nodes (i.e., \(n\) a multiple of 100), \(m\in\{n,1.5n,2n,2.5n,3n-6\}\) edges, and \(d\in\{1,10,25,50\}\) distinct connected components. For each input graph, we generate three random clustered-planar clusterings with an expected number of \(c\in\{3,5,10,20,30,40,50\}\) clusters. This yields 6300 instances which are guaranteed to be clustered-planar, 5171 out of which are non-trivial after preprocessing and make up our dataset C-MED.
### Results
Table 2 shows the results of running the different algorithms. The dataset C-OLD is split in roughly equal halves between yes- and no-instances and all algorithms yield the same results, except for the 111 instances for which the ILP ran into our 5-minute timeout. The narrow inter-quartile ranges in Figure 7 show that the running time for HT and SP[d] clearly depends on the number of crossings between cluster boundaries and edges in the given instance, while it is much more scattered for ILP. Still, all instances with less than 20 such crossings could be solved by ILP. For HT, we can see that the verification and embedding of yes-instances has an overhead of at least an order of magnitude over the non-verifying HT-f. The running times for HT on no-instances as well as the times for HT-f on any type of instance are the same, showing that the overhead is solely caused by
\begin{table}
\begin{tabular}{r|r r r r|r r r r|r r r r} & \multicolumn{4}{c|}{C-OLD} & \multicolumn{4}{c|}{C-NCP} & \multicolumn{4}{c}{C-MED} \\ & ILP & HT & HT-f & SP[d] & ILP & HT & HT-f & SP[d] & ILP & HT & HT-f & SP[d] \\ \hline Y & 732 & 792 & 792 & 792 & 181 & 1327 & 1534 & 1535 & 953 & 762 & 2696 & 5170 \\ N & 800 & 851 & 851 & 851 & 946 & 6465 & 6463 & 12 308 & 0 & 85 & 85 & 0 \\ ERR & 0 & 0 & 0 & 0 & 5214 & 0 & 0 & 0 & 1263 & 0 & 0 & 0 \\ TO & 111 & 0 & 0 & 0 & 7502 & 6051 & 5846 & 0 & 2955 & 4324 & 2390 & 1 \\ \end{tabular}
\end{table}
Table 2. Counts of the results ‘yes’, ‘no’, ‘error’, and ‘timed out’ on C-OLD, C-NCP and C-MED.
Figure 7. Median running times on dataset C-OLD **(a)** together with the underlying scatter plot **(b)**. For each algorithm, we show running times for yes- and no-instances separately. Markers show medians of bins each containing 10 % of the instances. Shaded regions around each line show inter-quartile ranges.
the verification while the base running time is always the same. For the larger instances in this test set, SP[d] is an order of magnitude faster than HT-f. For SP[d], we also see a division between yes- and no-instances, where the latter can be solved faster, but also with more scattered running times. This is probably due to the fact that the test can fail at any (potentially very early) reduction step or when solving the reduced instance. Furthermore, we additionally generate an embedding for positive instances, which may cause the gap between yes- and no-instances.
The running times on dataset C-NCP are shown in Figure 8. The result counts in Table 2 show that only a small fraction of the instances are positive. With only up to 300 cluster-edge crossings these instances are also comparatively small. The growth of the running times is similar to the one already observed for the smaller instances in Figure 7. HT-f now runs into the timeout for almost all yes-instances of size 200 or larger, and both HT and HT-f time out for all instances of size 1500 and larger. The ILP only manages to solve very few of the instances, often reporting an "undefined optimization result for c-planarity computation" as error; see Table 2. The algorithms all agree on the result if they do not run into a timeout or abort with an error, except for one instance that HT classifies as negative while SP[d] found a (positive) solution and also verified its correctness using the generated embedding as certificate. This is even though the Hanani-Tutte approach by Gutwenger et al. (2018) should answer "no" only if the instance truly is negative. Figure 6b shows a minimal minor of the instance for which the results still disagree.
The running times on dataset C-MED with only positive instances, shown in Figure 9, are in accordance with the previous results. We now also see more false-negative answers from the HT approach, which points to an error in its implementation; see also Table 2. The plots clearly show that our approach is much faster than all others. As the Synchronized Planarity reduction fails at an arbitrary step for negative instances, the running times of positive instances form an upper bound for those of negative instances. As we consider verifying positive instances to obtain an embedding the far more common use case, we focus our subsequent engineering on this case.
## 5. Engineering Synchronized Planarity
In this section, we study how degrees of freedom in the Synchronized Planarity algorithm can be used to improve the running times on yes-instances. The algorithm places few restrictions on the order in which pipes are processed, which gives the implementation great freedom in choosing the pipe it should process next. In Section 5.1 we investigate the effects of deliberately choosing the next pipe depending on its degree and whether removing it requires the generation of an embedding tree. As mentioned in the original description of the Synchronized Planarity algorithm, there are two further degrees of freedom, both concerning pipes where both endpoints are block-vertices. The first is that, if both endpoints additionally lie in different connected components, we may apply either PropagatePQ or (EncapsulateAnd)Join to remove the pipe. Joining the pipe directly removes it entirely instead of splitting it into multiple smaller ones, although at the cost of generating larger connected components. The second is for which endpoint of the pipe to compute an embedding tree when applying PropagatePQ. Instead of computing only one embedding tree, we may also compute both at once and then use their intersection. This preempts multiple following operations propagating back embedding information individually for each newly-created smaller pipe. We investigate the effect of these two decisions in Section 5.2. Lastly, we investigate an alternative method for computing embedding trees in Section 5.3, where we employ a more time-consuming algorithm that in return yields embedding trees for all vertices of a biconnected component simultaneously instead of just for a single vertex.

Figure 8. Median running times **(a)** and scatter plot **(b)** on dataset C-NCP.
To gain an initial overview of which parts could benefit the most from improvements, Figure 10 shows how the running time is distributed across different operations, averaged over all instances in C-MED. It shows that generating embedding trees takes more than 20 ms, roughly 40 % of the overall running time, while the actual operations contribute only a minor part of roughly 18 % of the overall running time. A further 27 % of the time is spent on solving and embedding the reduced instance and 15 % on undoing changes to obtain an embedding for the input graph. Thus, the biggest gains can probably be made by reducing the time spent on generating embedding information in the form of embedding trees. We use this as a rough guideline in our engineering process.
**Dataset Generation.** To tune the running time of our algorithm on larger instances, we increased the size of the generated instances by a factor of 100 by changing the parameters of our own clustered-planar instance generator to \(n\in\{100,500,1000,5000,10\,000,50\,000,100\,000\}\), \(d\in\{1,10,100\}\), and \(c\in\{3,5,10,25,50,100,1000\}\) for dataset C-LRG. This yields 6615 instances, out of which 5096 are non-trivial after preprocessing; see Table 1.
Figure 9. Median running times **(a)** and scatter plot **(b)** on dataset C-MED.

Figure 10. Average time spent on different operations for SP[d] on C-MED.
In addition to the Clustered Planarity dataset we also generate a dataset that uses the reduction from Connected SEFE. We do so by generating a random connected and planar embedded graph as shared graph. Each exclusive graph contains further edges which are obtained by randomly splitting the faces of the embedded shared graph until we reach a desired density. For the shared graphs, we generate three instances for each combination of \(n\in\{100,\,500,\,1000,\,2500,\,5000,\,7500,\,10\,000\}\) nodes and \(m\in\{n,1.5n,2n,2.5n\}\) edges. For \(d\in\{0.25,0.5,0.75,1\}\), we then add \((3n-6-m)\cdot d\) edges to each exclusive graph, i.e., the fraction \(d\) of the number of edges that can be added until the graph is maximal planar. We also repeat this process three times with different initial random states for each pair of shared graph and parameter \(d\). This leads to the dataset SEFE-LRG containing 1008 instances.
We also generate a dataset of Synchronized Planarity instances by taking a random planar embedded graph and adding pipes between vertices of the same degree, using a bijection that matches their current rotation. The underlying graphs are comprised of three instances for each combination of \(n\in\{100,\,500,\,1000,\,5000,\,10\,000,\,50\,000,\,100\,000\}\) nodes, \(m\in\{1.5n,2n,2.5n\}\) edges, and \(d\in\{1,10,100\}\) distinct connected components. Note that we do not include graphs that would have no edges, e.g., those with \(n=100\) and \(d=100\). For each input graph, we generate three random Synchronized Planarity instances with \(p\in\{0.05n,0.1n,0.2n\}\) pipes. This leads to the dataset SP-LRG containing 1587 instances.
Altogether, our six datasets contain 28 339 instances in total. For the test runs on these large instances, we increase the timeout to 1 hour.
Figure 11(a) shows the result of running our baseline variant SP[d] of the Synchronized Planarity algorithm (together with selected further variants of the algorithm from subsequent sections) on dataset C-LRG. Note that, because the dataset spans a wide range of instance sizes and thus the running times also span a range of different magnitudes, the plot uses a log scale for both axes. Figure 11(b) shows the fraction of runs that timed out for each variant. To estimate the practical runtime growth behavior, we also fit a polynomial to the data shown in Figure 12 and thereby find the running time growth behavior to be similar to \(d^{1.5}\), where \(d\) is the number of crossings between edges and cluster borders.

Figure 11. C-LRG median absolute running times **(a)** and fraction of timeouts **(b)**. Each marker again corresponds to a bin containing 10 % of the instances.

Figure 12. Scatterplot and estimate for SP[d] running time growth behavior on C-LRG.
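Such a growth-rate estimate can be reproduced with a simple fit in log-log space; the sketch below assumes hypothetical arrays `d` (cluster-border edge crossings) and `t` (measured running times) rather than our actual evaluation scripts.

```python
import numpy as np

# Sketch of the growth-rate estimate: a linear fit in log-log space, so that
# t ~ const * d**slope; a slope near 1.5 matches the behavior reported above.
def fit_growth_exponent(d, t):
    d, t = np.asarray(d, dtype=float), np.asarray(t, dtype=float)
    mask = (d > 0) & (t > 0)            # log is only defined for positives
    slope, intercept = np.polyfit(np.log(d[mask]), np.log(t[mask]), 1)
    return slope, np.exp(intercept)     # exponent and constant factor
```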
### Pipe Ordering
To be able to deliberately choose the next pipe, we keep a heap of all pipes in the current instance, where the ordering function can be configured. Note that the topmost pipe from this heap may not be feasible, in which case we will give priority to the close-by pipe of higher degree that blocks the current pipe from being feasible (see (Brandt et al., 2016, Lemma 3.5)). We compare the baseline variant SP[d] sorting by descending (i.e. largest first) degree with the variant SP[a] sorting by ascending degree, and SP[r] using a random order. Note that for these variants, the ordering does not depend on which operation is applicable to a pipe or whether this operation requires the generation of an embedding tree. To see whether making this distinction affects the running time, we also compare the variants SP[d+c], which prefers to process pipes on which EncapsulateAndJoin can be applied, and SP[d-c], which defers such pipes to the very end, processing pipes requiring the generation of embedding trees first.
To make the variants easier to compare, Figure 13(a) shows running times relative to that of the baseline SP[d]. Note that we do not show the median of the last bin, in which up to 70 % of the runs timed out, while this number is far lower for all previous bins; see Figure 11(b). Figure 13(a) shows that the median running times differ by less than 10 % between these variants. The running time of SP[r] seems to randomly alternate between being slightly slower and slightly faster than SP[d]. SP[d] is slightly slower than SP[a] for all bins except the very first and very last, indicating a slight advantage of processing small pipes before bigger ones on these instances. Interestingly, SP[d] is also slower than both SP[d+c] and SP[d-c] for all bins. The fact that these two variants have the same speed-ups indicates that EncapsulateAndJoin should not be interleaved with the other operations, while it does not matter whether it is handled first or last. Still, the variance in relative running times is high and none of the variants is consistently faster on a larger part of the instances. To summarize, the plots show a slight advantage for not interleaving operation EncapsulateAndJoin with the others or sorting by ascending degree, but this advantage is not significant in the statistical sense; see Section 6.2. We keep SP[d] as the baseline for our further analysis.
Figure 13. Relative running times when **(a)** sorting by pipe degree or applicable operation and **(b)** when handling pipes between block-vertices via intersection or join. Note the different scales on the y-axis.
### Pipes with Two Block-Vertex Endpoints
Our baseline always processes pipes where both endpoints are block-vertices by applying PropagatePQ or SimplifyMatching based on the embedding tree of an arbitrary endpoint of the pipe. Alternatively, if the endpoints lie in different connected components, such pipes can also be joined directly by identifying their incident edges as in the second step of EncapsulateAndJoin. This directly removes the pipe entirely instead of splitting it into further smaller pipes, although it also results in larger connected components. We enable this joining in variant SP[d b]. As a second alternative, we may also compute the embedding trees of both block-vertices and then propagate their intersection. This preempts the multiple following operations propagating back embedding information individually for each newly-created smaller pipe. We enable this intersection in variant SP[d i]. Variant SP[d bi] combines both variants, preferring the join and only intersecting if the endpoints are in the same connected component. We compare the effect of differently handling pipes with two block-vertex endpoints in variants SP[d b], SP[d i] and SP[d bi] with the baseline SP[d], which computes the embedding tree for an arbitrary endpoint and only joins pipes where both endpoints are cut-vertices.
Figure 13(b) shows that SP[d b] (and similarly SP[d bi]) is faster by close to 25 % on instances with less than 1000 cluster-border edge crossings, but quickly becomes up to 5 times slower than SP[d] for larger instances. This effect is also visible in the absolute values of Figure 11(a). This is probably caused by the larger connected components (see the last column of Table 3), which make the computation of embedding trees more expensive. Only inserting an embedding tree instead of the whole connected component makes the embedding information of the component directly available in a compressed form without the need to later process the component in its entirety again. Figure 13(b) also shows that SP[d i] is up to a third slower than SP[d], indicating that computing both embedding trees poses a significant overhead while not yielding sufficiently more information to make progress faster. We also evaluated combinations of the variants from this section with the different orderings from the previous section, but observed no notable differences in running time behavior. The effects of the variants from this section always greatly outweigh the effects from the different orderings. To summarize, as the plots only show an advantage of differently handling pipes between block-vertices for small instances, but some strong disadvantages especially for larger instances, we keep SP[d] as our baseline.
### Batched Embedding Tree Generation
Our preliminary analysis showed that the computation of embedding trees consumes a large fraction of the running time (see Figure 10), which cannot be reduced significantly by using the degrees of freedom of the algorithm studied in the previous two sections. To remedy the overhead of recomputing embedding trees multiple times, we now change the algorithm to no longer process pipes one-by-one, but to process all pipes of a biconnected component in one batch. This is facilitated by an alternative approach for generating embedding trees not only for a single vertex, but for all vertices of a biconnected component. The embedding tree of a vertex \(v\) can be derived from the SPQR-tree using the approach described by Blasius et al. (Blasius et al., 2017): Each occurrence of \(v\) in a “parallel” skeleton of the SPQR-tree corresponds to a (PQ-tree) P-node in the embedding tree of \(v\), each occurrence in a “rigid” to a (PQ-tree) Q-node. This derivation can be done comparatively quickly, in time linear in the degree of \(v\). Thus, once we have the SPQR-tree of a biconnected component available, we can apply all currently feasible PropagatePQ and SimplifyMatching operations in a single batch with little overhead. The SPQR-tree computation takes time linear in the size of the biconnected component, albeit with a larger linear factor than for the linear-time planarity test that yields only a single embedding tree. In a direct comparison with the planarity test, this makes the SPQR-tree the more time-consuming approach.

Figure 14. Relative running times for **(a)** SPQR-tree batched embedding tree generation and **(b)** for different variants thereof.
We enable the batched embedding tree computation based on SPQR-trees in variant SP[s]. Figures 11(a) and 14(a) show that for small instances, this yields a slowdown of close to a third. Showing a behavior inverse to SP[d b], SP[s] grows faster for larger instances and its speed-up even increases to up to 4 times as fast as the baseline SP[d]. This makes SP[s] the clear champion of all variants considered so far. We will thus use it as baseline for our further evaluation, where we combine SP[s] with other, previously considered flags.
### SPQR-Batch Variations
Figure 14(b) switches the baseline between the two variants shown in Figure 14(a) and additionally contains combinations of the variants from Section 5.2 with the SPQR-batch computation. As in Figure 13(b), the intersection of embedding trees in SP[s i] is consistently slower, albeit with a slightly smaller margin. The joining of blocks in SP[s b] also shows a similar behavior as before, starting out 25 % faster for small instances and growing up to 100 % slower for larger instances. Again, this is probably because too large connected components negatively affect the computation of SPQR-trees. Still, the median of SP[s b] is consistently faster than SP[d]. In contrast to before, SP[s bi] is now faster than SP[s b], making it the best variant for instances with up to 5000 cluster-border edge crossings. This is probably because in the batched mode, there is no relevant overhead for obtaining a second embedding tree, while the intersection does preempt some following operations. To summarize, for instances up to size 5000, SP[s bi] is the fastest variant, which is outperformed by SP[s] on larger instances. This can also be seen in the absolute running times in Figure 11(a), where SP[s] is more than an order of magnitude faster than SP[d b] on large instances.
## 6. Further Analysis
In this section, we provide further in-depth analysis of the different variants from the previous section and also analyze their performance on the remaining datasets to give a conclusive judgement. To gain more insights into the runtime behavior, we measured the time each individual step of the algorithm takes when using the different variants. An in-depth analysis of this data is given in Section 6.1, where Figure 15 also gives a more detailed visualization of per-step timings. The per-step data corroborates that the main improvement of faster variants is greatly reducing the time spent on the generation of embedding trees, at the cost of slightly increased time spent on the solve and embed phases.
To further verify our ranking of variants' running times from the previous sections, we also use a statistical test to check whether one variant is significantly faster than another. The results
presented in Section 6.2 corroborate our previous results, showing that pipe ordering has no significant effect, while overly large connected components and the batched processing of pipes using SPQR-trees significantly change the running time.
The results of the remaining datasets SEFE-LRG and SP-LRG are presented in Section 6.3 and mostly agree with the results on C-LRG, with SP[d b] clearly being the slowest and SP[s] being the fastest on large instances. The main difference is the magnitude of the overhead generated by large connected components for variants with flag [b].
### Detailed Runtime Profiling
Table 3 shows the per-step running time information aggregated for the variants studied in the previous section. Figure 15 shows in greater detail how the running time is split on average across the different steps of the algorithm (Figure 15a) and further drills down on the composition of the individual steps that make the instance reduced (Figure 15b), solve the reduced instance (Figure 15d), and then derive a solution and an embedding for the input instance by undoing all changes while maintaining the embedding (Figure 15e). For variants that use the SPQR-tree for generating embedding information, we also analyze the time spent on the steps of this batch operation (Figure 15c). Note that we do not have these measurements available for runs that timed out. To ensure that the bar heights still correspond to the actual overall running times in the topmost plot, we add a bar corresponding to the time consumed by timed-out runs on top. This way, ordering the bars by height yields roughly the same order of variants as we already observed in Figure 11a.
Figure 15b clearly shows that the majority of time during the reduce step is spent on generating embedding information, either in the form of directly computing embedding trees (bars prefixed with "ET") or by computing SPQR trees. This can also be seen by comparing column "Make Reduced" in Table 3 with column "Compute Emb Tree". Only for the fastest variants, those with flag [s] and without [b], the execution of the actual operations of the algorithm becomes more prominent over the generation of embedding information in Figure 15c. Here, the terminal case of the SimplifyMatching operation (described in the bottom left part of Figure 5) now takes the biggest fraction of time, and actually also a bigger absolute amount of time than for the other, slower variants with flag [b] enabled. This is probably because, instead of being joined as with flag [b] enabled, here pipes between block-vertices are split by PropagatePQ into multiple smaller pipes, which
\begin{table}
\begin{tabular}{l|c|c c c|c c c c|c c c} \hline SP[d] & 142.68 & 133.08 & 0.82 & 8.78 & 0.25 & 5.00 & 13.79 & 91.34 & 5.64 & 1811 & 2780 \\ SP[d b] & 197.17 & 194.72 & 0.99 & 1.46 & 0.63 & 1.36 & 1.53 & 186.18 & 0.42 & 652 & 13 021 \\ SP[s] & 86.57 & 57.75 & 1.25 & 27.56 & 0.57 & 9.84 & 22.38 & 7.61 & 18.03 & 2696 & 2890 \\ SP[s b] & 93.07 & 79.25 & 3.55 & 10.26 & 2.92 & 4.29 & 12.74 & 46.31 & 5.46 & 1421 & 22 965 \\ SP[s bi] & 81.32 & 68.90 & 3.09 & 9.32 & 2.51 & 3.79 & 11.52 & 41.16 & 4.84 & 1448 & 23 284 \\ \hline \end{tabular}
\end{table}
Table 3. Average values for different variants of SP on dataset C-LRG. All values, except for the counts in the last two columns, are running times in seconds. The first data column shows the average total running time, followed by how this is split across the three phases. The following four columns show the composition of the running time of the “Make Reduced” step. The last three columns detail information about the “Undo Simplify” step in the “Embed” phase, and the maximum size of biconnected components in the reduced instance.
Figure 15. The average running time of our different Synchronized Planarity variants.
then need to be removed by SimplifyMatching. This leads to the variants without [b] needing, on average, roughly two to three times as many SimplifyMatching applications as those with [b]; see Table 3.
The larger biconnected components caused by [b] may also be the reason why the insertion of wheels takes a larger amount of time for variants with [b] in the solving phase shown in Figure 15d. When replacing a cut-vertex by a wheel, all incident biconnected components with at least two edges incident to the cut-vertex get merged. Updating the information stored with the vertices of the biconnected components is probably consuming the most time here, as undoing the changes by contracting the wheels is again very fast. Other than the “MakeWheels” part, most time during the solving phase is spent on computing SPQR trees, although both are negligible in comparison to the overall running time.
The running times of the embedding phase given in Figure 15e show an interesting behavior: they increase as the running time of the “Make Reduced” phase decreases, indicating a potential trade-off; see also the “Embed” column in Table 3. As the time spent on the “Make Reduced” phase is still slightly larger, variants where this phase is faster are still the fastest overall, even though their embedding phase is slower. The biggest contribution of running time in the latter phase is the undoing of SimplifyMatching operations, which means copying the embedding of one endpoint of a removed pipe to the other. The time spent here roughly correlates with the time spent on applying the SimplifyMatching operations in the first place (see Table 3).
To summarize, the per-step data corroborates that the main improvement of faster variants is greatly reducing the time spent on the generation of embedding trees, at the cost of slightly increased time spent on the solve and embed phases. Flags [s] and [b] have the biggest impact on running times, while flag [i] and the processing order of pipes do not seem to have a significant influence on the overall running time. While the variants with [s] clearly have the fastest overall running times, there is some trade-off between the amounts of time spent on different phases of the algorithm when toggling the flag [b].
### Statistical Significance
To test whether one variant is (in the statistical sense) significantly faster than another, we use the methodology proposed by Radermacher (Radermacher, 2016, Section 3.2) for comparing the performance of graph algorithms. For a given graph \(G\) and two variants of the algorithm described by their respective running times \(f_{A}(G),f_{B}(G)\) on \(G\), we want to know whether there is a likelihood of at least \(p\) that the one variant is faster than the other by at least a factor \(\Delta\). To do so, we use the binomial sign test with advantages as used by Radermacher (Radermacher, 2016), where we fix two values \(p\in[0,1]\) and \(\Delta\geq 1\), and study the following hypothesis given a random graph \(G\) from our dataset: Inequality \(f_{A}(G)\cdot\Delta<f_{B}(G)\) holds with probability \(\pi\), which is at least \(p\). The respective null hypothesis is that the inequality holds with probability less than \(p\). Note that this is an experiment with exactly two outcomes (the inequality holding or not), which we can independently repeat on a sequence of \(n\) graphs and obtain the number of instances \(k\) for which the inequality holds. Using the binomial test, we can check the likelihood of obtaining at least \(k\) successes by drawing \(n\) times from a binomial distribution with probability \(p\). If this likelihood is below a given significance level \(\alpha\in[0,1]\), that is, the obtained result is unlikely under the null hypothesis, we can reject the null hypothesis that the inequality only holds with a probability less than \(p\).
Fixing the significance level to the commonly-used value \(\alpha=0.05\), we still need to fix values for \(p\) and \(\Delta\) to apply this methodology in practice. We will use three different values \(p\in\{0.25,0.5,0.75\}\), corresponding to the advantage on a quarter, half, and three quarters of the dataset. To obtain values for \(\Delta\), we will split our datasets evenly into two halves \(\mathcal{G}_{\text{train}}\) and \(\mathcal{G}_{\text{verify}}\), using \(\mathcal{G}_{\text{train}}\) to obtain an estimate for \(\Delta\) and \(\mathcal{G}_{\text{verify}}\) to verify this value. For a given value of \(p\), we set \(\Delta^{\prime}\) to the largest value such that \(f_{A}(G)\cdot\Delta^{\prime}<f_{B}(G)\) holds for \(p\cdot|\mathcal{G}_{\text{train}}|\) instances. To increase the likelihood that we can reject the null hypothesis in the verification step on \(\mathcal{G}_{\text{verify}}\), we slightly discount the obtained value of \(\Delta^{\prime}\), using \(\Delta=\max(1,c\cdot\Delta^{\prime})\) instead, with \(c\) set to \(0.75\).
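As a concrete illustration of this procedure, the following sketch (our own toy code, not the evaluation scripts of this study) estimates \(\Delta\) on one half of the data and runs the one-sided binomial test on the other; `fa_*` and `fb_*` are hypothetical lists of running times of variants A and B on the same instances.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def estimate_delta(fa_train, fb_train, p, c=0.75):
    # Largest Delta' with fa * Delta' < fb on a p-fraction of the training
    # half, discounted by the factor c and kept at least 1.
    ratios = sorted((b / a for a, b in zip(fa_train, fb_train)), reverse=True)
    k = max(1, int(p * len(ratios)))
    return max(1.0, c * ratios[k - 1])

def advantage_significant(fa_verify, fb_verify, p, delta, alpha=0.05):
    n = len(fa_verify)
    k = sum(1 for a, b in zip(fa_verify, fb_verify) if a * delta < b)
    # Reject the null hypothesis (inequality holds with probability < p) if
    # observing at least k successes is unlikely under success rate p.
    return binom_tail(k, n, p) < alpha
```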
Applying this methodology, Figure 16 compares the pairwise advantages of the variants from Sections 5.1 and 5.2. We see that SP[d i] and especially SP[d b] are significantly slower than the other variants: for the quarter of the dataset with the most extreme differences, the advantage rises up to a 5-fold speed-up for other variants, while slight advantages still persist when considering three quarters of instances. Conversely, not even on a quarter of instances are SP[d i] and SP[d b] faster than other variants. Comparing the remaining variants with each other, we see that each variant has at least a quarter of instances where it is slightly faster than the other variants, but always with no noticeable advantage, that is \(\Delta=1\). This is not surprising as the relative running times are scattered evenly above and below the baseline in Figure 13(a). For half of the dataset, SP[d-c] is still slightly faster than other variants, while no variant from Section 5.1 is faster than another for at least three quarters of instances. To summarize, our results here corroborate the findings from Sections 5.1 and 5.2, with SP[d i] and SP[d b] as the clearly slowest variants. While there is no clear winner among the other variants, at least SP[d-c] is slightly faster than the others on half of the dataset, but still has no noticeable advantage.
Figures 17(a) and 17(b) compare the pairwise advantages of the variants from Sections 5.3 and 5.4 (see also Figure 14(b)) for instances with more and less than 5000 cluster-border edge crossings, respectively. For the larger instances of Figure 17(a), the variants with flag [s] outperform SP[d] on at least \(75\,\%\) of instances, with advantages as high as a factor of 5 on at least a quarter of instances. Furthermore, SP[s] outperforms the variants with additional flags [b] and [i] on at least half of all instances. Considering \(75\,\%\) of all instances, the only significant result is that SP[s bi] outperforms SP[s b] but with no advantage, i.e. \(\Delta=1\). For the smaller instances of Figure 17(b), the comparison looks vastly different. Here, SP[s bi] outperforms all other variants on at least \(75\,\%\) of instances, although its advantage is not large, with only up to \(1.6\) even on the most extreme quarter of the dataset. Furthermore, variants SP[d] and SP[s b] outperform variants SP[s i] and SP[s] on half of the dataset, but again with no noticeable advantage, that is \(\Delta=1\). To summarize, our results are
Figure 16. Advantages of variants without flag [s] on C-LRG instances of size at least 5000. Blue cell backgrounds indicate significant values, while in cells with white background, we were not able to reject the null-hypothesis with significance \(\alpha=0.05\). Empty cells indicate that the fraction where the one algorithm is better than the other is smaller than \(p\).
again in accordance with those from Sections 5.3 and 5.4, where for large instances variant SP[s] is the fastest, whereas for smaller instances SP[s bi] is superior.
### Other Problem Instances
Running the same evaluation on the datasets SEFE-LRG and SP-LRG yielded absolute running times with roughly the same orders of magnitude as for C-LRG; see the left plots in Figures 18 to 20 (but note that the plots show different ranges on the x-axis while having the same scale on the y-axis). The right plots in the figures again detail the running times relative to SP[d]. For SP-LRG, the relative running time behavior is similar to the behavior observed on C-LRG. The two major differences concern variants with flag [b]. Variant SP[d b(i)] is not faster than SP[d] on small instances and also sooner grows slower on large instances. Similarly, SP[s b(i)] is not much faster than SP[d] on small instances, and its speed-up over SP[d] for larger instances has a dent where it returns to having roughly the same speed as SP[d] around size 1000. On a large scale, this behavior indicates that the slowdown caused by large connected components is even worse in dataset SP-LRG. For SEFE-LRG, the instances are less evenly distributed in terms of their total pipe degree, as the total pipe degree directly corresponds to the vertex degrees in the SEFE instance. Regarding the relative running time behavior, we still see that SP[d bi] is much slower and SP[s (i)] much faster than SP[d]. For the remaining variants, the difference to SP[d] is much smaller than in the two other datasets. This indicates that the size of connected components does not play as important a role in this dataset as before.

Figure 17. Advantages of variants with flag [s] on C-LRG instances of size at least 5000 (a) and at most 5000 (b).
## 7. Conclusion
In this paper, we described the first practical implementation of Synchronized Planarity, which generalizes many constrained planarity problems such as Clustered Planarity and Connected SEFE. We evaluated it on more than \(28\,000\) instances stemming from different problems. Using the quadratic algorithm by Blasius et al. (Blasius et al., 2016), instances with \(100\) vertices are solved in milliseconds, while we can still solve most instances with up to \(100\,000\) vertices within minutes. This makes our implementation at least an order of magnitude faster than all other Clustered Planarity implementations, which also have a worse asymptotic running time. Furthermore, our engineering showed that batching the generation of embedding trees via SPQR-trees yields the largest improvements on large instances, while the order in which pipes are processed has no significant effect on the running time.
Figure 18. Absolute (a) and relative (b) running times with regard to SP[d] for C-LRG.

Figure 19. Absolute (a) and relative (b) running times with regard to SP[d] for SP-LRG.

Figure 20. Absolute (a) and relative (b) running times with regard to SP[d] for SEFE-LRG.
# A Quantum Optimization Method for Geometric Constrained Image Segmentation

Nam H. Le, Milan Sonka, Fatima Toor
###### Abstract
Quantum image processing is a growing field attracting attention from both the quantum computing and image processing communities. We propose a novel method in combining a graph-theoretic approach for optimal surface segmentation and hybrid quantum-classical optimization of the problem-directed graph. The surface segmentation is modeled classically as a graph partitioning problem in which a smoothness constraint is imposed to control surface variation for realistic segmentation. Specifically, segmentation refers to a source set identified by a minimum s-t cut that divides graph nodes into the source (s) and sink (t) sets. The resulting surface consists of graph nodes located on the boundary between the source and the sink. Characteristics of the problem-specific graph, including its directed edges, connectivity, and edge capacities, are embedded in a quadratic objective function whose minimum value corresponds to the ground state energy of an equivalent Ising Hamiltonian. This work explores the use of quantum processors in image segmentation problems, which has important applications in medical image analysis. Here, we present a theoretical basis for the quantum implementation of LOGISMOS and the results of a simulation study on simple images. Quantum Approximate Optimization Algorithm (QAOA) approach was utilized to conduct two simulation studies whose objective was to determine the ground state energies and identify bitstring solutions that encode the optimal segmentation of objective functions. The objective function encodes tasks associated with surface segmentation in 2-D and 3-D images while incorporating a smoothness constraint. In this work, we demonstrate that the proposed approach can solve the geometric-constrained surface segmentation problem optimally with the capability of locating multiple minimum points corresponding to the globally minimal solution.
**Keywords:** quantum computing, quantum algorithm, combinatorial optimization, image segmentation, graph theory, QAOA
## 1 Introduction
Previously, an algorithmic framework LOGISMOS, Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (Li et al. (2004, 2006); Wu and Chen (2002); Zhang et al. (2020)), was developed, and its effectiveness in the segmentation of multiple interacting surfaces has been proven in clinical applications (Kashyap et al. (2017); Oguz and Sonka (2014); Le et al. (2022)). The principle of LOGISMOS lies in the reformulation of the surface optimization task to the problem of finding a minimum \(s-t\) cut in a directed graph. This representation is flexible because geometric constraints can be easily incorporated into the graph construction by adding graph edges with infinite capacities.
Krauss and colleagues (Krauss et al. (2020)) presented three methods to formulate the maximal flow as a quadratic unconstrained binary optimization (QUBO) problem. In a QUBO formulation, the objective function can be expressed as a quadratic polynomial of binary variables. The cost matrix can be converted to a Hamiltonian matrix implemented by a series of rotation Z gates. The problem of finding a minimum function value of QUBO is equivalent to finding the ground state energy of the corresponding problem Hamiltonian.
Estimating the ground state of an arbitrary Hamiltonian is difficult. The Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al. (2014)) provides an adiabatic way to evolve the ground state of a simple mixing Hamiltonian to the ground state of the full problem Hamiltonian. QAOA is a special case of variational quantum circuits. At each layer of QAOA, a pair of operators called the cost and mixer unitaries is applied to slowly perturb the system Hamiltonian toward the target Hamiltonian. To minimize the expectation value of the problem Hamiltonian, the parameters of the unitaries are optimized classically. This optimization process is repeated iteratively until no further improvement in the minimum objective function value can be achieved.
Through our analysis, we have discovered that the minimum cut formulation, when optimized by QAOA, provides a stochastic optimization approach for LOGISMOS surface segmentation that can return valid solutions across different runs.
## 2 Method
The proposed QuantumLOGISMOS framework is depicted in Fig. 1. The process begins with an input image, and the primary objective is to determine an optimal surface that separates a region of interest from the background. This is achieved by defining a cost function that emphasizes intensity changes within the image. The solution to the segmentation problem lies in identifying the nodes whose associated costs minimize the total cost function. Classically, this task is approached by recognizing that the optimal surface nodes form a closed set of minimal total terminal weight. Such a set can be found by a minimum \(s-t\) cut in a directed graph whose edge capacities dictate the maximum flow through the network. To initiate the graph, each image pixel is represented as a graph node. We augment the graph with a series of infinite-capacity edges that enforce constraints on the permissible smoothness variations of the solution surface. To determine the minimum \(s-t\) cut that separates the segmentation set from the background, we minimize a QUBO objective function using a hybrid quantum-classical QAOA schedule. The resulting bitstring solution represents the optimal surface within the image.
### LOGISMOS for Single Surface Detection
The LOGISMOS framework models a surface segmentation problem as a maximal-flow minimum-cut problem (Dantzig and Fulkerson (1955)). An optimal surface is the boundary between the source and sink sets of the graph after cutting it along the edges of a minimum \(s-t\) cut.
A directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is constructed from the volumetric image \(\mathcal{I}(\mathbf{x},\mathbf{y},\mathbf{z})\) satisfying the following properties: (i) each pixel (2-D image) or voxel (3-D image) is represented by a graph node and (ii) all nodes are connected by directed edges to their 4-neighbors or 6-neighbors in 2-D or 3-D images, respectively.
A cost function maps graph nodes to cost values, defined as the unlikeliness of a node to reside on a surface \(s\); a lower cost indicates a greater likelihood that the node lies on the optimal surface. Given a surface function \(s(x)=k\) that maps a column \(x\) to a node \(k\) of an optimal surface \(\mathcal{S}\), the corresponding total cost function of the given surface function \(s\) is defined as Eq. (1)
\[C_{s}=\sum_{x\in Col_{\mathcal{G}}}c_{s}(x,k)=\sum_{x\in Col_{\mathcal{G}}}c_{s}(x,s(x)). \tag{1}\]
Equation (1) is the sum of the cost of all nodes \(k\), one per column in the set of columns \(Col_{\mathcal{G}}\) of the graph \(\mathcal{G}\), on the optimal surface \(\mathcal{S}\). The optimal surface \(\hat{\mathcal{S}}\) is characterized by the surface function \(\hat{s}(x)\) that minimizes the total cost function \(C_{s}\) represented in Eq. (2)
\[\hat{s}(x)=\arg\min_{s(x)}C_{s}. \tag{2}\]
Figure 1: The proposed Quantum LOGISMOS framework: (1) Estimate cost functions and calculate terminal weights, (2) Introduce internal connections within columns to ensure that the optimal surface cut passes through each column only once, (3) Add inter-edges to impose smoothness condition, (4) Add source and sink nodes, add problem-specific edges, (5) Assign qubit to graph node, (6) QAOA optimization, (7) Bitstring solution and minimum closed set found.
The problem of searching for an optimal surface can be reformulated as the task of finding a closed set with minimal total terminal weight. This formulation is equivalent to minimizing the total cost function, \(\min W_{s}=\min C_{s}\). The total terminal weight \(W_{s}\) of the closed set induced by a surface \(s\) is determined by summing the terminal weights of the nodes \(k^{\prime}=1,\ldots,s(x)\) of each column \(x\) that belong to the closed set, defined as Eq. (3)

\[W_{s}=\sum_{x\in Col_{\mathcal{G}}}\sum_{k^{\prime}=1}^{s(x)}w_{s}\left(x,k^{\prime}\right), \tag{3}\]
where
\[w_{s}(x,k)=\begin{cases}-1,&\text{if }k=1\\ c_{s}(x,k)-c_{s}(x,k-1),&\text{otherwise}\end{cases}. \tag{4}\]
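As a concrete illustration of Eq. (4), the following minimal sketch computes the terminal weights column-wise; the cost array `C` and its layout (shape \((K,X)\) with `C[k-1, x]` \(=c_{s}(x,k)\)) are our own convention, not part of the original framework.

```python
import numpy as np

# Minimal sketch of Eq. (4): terminal weights from a per-column cost array.
def terminal_weights(C):
    W = np.empty_like(C, dtype=float)
    W[0, :] = -1                      # w_s(x, 1) = -1 for every column x
    W[1:, :] = C[1:, :] - C[:-1, :]   # w_s(x, k) = c_s(x, k) - c_s(x, k-1)
    return W
```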
Now, we specify how to construct the directed edges \(\mathcal{E}\) of the graph, which characterize a number of segmentation configurations. First, we construct a set of infinite-capacity directed edges \(\mathcal{E}_{\text{intra}}\) imposing convexity constraints on the surface \(\mathcal{S}\). This ensures that each graph column is cut exactly once; in other words, the surface function \(s(x)=k\) is guaranteed to be single-valued. The smoothness constraint is imposed by adding a series of edges \(\mathcal{E}_{\text{inter}}\) among adjacent columns, which enables the realistic delineation of the outlines of anatomical structures and is among the main advantages of the LOGISMOS framework.
Finally, the problem-specific edges imposing the terminal weights are embedded in \(\mathcal{E}_{\text{W}}\). To construct \(\mathcal{E}_{\text{W}}\), we first add a source node \(s\) and a sink \(t\) node to the graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), resulting in \(\mathcal{G}_{st}=(\mathcal{V}_{st},\mathcal{E})_{st}\) where \(\mathcal{V}_{st}=V\cup\{s,t\}\). For each node \(v\) belonging to the set of nodes with negative costs \(\mathcal{V}^{-}\), we add directed edges from the source node \(s\) to it with capacity \(|w_{s,v}|\). For those nodes belonging to the set \(\mathcal{V}^{+}\) of positive costs, we add directed edges from those to the sink node \(t\) with capacity \(|w_{v,t}|\). The resulting edges of the LOGISMOS graph \(G\) are
\[\mathcal{E}=\mathcal{E}_{\text{intra}}\cup\mathcal{E}_{\text{inter}}\cup\mathcal{E}_{\text{W}}. \tag{5}\]
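The sketch below illustrates this edge construction for a simplified 2-D case, assuming the terminal-weight array `W` from the previous sketch and a smoothness bound `delta`; the node labels and helper names are our own and the handling of the bottom-most nodes is simplified.

```python
import math

INF = math.inf  # infinite capacity marks constraint edges

# Simplified 2-D sketch of the edge construction in Eq. (5).
def build_logismos_edges(W, delta):
    K, X = W.shape
    edges = []                                  # (tail, head, capacity)
    for x in range(X):
        for k in range(1, K):                   # intra-column edges
            edges.append(((x, k), (x, k - 1), INF))
        for k in range(K):                      # inter-column smoothness edges
            for nx in (x - 1, x + 1):
                if 0 <= nx < X:
                    edges.append(((x, k), (nx, max(0, k - delta)), INF))
        for k in range(K):                      # terminal edges E_W
            if W[k, x] < 0:
                edges.append(("s", (x, k), abs(W[k, x])))
            else:
                edges.append(((x, k), "t", abs(W[k, x])))
    return edges
```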
### Conversion to an Equivalent QUBO
Building upon the work of Krauss et al. (Krauss et al. (2020)), we develop a QUBO objective function \(F_{C}\) whose minimization separates the vertex set \(V\) into a source set \(s\) and a sink set \(t\). Here, \(F_{C}\) is a sum of individual objective functions \(F_{\langle i,j\rangle}\) for each edge \(\langle i,j\rangle\in\mathcal{E}\) and \(F_{\langle s,t\rangle}\) for the source-sink edge \(\langle s,t\rangle\), as in Eq. (6) and Eq. (7)
\[F_{\langle i,j\rangle}=x_{i}^{2}-x_{i}x_{j}=\left\{\begin{array}{rl}1,&\text {if }i\in\text{source and }j\in\text{sink}\\ &0,\text{ otherwise}\end{array}\right. \tag{6}\]
\[F_{\langle s,t\rangle}=-(x_{s}^{2}-x_{s}x_{t})=\left\{\begin{array}{rl}-1,& \text{if cut is valid}\\ &0,\text{ otherwise}\end{array}\right. \tag{7}\]
which leads to
\[F_{C}(x)=\sum_{\langle i,j\rangle\in\mathcal{E}}\lvert w_{\langle i,j\rangle}\rvert F_{\langle i,j\rangle}+\varepsilon F_{\langle s,t\rangle}. \tag{8}\]
The penalty coefficient \(\varepsilon\) ensures any feasible cut will result in negative energy. In our application, we choose the penalty coefficient to be the sum of all capacities plus one, assuming integer terminal weights
\[\varepsilon=1+\sum_{\langle i,j\rangle\in\mathcal{E}}\lvert w_{\langle i,j \rangle}\rvert. \tag{9}\]
We are now ready to construct a QUBO objective function. Specifically, the QUBO problem is expressed as
\[\text{minimize }F_{C}=\mathbf{x}^{T}\mathbf{Q}\mathbf{x}=\sum_{i,j}Q_{ij}x_{i}x_{j}, \tag{10}\]
where \(\mathbf{x}=[x_{1},x_{2},\ldots]\) is a vector whose elements are the encoded variables specific to the optimization problem, with each \(x_{i}\) binary, i.e., subject to \(x_{i}\in\{0,1\}\) (Glover et al. (2019)).
Given a LOGISMOS graph, its adjacency matrix \(\mathbf{A}\) is the square matrix whose elements are given in Eq. (11)
\[\mathbf{A}=\begin{bmatrix}a_{11}&a_{12}&\ldots&a_{1j}\\ a_{21}&a_{22}&\ldots&\vdots\\ \vdots&\vdots&\ddots&\vdots\\ a_{i1}&\ldots&\cdots&a_{ij}\end{bmatrix}. \tag{11}\]
Since LOGISMOS graphs have no loops or self-edges, the diagonal elements of \(\mathbf{A}\) are zero and \(\mathbf{A}\) becomes to Eq. (12)
\[\mathbf{A}=\begin{bmatrix}0&a_{12}&\ldots&a_{1j}\\ a_{21}&0&\ldots&\vdots\\ \vdots&\vdots&\ddots&\vdots\\ a_{i1}&\ldots&\cdots&0\end{bmatrix}. \tag{12}\]
Based on the edge construction rules specified in Section 2.1, the matrix elements of \(\mathbf{A}\) are \(a_{ij}=|w_{(i,j)}|:=w_{ij}\), therefore we get \(\mathbf{A}\) of the form shown in Eq. (13)
\[\mathbf{A}=\begin{bmatrix}0&w_{12}&\ldots&w_{1j}\\ w_{21}&0&\ldots&\vdots\\ \vdots&\vdots&\ddots&\vdots\\ w_{i1}&\ldots&\ldots&0\end{bmatrix}. \tag{13}\]
The constant square matrix \(\mathbf{Q}\) can be derived from \(\mathbf{A}\) by negating the off-diagonal elements of \(\mathbf{A}\). The diagonal elements of \(\mathbf{Q}\) are obtained by summing all outgoing edge capacities of each node \(i\) in \(\mathbf{A}\), as in Eq. (14)
\[\mathbf{Q}=\begin{bmatrix}\sum_{j}w_{1j}&-w_{12}&\ldots&-w_{1j}\\ -w_{21}&\sum_{j}w_{2j}&\ldots&\vdots\\ \vdots&\vdots&\ddots&\vdots\\ -w_{i1}&\ldots&\cdots&\sum_{j}w_{ij}\end{bmatrix}. \tag{14}\]
Thus, the objective function \(F_{C}\) can be rewritten in QUBO form as Eq. (15)
\[F_{C}=\mathbf{x}^{T}\mathbf{Q}\mathbf{x}=\mathbf{x}^{T}\begin{bmatrix}\sum_{j}w_{1j}&-w_{12}& \ldots&-w_{1j}\\ -w_{21}&\sum_{j}w_{2j}&\ldots&\vdots\\ \vdots&\vdots&\ddots&\vdots\\ -w_{i1}&\cdots&\cdots&\sum_{j}w_{ij}\end{bmatrix}\mathbf{x}. \tag{15}\]
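The construction of \(\mathbf{Q}\) from \(\mathbf{A}\) in Eqs. (13)-(15) is mechanical; a small numpy sketch, with an arbitrary 3-node example, illustrates it.

```
import numpy as np

def qubo_matrix(A):
    # A: nonnegative capacity matrix with zero diagonal, a_ij = |w_ij|
    Q = -A.astype(float)                  # negate the off-diagonal elements
    np.fill_diagonal(Q, A.sum(axis=1))    # diagonal: sum of outgoing capacities
    return Q

A = np.array([[0, 3, 2],
              [0, 0, 4],
              [0, 0, 0]])
Q = qubo_matrix(A)
x = np.array([1, 1, 0])                   # a candidate cut assignment
print(x @ Q @ x)                          # F_C for this assignment, Eq. (15) -> 6.0
```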
### Formulation of the Problem Hamiltonian
We need to find the equivalent problem Hamiltonian that models \(F_{C}\) such that, for every computational basis state \(\left|\psi\right\rangle\),
\[H_{C}\left|\psi\right\rangle=F_{C}\left|\psi\right\rangle. \tag{16}\]
Equation (16) indicates that the ground state energy of a quantum system characterized by the Hamiltonian \(H_{C}\) is equivalent to the minimum value of the objective function \(F_{C}\), achieved with the ground state of the quantum system. Multiplying both sides of Eq. (16) by bra \(\left\langle\psi\right|\) yields
\[F_{C}=\left\langle\psi\right|H_{C}\left|\psi\right\rangle=\left\langle H_{C}\right\rangle, \tag{17}\]
which suggests a way to rewrite the QUBO expression of \(F_{C}\) in Eq. (15) as an expectation value of the problem Hamiltonian \(H_{C}\) in the state \(\left|\psi\right\rangle\).
From Eq. (15) and Eq. (17), we have
\[F_{C}=\left\langle\psi\right|H_{C}\left|\psi\right\rangle=\mathbf{x}^{T}\mathbf{Q}\mathbf{x}. \tag{18}\]
The problem Hamiltonian \(H_{C}\) corresponding to the objective function \(F_{C}\) is obtained by translating the classical binary variables \(\mathbf{x}\) into operators on the qubit states \(\left|\psi\right\rangle\) via the substitution \(x_{i}\rightarrow(1-Z_{i})/2\), as represented by Eq. (19)
\[H_{C}=\sum_{i,j}Q_{ij}\frac{1-Z_{i}}{2}\frac{1-Z_{j}}{2}. \tag{19}\]
The state \(\left|\psi\right\rangle=\left|\psi_{1}\right\rangle\otimes\left|\psi_{2}\right\rangle\otimes\ldots\otimes\left|\psi_{n}\right\rangle\), with each \(\left|\psi_{i}\right\rangle\in\{\left|0\right\rangle,\left|1\right\rangle\}\), in the Hilbert space \(\left(\mathbb{C}^{2}\right)^{\otimes n}\) encodes the classical bitstring \(\mathbf{x}=[x_{1},x_{2},\ldots,x_{n}]\). The connection between the value of the QUBO objective function and the energy of the quantum system is as in Eq. (20)
\[F_{C}(\mathbf{x})=\left\langle\psi\right|H_{C}\left|\psi\right\rangle=\left\langle \psi\right|E\left|\psi\right\rangle=\left\langle H_{C}\right\rangle=E. \tag{20}\]
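The substitution \(x_{i}\rightarrow(1-Z_{i})/2\) in Eq. (19) can be expanded into a constant, single-\(Z\), and \(ZZ\) coefficients; the following sketch performs that bookkeeping explicitly (Qiskit's optimization module does the equivalent conversion internally, so this is for illustration only).

```
import numpy as np

def qubo_to_ising(Q):
    # Expand H_C = sum_ij Q_ij (1 - Z_i)/2 (1 - Z_j)/2 into
    # H_C = const*I + sum_i h[i] Z_i + sum_{i<j} J[i,j] Z_i Z_j
    n = Q.shape[0]
    h, J, const = np.zeros(n), np.zeros((n, n)), 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                # (1 - Z_i)/2 is a projector, so the squared term collapses
                const += Q[i, i] / 2
                h[i] -= Q[i, i] / 2
            else:
                const += Q[i, j] / 4
                h[i] -= Q[i, j] / 4
                h[j] -= Q[i, j] / 4
                J[min(i, j), max(i, j)] += Q[i, j] / 4
    return const, h, J
```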
The solution to the QUBO problem is read off the state of the qubits representing the graph nodes in a QAOA circuit that estimates the problem Hamiltonian \(H_{C}\) (Lucas [2014]) and drives the circuit energy to the ground-state energy \(E_{0}\) of \(H_{C}\), given by Eq. (21)
\[\min F_{C}=E_{0}. \tag{21}\]
### Hybrid Quantum-Classical Optimization
Locating the ground state of an arbitrary Hamiltonian is nontrivial. QAOA, proposed by Farhi et al. (Farhi et al. [2014]), aims to approximate the ground state of a given problem Hamiltonian \(H_{C}\). The idea of QAOA is based on the adiabatic theorem, which states that a quantum system remains in its instantaneous energy eigenstate if the Hamiltonian changes slowly enough. The framework consists of three parts: (1) preparation of the ground state of a known, "easy" Hamiltonian \(H_{M}\); (2) a parameterized quantum circuit that slowly evolves this initial state, the ground state of \(H_{M}\), towards the final state, ideally the ground state of the problem Hamiltonian \(H_{C}\); and (3) a classical module that averages the expectation values over all shots and updates the circuit parameters \(\mathbf{\theta}\) for the next optimization iteration. The process is repeated until the classical optimizer is no longer able to improve \(\mathbf{\theta}\).
A QAOA circuit consists of trotterized unitary operators \(U_{C}(\gamma)\), the time evolution operator imposed by \(H_{C}\), and \(U_{M}(\beta)\), a time evolution operator imposed by a simple Hamiltonian \(H_{M}\). \(U_{C}(\gamma)\) and \(U_{M}(\beta)\) are also called problem and mixer unitaries, represented in Eq. (22) and Eq. (23), respectively
\[U_{C}(\gamma)=e^{-i\gamma H_{C}}, \tag{22}\]
\[U_{M}(\beta)=e^{-i\beta H_{M}}. \tag{23}\]
These pairs of unitary operators are applied to the initial state \(\left|\psi_{0}\right\rangle\)\(p\) times through the parameters \(\gamma_{i}\) and \(\beta_{i}\) at layer \(i\), i.e., \(\mathbf{\theta}_{i}=(\gamma_{i},\beta_{i})\), resulting in a final state as in Eq. (24)
\[\left|\psi(\mathbf{\gamma},\mathbf{\beta})\right\rangle=U_{M}\left(\beta_{p}\right)U_{C}\left(\gamma_{p}\right)\cdots U_{M}\left(\beta_{1}\right)U_{C}\left(\gamma_{1}\right)\left|\psi_{0}\right\rangle \tag{24}\]
By the adiabatic principle, the Hamiltonian induced by the trotterized circuit becomes a better approximation of the problem Hamiltonian \(H_{C}\) as the circuit gets deeper, as in Eq. (25)
\[U(\mathbf{\theta})=\prod_{i=1}^{p}U_{C}\left(\gamma_{i}\right)U_{M}\left(\beta_{i} \right)\approx e^{-iH_{C}t}. \tag{25}\]
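A minimal Qiskit-style sketch of such a depth-\(p\) circuit for an Ising-form \(H_{C}\) (coefficients `h`, `J` as in the conversion sketch above) is shown below; the RZ/RZZ/RX decomposition of the problem and mixer unitaries is standard, and the function name is illustrative.

```
from qiskit import QuantumCircuit

def qaoa_circuit(h, J, gammas, betas):
    n = len(h)
    qc = QuantumCircuit(n)
    qc.h(range(n))                        # |+>^n, the ground state of the mixer
    for gamma, beta in zip(gammas, betas):
        # problem unitary U_C(gamma) = exp(-i gamma H_C)
        for i in range(n):
            if h[i] != 0:
                qc.rz(2 * gamma * h[i], i)
        for i in range(n):
            for j in range(i + 1, n):
                if J[i][j] != 0:
                    qc.rzz(2 * gamma * J[i][j], i, j)
        # mixer unitary U_M(beta) = exp(-i beta sum_i X_i)
        qc.rx(2 * beta, range(n))
    qc.measure_all()
    return qc
```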
The depth \(p\) of the circuit allows for finer perturbation of the QAOA Hamiltonian, i.e., \(\mathbf{\theta}_{i}\) varies more slowly at each parameterized block. As \(p\) approaches infinity, it is theorized that the QAOA circuit is guaranteed to evolve the initial ground state of \(H_{M}\) into an approximation of the ground state of \(H_{C}\). The final state \(\left|\psi(\mathbf{\gamma},\mathbf{\beta})\right\rangle=\left|\psi(\mathbf{\theta})\right\rangle\) of the qubits is estimated by measurement in the computational basis over a number of shots at the end of each QAOA execution. The energy of the system is then calculated as in Eq. (26)
\[E\left(\mathbf{\theta}\right)=\left\langle\psi\left(\mathbf{\theta}\right)\right|H_{C} \left|\psi\left(\mathbf{\theta}\right)\right\rangle=F_{C}(\mathbf{x}). \tag{26}\]
The process is repeated until the classical optimizer is no longer able to update \(\mathbf{\theta}\). The resulting bitstring \(\mathbf{x}\) corresponding to the computational basis is the approximate solution to the QUBO problem. In other words,
\[\min_{\mathbf{\theta}}E\left(\mathbf{\theta}\right)\approx\min\langle H_{C}\rangle, \tag{27}\]
and
\[\min_{\mathbf{\theta}}E\left(\mathbf{\theta}\right)\approx\min_{\mathbf{x}}F_{C}(\mathbf{x}). \tag{28}\]
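The outer classical loop is a generic gradient-free minimization of \(E(\mathbf{\theta})\); a sketch using scipy, with COBYLA standing in for the SPSA optimizer used in our implementation, is given below. It assumes a caller-supplied function `expected_energy` that runs the circuit and averages the measured energies.

```
import numpy as np
from scipy.optimize import minimize

def optimize_qaoa(expected_energy, p, n_restarts=5, seed=0):
    # expected_energy: callable theta -> <psi(theta)|H_C|psi(theta)>,
    # with theta = [gamma_1..gamma_p, beta_1..beta_p]
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        theta0 = rng.uniform(0, np.pi, size=2 * p)   # random initial parameters
        res = minimize(expected_energy, theta0, method='COBYLA',
                       options={'maxiter': 250})
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun                          # best theta, minimum energy
```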
### QuantumLOGISMOS
The proposed QuantumLOGISMOS algorithm is presented in Algorithm 1. The algorithm takes an image \(\mathcal{I}\) as input and returns a surface \(\mathcal{S}\) as output. The algorithm consists of three main stages: (1) construct a LOGISMOS graph from the image \(\mathcal{I}\), (2) construct a QUBO objective function \(F_{C}\) from the LOGISMOS graph, and (3) run the QAOA algorithm to find the optimal surface \(\mathcal{S}\).
```
Input: Image \(\mathcal{I}\) Output: Surface \(\mathcal{S}\)
1 Calculate cost function \(c_{s}(x,k)\)
2 Calculate terminal weights \(w_{s}(x,k)\)
3 Construct edge set \(\mathcal{E}_{\text{intra}}\)
4 Set smoothness constraint \(\delta\)
5 Construct edge set \(\mathcal{E}_{\text{inter}}\)
6 Add a source node \(s\) and a sink node \(t\)
7 Construct edge set \(\mathcal{E}_{\text{W}}\)
8 Get adjacency matrix \(\mathbf{A}\)
9 Calculate the QUBO matrix \(\mathbf{Q}\)
10 Choose QAOA circuit depth \(p\)
11 Assign qubit registers to graph nodes, \(q_{i}:=x_{i}\)
12\(E_{\text{min}}=0\)
13while\(E(\mathbf{\theta})<E_{\text{min}}\)do
14\(E_{\text{min}}=E(\mathbf{\theta})\)
15 Run QAOA circuit with parameters \(\mathbf{\theta}\)
16 Calculate \(E(\mathbf{\theta})\)
17 Update \(\mathbf{\theta}\)
18 end while
19\(\mathcal{S}\) is the set of highest source-set nodes at each column of \(\mathcal{G}\)
```
**Algorithm 1**QuantumLOGISMOS
## 3 Experiments
### Python Implementation
The code snippet in Listing 1 provides a detailed implementation of the core simulation part of our framework. First, a LOGISMOS graph was constructed from the provided cost matrix and the specified smoothness parameter (lines 8-10). In line 18, the total capacity is calculated by summing the individual capacities of the graph edges. Lines 26 to 30 implement the QUBO formulation of LOGISMOS. The Qiskit quantum simulator was used to run the QAOA simulation on the given QUBO objective function (lines 32 to 41). The number of QAOA runs depends on whether the classical optimizer is capable of finding a new set of parameters \(\mathbf{\theta}\); otherwise, the QAOA optimization is terminated.
```
 1 def solve(self):
 2     """
 3     Given a LOGISMOS graph, convert the max-flow min-cut problem to a QUBO problem.
 4
 5     :param graph: the LOGISMOS graph to solve for the minimum s-t cut
 6     :param reps: depth of the QAOA circuit
 7     """
 8     log_graph = self.graph.graph
 9     source = self.graph.source
10     sink = self.graph.sink
11
12     n = len(log_graph.nodes())
13     edges = list(log_graph.edges(data=True))
14
15     # sum all capacities of the edges in the graph
16     total_capacity = 0
17     for edge in log_graph.edges(data=True):
18         total_capacity += edge[2]['capacity']
19
20
21     model_logismos2d = Model()
22
23     x = model_logismos2d.binary_var_list(n)
24
25     # Define the objective function QUBO to be MINIMIZED
26     model_logismos2d.minimize(
27         model_logismos2d.sum(
28             w.get('capacity') * (x[int(i) - 1] * x[int(i) - 1] - x[int(i) - 1] * x[int(j) - 1])
29             for i, j, w in edges) +
30         (total_capacity + 1) * (-x[source - 1] * x[source - 1] + x[source - 1] * x[sink - 1]))
31
32     problem = from_docplex_mp(model_logismos2d)
33     seed = 1234
34
35     algorithm_globals.random_seed = seed
36
37     spsa = SPSA(maxiter=250)
38     sampler = Sampler()
39     qaoa = QAOA(sampler=sampler, optimizer=spsa, reps=self.reps)
40     algorithm = MinimumEigenOptimizer(qaoa)
41     result = algorithm.solve(problem)
42
43     self.objfunc_value = result.fval
44     self.segmentation_set = [qubit_index + 1 for qubit_index, qubit_val in enumerate(result.x)
45                              if qubit_val == 1]
46     self.background_set = [qubit_index + 1 for qubit_index, qubit_val in enumerate(result.x) if qubit_val == 0]
47
48     solution = ['x%d=%d' % (i + 1, x) for i, x in enumerate(result.x)]
49     print('Solution is:', solution)
50     print('Objective function value', result.fval)
51     print('Segmentation set', self.segmentation_set)
52     print('Background set', self.background_set)
```
The classical optimizer used is the Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer (Spall [1998]), as implemented in Qiskit-optimization 0.5.0. The maximum number of classical optimization iterations was set to 250. No custom mixer layers were used.
A Python implementation of the proposed QuantumLOGISMOS framework was developed. The code was written in Python 3.10.11. Qiskit 0.43.1 and Networkx 2.8.4 were used to construct directed graphs and conduct quantum simulations.
### Evaluation
Solutions given by the QAOA method were compared to the results of the classical highest-label preflow-push algorithm (Goldberg and Tarjan [1988]). Specifically, the cut produced by QAOA is considered a minimum \(s\)-\(t\) cut if it (i) completely separates the source from the sink and (ii) achieves the minimum total capacity.
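In practice this check can be automated against networkx, whose default flow routine is a preflow-push implementation; the helper below is an illustrative sketch, not our exact evaluation code.

```
import networkx as nx

def is_minimum_cut(G, s, t, source_set):
    # Classical optimum (networkx defaults to the preflow-push algorithm)
    cut_value, _ = nx.minimum_cut(G, s, t, capacity='capacity')
    # Total capacity of the edges severed by the QAOA partition
    qaoa_cut = sum(d.get('capacity', float('inf'))
                   for u, v, d in G.edges(data=True)
                   if u in source_set and v not in source_set)
    # Criteria (i) and (ii) from the text
    return (s in source_set) and (t not in source_set) and qaoa_cut == cut_value
```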
## 4 Results and Discussion
Fig. 2 shows two simulation studies, in which the proposed quantum optimization scheme was applied to a 2-D image and a 3-D image. The 2-D image is a \(5\times 4\) grid of pixels, which were used to construct a graph of three columns with five nodes each. The 3-D image contains two \(3\times 3\) slices, which were used to construct a graph of three columns with three nodes each. The terminal weights were pre-determined and the optimal surfaces are expected to be along the negatively-weighted nodes. LOGISMOS graphs were constructed with a smoothness constraint \(\delta=2\). The quantum optimization was performed on a quantum simulator provided by Qiskit optimization package (Qiskit [2021]).
In the first experiment (Fig. 2a), a QAOA solver with five repeated parameterized blocks, \(p=5\), was able to find a minimum-energy state \(|q_{1}q_{2}\ldots q_{15}q_{s}q_{t}\rangle=|00011000110011110\rangle\) and a ground-state energy \(F=-238\). This corresponds to a source set \(\{q_{4},q_{5},q_{9},q_{10},q_{13},q_{14},q_{15},q_{s}\}\) and the optimal surface \(\mathcal{S}=\{q_{4},q_{9},q_{13}\}\) with a flow value of \(3\).
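Decoding a measured bitstring into the surface is straightforward; the sketch below assumes column-major qubit ordering with node indices increasing downward within each column, so the topmost source-side node of every column lies on the surface, reproducing \(\mathcal{S}=\{q_{4},q_{9},q_{13}\}\) above.

```
def decode_surface(bits, num_cols, num_nodes):
    # bits: measured node qubits q_1..q_N (1 = source set), column-major
    surface = []
    for x in range(num_cols):
        column = bits[x * num_nodes:(x + 1) * num_nodes]
        k = min(i for i, b in enumerate(column) if b == 1)  # topmost source node
        surface.append(x * num_nodes + k + 1)               # 1-based qubit index
    return surface

bits = [0, 0, 0, 1, 1,  0, 0, 0, 1, 1,  0, 0, 1, 1, 1]     # node bits, Fig. 2a
print(decode_surface(bits, num_cols=3, num_nodes=5))        # [4, 9, 13]
```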
Fig. 2b and 2c show two valid optimal surfaces to the second segmentation task found by the quantum-classical approach with varying numbers of repetitions \(p=2,3,4,5,6\). In both cases, the maximum flow value was \(2\) and the ground-state energy was \(F=-162\). The classical highest-label preflow-push algorithm (Goldberg and Tarjan [1988]), on the other hand, was only able to find the second solution.
Table 1 shows the results given by the QAOA simulators in different \(p\) configurations. At \(p=2,3,4,6\), the first possible solution was identified. At \(p=5,100\), the second possible solution was found. As the number of repeated parameterized blocks increased, we observed increasing time for the quantum simulators to find a solution. In this particular instance, the solution was successfully obtained after only two iterations of the parameterized blocks.
Nevertheless, it is anticipated that the number of iterations needed to uncover a solution will likely amplify as the dimensions of the image expand.
QAOA has several limitations. It should be noted that QUBO problems are NP-hard, and the conversion of the minimum \(s\)-\(t\) cut problem (itself solvable in polynomial time classically) to QUBO does not necessarily make the original problem easier to solve. Furthermore, QAOA is not guaranteed to return correct solutions with finite repetitions of the parameterized blocks. Various attempts to improve the performance of QAOA have been developed. Barkoutsos et al. (2019) proposed the Conditional Value-at-Risk as an aggregation function to speed up the classical optimization process by only averaging the best measurements at the read-out stage. The warm-starting strategy proposed by Egger and colleagues (Egger et al. (2020)) suggests a smarter preparation of initial states by considering a state obtained by a classical procedure.
## 5 Conclusion
We propose and demonstrate the quantum implementation and optimization of a geometric-constraint surface segmentation problem. Future work will include the implementation of the proposed scheme on a real quantum computer and the analysis of its performance on images with varying complexity.
Quantum image processing is a growing field that has attracted attention from both the quantum computing and image processing communities. We propose a novel method that combines a graph-theoretic approach to optimal surface segmentation with hybrid quantum-classical optimization of the problem-directed graph. Surface segmentation is classically modeled as a graph partitioning problem in which a smoothness constraint controls surface variation for realistic segmentation. Specifically, the segmentation is the source set identified by a minimum s-t cut that partitions the graph nodes into source (s) and sink (t) sets. The resulting surface consists of the graph nodes located on the boundary between the source and the sink. The properties of this problem-specific graph include directed edges, connectivity, and edge capacities. These properties correspond to the minimum of an objective function in quadratic form, the equivalent Ising Hamiltoni |
2302.00095 | XCRYPT: Accelerating Lattice Based Cryptography with Memristor Crossbar
Arrays | This paper makes a case for accelerating lattice-based post quantum
cryptography (PQC) with memristor based crossbars, and shows that these
inherently error-tolerant algorithms are a good fit for noisy analog MAC
operations in crossbars. We compare different NIST round-3 lattice-based
candidates for PQC, and identify that SABER is not only a front-runner when
executing on traditional systems, but it is also amenable to acceleration with
crossbars. SABER is a module-LWR based approach, which performs modular
polynomial multiplications with rounding. We map the polynomial multiplications
in SABER on crossbars and show that analog dot-products can yield a
$1.7-32.5\times$ performance and energy efficiency improvement, compared to
recent hardware proposals. This initial design combines the innovations in
multiple state-of-the-art works -- the algorithm in SABER and the memristive
acceleration principles proposed in ISAAC (for deep neural network
acceleration). We then identify the bottlenecks in this initial design and
introduce several additional techniques to improve its efficiency. These
techniques are synergistic and especially benefit from SABER's power-of-two
modulo operation. First, we show that some of the software techniques used in
SABER, that are effective on CPU platforms, are unhelpful in crossbar-based
accelerators. Relying on simpler algorithms further improves our efficiencies
by $1.3-3.6\times$. Second, we exploit the nature of SABER's computations to
stagger the operations in crossbars and share a few variable precision ADCs,
resulting in up to $1.8\times$ higher efficiency. Third, to further reduce ADC
pressure, we propose a simple analog Shift-and-Add technique, which results in
a $1.3-6.3\times$ increase in the efficiency. Overall, our designs achieve
$3-15\times$ higher efficiency over initial design, and $3-51\times$ higher
than prior work. | Sarabjeet Singh, Xiong Fan, Ananth Krishna Prasad, Lin Jia, Anirban Nag, Rajeev Balasubramonian, Mahdi Nazm Bojnordi, Elaine Shi | 2023-01-31T20:53:50 | http://arxiv.org/abs/2302.00095v1 | # XCRYPT: Accelerating Lattice Based Cryptography with Memristor Crossbar Arrays
###### Abstract
This paper makes a case for accelerating lattice-based post quantum cryptography (PQC) with memristor based crossbars, and shows that these inherently error-tolerant algorithms are a good fit for noisy analog MAC operations in crossbars. We compare different NIST round-3 lattice-based candidates for PQC, and identify that SABER is not only a front-runner when executing on traditional systems, but it is also amenable to acceleration with crossbars. SABER is a module-LWR based approach, which performs modular polynomial multiplications with rounding. We map the polynomial multiplications in SABER on crossbars and show that analog dot-products can yield a \(1.7-32.5\times\) performance and energy efficiency improvement, compared to recent hardware proposals. This initial design combines the innovations in multiple state-of-the-art works - the algorithm in SABER and the memristive acceleration principles proposed in ISAAC (for deep neural network acceleration). We then identify the bottlenecks in this initial design and introduce several additional techniques to improve its efficiency. These techniques are synergistic and especially benefit from SABER's power-of-two modulo operation. First, we show that some of the software techniques used in SABER, that are effective on CPU platforms, are unhelpful in crossbar-based accelerators. Relying on simpler algorithms further improves our efficiencies by \(1.3-3.6\times\). Second, we exploit the nature of SABER's computations to stagger the operations in crossbars and share a few variable precision ADCs, resulting in up to \(1.8\times\) higher efficiency. Third, to further reduce ADC pressure, we propose a simple analog Shift-and-Add technique, which results in a \(1.3-6.3\times\) increase in the efficiency. Overall, our designs achieve \(3-15\times\) higher efficiency over initial design, and \(3-51\times\) higher than prior work. Finally, analog operations are more error-prone; we show that the approximate nature of module LWR calculations can mask most of these errors. We also show that such hardware-induced errors do not violate the security guarantees of the algorithm.
## I Introduction
The recent emergence of several quantum computing systems - IBM's Q system [1], Intel's Tangle Lake [2], Google's Bristlecone [3], and IonQ [4] - has increased the likelihood that integer factorization and discrete logarithm will be tractable in the near future, thus rendering several modern-day cryptographic primitives obsolete [5, 6]. This has spurred interest in alternative cryptographic assumptions based on problems that cannot be easily solved by known quantum algorithms (Shor [7] and Grover [8]). In the past several years, NIST has solicited and short-listed a number of quantum-resistant algorithms [9]. A number of mathematical approaches are being considered for such post-quantum cryptography (PQC), including lattice-based, multivariate, hash-based, isogeny-based, and code-based cryptography [5, 9, 10]. Of these, lattice-based cryptography (LBC) is a front-runner because of its high efficiency and security on several metrics [5, 10]. Research on hardware acceleration of these PQC algorithms is therefore time-critical as it can steer the field towards algorithms that are more amenable to hardware acceleration. This research is also timely because, even though the algorithms continue to evolve, there is convergence on the basic primitives and computations that will be included in most algorithm variants. Several companies, including Google, Microsoft, Digicert, and Thales, are already testing the impact of deploying PQC [11].
Modern infrastructures are built on cloud-based deployments that secure their data in transport using a mix of symmetric and asymmetric key encryption. Typically, a private key is established using a handshake protocol based on asymmetric key encryption. Following this, data in transit is secured using this private key. However, asymmetric key encryption schemes like RSA/ECC are vulnerable to quantum attacks; these may be replaced by their PQC counterparts that have been proposed recently. As we will describe shortly, popular LBC schemes based on Learning With Errors (LWE) [12] (and its variants) rely on large matrix and vector operations that place a significant burden on the hardware. Recent efforts [13, 14, 15, 16, 17, 18, 19, 20] implement LBC algorithms on FPGAs/GPUs/ASICs and report latencies of tens to hundreds of micro-seconds. Classic cryptographic functions can typically be executed in _micro-seconds_ on modern hardware [21, 22]; for instance, Intel QuickAssist technology executes the RSA decrypt operation in 5 us using a specialized crypto-accelerator [23]. The metrics for LBC therefore fall well short of the demands of modern deployments. Several contemporary HTTPS applications require key-establishment between a web-server and client (often millions of requests every second), using asymmetric key cryptographic primitives. PQC algorithms, with software implementations in hundreds of micro-seconds [24], would _increase service time by at least an order of magnitude_. In order to replace the
pre-quantum cryptographic algorithms used in this scenario, such as RSA [25] and Diffie-Hellman [26], PQC algorithms will have to consume significantly lower latency and energy. While higher parallelism can reduce latency, it can incur a significant cost in terms of area, data movement, and energy. It is therefore vital to explore transformational new hardware technologies that can improve all relevant efficiency metrics by orders of magnitude. Without these advancements, the next generation of privacy-preserving deployments - homomorphic encryption that further scales up the computation or medical IoT devices that further scale down the resource constraints - will be out of reach.
This paper explores a promising technology - analog computing in resistive memories - as the foundation for new architectures that efficiently execute a range of algorithms relevant to LBC. While such technologies have been used before [27, 28] to implement computations in deep neural networks, we show that LBC offers new opportunities to further improve the efficiency of this compute-in-memory approach. Although in-memory analog operations are efficient at performing dot-products, the conversion of a high-resolution current into a digital value is expensive. This analog to digital conversion (ADC) is the primary overhead that must be kept in check. A second challenge with analog circuits is that they can be noise-prone. We overcome these challenges by exploiting the following opportunities in PQC algorithm variants: (i) the computations can tolerate some noise, (ii) they perform power-of-2 modulo operations.
We focus on SABER, one of the NIST finalists that relies on a Module-LWR algorithm. Adopting the best practices from prior work in DNN acceleration, we create an initial design that out-performs prior FPGA-based PQC acceleration efforts. We then analyze this initial design and with hardware-software co-design, further improve decryption compute density by 9.5\(\times\) (11.8\(\times\) for encryption) and energy efficiency by 15.7\(\times\) (3.7\(\times\) for encryption). These improvements are achieved by incorporating accelerator-amenable software techniques for polynomial multiplication, staggered computation to reduce ADC pressure, and a novel analog shift-and-add technique. The summary/impact of each novel technique is illustrated in Figure 1. The techniques introduced in this work can be extended to all PQC algorithms that base operations on polynomial math, like Digital Signatures and Homomorphic Encryption.
We make the following key contributions:
* We construct a basic in-memory architecture for PQC encryption and decryption algorithms, leveraging prior best practices [27, 28]. This design serves as the baseline for this study and achieves higher efficiency than recent PQC accelerators.
* We demonstrate software techniques like decomposing polynomial degree and smart scheduling to increase ADC sharing, that improves energy efficiency by 2.4-2.7\(\times\) and computational efficiency by 4.7\(\times\), relative to our baseline.
* We propose a novel technique to perform write-free in-analog shift-and-add operations using crossbars, allowing us to trade-off cell programming with ADC complexity. A design space exploration yields an up to 2.7\(\times\) and 6.3\(\times\) increase in computational and energy efficiency, respectively.
* We show that lattice-based schemes can tolerate small amounts of error in computations, introduced by analog device/circuit variations.
* Overall, XCRYPT (Xbar based accelerator for post-quantum CRYPTography) achieves server deployment with decryption latency of 0.08 us and client deployment with encryption latency of 4 us, with an overall chip area of just 0.04 and 0.3 \(mm^{2}\) respectively.
## II LBC Background
### _Learning With Errors and Its Variants_
Given the popularity of lattice-based approaches, this paper focuses on LBC. A lattice is a set of points in \(n\)-dimensional space. Each point is a linear integer combination of a set of basis vectors that define the lattice. Assuming that the basis vectors are large, given a point, it is computationally expensive (non-polynomial time) to determine the nearest lattice point. Intuitively, the high dimension and the many possible linear combinations of large vectors contribute to making this a difficult problem. The difficulty of the Closest Vector Problem [29] and other related problems (Shortest Vector Problem, Shortest Independent Vector Problem) lay the foundation of many cryptographic primitives, such as public key encryption, digital signature and homomorphic encryption [30].
The hardness of some LBC schemes are based on the hardness of Learning with Errors (LWE [12]) problem. The Standard LWE problem states that, given a randomly chosen \(n\)-dimensional vector \(\vec{s}\) of integers modulo \(q\), it is hard to recover \(\vec{s}\) from \(m\geq n\)_approximate_ random linear equations on \(\vec{s}\). In other words, we start with a secret value \(\vec{s}\); we multiply \(\vec{s}\) with \(m\) random vectors, and add a small error \(e_{i}\), to generate \(\vec{b}\) (public key). Instantiations of LWE schemes differ in the parameter space (\(n,q,m\)), and distributions used to sample \(\vec{s},e\).
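As a concrete illustration, the following numpy sketch generates a toy Standard-LWE instance; the parameters and the error range are chosen for readability only and are not a secure instantiation.

```
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 256, 512, 8192                # illustrative (not secure) parameters

s = rng.integers(0, q, size=n)          # secret n-dimensional vector
A = rng.integers(0, q, size=(m, n))     # m random vectors (public)
e = rng.integers(-2, 3, size=m)         # small errors
b = (A @ s + e) % q                     # m "approximate" linear equations on s

# Recovering s from (A, b) is the LWE problem; without e it would
# reduce to Gaussian elimination modulo q.
```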
In Ring-LWE [31], integer samples are replaced with polynomial samples. The main computation here is polynomial multiplication (_PolyMult_), which can be implemented efficiently using the Number Theoretic Transform (NTT), a variant of the FFT. However, Ring-LWE may compromise security, compared to Standard LWE [32, 33]. In order to retain the increased efficiency of Ring-LWE while providing higher security guarantees, Module-LWE [34] was proposed. It uses an \(l\)-dimensional vector of polynomials. Its efficiency can be further improved by using a rounding operation instead of padding with randomly drawn errors. This variant is called Learning With Rounding (Module-LWR) [35]. _Two of the NIST round 3 candidates use module structures: Kyber [34] uses Module-LWE, and SABER [36] uses Module-LWR._

Fig. 1: Summary of techniques introduced in this paper and their impact.
### _Saber_
NIST has solicited algorithms for Key Encapsulation Mechanisms (KEMs) and Digital Signatures. SABER [36] is a finalist among the KEMs. The SABER KEM is composed of three algorithms - key generation, encryption, and decryption.
Key generation determines a matrix \(\mathbf{A}\) of polynomials using a pseudo-random number generator based on SHAKE-128 [37]. A secret vector \(\vec{s}\) of polynomials is generated by sampling from a centered binomial distribution. The public key consists of the matrix seed and the rounded product \(\vec{b}=\mathbf{A}^{T}\vec{s}\).
Encryption generates a new secret \(\vec{s}^{\prime}\), and adds the message \(m\) (a polynomial with coefficients \(\in\{0,1\}\)) to the inner product of public key \(\vec{b}\) and \(\vec{s}^{\prime}\), forming the first part of ciphertext. The second part hides the encrypting secret by rounding the product \(\mathbf{A}\vec{s}^{\prime}\).
The third algorithm is Decryption, which uses the secret key \(\vec{s}\) to extract the message encoded in the two parts of ciphertext.
Overall, SABER performs 24 PolyMults, 14 Poly Modulo, and 8 Poly Rounding operations - each PolyMult does \(256^{2}\) integer products and Modulo/Rounding are applied per coefficient. SABER adds a few constants in its calculations so that rounding can be replaced with simple bit shifts.
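A sketch of this trick on a single coefficient, using SABER's \(\epsilon_{q}=13\), \(\epsilon_{p}=10\) and the rounding constant \(h=2^{\epsilon_{q}-\epsilon_{p}-1}\), is shown below; the helper names are illustrative.

```
eq, ep = 13, 10                   # log2(q) and log2(p) in SABER
h = 1 << (eq - ep - 1)            # rounding constant added before the shift

def mod_q(v):
    return v & ((1 << eq) - 1)    # power-of-two modulo: just drop the MSBs

def round_to_p(v):
    # rounding from mod-q to mod-p, implemented as an add and a right shift
    return (mod_q(v) + h) >> (eq - ep)

print(round_to_p(5000))           # (5000 + 4) >> 3 = 625
```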
In module-lattice based cryptography, the performance of PolyMult plays a key role in the overall performance [38]. In our experiments, we observed that PolyMult kernels consume \(>90\%\) of the execution time. When implementing PolyMult, NTT has the asymptotically fastest time complexity of \(O(n\log n)\), but it requires \(q\) to be a prime, which in turn leads to non-trivial complexity for modulo operations. On the other hand, SABER chooses a power-of-two \(q\), which speeds up the modulo (by simply dropping the MSBs). Since NTT is not an option, SABER uses the Toom-Cook-4 algorithm [39] to reduce each degree-256 PolyMult to 7 degree-64 PolyMults, and then further reduces them to degree-32 using the Karatsuba algorithm once. This choice (along with AVX2 support) brings the contribution of PolyMult to 57%, and SABER outperforms implementations (like Kyber) that employ a faster NTT-based polynomial multiplier. SABER demonstrates computations in tens of micro-seconds using AVX2 support [36]. Various HW/SW co-designs of SABER have reported lower latencies while utilizing fewer resources [40, 41, 42], compared to FPGA/ASIC implementations of other lattice-based schemes [43, 44]. Additionally, SABER has among the lowest ciphertext overhead among the candidates. We compare the 3 lattice-based PQC candidates and their NIST security level 1 KEM implementations in Table I. Given its favorable properties in terms of speed and lower hardware/ciphertext complexity, we choose SABER as the target for our hardware acceleration.
In addition to polynomial multiplication, generating pseudo-random numbers using SHAKE-128 is also a costly operation. It is well known that the Keccak core, which is at the heart of SHAKE-128, is very efficient on hardware platforms [41]. We observed Keccak functions to contribute \(<30\%\) of the execution time, also validated by [41]. Further, the Keccak core can be accelerated [45] and pipelined with other arithmetic operations. In this work, we therefore focus on the main bottleneck of LBC, the polynomial multiplication.
### _Existing PQC implementations_
Efforts are already underway to implement PQC algorithms in hardware [10]. Most of the early efforts have focused on FPGA implementations [13, 14, 15, 16]. We discuss some of the salient efforts here as motivation.
A comprehensive discussion of PQC hardware/software efforts [17, 18, 19, 20, 43, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63] can be found in the survey paper by Nejatollahi et al. [10]. They note that most of the hardware overhead can be attributed to multiplication units; the Gaussian Sampler unit also incurs non-trivial hardware cost. Basu et al. [13] use high-level synthesis to generate FPGA circuits for PQC algorithms. Reported latencies are in the micro-seconds range, with a significant fraction of the hardware overhead in the polynomial multiplier (even after using the \(O(n\log n)\) NTT algorithm). [13, 61] evaluate various NIST candidates on their software/hardware codesign implementations. [63] focus on algorithmic optimizations for polynomial multiplication. Note that efficient FPGA implementations require a specific set of techniques that focus on reducing the bottleneck component - look-up tables (LUTs). Recent works have highlighted using GPUs [17] or their ASIC designs [18, 19, 20]. We compare our basic design against many of these solutions in Section IV-C.
### _Impact of KEM Deployments_
In modern cloud services, millions of new clients request services every minute, each requiring a symmetric key establishment using KEM. Quantum-safe KEM routines like SABER have significantly higher complexity than traditional routines, thus lowering quality-of-service. While this paper focuses on KEMs, the benefits seen in KEM acceleration will also apply to other applications based on PolyMult.
| PQC scheme | Ciphertext/Plaintext ratio | KeyGen (Haswell cycles) | Encaps (Haswell cycles) | Decaps (Haswell cycles) |
|---|---|---|---|---|
| Kyber-768 [34] | 37.07 | 85K | 112K | 108K |
| SABER [36] | 34.00 | 101K | 125K | 129K |
| NTRU [46] | 40.03 | 307K | 48K | 67K |

TABLE I: Comparison of NIST Round 3 LBC schemes.
Quantum-secure digital signature schemes like CRYSTALS-DILITHIUM [64] (NIST Finalist) are based on PolyMult. LBC also forms the basis for Homomorphic Encryption (HE). Popular HE schemes like FV/BFV [65, 66] (implemented in several libraries like FV-NFLlib from CryptoExperts [67] and SEAL from Microsoft [68]), GSW [69], CKKS [70], and TFHE [71] encode information in polynomials. Our work can be extended to HE applications, where polynomial math is also the key bottleneck. Moreover, contemporary HE schemes work with a noise budget; when the budget is consumed, decryption and encryption of ciphertext is required using SABER or SABER-like schemes (the target application in this paper). Cheetah [72] demonstrated Secure Machine Learning as a service using HE, and required decryption/encryption for each output neuron after every layer, thus increasing the contribution of the KEM scheme. A deeper analysis of HE and digital signature deployments is left as future work.
## III Accelerator Background
In the past decade, industry and academia have made significant investments in resistive memory technologies [73, 74, 75, 76]. A resistive memory cell uses its resistance value to store information. Resistive memory arrays have several advantages - very high density, non-volatility, and competitive read latency/energy. They also have a primary challenge - writes are expensive because they consume more energy/time and cause device wearout. After years of research, the first generations of commercial resistive memory products have emerged in the last few years [76]. More recently, multiple groups [77, 28, 78, 79, 27] have identified that a resistive memory array can be configured to perform analog operations. Such _processing-in-memory_ technologies have the potential for high parallelism and low data movement.
A resistive memory array is implemented as a _crossbar_ - a grid of cells, as shown in Figure 2(a). Sandwiched between the X-dimension wires (wordlines) and the Y-dimension wires (bitlines) are the resistive cells (a material with programmable resistance). Figure 2(a) shows the physical layout of a small crossbar. Figure 2(b) shows the logical representation of the crossbar, while Figure 2(c) zooms into the operation in one bitline. If voltages \(V_{1}\) and \(V_{2}\) are applied to the first two wordlines, current is injected into each bitline, proportional to the conductance of each cell in those rows. The current in the last bitline is \(V_{1}\times G_{14}+V_{2}\times G_{24}\), where \(G_{ij}\) represents the conductance of cell \(\{i,j\}\). Thus, the current in each bitline is an analog representation of the dot-product of two vectors - the voltage vector applied to the wordlines and the conductance vector pre-programmed into a column of cells.
Further, the wordline voltage is broadcast to all the columns; each column performs an independent and parallel dot-product on the same voltage vector, but using a different conductance vector. The basic Kirchhoff's Law equation is being exploited to design an analog vector-matrix multiplication circuit that yields a vector of output bitline currents in a single step when a vector of input voltages is applied to the wordlines. Programming the cells is a time-consuming process. Performing the vector-matrix multiplication is equivalent to a crossbar read operation followed by analog sensing, an operation that is much faster (order of \(10^{-7}\) seconds). Prior works [77, 28, 27] have shown that this analog matrix-vector multiplication unit can achieve orders of magnitude reduction in energy per operation (depending on the type of analog sensing), compared to an equivalent digital circuit. This is partly because the computation is "in-situ", i.e., the matrix is an operand that doesn't have to move, and partly because complex multiplications and additions are being performed by exploiting natural phenomena (Kirchhoff's Law).
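A functional model of this analog step is a plain matrix-vector product plus a noise term; the sketch below (names and noise model illustrative) captures what one crossbar read computes.

```
import numpy as np

def crossbar_dot(V, G, noise_sigma=0.0, rng=np.random.default_rng(0)):
    # V: wordline voltage vector (inputs); G: cell conductance matrix,
    # one pre-programmed operand column per bitline.
    # Each bitline current is I_j = sum_i V_i * G_ij (Kirchhoff's law).
    I = V @ G
    if noise_sigma > 0:                       # device/circuit variation
        I = I + rng.normal(0.0, noise_sigma, size=I.shape)
    return I

V = np.array([1.0, 0.0, 1.0, 1.0])            # 1-bit inputs applied as voltages
G = np.random.default_rng(1).integers(0, 2, size=(4, 8)).astype(float)
print(crossbar_dot(V, G))                     # 8 dot products, in parallel
```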
The analog signal emerging from each bitline has to be converted into a digital value. Analog to digital conversion (ADC) circuits consume significant power that grows exponentially with ADC resolution. Managing ADC resolution and power is a key challenge in exploiting the capabilities of this emerging technology. [28, 78, 80] have explored other circuits as well.
A number of recent works [27, 28, 81, 82, 83, 84] have further explored such crossbar-based analog acceleration. Recent literature has shown that in-RRAM computation can improve the efficiency of dot-product heavy DNN applications [27, 28, 79, 82, 90]. We now describe how memristive crossbars can be an ideal substrate for dense and energy-efficient calculations for LBC.
_We first create a strong baseline by adapting the ISAAC architecture [27], which was designed for DNNs. Next, we make the case that SABER's PolyMult algorithm (Toom Cook-4 + Karatsuba) is not always well suited for memristor crossbars, and explore different multiplication algorithms. Then, we show techniques, enabled by SABER's power-of-2 modulo, that reduce the overheads of ADCs. In the next section, we extend this design to allow shift and adds in analog, further lowering ADC requirements._
Similar to AES encryption engines in today's processors, the XCRYPT accelerator will also be transparent to programmers and will be invoked by hardware when the software needs to send secure messages. Note that the proposed accelerator is based on memristors (RRAM), which are CMOS compatible and can be integrated on the same die as the processor.
### _Modular PolyMult as Vector-Matrix Multiplication_
At its core, almost every lattice-based candidate scheme in NIST PQC performs modular polynomial multiplication. _Modular_ here means that the product of two polynomials \(p_{1},p_{2}\in R_{q}=\mathbb{Z}_{q}[x]/(x^{n}+1)\) has its \(x^{i}\) (\(i\geq n\)) terms reduced using \(x^{n}\equiv-1\), keeping the degree below \(n\). This is what dividing the polynomial ring \(\mathbb{Z}_{q}[x]\) by \((x^{n}+1)\) represents. Polynomial multiplication requires each coefficient of \(p_{1}\) to be multiplied with each coefficient of \(p_{2}\). Consider the 3-coefficient polynomials \(p_{1}=a_{0}x^{0}+a_{1}x^{1}+a_{2}x^{2}\) and \(p_{2}=b_{0}x^{0}+b_{1}x^{1}+b_{2}x^{2}\); then \(p_{1}\times p_{2}=(a_{0}b_{0}-a_{1}b_{2}-a_{2}b_{1})x^{0}+(a_{0}b_{1}+a_{1}b_{0}-a_{2}b_{2})x^{1}+(a_{0}b_{2}+a_{1}b_{1}+a_{2}b_{0})x^{2}\). As seen in Fig. 3, this can be viewed as the vector \((a_{2},a_{1},a_{0})\) being multiplied by a matrix with columns \((-b_{1},-b_{2},b_{0}),(-b_{2},b_{0},b_{1}),(b_{0},b_{1},b_{2})\). The column for the last term \(x^{2}\) is \((b_{0},b_{1},b_{2})\), while the rest are downward single-shifted versions of it with the wrapped-around value negated at the column head.
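The sketch below builds this matrix for arbitrary degree and checks the sign/shift pattern of the example; the helper name and the row ordering (multiplying the reversed coefficient vector) follow the figure, but the code is illustrative rather than our accelerator's mapping.

```
import numpy as np

def polymult_matrix(b, q):
    # c = a_rev @ M (mod q), with a_rev = (a_{n-1}, ..., a_0) and c the
    # coefficients of a*b in Z_q[x]/(x^n + 1); wrapped entries are negated.
    n = len(b)
    M = np.zeros((n, n), dtype=np.int64)
    for i in range(n):              # row i carries coefficient a_{n-1-i}
        j = n - 1 - i
        for k in range(n):          # column k produces coefficient c_k
            M[i, k] = (b[(k - j) % n] if j <= k else -b[(k - j) % n]) % q
    return M

q = 8192
a, b = np.array([1, 2, 3]), np.array([4, 5, 6])
print((a[::-1] @ polymult_matrix(b, q)) % q)   # [8169 8187 28] = (-23, -5, 28) mod q
```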
### _Methodology_
Before we describe our proposed accelerator XCRYPT, we first state our methodology. This makes it convenient to discuss XCRYPT design choices with supporting results. We leverage many of the primitives introduced in the ISAAC architecture [27] and adopt an evaluation methodology very similar to that work. The energy and area model for memristor crossbar arrays, including Shift-and-Add Crossbars, is based on that of Hu et al. [77]. The RRAM cell model is derived from [88], with 25 ns write latency and 0.1 pJ/cell/bit write energy, and NVSim [91] is used to extract array level numbers. Read latency is determined by the ADC readout as RC delay of the crossbar is typically sub-ns [77, 92, 93]. Since RRAM is an emerging technology, there are numerous RRAM device parameters accepted within the research community targeting different implementations. We have confirmed that our proposed techniques are independent of these parameter values and yield significant improvements for a wide range of parameters. The read energy of an RRAM cell is four orders of magnitude lower than write energy. Area and energy for shift-and-add, sample-and-hold, and 1-bit DAC circuits are adapted from ISAAC [27]. We have considered an energy and area efficient adaptive ADC that can handle one giga-samples per second [94]. The adaptive ADC has three major components - Charge-based DAC, Comparator, and Controller. To arrive at power/area for different ADC bit precision, we scaled power/area of charge-based DAC exponentially, and the rest linearly. Detailed parameters of our initial design (X-SB, described in next subsection) are listed in Table II. Note that we propose various versions of XCRYPT with varying XBar/ADC sizes, leading to various parameters. We also model CASCADE components based on parameters mentioned by Chou et al. [82]. We modify the code for NIST contestant SABER [36] to execute various XCRYPT design features. We further extend the code to model RRAM cell variance in order to calculate the noise tolerance of the implementation, more of which is described in Section VI. Simulations are done for a million randomly generated key pairs which are used to encrypt and decrypt 32B plaintext, and report the failure rate of decryption. We consider 2 key metrics to evaluate XCRYPT efficiency:
1. Computational Efficiency (CE): Number of 1-bit plaintext/ciphertext operations per second per mm\({}^{2}\) of area \((Gbits/(s\cdot mm^{2}))\).
2. Energy Efficiency (EE): Number of 1-bit plaintext/ciphertext operations per 1J of energy \((Gbits/J)\).
We consider the SABER variant that has a post-quantum security level similar to AES-192. In SABER, decryption requires vector-vector multiplication while encryption requires both vector-vector and matrix-vector multiplications. A vector contains \(l=3\) polynomials of degree \(n=256\), while a matrix consists of \(l^{2}=9\) polynomials. Therefore, a vector-vector multiplication does 3 PolyMults, followed by their addition, while matrix-vector multiplication does 9 PolyMults to obtain a vector of 3 polynomials. SABER's parameters define the coefficients of polynomials in matrix-vector multiplication to be \(\log q=13\) bits. For the multiplication calls, the input polynomials are rounded to have \(\log p=10\) bits. In all PolyMults, one of the polynomials is either \(\vec{s}\) or \(\vec{s}^{\prime}\), which are both secrets sampled from a centered binomial distribution such that each coefficient requires 4 bits.

| Component | Count | Power (uW) | Area (um\({}^{2}\)) |
|---|---|---|---|
| XBar (128\(\times\)128) | 1 | 300 | 25 |
| DAC 1bit | 128 | 3.9 | 0.16 |
| S+H 6bit | 128 | 0.007 | 0.029 |
| ADC 6bit | 16 | 945 | 435 |
| ADC 7bit | 1 | 1365 | 628.33 |
| Total (array) | | 123.13 | 7737.557 |
| X-SB = 1 Encryption + 1 Decryption Tile = 2 \(\times\) (48 arrays) + (IR+OR+SA) | | 11.92 mW | 0.743 mm\({}^{2}\) |

TABLE II: XCRYPT-Schoolbook (X-SB) parameters, at 32nm.

Fig. 3: Representing modular polynomial multiplication as vector-matrix multiplication (shown for degree=3).
### _Mapping SABER to Memristor Crossbars_
In this sub-section, we first create a strong crossbar-based baseline to execute SABER, following the principles of the ISAAC design for DNNs [27]. Because RRAM writes are a bottleneck in this design, we introduce a couple techniques to alleviate this overhead. This baseline is then compared against prior work in PQC acceleration before we further address the bottlenecks in the baseline.
In a cloud service platform, several client requests are serviced every second. To guarantee secure communication, the client and server establish a symmetric key, through SABER's PKE (RSA in the non-quantum realm). A server, therefore, generates its key once per client. Using the key pair, it establishes connections with clients by repeatedly invoking the encryption and decryption algorithms that are the focus of this paper.
We use a methodology very similar to that for ISAAC [27] to map the required computations to crossbars. Since the secret key \(\vec{s}\) remains constant for much of the server's runtime, we designate \(\vec{s}\) as the operand to be encoded in crossbar cells. We need 72 128\(\times\)128 1-bit crossbars to store the key \(\vec{s}\) and its shifted versions (as needed by PolyMult). A tile in our design (shown in Figure 4) has multiple crossbars and can perform 3 PolyMults, as required by a vector-vector multiplication. Using ISAAC's flip-encoding scheme, each column produces a 6-bit current. For our work, we use an area-efficient ADC [94] that can convert samples at a frequency of 1 GSps. To further reduce area overhead, we share 1 ADC across 8 columns of a crossbar, which results in a crossbar read cycle time of 8 ns.
**Handling RRAM Writes.** As illustrated in Figure 4, we design 2 different tiles - encryption and decryption. This is because we target two different deployments - a cloud server and a client device. In a typical handshake protocol, the server and client establish a private key using asymmetric key encryption (like SABER in the post-quantum world). During the protocol, client encrypts using the server's public key while the server decrypts using its secret key. Hence, we design XCRYPT encryption/decryption tiles for client/server respectively.
Programming the memristor cells is expensive, in terms of energy, performance, and endurance. Typically, memristors-based cells have a budget of \(10^{12}\) writes before the cells malfunction. Since the server only periodically changes its private key (for freshness), the number of writes to the memristor is low enough to sustain a reasonable lifetime. For instance, considering a conservative assumption of a private key update every second, the accelerator lifetime exceeds 30K years. On the client side, every new connection demands a new memristor write. However, the typical number of connections established by a client is small. Even with \(10^{5}\) connection setups per day, the client accelerator lifetime will exceed 27K years.
**Comparison to Baselines.** We have thus defined an initial XCRYPT design modeled after ISAAC principles and an efficient write policy. We designate this architecture as _XCRYPT-SchoolBook_ or _X-SB_ because it uses the schoolbook algorithm for polynomial multiplication. We compare this initial design with existing implementations of SABER in Figure 5 to show that this basic design out-performs prior work on LBC acceleration. Most prior work report Encapsulation/Decapsulation numbers. These functions have encryption/decryption as their underlying kernels - Encaps calls encryption once, while Decaps calls encryption and decryption once. For a fair comparison, we compare Encaps/Decaps in Figure 5. However, throughout the paper, we report results for the _underlying independent_ kernels, encryption and decryption.
The CPU implementation results are reported from the SABER paper [36], running on an Intel Haswell machine at 3.1 GHz, with AVX2 support. Lee et al. [17] evaluates capabilities of GPUs for running PQC (results shown for RTX3080 in figure). Chung et al. [95] changes SABER's parameters to enable the NTT transformation and benefit from a fast 16bit NTT multiplier in hardware. Dang et al. [96] also proposed a new design of Saber based on NTT, demonstrated on FPGA. Imran et al. [18] performs a design space exploration of ASIC with smart pipelining and logic sharing between SABER kernels. Their baseline architecture is ported from an FPGA proposal for SABER hardware [41]. Zhu et al. [19] use an 8-level hierarchical Karatsuba framework for PolyMult and a task scheduler that reduces resource utilization by up to 90%. Their post-layout chip is approximately the same size as our X-SB processor, which gives a fair comparison. Ghosh et al. [20] showcase the first Silicon verified ASIC implementation for Saber. _Compared to existing works, RRAM can achieve 1.17\(\times\) higher throughput budget, as seen in Figure 5._ While the state-of-the-art ASIC [19] achieves 2.4\(\times\) better energy efficiency than X-SB, we demonstrate techniques in this paper that enable XCRYPT to outperform [19] by 2 orders of magnitude. Memristor crossbars are efficient at performing multiply-accumulate using Kirchhoff's law. Analog dot products are performed at sub-ns RC delay, and are read out at the peripheral's digital conversion frequency. As observed in previous works [27, 77, 82], this ADC circuit is the bottleneck, contributing 70-90% of total energy.

Fig. 4: XCRYPT-SchoolBook or X-SB architecture. Arrows denote the path for Encryption.

Fig. 5: SABER on memristor crossbars v/s published implementations of SABER. [18, 19, 20] are ASICs.
CryptoPIM [80] is the first paper to propose PIM acceleration for lattice-based schemes, demonstrating throughput improvement over existing FPGA work, but significantly higher latency. That work underestimates the programming overhead of RRAM cells. Each operation in their proposal requires writing to RRAM cells, with an assumed latency of 1.1ns. However, realistically each RRAM cell takes 25ns to program [82]. Even if we assume parallel write drivers to simultaneously write cells in an entire row/column, we estimate that CryptoPIM's polynomial multiplication takes 5 orders of magnitude higher latency than our baseline X-SB design, at the cost of 13\(\times\) higher energy. Furthermore, RRAM-based devices are vulnerable to non-idealities, which is overlooked in CryptoPIM. CiM [97] also proposes PIM solutions for PolyMult, targeted at Homomorphic Evaluation (HE), which typically has a much larger parameter set than public-key cryptography. They therefore focus on an SRAM substrate with a ComputeCache implementation [98]. On the other hand, XCRYPT implements a smaller parameter SABER on area-efficient RRAM devices, and targets the ADC bottleneck.
### _Impact of Polynomial Multiplication Algorithms_
In this sub-section, we explore different multiplication algorithms to identify the implementation that is most amenable to crossbar acceleration. Similar to matrix multiplication, polynomial multiplication also benefits from various lower complexity algorithms [10]. Figure 6 quantifies CE and EE for these algorithms. We start with the standard \(O(n^{2})\)-complexity Schoolbook algorithm, designated _X-SB_, for multiplying 256-degree polynomials. Karatsuba's algorithm (\(O(n^{1.58})\)-complexity) breaks down a 256-degree multiplication to 3 128-degree PolyMults, reducing the number of required 128\(\times\)128 crossbars from 32 to 24. This proportionally decreases ADC and crossbar write energy, as depicted by _X-K2_ in Figure 7. Note that decryption doesn't require crossbar writes because of a constant secret \(\vec{s}\), which is why almost all of the decryption energy in Figure 7 is contributed by the ADC. Encryption also has a lower CE than decryption as it requires more crossbars, has more input cycles, and requires a long initial crossbar programming step (90% of end-to-end encryption latency).
_X-K4_ further reduces the 3 128-degree multiplications to 9 64-degree PolyMults, which requires a smaller 64\(\times\)64 crossbar. While X-K4 reduces the degree and the ADC samples per crossbar, it also increases the total number of PolyMults. Due to this, the total number of ADC samples, and the crossbar writes increase, which is why X-K4 results in lower improvements than X-K2 for decryption. Note that X-K4 maps to a smaller 64\(\times\)64 crossbar, enabling a lower write energy contribution than X-K2, despite more crossbar writes.
ToomCook-4 (\(O(n^{1.4})\)-complexity) reduces a 256-degree multiplication to 7 64-degree PolyMults. While ToomCook-4 is asymptotically faster than K2, it creates more polynomials that require increased ADC samples and crossbar requirements, which explains the worse behavior for _X-TC4_, relative to _X-K2_, for decryption. However, TC4 performs well during encryption as the benefits from a smaller crossbar (lower write latency and energy) outweigh the higher crossbar count requirements. Software implementations of SABER [36] use ToomCook-4 + Karatsuba-2 (labeled as _X-TC4K2_ in the figures) to do 21 32-degree PolyMults, which performs best for encryption due to its small crossbar write latency/energy.
Fig. 6: CE and EE for SABER on memristor crossbars, evaluating various polynomial multiplication algorithms.
Fig. 7: Energy breakdown of XCRYPT, for various multiplication algorithms, for encryption and decryption.
For encryption, X-TC4K2 improves CE and EE over X-SB by 3.6\(\times\) and 1.6\(\times\), respectively. Lowering the degree beyond 128 does not result in higher efficiency for decryption where no writes happen. _Thus, the ideal SABER algorithm on a crossbar accelerator varies, and we choose X-K2 for decryption and X-TC4K2 for encryption, for the rest of the paper._
### _Using Modulo to Reduce ADC Overheads_
The _X-K2_ implementation of SABER on memristor crossbars is primarily constrained by ADC overhead. The ADC consumes 90% of the area, and 78% of the energy during encryption. The rest of the energy is attributed to crossbar writes. Similar ADC overheads are reported by previous studies for DNN applications as well [27, 77, 82]. In the 128\(\times\)128 crossbar, with ISAAC's flip encoding scheme, a 6-bit precision ADC is required to convert the analog value to digital [27]. Since the computations are spread across cells in a row and across cycles, the 6-bit ADC results have to be aggregated after appropriate shifts. SABER's parameters define that the coefficients of the output polynomial must go through modulo \(2^{10}\) or \(2^{13}\), which keeps the coefficients at 10 or 13 bits. Thus, given the modulo operation, not all bits from all 6-bit ADC samples contribute to the final output. _Unlike DNN computations, where most significant bits carry the most information, some most significant bits in our computations are ineffectual._ For instance, all 6 bits from cycle 0's LSB column are needed as they add to the LSB 6 bits of the output coefficient. However, the output of cycle 9's LSB column is added to the same coefficient after shifting left 9 times, i.e., only the LSB of the sampled ADC value will contribute to the final 10-bit result. In Figure 8(a), we illustrate the number of relevant bits while doing decryption. Each coefficient of secret \(\vec{s}\) is 4 bits, which is stored across 4 cells in a row, with the LSB stored in column 0. Therefore, the outputs of 4 columns have to be added after appropriate shifts. Furthermore, values across cycles are also shifted-and-added. This is shown in Algorithm 1. As seen from the figure, _the number of bits, and hence the required ADC precision, varies depending upon the value of (cycle+column)_. For vector-vector multiplications, we require full precision (6 bits) only for (cycle+column) \(\leq 4\).
```
coeff = 0;
for (cycle = 0; cycle < 10; ++cycle)
    for (column = 0; column < 4; ++column)
        coeff += bitline_value[cycle][column] << (cycle + column);
coeff = coeff mod 2^10;
```
**Algorithm 1** Pseudo-code for polynomial multiplication, per coefficient, using a crossbar
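As a sanity check on this precision argument, the following small Python sketch (our own illustration, not part of XCRYPT) counts how many bits of each 6-bit ADC sample survive the final modulo-\(2^{10}\) reduction in Algorithm 1:

```python
def adc_bits_needed(cycle, column, out_bits=10, adc_bits=6):
    """Bits of a 6-bit ADC sample that survive the mod 2^out_bits reduction
    after the left shift by (cycle + column) in Algorithm 1."""
    return max(0, min(adc_bits, out_bits - (cycle + column)))

# Full 6-bit precision is needed only while (cycle + column) <= 4.
for cycle in range(10):
    print(cycle, [adc_bits_needed(cycle, col) for col in range(4)])
```

The printed table reproduces the pattern of Figure 8(a): the required precision falls off linearly with (cycle+column), down to a single bit, or none at all, for the most-shifted samples.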
_We take advantage of this flexibility by reordering another crossbar's computations in such a manner that at any given cycle, at most 1 crossbar is producing an output with maximum precision of 6._ This is illustrated in Figure 8(b), which depicts the column 0 output precision requirements for 2 crossbars. The computations for the second crossbar are reordered such that the \(5^{th}\) cycle's computation gets executed in the \(0^{th}\) cycle. This staggering ensures that only one crossbar produces a 6-bit output in a given cycle. By sharing 2 ADCs, one 6-bit and the other 5-bit, among the crossbars, we lower the ADC overheads relative to the baseline with two 6-bit ADCs. Each of the ADCs handles half the workload. This concept can be further applied to lower energy while slightly increasing area. The 5-bit ADC workload can be split across a 5-bit ADC and a 4-bit ADC. The 4-bit ADC handles nearly 90% of this split workload, thus saving energy. The area overhead of an extra ADC can be reduced by sharing the 5-bit ADC across 10 crossbars (since the 5-bit ADC is assigned a small fraction of the conversions). Since ADC overheads grow exponentially with precision, this technique of leveraging shared, lower precision ADCs allows us to improve EE by 1.8\(\times\) and CE by 1.5\(\times\) over X-K2 in decryption, as seen in Figure 9. Since this technique doesn't reduce the number of crossbar writes, the benefits in encryption are lower - 1.7\(\times\) in EE and 1.08\(\times\) in CE.
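The staggering itself can be verified in a few lines. The sketch below, using the hypothetical precision model from above, rotates the second crossbar's cycle schedule by 5 and checks that the two crossbars never demand a 6-bit conversion in the same cycle:

```python
def precision_schedule(n_cycles=10, stagger=5, out_bits=10, adc_bits=6):
    """Column-0 ADC precision per cycle for two crossbars, with the second
    crossbar's computations rotated by 'stagger' cycles (Fig. 8(b))."""
    need = lambda c: max(0, min(adc_bits, out_bits - c))
    for c in range(n_cycles):
        p1 = need(c)                           # crossbar 1, natural order
        p2 = need((c + stagger) % n_cycles)    # crossbar 2, reordered
        assert (p1 == adc_bits) + (p2 == adc_bits) <= 1
        print(c, p1, p2)

precision_schedule()   # a shared 6-bit and a shared 5-bit ADC suffice
```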
This highlights the importance of choosing the modulo as a power of 2, which leads to the above _ADCShare_ technique and significant benefits for SABER. _Other FFT-based PQC schemes, which have modulo over large primes, require computations of all the intermediate values before the modulo is applied._ Prior work [36, 97] has demonstrated benefits in modifying schemes to have power-of-2 modulo without affecting security; so the _ADCShare_ technique will apply to a broad range of lattice-based PQC algorithms. The _ADCShare_ technique does not apply to DNNs because dropping intermediate bits can impact the final result. SABER, on the other hand, reduces the output by performing modulo with \(2^{t}\), which means that any intermediate result that contributes to the \((t+1)^{th}\) bit is unnecessary.
## V Shift-and-Add Crossbars (SACs)
PolyMult with crossbars generates many intermediate ADC readout values that are later shifted-and-added to obtain the final output polynomial. For instance, each PolyMult in decryption generates 4 values per cycle, for 10 cycles, which are appropriately shifted-and-added to obtain one output coefficient value (Algorithm 1). Intermediate analog value readouts are expensive since they are done using ADCs.
Fig. 8: (a) Number of bits that contribute to the final coefficient. (b) Reordering computations in crossbars to enable sharing of maximum precision ADCs.
In this section, we explore the possibility of performing the shift-and-add operation in analog to delay the ADC readout.
### _Existing In-analog Shift-and-Add Implementation_
CASCADE [82] proposed in-analog shift-and-add of intermediate values by writing the output of a crossbar's columns to another crossbar called Buffer RRAM crossbar. In a given cycle, the column outputs are written in adjacent cells of a row of the Buffer crossbar, while values across cycles are written to different rows with appropriate shifts. With this mapping, a simple readout of the Buffer crossbar performs shift-and-add of all intermediate values. By performing shift-and-add in analog, CASCADE delays the ADC readout to a single final value. However, CASCADE assumes a lower RRAM cell write energy overhead than that reported in recent literature. While multiple write drivers allow parallel cell programming, the write latency is expected to be about 25 ns. RRAM cells also have 4 orders of magnitude higher write energy (0.1 pJ/cell/bit [88]) than read energy [89, 100, 88, 101]. We next discuss an alternative analog shift-and-add technique that is more efficient than CASCADE.
### _Write-Free In-Analog Shift-and-Add using SACs_
We propose a novel technique to perform shift-and-add in analog. We introduce Shift-and-Add Crossbars (SACs) - tiny single-column crossbars whose cells are pre-programmed to hold successive powers of 2. The intuition behind a SAC is that when an input vector is passed through it, the individual values are multiplied by the cell values and aggregated. In practice, RRAM cells with up to 6-bit precision have been demonstrated. Therefore, SACs, with a highest multiplier factor of 1\(\ll\)5, can only add inputs with a maximum of 5 shifts. However, this can be overcome with a hierarchical deployment of SACs.
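Functionally, a SAC is a dot product against a column of power-of-2 conductances. The Python sketch below is our own behavioral model (names and values are illustrative; the TIA/DAC conversion stage is abstracted away):

```python
import numpy as np

def sac_readout(column_outputs, shifts):
    """Shift-and-Add Crossbar model: each sensed column output is multiplied
    by a pre-programmed power-of-2 cell value and aggregated, so a single
    ADC sample replaces one sample per column."""
    assert max(shifts) <= 5, "cells demonstrated up to 6-bit levels (1 << 5)"
    weights = np.array([1 << s for s in shifts])
    return int(np.dot(column_outputs, weights))

# Decryption example: 4 column outputs of one cycle, shifted by column index.
print(sac_readout(np.array([3, 1, 0, 2]), [0, 1, 2, 3]))   # 3 + 2 + 0 + 16 = 21
```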
We first start by using SACs within a cycle. Since the secret \(\vec{s}\) is written to 4 1-bit cells in a row, the output of 4 columns must be added with appropriate shifts, every cycle. In the baseline implementation, this addition happens in digital, after ADC readout. In our work, we propose using a single SAC to perform these shift-and-adds, as demonstrated in Figure 10. In order to feed a crossbar's output current to SAC's DAC, it must first be converted to a proportional voltage signal, which is usually done using TransImpedance Amplifiers (TIAs). We use a fast (11 ns sense+transfer time) TIA circuit proposed by CASCADE [82]. Note that SAC's output is the shift-and-add result of 4 6-bit values, resulting in a 10-bit value. Therefore, a more expensive 10-bit precision ADC is required. However, by delaying ADC readout, it converts 4\(\times\) fewer samples and allows more sharing within the crossbar. Moreover, as the cycles proceed, fewer bits from the accumulated value contribute to the final output coefficient, as described in Section IV-E. This enables flexibility to increase ADC sharing, as only a single 10-bit sample is generated across the whole dot product. On the other hand, in a non-SAC implementation, 4\(\times\) 6-bit samples are generated during many cycles, as seen from Figure 8. The SAC significantly lowers the number of samples, increasing the effectiveness of the ADC sharing and smart scheduling described in Section IV-E. We refer to this design as _SAC-Basic_. SAC-Basic is a synergistic technique that exploits known analog circuits (like TIA), the flexibility offered by SABER's modulo operation, and resource sharing through smart scheduling on crossbars.
Fig. 11: Hierarchical-SAC: In-analog accumulation across all cycles for each output polynomial coefficient, for SAC-All.
Fig. 10: Performing Shift-and-Add on outputs of 4 columns using Shift-and-Add Crossbars (SACs).
Fig. 9: Improvement by exploiting low precision computations and ADC sharing by reordering computations.
### _In-analog Accumulation Across Cycles_
In the previous subsection, we showed how to accumulate column outputs within a cycle in analog, delaying the ADC readout. Since input values are also streamed 1 bit per cycle, ADC outputs across different cycles must also be shifted-and-added in digital. Unlike CASCADE, our design does not write the cycle output to a crossbar, so it cannot readily accumulate across cycles. However, we can overcome this roadblock by doing multiple cycles in parallel.
**Accumulate across cycles:** SAC requires inputs to be fed simultaneously across different rows in order to shift-and-add them. Therefore, to add outputs from 2 cycles, their computations need to be done in parallel. This requires 2\(\times\) compute/storage resources but reduces ADC samples by 2\(\times\) - a reasonable trade-off as ADC accounts for a majority of the energy/area.
We label the per-crossbar single-cycle SAC as _Round1-SAC_. Unlike SAC-Basic, Round1-SAC's outputs are redirected to another SAC using a TIA. This SAC is shared across multiple crossbars that contribute to the same output coefficient, and is termed _Round2-SAC_. This 2-cycle accumulated column value is finally read out with an ADC at the end of Round2-SAC. We term this design _SAC-2x_ - accumulating 2 cycles before readout. While SAC-2x reduces ADC overheads, it increases the number of crossbars that run in parallel, in turn increasing the overall crossbar write costs. Therefore, we perform a design space exploration evaluating this trade-off while increasing the number of cycles that are accumulated in-analog. At the extreme end, SAC-All performs in-analog accumulation of all columns over all cycles per output polynomial coefficient, similar to CASCADE, and hence generates only 1 ADC sample per output coefficient.
**Hierarchical SACs:** Since the RRAM cell has a max resolution of 6 bits, larger computations are performed hierarchically, thus extending to Round3 and Round4 SACs, as shown in Figure 11. Note that SAC techniques are applicable to other lattice-based PQC schemes and algorithms like HE as well.
Fig. 12: Comparison of various implementations of XCRYPT designs. \(X\)-, \(C\)- represents XCRYPT, CASCADE respectively.
Fig. 13: Energy Breakdown of various XCRYPT designs. \(X\)-, \(C\)- represents XCRYPT, CASCADE respectively.
### _Results_
**SAC-Basic Results:** We compare our basic XCRYPT design with ADC Sharing techniques, CASCADE with similar techniques, and our novel design with SAC adding column values (_SAC-Basic_) within a cycle in Figure 12. Cycle time for the SAC design is determined by the TIA's sense+transfer time (= 11 ns). In a cycle, inputs are streamed to the first crossbar, sensed by the TIA, and then sent as inputs to the SAC. In the second cycle, the TIA's outputs are streamed through the SAC, producing the final value, which is sampled using the ADC. Due to the increase in the number of cycles and the cycle time, end-to-end latency increases relative to basic XCRYPT. However, by delaying ADC readout, SAC-Basic achieves 2.7\(\times\) (2.4\(\times\) for encryption) higher CE and 1.5\(\times\) (1.1\(\times\)) higher EE, over X-K2-ADCShare (X-TC4K2-ADCShare). On the other hand, CASCADE (_C-K2-ADCShare_) performs worse than the basic design, highlighting the overheads of performing crossbar writes every cycle. Other applications like DNNs can also benefit from the SAC design.
**Parallel SAC results:** We also compare various SAC-* designs in Figure 12. As we increase the parallelism to reduce ADC samples, the bottleneck shifts from ADCs to crossbar writes. We plot energy breakdown results in Figure 13. SAC designs offer a trade-off between ADC and crossbar write energy. From SAC-Basic to SAC-All, the bottleneck shifts from ADC (77% of total energy) to crossbar writes (83% of total energy). The benefits from fewer ADC samples cannot outweigh the energy overheads of more crossbar writes, which is why SAC-All's EE decreased by 66% from X-TC4K2-ADCShare, for encryption. Meanwhile, since decryption doesn't need to write to the crossbar (after being written once on boot), increasing the level of parallelism increases CE and EE.
We also observe that _the use of these circuit techniques changes the software algorithm that is most amenable to acceleration_, e.g., Schoolbook out-performs Karatsuba in some cases.
**Results Summary:** Overall, the best decryption design (_X-SB-SAC-All_) yields 4.5\(\times\) and 6.3\(\times\) increase in CE and EE, respectively, over X-K2-ADCShare. For encryption, these improvements for _X-TC4K2-SAC-2x_ are 2.5\(\times\) and 1.3\(\times\) over X-TC4K2-ADCShare. Compared to state-of-the-art ASIC [19], XCRYPT shows 3-51\(\times\) higher efficiencies with 2.6-16\(\times\) speedup. A single tile can perform 0.7\(M/22.7M\) encryptions/decryptions per second with a 0.09/0.07 \(mm^{2}\) area budget. Looking at the broader KEM operations, Encaps can be performed in 1.3 us with an area budget of 0.08 \(mm^{2}\) and 0.16 uJ, while Decaps can be done within 1.3 us with 0.16 \(mm^{2}\) area and 0.17 uJ energy.
## VI Noise Analysis
Due to non-ideal device behaviors and circuit issues, in-RRAM computations are vulnerable to errors [77, 92, 101, 102, 103, 104, 105, 106]. However, we first argue that although the noise generated during PolyMults can affect the correctness of decryption, it **does not leak information** about the secret key. In the first step of SABER's decryption algorithm, instead of computing \(v\) as \(\langle\vec{b}^{\prime},\vec{s}\rangle\) (ignoring the rounding operation), \(v\) is actually set as \(\langle\vec{b}^{\prime},\vec{s}\rangle+e\), where \(e\) is the noise caused by the cell variation. When decryption fails, the sum of errors in the computation plus \(e\) exceeds the rounding parameter, where \(e\) accounts for the majority of the sum and is much larger than the norm of the secret key. Then, the noisy value \(\langle\vec{b}^{\prime},\vec{s}\rangle+e\) is "essentially independent" of the secret key \(\vec{s}\). This is sometimes referred to as "noise flooding" [107] since the noise \(e\) "floods" the value related to the secret key and minimizes its effect. Under CPA-secure SABER, decryption-based attacks are not applicable, as mentioned in the SABER specs [108].
Many approaches use calibration signals [77, 92, 103, 104] or fault-aware mapping/encoding [101, 102], or a combination of the two [105, 106], mostly for machine learning applications. Hu et al. [92] have demonstrated the use of TIA for simple linear calibration in order to achieve low noise in practical deployment. We simulate our SAC-All design, with cell variance ranging up to 10%, and plot the failure probability for 1 million executions of SABER's public-key algorithms in Figure 14. Cell variance is the variance of the distribution function that introduces non-ideality to the output. Specifically, a 5% cell variance means that its output would be within a 5% range of the ideal cell _output current_. Our crossbar error is modeled at the level of every single multiply-accumulate operation (individual RRAM cell), as we sample from a probability distribution function for variation. The noise in the TIA circuit is conservatively assumed to be 2%. We have modeled non-idealities occurring from cell programming and variation for the SAC circuitry, with a constant cell variation, which might not hold true when a large current passes through the device [103]. However, unlike the conventional Shift-and-Add, where errors can be amplified due to bit-shifts in digital, the errors accumulated in a SAC are over current and would only reflect during ADC readout. This is why the readout, by converting a range of currents to digital values, results in lower error than the conventional shift-and-add. Single cell failures can affect the scheme's failure probability. However, such issues can be easily identified and remapped [101, 102].
Fig. 14: Number of re-tries needed to achieve failure-free decryption for 1M SABER calls, varying Cell Variance.
To detect failed decryption, data is accompanied by its CRC [109]. Upon failure, we re-try that computation. In Figure 14, we show the decryption failure probability as a function of the cell variance and the maximum number of allowed re-tries. As observed from the graph, our accelerator achieves failure-free decryption with a cell variance of 5-6%. Beyond that, variation in cells triggers the bitline current to jump over an interval, resulting in incorrect coefficients and hence failed decryption with probability up to 0.23. With a few re-tries, these errors can also be corrected. In practice, 5% cell variance is reasonable [110, 111, 77, 104], and can be further lowered with recent calibration techniques. Furthermore, with state-of-the-art fabrication that has higher device yield, lower wire resistance, and that operates at lower conductances, the noise impact can be reduced further.
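Under the simplifying assumption that successive re-tries fail independently (our assumption for illustration, not a claim from the evaluation), the residual failure probability after \(r\) re-tries is \(p^{r+1}\):

```python
def residual_failure(p_fail, retries):
    """Probability that a decryption still fails after 'retries' re-tries,
    assuming the CRC flags every failure and attempts are independent."""
    return p_fail ** (retries + 1)

# Worst observed per-call failure probability of 0.23 (10% cell variance):
for r in range(5):
    print(r, residual_failure(0.23, r))   # drops below 1e-3 after 4 re-tries
```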
## VII Conclusion
This work evaluates the use of memristor crossbars for accelerating lattice-based post quantum cryptography (PQC). We show that even a simple implementation of SABER, a front-runner PQC candidate for NIST Round-3, performs faster than existing hardware proposals for SABER. By exploiting SABER's algorithmic properties, e.g., its power-of-2 modulo operations, we can further boost the accelerator's efficiency. We identify polynomial multiplication as the key operation in lattice-based schemes, and show that crossbar-based designs might not benefit from some of the existing software techniques for multiplication. We propose SABER-specific variable precision ADCs, which, along with computation reordering, allows high levels of ADC sharing. To further reduce ADC overheads, we propose simple analog Shift-and-Add techniques. Overall, our proposed accelerator achieves 12-51\(\times\)/3-40\(\times\) higher computational/energy efficiency than state-of-the-art ASIC. Finally, we modify SABER's NIST submission code to simulate noisy crossbars, and present a noise analysis for XCRYPT design points.
In this paper, we accelerate lattice-based post-quantum cryptography (PQC) using memristor-based crossbars, and show that these inherently error-tolerant algorithms are well suited to the noisy analog MAC operations of crossbars. We compare the NIST Round-3 lattice-based PQC candidates and find that SABER, one of the most advanced algorithms on conventional systems, can be accelerated with crossbars. SABER is a Module-LWR based approach that performs modular polynomial computations with rounding. We map SABER's polynomial multiplication onto crossbars and show that analog dot products improve performance and energy efficiency by $1.7-32.5\times$ compared to recently proposed hardware. Our initial design combines innovations from a range of prior work. SABER's |
2309.03836 | Quasiperiodic disorder induced critical phases in a periodically driven
dimerized $p$-wave Kitaev chain | The interplay of topology and disorder in non-equilibrium quantum systems is
an intriguing subject. Here, we look for a suitable platform that enables an
in-depth exploration of the topic. To this end, we analyze the topological and
localization properties of a dimerized one-dimensional Kitaev chain in the
presence of an onsite quasiperiodic potential with its amplitude being
modulated periodically in time. The topological features have been explored via
computing the real-space winding numbers corresponding to both the Majorana
zero and the $\pi$ energy modes. We enumerate the scenario at different driving
frequencies. In particular, at some intermediate frequency regime, the phase
diagram concerning the zero mode involves two distinct phase transitions, one
from a topologically trivial to a non-trivial phase, and another from a
topological phase to an Anderson localized phase. On the other hand, the study
of the $\pi$ modes reveals the emergence of a unique topological phase, with
the bulk and the edges being fully localized, which may be called
Floquet topological Anderson phase. Furthermore, we study the localization
properties of the bulk states by computing the inverse and normalized
participation ratios, while the critical phase is ascertained by computing the
fractal dimension. We have observed extended, critical, and localized phases at
intermediate frequencies, which are further confirmed via a finite-size scaling
analysis. Finally, fully extended and localized phases are respectively
observed at lower and higher frequencies. | Koustav Roy, Shilpi Roy, Saurabh Basu | 2023-09-07T16:50:14 | http://arxiv.org/abs/2309.03836v3 | # Floquet analysis of a driven Kitaev chain in presence of a quasiperiodic potential
###### Abstract
The interplay of topology and disorder in non-equilibrium quantum systems is an intriguing subject. Here, we look for a suitable platform that enables an in-depth exploration of the topic. To this end, we analyze the topological and localization properties of a dimerized one-dimensional Kitaev chain in the presence of an onsite quasiperiodic potential whose amplitude is modulated periodically in time. The topological features have been explored via computing the real-space winding numbers corresponding to both the Majorana zero and the \(\pi\) energy modes. We enumerate the scenario at different driving frequencies. In particular, in either the low or intermediate frequency regime, the phase diagram concerning the zero mode involves two distinct phase transitions, one from a topologically trivial to a non-trivial phase, and another from a topological phase to an Anderson localized phase. On the other hand, the study of the \(\pi\) modes reveals the emergence of a unique topological phase, where both the bulk and the edge modes are fully localized, which may be called the Floquet topological Anderson phase. Moreover, while the low and high-frequency regimes host extended and localized states, respectively, at intermediate frequencies, the states can be extended, critical (or multifractal), and localized. Inverse and normalized participation ratios are used as tools to characterize the extended and the localized states. Further, the intermediate frequency regime is thoroughly enumerated via a finite-size scaling analysis of the fractal dimension.
## I Introduction
Anderson localization (AL) is a fundamental phenomenon involving the complete vanishing of transport properties of systems due to the presence of random disorder [1; 2]. Consequently, all the single-particle eigenstates of the non-interacting system suffer a transition from a completely extended (metallic) to a localized (insulating) phase. While this transition depends upon the dimensionality of the system [3], this captivating topic has nevertheless gained paramount interest in a broad range of physical systems, such as matter waves, light waves, optical lattices, photonic lattices, etc [4; 5; 6; 7; 8]. On the other hand, quantum systems with an incommensurate potential, such as a quasiperiodic (QP) potential, which lies between periodic and random, can exhibit localization transitions in a one-dimensional system [9]. Recent developments in the experimental ability to control quantum systems and to engineer new models on demand provide a new era of opportunities for both experimentalists and theoreticians working in the field. As a consequence, quantum systems with QP potential have been studied in a vast range of experimental setups, including optical [10; 11; 12; 13] and photonic lattices [14; 15], optical cavities [16; 17], and moire lattices [18], etc. To understand the localization transition in QP systems, the Aubry-Andre (AA) model [19; 20] is the most studied one. Later, several generalizations of the AA model were introduced, comprising many interesting non-trivial results [23; 24; 25; 26; 27; 28; 29]. Moreover, in specific generalized AA models, the transition from an extended to a localized phase is often linked through a critical region, which is characterized as an intermediate or a critical phase [30]. Numerous studies have shown that an energy-dependent transition, that is, a mobility edge, may be present in this phase [31; 32; 33]. The emergence of a mobility edge in a one-dimensional system has sparked substantial curiosity owing to its experimental realizations [34; 35].
On a parallel front, topological quantum matter, such as topological insulators (TIs) and topological superconductors (TSCs), has been receiving enormous attention due to its possible applications in topological quantum computation and spintronics devices [36]. The scientific community believes the fundamental building blocks of TSCs are the Majorana zero modes (MZMs), which could be potential candidates for qubits [37; 38]. A simple toy model of the TSCs is provided by the Kitaev chain model [39]. It is a one-dimensional spinless \(p\)-wave superconductor chain with MZMs localized at the edges. Various experimental proposals have been put forward so far in order to obtain 1D TSCs [40; 41; 42; 43; 44; 45; 46]. Similarly, much progress has been made on the theoretical front. Among the theoretical models, we are particularly interested in a variant of the generalized Kitaev models. Specifically, we consider a dimerized Kitaev chain [47; 48; 49], a hybrid of the Su-Schrieffer-Heeger (SSH) model [50] and the Kitaev chain, owing to its exciting features. There are rich topological and localization properties (protected by the particle-hole symmetry) that make the study of this model intriguing [51].
Further, quantum systems driven periodically far from equilibrium are known to exhibit new phases that are otherwise inaccessible in a static setup. These kinds of periodically driven systems can be understood by means of the Floquet formalism [52; 53; 54]. By using an external periodic drive, one can pave the way for creating topologically non-trivial materials with substantial tunability, even from those that are topologically trivial in the undriven case. Additionally, the energy bands of the driven systems can be backfolded to a Floquet Brillouin zone (FBZ), at the boundary of which new types of edge modes, namely the so-called \(\pi\) modes, appear [55; 56; 57; 58].
Numerous studies using ultracold atoms in driven optical lattices, acoustic, and photonic devices have successfully employed the concept of Floquet engineering [59; 60; 61; 62; 63; 64; 65]. Among others, silicon-on-insulator-like materials with lattices of tightly linked octagonal resonators have been used in the experimental realization of Floquet topological insulators (FTIs) on a nano-photonics platform [66]. Additionally, photo-induced band gaps can be considered to study the temporal periodicity of systems, where these band gaps can be resolved using a method called time- and angle-resolved photoemission spectroscopy (t-ARPES) [67]. In recent years, there has been a remarkable surge of interest in the study of non-equilibrium dynamics of closed systems. These include the generation of higher winding or Chern numbers in 1D, quasi-1D, and 2D systems [68; 69; 70; 71; 72; 73], the emergence of time crystalline phases along with period doubling oscillations [74; 75], Floquet analysis of higher-order topological insulators [76; 77; 78], Floquet topological characterization of quantum chaos models [79; 80], etc. In this context, the periodically driven Kitaev chain has been explored in several studies [81; 82]. In addition to that, several studies have been reported on the periodically driven AA model [83; 84; 85; 86].
Very recently, the interplay of QP potential and topology has been well explored using this model [51], where different phases are characterized in terms of topologically trivial and non-trivial regimes. Most interestingly, a disorder-induced topological phase, namely, the topological Anderson phase, has been observed in the model. In addition to this intriguing phase, the presence of an entire region comprising multifractal states makes the model more promising in the context of non-ergodic physics [87; 88; 89]. Moreover, in the presence of random disorder, the model can show behavior similar to the Anderson model [90]. Further, a recent study [91] has shown that the Majorana modes created using spatially QP driving are more robust to decoherence than those created using a spatially uniform one. Motivated by the above inputs, an interesting proposal arises to bridge two important aspects, namely, the topological properties and the localization-delocalization transition, in a time-periodic system. Thus, we want to ask very specific questions, such as how the periodically modulated QP potential affects the above-mentioned properties of the model. Our primary goal here is to compare and contrast the topological and localization properties of a periodically kicked dimerized Kitaev chain corresponding to different frequency regimes. In particular, we observe intriguing nontrivial behavior in the driven setting, which is not present in the static scenario. While the high-frequency regime captures the properties of the static counterpart, the low-frequency regime demands a deeper study.
The layout of the subsequent discussions is as follows. In Sec. II, we describe the static version of the model to recapitulate its properties, including dimerization in both the hopping and the superconducting pairing terms, and to benchmark against the results for the driven case. Afterwards, we introduce the Floquet formalism to construct an effective time-independent Hamiltonian. In Sec. III, we discuss our results on both the topological and the localization properties of the system. At the end, we summarize and conclude our findings in Sec. IV.
## II The Hamiltonian and the Floquet formalism
We consider a one-dimensional tight-binding model describing the dimerized Kitaev chain of spinless particles with \(p\)-wave superconductivity in the presence of onsite QP potential, illustrated in Fig. 1. The corresponding Hamiltonian is given by,
\[\begin{split} H&=-t\sum_{j=1}^{N}\Big{[}(1+\delta) \hat{c}_{j,B}^{\dagger}\hat{c}_{j,A}+(1-\delta)\hat{c}_{j+1,A}^{\dagger}\hat{c }_{j,B}+H.c.\Big{]}\\ &\quad-\Delta\sum_{j=1}^{N}\Big{[}(1+\delta)\hat{c}_{j,B}^{ \dagger}\hat{c}_{j,A}^{\dagger}+(1-\delta)\hat{c}_{j+1,A}^{\dagger}\hat{c}_{j,B}^{\dagger}+H.c.\Big{]}\\ &\quad-\sum_{j=1}^{N}\Big{[}\mu_{A}\hat{c}_{A,j}^{\dagger}\hat{c }_{A,j}+\mu_{B}\hat{c}_{B,j}^{\dagger}\hat{c}_{B,j}\Big{]},\end{split} \tag{1}\]
\[\begin{split}&=H_{DKC}-\sum_{j=1}^{N}\Big{[}\mu_{A}\hat{c}_{A,j}^ {\dagger}\hat{c}_{A,j}+\mu_{B}\hat{c}_{B,j}^{\dagger}\hat{c}_{B,j}\Big{]}\end{split} \tag{2}\]
where DKC stands for 'dimerized Kitaev chain' and
\[\mu_{A} =\lambda_{A}\cos{[2\pi\beta(2j-1)+\phi]}, \tag{3a}\] \[\mu_{B} =\lambda_{B}\cos{[2\pi\beta(2j)+\phi]}. \tag{3b}\]
Here, \(A\) and \(B\) represent sublattice indices. The number of unit cells is denoted by \(N\), with unit cell index \(j\) (\(j=1,2,3,...N\)). Thus, the length of the chain is given by \(L=2N\).
Figure 1: Schematic illustration of the dimerized Kitaev chain. The red and green circles denote the two sublattices, \(A\) and \(B\) respectively, within a unit cell (dashed line). The thick and thin bonds indicate the dimerization of the model. For example, the intracell (strong) and intercell (weak) hopping strengths are defined as \(t(1+\delta)\) and \(t(1-\delta)\), respectively, whereas the intracell (strong) and intercell (weak) superconducting pairing strengths are defined as \(\Delta(1+\delta)\) and \(\Delta(1-\delta)\), respectively.
The creation (annihilation) operator for a fermion at the sublattice site
\((j,A)\) and \((j,B)\) is given by \(\hat{c}^{\dagger}_{j,A}\) (\(\hat{c}_{j,A}\)) and \(\hat{c}^{\dagger}_{j,B}\) (\(\hat{c}_{j,B}\)), respectively. Further, \(t\) is the nearest-neighbor hopping integral, and \(\Delta\) denotes the nearest-neighbor superconducting pairing term, which is taken to be real without any loss of generality. It is assumed that the strengths of the hopping integral (\(t\)) and the \(p\)-wave superconducting pairing term (\(\Delta\)) alternate between strong (inside the unit cell) and weak (between the unit cells) bonds. Consequently, a dimerization tuning parameter \(\delta\) is introduced in the model to distinguish between them. Hence, the intra (inter) cell hopping integral and the superconducting pairing term are represented by \(t(1+\delta)\) (\(t(1-\delta)\)) and \(\Delta(1+\delta)\) (\(\Delta(1-\delta)\)), respectively, as shown in Fig. 1. We enforce a restriction on \(\delta\), namely \(|\delta|<1\), to ensure that the hopping terms assume only positive values. In addition, we modulate the chemical potential terms \(\mu_{A}\) at \(A\) and \(\mu_{B}\) at \(B\) quasiperiodically, as shown in Eq. 3. The periodicity of the QP potential is described by \(1/\beta\), where \(\beta\) is an irrational number, usually chosen to be the golden ratio, namely, \(\beta=\frac{\sqrt{5}-1}{2}\). The phase term of the potential is represented as \(\phi\), which is set to zero. The potential strengths at the two sublattice sites are denoted by \(\lambda_{A}\) and \(\lambda_{B}\). In this paper, we shall mainly focus on the staggered case, namely, \(\lambda_{A}=-\lambda_{B}=\lambda\). The staggered case is particularly interesting owing to the possibility of an interplay between the dimerization parameter, the strength of the QP potential, etc., on the localization properties [23]. In the limiting cases, the dimerized Kitaev chain Hamiltonian reduces to the Kitaev chain for \(\delta=0\) and to the SSH chain for \(\mu=0\) and \(\Delta=0\). Throughout this paper, we have fixed \(t\) as the unit of energy, that is, \(t=1\), and the length of the chain is set as \(L=1220\). Additionally, the superconducting pairing strength is set as \(|\Delta|<t\), that is, \(\Delta=0.5\). We may remind ourselves that \(\Delta>0\) denotes a topological phase in the absence of a QP potential (\(\lambda_{A}=\lambda_{B}=0\)).
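For concreteness, the following is a minimal numpy sketch of the Bogoliubov-de Gennes (BdG) matrix corresponding to Eq. (4). This is our own construction: the basis ordering, the antisymmetric pairing block, and the overall sign conventions are illustrative choices and may differ from those used in the paper.

```python
import numpy as np

def bdg_hamiltonian(N, t=1.0, delta=0.5, Delta=0.5, lam=1.0, mu=0.0,
                    beta=(np.sqrt(5) - 1) / 2, phi=0.0):
    """BdG matrix of the dimerized Kitaev chain with a staggered QP potential.

    Sites n = 1..L (L = 2N); odd n belong to sublattice A, even n to B,
    with lambda_A = -lambda_B = lam (the staggered case)."""
    L = 2 * N
    n = np.arange(1, L + 1)
    stagger = np.where(n % 2 == 1, lam, -lam)
    mu_qp = stagger * np.cos(2 * np.pi * beta * n + phi)             # Eq. (3)
    bond = np.where(np.arange(L - 1) % 2 == 0, 1 + delta, 1 - delta)
    h = np.diag(-(mu + mu_qp)) + np.diag(-t * bond, 1) + np.diag(-t * bond, -1)
    d = np.diag(Delta * bond, 1) - np.diag(Delta * bond, -1)         # antisymmetric
    return np.block([[h, d], [-d.conj(), -h.conj()]])

E = np.linalg.eigvalsh(bdg_hamiltonian(N=610))
print(E[:2], E[-2:])   # spectrum comes in +/-E pairs (particle-hole symmetry)
```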
Now, to enumerate the effects of the quasiperiodically modulated potential, which enters through the chemical potentials (\(\mu_{A}\), \(\mu_{B}\)), we further keep a constant onsite potential term, \(\mu\), to enable a smooth comparison with the static model in the absence of disorder. Consequently, the Hamiltonian assumes the form,
\[\begin{split} H=& H_{DKC}-\sum_{j=1}^{N}\mu\Big{[} \hat{c}^{\dagger}_{A,j}\hat{c}_{A,j}+\hat{c}^{\dagger}_{B,j}\hat{c}_{B,j} \Big{]}\\ &-\sum_{j=1}^{N}\Big{[}\mu_{A}\hat{c}^{\dagger}_{A,j}\hat{c}_{A, j}+\mu_{B}\hat{c}^{\dagger}_{B,j}\hat{c}_{B,j}\Big{]}.\end{split} \tag{4}\]
The driving potential has the temporal form,
\[\mu_{A,B}\rightarrow\mu_{A,B}\sum_{m=-\infty}^{\infty}\delta(t-mT) \tag{5}\]
while the static Hamiltonian can be written as,
\[H_{0}=H_{DKC}-\sum_{j=1}^{N}\mu\Big{[}\hat{c}^{\dagger}_{A,j}\hat{c}_{A,j}+ \hat{c}^{\dagger}_{B,j}\hat{c}_{B,j}\Big{]}. \tag{6}\]
Here, the driving protocol (Eq. 5) indicates applying the onsite QP potential at the sublattice sites \(A\) and \(B\) periodically at times \(t=mT\) where \(m\) is an integer that counts the number of kicks.
In general, to tackle any periodically driven system, one can adopt the Floquet formalism, which provides a tool to construct an effective time-independent Floquet Hamiltonian whose stroboscopic dynamics are equivalent to those of the driven system. Note that a detailed analysis of the static system in the presence of the QP potential has been carried out by some of us in [51]. According to the Floquet theorem, the dynamical evolution of a periodically kicked system is obtained via a time-ordered product of the Floquet evolution operators. As a result, it can be written as a product of two terms, that is,
\[\hat{U}(T,0)=\hat{U}^{\prime}_{1}\hat{U}^{\prime\prime}_{2} \tag{7}\]
where,
\[\hat{U}^{\prime}_{1}=e^{-i[\sum_{j=1}^{N}\mu_{A}\hat{c}^{\dagger}_{A,j}\hat{c} _{A,j}+\mu_{B}\hat{c}^{\dagger}_{B,j}\hat{c}_{B,j}]} \tag{8}\]
and
\[\hat{U}^{\prime\prime}_{2}=e^{-i\hat{H}_{0}T}. \tag{9}\]
We can now numerically diagonalize \(\hat{U}(T,0)\) to obtain its eigenvectors \(\ket{\psi_{m}}\) and eigenvalues \(e^{-iE_{m}T}\) using,
\[\hat{U}(T,0)\ket{\psi_{m}}=e^{-i\hat{H}_{\text{eff}}T}\ket{\psi_{m}}=e^{-iE_{ m}T}\ket{\psi_{m}}, \tag{10}\]
where \(H_{\text{eff}}\) is the Floquet effective Hamiltonian. \(E_{m}\) denotes the quasi-energies which lie within the first FBZ. In the following section, we shall present the exact numerical results obtained by using the Floquet effective Hamiltonian.
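A minimal numerical sketch of Eqs. (7)-(10), reusing the `bdg_hamiltonian` helper sketched above (again our own illustration; the branch used to fold the quasi-energies into the FBZ is a convention):

```python
import numpy as np
from scipy.linalg import expm

def floquet_operator(N, T, lam, **kw):
    """U(T,0) = exp(-i V_kick) exp(-i H0 T) in the BdG basis, Eqs. (7)-(9)."""
    H0 = bdg_hamiltonian(N, lam=0.0, **kw)                      # static part, Eq. (6)
    V = bdg_hamiltonian(N, t=0.0, Delta=0.0, mu=0.0, lam=lam)   # kick term, Eq. (5)
    return expm(-1j * V) @ expm(-1j * H0 * T)

def quasienergies(U, T):
    """Quasi-energies E_m folded into the first Floquet Brillouin zone."""
    return np.sort(-np.angle(np.linalg.eigvals(U)) / T)

T = 2 * np.pi / 2.5                                             # omega = 2.5
E = quasienergies(floquet_operator(N=144, T=T, lam=1.0, delta=0.5), T)
```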
Figure 2: A static phase diagram is shown using the real-space winding number as a function of dimerization strength (\(\delta\)) and the onsite QP potential strength (\(\lambda\)).
## III Results
Our main aim here is to study the topological and localization properties of the driven system induced by the interplay of the dimerization strength (\(\delta\)) and the periodic driving amplitude (\(\lambda\)). Hence, based on our numerical analysis, we shall investigate the behavior of the system at different frequencies. In particular, we show different phase transitions and properties of the edge modes and bulk states that are discernible at high- and low-frequency regimes. The system size taken for the numerical calculation is \(L=1220\).
### Topological properties
In this section, we shall start our discussion on the topological properties based on the study of zero-energy edge modes, that is, MZMs that emerge in our system. Before focusing on our periodically kicked setting, a brief discussion on the static scenario in the presence of the QP potential is useful and presented in the following.
The existence of \(p\)-wave superconductivity in our system implies that particle-hole symmetry is inherently present in the Hamiltonian (Eq. 4). Consequently, the quasi-particle energy spectrum is symmetric with respect to the Fermi level (\(E_{F}=0\)), even in the presence of an onsite QP potential. Thus, for each particle-like solution with energy \(+E\), there will be a hole-like solution with energy \(-E\). Only the zero-energy states (\(E=0\)) are self-conjugate. In this scenario, the topologically non-trivial phase is distinguished from the topologically trivial phase by the presence of gapless zero-energy modes. Additionally, this distinction can also be captured via the bulk properties of the Hamiltonian, which yields a topological invariant. Thus, a protected chiral symmetry, albeit with broken translational symmetry (due to the onsite QP potential), hints towards the calculation of the real-space winding number as the topological invariant of our system. In fact, the momentum-space formula for the winding number turns out to be useful in this regard. Hence, the real-space winding number (\(\nu\)) can be written, by drawing an analogy with the momentum space, as [51; 92],
\[\nu=\frac{1}{L^{\prime}}Tr\Big{(}\hat{\Gamma}\hat{Q}\Big{[}\hat{Q},\hat{X} \Big{]}\Big{)}, \tag{11}\]
where \(\hat{Q}=\sum_{j}^{N}\ket{j}\bra{j}-\ket{\tilde{j}}\bra{\tilde{j}}\) is obtained by solving for a generic chiral symmetric Hamiltonian \(H\ket{j}=E_{j}\ket{j}\) corresponding to \(\ket{\tilde{j}}=\hat{\Gamma}^{-1}\ket{j}\), where \(j\) is the eigenstate index. \(\hat{\Gamma}\) and \(\hat{X}\) are operators corresponding to the chiral symmetry and position, respectively. Note that, in the momentum space, the chiral symmetry operators are defined as \(\hat{C}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{0}\) and \(\hat{C}=\hat{\sigma}_{x}\otimes\hat{\sigma}_{0}\) for the cases, corresponding to \(\mu=0\) and \(\mu\neq 0\), respectively. Hence, the real space representation of the chiral symmetry operator (\(\hat{\Gamma}\)) can be written as the tensor product of \(\hat{C}\) with the corresponding \(N^{\rm th}\) order identity matrix, \(I_{N}\). Finally, \(Tr\) denotes the trace over the lattice sites corresponding to half of the length of the chain, namely, \(L^{\prime}=L/2\), where half the number of sites are considered from the middle of the chain to eliminate edge effects.
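A sketch of Eq. (11) in this spirit is given below (our own implementation: the choice \(\hat{\Gamma}=\hat{\sigma}_{x}\otimes I\) in the particle-hole basis, the unit-cell position operator, and the middle-half trace window are conventions that may need adapting to reproduce Refs. [51; 92] exactly; for the driven case, \(H\) should be the effective Hamiltonian of a symmetric time frame):

```python
import numpy as np

def winding_real_space(H, N):
    """Real-space winding number, Eq. (11): nu = Tr'(Gamma Q [Q, X]) / L'.

    H is the 2L x 2L chiral-symmetric BdG matrix, L = 2N; the partial trace
    runs over the middle half of the chain to suppress edge effects."""
    L = 2 * N
    Gamma = np.kron(np.array([[0, 1], [1, 0]]), np.eye(L))   # sigma_x x identity
    E, V = np.linalg.eigh(H)
    Q = V @ np.diag(np.sign(E)) @ V.conj().T                 # flat-band sign(H)
    cell = np.repeat(np.arange(N), 2)                        # unit-cell positions
    X = np.kron(np.eye(2), np.diag(cell * 1.0))
    M = Gamma @ Q @ (Q @ X - X @ Q)
    mid = np.arange(L // 4, 3 * L // 4)                      # middle L/2 sites
    idx = np.concatenate([mid, mid + L])                     # particle + hole blocks
    return np.real(np.diag(M)[idx].sum()) / (L // 2)
```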
The analysis based on the real-space winding number has already shown that the static Hamiltonian (Eq. 4), in the absence of a constant chemical potential (\(\mu\)), exhibits a disorder-driven topological phase transition. It includes the transition from a topologically trivial to a non-trivial phase, known as the topological Anderson phase, that occurs beyond a critical dimerization strength [51]. However, at large values of the QP strength (\(\lambda\)), the system undergoes another phase transition from a topological Anderson to a fully localized Anderson phase.
Figure 4: The real-space winding number corresponding to the zero and \(\pi\) modes as a function of dimerization strength (\(\delta\)) and the onsite QP potential strength (\(\lambda\)) are shown in (a) and (b), respectively. Here, the driving frequency is \(\omega=2.5\). The system size taken for the calculation is \(L=1220\).
Figure 3: The real-space winding number corresponding to the Majorana-zero modes as a function of the driving period (\(T\)) and the onsite QP potential strength (\(\lambda\)) is shown. Here, the dimerization strength is \(\delta=0.5\).
Later, the inclusion of a constant chemical potential (\(\mu\)) in the present scenario results in a distinction between the sublattice and chiral symmetries [47]. Further, the phase diagram depicting the topological regime via real-space winding numbers in the \(\delta-\lambda\) plane is shown in Fig. 2. Subsequently, the onset of the Majorana zero-modes at \(\lambda=0\) is used as a benchmark for making a comparison between the static and the driven scenarios. A disorder-free dimerized Kitaev chain shows a phase transition at \(\delta=0.9\) corresponding to \(\mu=1.5\) [48]. On the other hand, introducing the onsite QP potential in the no-dimerization limit, that is, \(\delta=0\), implies a phase transition from a topologically non-trivial to a trivial phase at \(\lambda<2\), while in the strong dimerization limit (\(\delta=1\)), the system hosts a trivial phase. Increasing the dimerization strength from \(\delta=0\) shifts the phase transition from the topological to the trivial phase to lower values of \(\lambda\) relative to the \(\delta=0\) case.
Expectedly, owing to the intact chiral symmetry and the periodic drive, the system can host Majorana zero modes. In addition to that, a Majorana \(\pi\) mode, which has no static counterpart, can appear at higher values of the driving strength (\(\lambda\)). A study of the topological properties in this system is pursued as per the periodic table of FTIs [93], according to which each non-trivial phase of the system can be characterized by a pair of winding numbers \((\nu^{0},\nu^{\pi})\in(\mathbb{Z}\times\mathbb{Z})\). The classification of the two non-commutative winding numbers relies on the mechanism of building a pair of symmetric time frames [70; 94; 95]. Consequently, the Floquet evolution operator acquires a form,
\[\hat{U}=\hat{F}\hat{G}, \tag{12}\]
where, \(\hat{F}\) and \(\hat{G}\) are related by the chiral symmetry operator (\(\hat{C}\)) as,
\[\hat{C}\hat{F}\hat{C}=\hat{G}^{-1}. \tag{13}\]
It is also easy to verify that if a symmetric time frame corresponding to a Floquet evolution operator, \(\hat{U}_{1}=\hat{F}\hat{G}\) exists, then there must exist another symmetric time frame corresponding to the Floquet operator, \(\hat{U}_{2}=\hat{G}\hat{F}\). Now the Floquet evolution operator in one symmetric time frame from \(t=T/2\) to \(t=3T/2\) reads as [96],
\[\hat{U}_{1}=e^{-i\lambda\hat{V}/2}e^{-i\hat{H}_{0}T}e^{-i\lambda\hat{V}/2}, \tag{14}\]
where,
\[\lambda\hat{V}=\sum_{j=1}^{N}[\mu_{A}\hat{c}_{A,j}^{\dagger}\hat{c}_{A,j}+\mu _{B}\hat{c}_{B,j}^{\dagger}\hat{c}_{B,j}]. \tag{15}\]
Similarly, using Eq. 12 and Eq. 13, the Floquet evolution in the second time frame takes the form,
\[\hat{U}_{2}=e^{-i\hat{H}_{0}T/2}e^{-i\lambda\hat{V}}e^{-i\hat{H}_{0}T/2}. \tag{16}\]
\(\hat{U}_{1}\) and \(\hat{U}_{2}\) are chiral symmetric partners and hence share the same quasi-energy spectrum as \(\hat{U}(T,0)\). Thus, a suitable combination of their winding numbers should be able to provide the topological invariants of the system via the following equations,
\[\nu^{0}=\frac{\nu^{\prime}+\nu^{\prime\prime}}{2}\quad;\quad\nu^{\pi}=\frac{ \nu^{\prime}-\nu^{\prime\prime}}{2}. \tag{17}\]
Here, \(\nu^{\prime}\) and \(\nu^{\prime\prime}\) are the winding numbers for the two effective Hamiltonians corresponding to the two symmetric time frames, \(\hat{U}_{1}\) and \(\hat{U}_{2}\), respectively. For pedagogical completeness, we depict the procedure for obtaining the bulk invariants in a uniformly driven system in the Appendix (Sec. V).
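Combining the pieces sketched above, \(\nu^{0}\) and \(\nu^{\pi}\) can be estimated as follows (a sketch reusing our `bdg_hamiltonian` and `winding_real_space` helpers; taking the principal branch of the matrix logarithm is a simplifying choice of ours and requires care for states pinned exactly at the \(\pm\pi/T\) gap):

```python
import numpy as np
from scipy.linalg import expm, logm

def floquet_windings(N, T, lam, **kw):
    """nu_0 and nu_pi from the two symmetric time frames, Eqs. (14)-(17)."""
    H0 = bdg_hamiltonian(N, lam=0.0, **kw)
    V = bdg_hamiltonian(N, t=0.0, Delta=0.0, mu=0.0, lam=lam)
    U1 = expm(-1j * V / 2) @ expm(-1j * H0 * T) @ expm(-1j * V / 2)       # Eq. (14)
    U2 = expm(-1j * H0 * T / 2) @ expm(-1j * V) @ expm(-1j * H0 * T / 2)  # Eq. (16)
    nu1, nu2 = (winding_real_space(1j * logm(U) / T, N) for U in (U1, U2))
    return (nu1 + nu2) / 2, (nu1 - nu2) / 2                               # Eq. (17)
```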
Now, we start our analysis by enumerating the scenario corresponding to different frequency regimes. Moreover, we want to obtain a frequency scale that demarcates the low- and high-frequency regimes.
Figure 5: The Floquet quasienergy spectrum is shown as a function of \(\lambda\) corresponding to (a) \(\delta=0.25\) and (b) \(\delta=0.5\).
Figure 6: Majorana Zero modes (MZMs) are shown as a function of \(\lambda\) in (a) and (c) corresponding to \(\delta=0.25\) and \(\delta=0.5\), respectively. The real-space winding number for zero-energy modes are shown in (b) and (d), corresponding to \(\delta=0.25\) and \(\delta=0.5\), respectively.
This can be achieved by plotting the winding number corresponding to the Majorana zero mode (\(\nu^{0}\)) as a function of the time period \(T\) and the driving amplitude \(\lambda\) for an arbitrary value of the dimerization strength, \(\delta=0.5\), in Fig. 3. The phase diagram demonstrates a smooth boundary separating the topologically non-trivial phase from a trivial phase in the high-frequency (small period) limit. This observation emulates the static scenario. On the other hand, in the low-frequency (large period) regime, an intriguing, albeit highly complex, situation emerges, distinct from that of the static counterpart. This non-triviality can be understood by expanding the Hamiltonian using the Baker-Campbell-Hausdorff (BCH) formula given as,
\[\begin{split}\ln(e^{X}e^{Y})=& X+Y+\frac{1}{2}[X,Y]+ \frac{1}{12}[X-Y,[X,Y]]\\ &-\frac{1}{24}[Y,[X,[X,Y]]]+...\end{split} \tag{18}\]
The effective Hamiltonian then assumes the form,
\[\begin{split}\hat{H}_{\rm eff}=&\hat{H}_{0}+\frac {\lambda\hat{V}}{T}+\frac{\lambda T}{2}[\hat{H}_{0},\hat{V}]\\ &+\frac{\lambda T}{12}[\hat{H}_{0},[\hat{H}_{0},\hat{V}]]-\frac{ \lambda^{2}}{12}[\hat{V},[\hat{H}_{0},\hat{V}]]+...\end{split} \tag{19}\]
In the high-frequency limit (\(T\ll 1\)) and for small driving strength (\(\lambda\)), \(H_{\rm eff}\) can be truncated up to first order as,
\[\hat{H}_{\rm eff}=\hat{H}_{0}+\frac{\lambda\hat{V}}{T}. \tag{20}\]
Such an expansion tells us that the Hamiltonian corresponding to a small time period (high frequency) is equivalent to the static Hamiltonian plus a renormalized potential (denoted by the second term in Eq. 20) that increases linearly with the frequency. The inclusion of the second term along with the static Hamiltonian demarcates the different topological phases via a straight line-like boundary in the \(\lambda-T\) plane, as shown in Fig. 3. However, in the limit of low frequency (large time period), one cannot ignore the effects of the additional nested commutators in Eq. 18, which become more important with increasing powers of \(T\). Clearly, in the low-frequency regime, the drive can induce longer-range interactions. As a result, one can get topologically protected zero-energy modes even when the QP potential strength is high. Furthermore, multiple phase transitions induced by the disorder can be observed in the limit of low frequency. To study the low-frequency regime further and to obtain insightful results, we fix a representative frequency, \(\omega=2.5\).
We show the phase diagrams via the real-space winding numbers (\(\nu^{0}\) and \(\nu^{\pi}\)) in the \(\delta-\lambda\) plane corresponding to the above-chosen driving frequency, namely, \(\omega=2.5\), in Figs. 4 (a) and (b). This value denotes a representative point in the low-frequency regime, and it hosts a critical phase (see Fig. 9 and the associated discussion below). We shall explain both the phase diagrams and the associated analysis in the following.
The Majorana zero-energy phase diagram (Fig. 4 (a)) of the driven system is seen to host an interesting topological behavior, which can be perceived by comparing it with the static scenario (Fig. 2). Specifically, it shows a trivial phase corresponding to weak values of the dimerization strength (\(\delta\)) and driving amplitude (\(\lambda\)), a region which was topologically non-trivial in the static counterpart (Fig. 2). Here, we choose two representative points, \(\delta=0.25\) and \(0.5\), to discuss the behavior of the system further. In the first case, the result demarcates a phase transition from a topologically trivial phase to a topologically non-trivial phase at a critical driving amplitude, say \(\lambda_{1}\) (shown in Fig. 4), corresponding to a certain dimerization range. Later, upon increasing the driving strength, the system undergoes another phase transition at a second critical point, \(\lambda_{2}\), where a transition from the topologically non-trivial to the trivial Anderson phase occurs. Therefore, we conclude that phase transitions induced by periodic kicking are accessible in this system, resulting in the presence and absence of non-trivial phases as opposed to the static scenario. Further, in the second case, the range of the topologically non-trivial phase increases in the large dimerization regime, resulting in a single phase transition. Therefore, we observe only a single transition from the topologically non-trivial to the trivial phase at the critical driving amplitude \(\lambda_{3}\).
In order to acquire a concrete understanding of the topological features observed in the phase diagram,
Figure 7: The probability distribution of eigenstates as a function of site index (\(j\)) are shown corresponding to (a) \(\lambda=0.65\), (b) \(\lambda=0.7\), (c) \(\lambda=1.9\), and (d) \(\lambda=2\) for a fixed frequency, say, \(\omega=2.5\).
we have shown the Floquet quasi-energy spectrum as a function of \(\lambda\) corresponding to two different values of the dimerization strength, say, \(\delta=0.25\) and \(0.5\), in Figs. 5(a) and (b), respectively. The choice of \(\delta=0.25\) derives motivation from the left panel of Fig. 4, where an initial trivial phase is driven into a topological phase beyond a certain value of \(\lambda\), which eventually gets destroyed at larger values of \(\lambda\). Thus, a topologically non-trivial phase corresponding to the Majorana zero modes appears in the spectrum, implying phase transitions occurring at two values of \(\lambda\), namely, \(\lambda_{1}\simeq 0.65\) and \(\lambda_{2}\simeq 2.00\). These values are shown in Fig. 4(a) by the intersection of the white dashed line with the red region. On the other hand, corresponding to \(\delta=0.5\), we observe the onset of the topological phase from the left edge of Fig. 4, that is, \(\lambda=0\), implying the emergence of MZMs. The topological phase exists up to \(\lambda\simeq 2.2\), beyond which the MZMs hybridize with the bulk. Subsequently, the number of MZMs and the winding number (\(\nu^{0}\)) as functions of \(\lambda\) confirm the validity of the bulk-edge correspondence, as shown in Figs. 6(a) and (b) corresponding to \(\delta=0.25\). In addition to that, we show the same for the other value of \(\delta\), namely, \(\delta=0.5\), in Figs. 6(c) and (d). Note that all these observations validate the phase diagram presented in Fig. 4(a).
Next, we have plotted the probability distribution of the zero-energy Floquet eigenstates as a function of the site index around the first transition point \(\lambda_{1}\), corresponding to \(\lambda=0.6\) and \(0.7\), in Figs. 7(a) and (b), respectively, for \(\delta=0.25\). A phase transition supported by the extended and localized nature of the zero-energy edge modes precisely indicates the existence of the first transition around the critical point \(\lambda_{1}=0.65\). On the other hand, we have plotted the same corresponding to other values of \(\lambda\) that lie on either side of the second transition point; namely, we have taken \(\lambda=1.9\) and \(2.0\) in Figs. 7(c) and (d), respectively. The complete localization of the states at the edges validates the signature of the MZMs, implying a topologically non-trivial phase being present at \(\lambda=1.9\). With an increase in the driving strength, the probability distribution at \(\lambda=2.0\) shows a critical behavior at the transition point, thereby indicating a transition to an Anderson phase.
Further, we explore the non-trivial topological features that emerge corresponding to the Majorana \(\pi\)-mode in the system. As expected, it is clear from Fig. 4(b) that the Majorana \(\pi\)-modes appear at stronger driving amplitudes \(\lambda\) than the MZMs, for the same values of the parameters as considered above. Here, we shall study the driving-induced features by focusing on a specific value of \(\delta\), such as \(\delta=0.5\), where we observe a phase transition occurring at \(\lambda\simeq 3\) (see Fig. 4(b)). For further clarification of the situation, we plot the winding number corresponding to the \(\pi\)-modes in Fig. 8(a). The data clearly indicate a phase transition from the topologically trivial to a topologically non-trivial phase around \(\lambda\simeq 3\). Moreover, we show the probability distribution of the \(\pi\)-modes and one of the bulk states as a function of the site index in Fig. 8(b). Most interestingly, we find the \(\pi\) modes to be localized at the edges along with a completely localized bulk. We have checked that all the bulk states are localized in nature. Therefore, a non-trivial phase comprising the \(\pi\) energy edge modes in addition to completely localized bulk states, which may be called the Floquet topological Anderson phase, is observed in this system.
### Localization properties
In this section, we shall explore the localization properties of bulk states in the periodically kicked setting.
Figure 8: The real-space winding number for \(\pi\)-energies is shown as a function of \(\lambda\) in (a). (b): The probability distributions as a function of the site index are shown corresponding to the \(\pi\) modes and the bulk state. The dimerization strength for all the cases is considered as \(\delta=0.5\).
To do that, we employ two diagnostic tools, namely, the inverse participation ratio (IPR) and the normalized participation ratio (NPR), to distinguish between the extended, critical, and localized phases. Similar to the non-driven case, the effective Floquet Hamiltonian can be solved using the Bogoliubov-de Gennes (BdG) transformation. This can be done by defining a quasiparticle operator in terms of a superposition of the single-particle creation (\(c^{\dagger}\)) and annihilation (\(c\)) operators via,
\[\Phi_{n}^{\dagger}=\sum_{j=1,\alpha=A,B}^{N}\Big{[}u_{j,\alpha}^{(n)}\hat{c}_{j,\alpha}^{\dagger}+v_{j,\alpha}^{(n)}\hat{c}_{j,\alpha}\Big{]}, \tag{21}\]
where \(\alpha\) and \(n\) denote the sublattice and the energy band indices, respectively, while \(u^{(n)}\) and \(v^{(n)}\) denote the particle and hole coefficients. Hence, we can define the IPR and NPR corresponding to the \(n^{\text{th}}\) Floquet eigenstate using \(u_{j}^{(n)}\) and \(v_{j}^{(n)}\) as [23],
\[\text{IPR}^{(n)}=\sum_{j=1,\alpha=A,B}^{N}\Big{[}|u_{j,\alpha}^{(n)}|^{4}+|v_{ j,\alpha}^{(n)}|^{4}\Big{]} \tag{22}\]
and,
\[\text{NPR}^{(n)}=\Bigg{[}L\sum_{j=1,\alpha=A,B}^{N}\Big{[}|u_{j,\alpha}^{(n)}| ^{4}+|v_{j,\alpha}^{(n)}|^{4}\Big{]}\Bigg{]}^{-1}. \tag{23}\]
In the thermodynamic limit, the IPR (NPR) values scale with the system size as \(L^{-1}\) (\(L^{0}\)) for an extended state. On the other hand, they vary as \(L^{0}\) (\(L^{-1}\)) for a localized state. Moreover, one can calculate the IPR and NPR averaged over all the Floquet eigenstates, given by,
\[\langle\text{IPR}\rangle=\frac{1}{L}\sum_{n=1}^{L}\text{IPR}^{(n)}\quad\text{ and}\quad\langle\text{NPR}\rangle=\frac{1}{L}\sum_{n=1}^{L}\text{NPR}^{(n)}. \tag{24}\]
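For concreteness, a minimal NumPy sketch of Eqs. (22)-(24) is given below. It assumes the BdG eigenvectors are already available as complex arrays `U` and `V`, normalized per state, with the sublattice index A/B flattened into the site index.

```python
import numpy as np

def ipr_npr(U, V):
    """Per-state IPR (Eq. 22) and NPR (Eq. 23) from BdG amplitudes.

    U, V: complex arrays of shape (L, M); column n holds u_j^(n)
    (resp. v_j^(n)) on the L lattice sites, with the sublattice
    index flattened into the site index j.
    """
    L = U.shape[0]
    # Eq. (22): IPR^(n) = sum_j |u|^4 + |v|^4
    ipr = np.sum(np.abs(U) ** 4 + np.abs(V) ** 4, axis=0)
    # Eq. (23): NPR^(n) = [L * (sum_j |u|^4 + |v|^4)]^{-1}
    npr = 1.0 / (L * ipr)
    return ipr, npr

# Eq. (24): spectrum-averaged values
# mean_ipr, mean_npr = ipr.mean(), npr.mean()
```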
In Fig. 9 we show the variation of \(\langle\text{IPR}\rangle\) and \(\langle\text{NPR}\rangle\) as a function of \(\lambda\) for different frequencies, namely \(\omega=0.5,\ 2.5,\ 3.5,\) and \(12.0\). We find all the eigenstates to be extended in the low-frequency limit, that is, for \(\omega=0.5\) in Fig. 9(a). Conversely, all the states are localized for any value of \(\lambda\) at a large frequency, namely \(\omega=12\), as shown in Fig. 9(d). This can be understood from the BCH-expanded Hamiltonian given in Eq. 19: in the high-frequency limit, the renormalized potential dominates over the rest of the parameters in the static counterpart. As a result, at relatively high (low) driving frequencies, all the states become localized (extended). At intermediate frequencies, such as \(\omega=2.5\) and \(3.5\), we find the emergence of all three phases, namely the extended, critical (multifractal), and localized phases, with increasing \(\lambda\). Also, once the system enters the localized phase, it remains localized upon any further increase of \(\lambda\). However, the multifractal nature of the states cannot be uniquely determined via the IPR or NPR alone. Moreover, we are interested in the global properties of the model. To this end, we introduce another quantity, \(\eta\), defined as [23; 30],
\[\eta=\log_{10}[\langle\text{IPR}\rangle\times\langle\text{NPR}\rangle]. \tag{25}\]
If only one type of state, extended or localized, is present in the system, one of the average values \(\langle\text{IPR}\rangle\) or \(\langle\text{NPR}\rangle\) remains finite while the other vanishes as \(L^{-1}\), so that \(\eta\leq-\log_{10}(L)\). A critical phase, in which both averages stay finite, instead corresponds to \(\eta\geq-\log_{10}(L)\). Therefore, in our case, where we have considered \(L=1220\), \(\eta\geq-3.08\) denotes a critical phase.
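A hedged sketch of this classification, taking the per-state IPR and NPR arrays from above as inputs:

```python
import numpy as np

def eta_diagnostic(ipr, npr, L):
    """Global diagnostic eta = log10(<IPR> * <NPR>) of Eq. (25)."""
    eta = np.log10(np.mean(ipr) * np.mean(npr))
    # Purely extended or purely localized spectra give eta <= -log10(L);
    # a critical (coexistence) phase gives eta >= -log10(L).
    phase = "critical" if eta >= -np.log10(L) else "extended or localized"
    return eta, phase
```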
In Figs. 10(a) and (b), we show the phase diagram using the \(\eta\) value as a function of \(\delta\) and \(\lambda\) for the non-driven and the driven cases, with the same frequency \(\omega=2.5\), for the sake of completeness. The phase diagram contains the overall information on the localized, critical, and extended phases that appear in the model. Comparing the driven system with the undriven one, it is clear that the effective potential strength decreases in the driven case, and the plot indicates a broader region for the critical phase (see Fig. 10(b)).

Figure 10: A static phase diagram is shown in panel (a) using the parameter \(\eta\) as a function of dimerization strength (\(\delta\)) and the onsite QP potential strength (\(\lambda\)). Whereas, panel (b) depicts the \(\eta\) phase diagram corresponding to a driven scenario. Extended, critical, and localized phases are shown via ‘EP’, ‘CP’ and ‘LP’ respectively.

Finally, a finite-size analysis is shown in Figs. 11(a,b,c) using another quantity, namely, the fractal dimension, \(D_{2}\), which helps in identifying different states accurately and is defined as [26],

\[D_{2}=-\lim_{L\rightarrow\infty}\frac{\log(\text{IPR})}{\log(L)}. \tag{26}\]

It has a value of \(1\) (\(0\)) corresponding to the extended (localized) states in the thermodynamic limit, while the critical (multifractal) states take values between \(0\) and \(1\). In Fig. 11(a), we plot \(D_{2}\) as a function of the ratio of the eigenstate index (\(n\)) and the system size (\(L\)) corresponding to \(\lambda=0.5\). The \(D_{2}\) values move towards \(1\) as the system size is increased, thereby signaling the onset of a completely extended bulk. Later, a mobility edge emerges between the critical and the localized phases, as confirmed in Fig. 11(b) for \(\lambda=1.5\). In the end, at a higher value of the driving amplitude, namely \(\lambda=3.5\), we observe complete localization. In all three cases, we have chosen \(\delta=0.5\) and considered half of the quasi-energy spectrum owing to the presence of the chiral symmetry.
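The finite-size extraction of \(D_{2}\) can be illustrated with a short sketch; approximating the limit in Eq. (26) by the slope of a linear fit of \(\log(\text{IPR})\) against \(\log(L)\) is our choice here, and the test data below are synthetic.

```python
import numpy as np

def fractal_dimension(iprs, sizes):
    """Finite-size estimate of D2 = -lim log(IPR)/log(L), Eq. (26),
    extracted as minus the slope of log(IPR) versus log(L)."""
    slope, _ = np.polyfit(np.log(sizes), np.log(iprs), 1)
    return -slope

# Synthetic check with the system sizes used in Fig. 11:
sizes = np.array([754, 1220, 1974])
print(fractal_dimension(1.0 / sizes, sizes))      # extended state, IPR ~ 1/L -> 1.0
print(fractal_dimension(np.full(3, 0.3), sizes))  # localized state, IPR ~ const -> ~0
```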
## IV Conclusions
In this paper, we consider a one-dimensional dimerized Kitaev chain under a periodic drive of the quasiperiodically modulated onsite chemical potential. We have analyzed the topological and localization properties due to the interplay of the periodic drive and the dimerization term present in the system. Based on the behavior of the Majorana zero and \(\pi\) modes, we have found that low driving frequencies induce intriguing features absent in the static (undriven) counterpart. For the Majorana zero mode, driving induces a phase transition from a trivial to a topologically non-trivial phase, followed by another transition from the topological to an Anderson localized phase, found for a specific range of the dimerization strength. Most interestingly, a phase consisting of localized \(\pi\)-modes at the edges together with completely localized bulk states, namely the Floquet topological Anderson phase, is found at large values of the driving strength. Further, the localization properties of the bulk states are analyzed. The observations indicate a fully extended phase in the low-frequency range, while complete localization of all the states is established in the high-frequency regime; localized, critical, and extended phases coexist in an intermediate frequency region. Finally, we have corroborated our analysis via a finite-size scaling of the fractal dimension.
## V Appendix: Winding number for a uniform drive
In addition to an incommensurate QP potential, one can also evaluate bulk invariants corresponding to a homogeneous drive, that is, \(\mu_{A}=\mu_{B}=\lambda\sum_{m=-\infty}^{\infty}\delta(t-mT)\). Under periodic boundary conditions, we can write down the static counterpart of the Hamiltonian in momentum space as,
\[H=\begin{pmatrix}-\mu&P(k)&0&Q(k)\\ P^{*}(k)&-\mu&-Q^{*}(k)&0\\ 0&-Q(k)&\mu&-P(k)\\ Q^{*}(k)&0&-P^{*}(k)&\mu\end{pmatrix}, \tag{27}\]
where,
\[P(k)=-t[(1+\delta)+(1-\delta)e^{-ika}] \tag{28}\]
and
\[Q(k)=\Delta[(1+\delta)-(1-\delta)e^{-ika}] \tag{29}\]
Figure 11: The fractal dimension \(D_{2}\) is plotted as function of \(n/L\) corresponding to (a) \(\lambda=0.5\), (b) \(\lambda=1.5\) and (c) \(\lambda=3.5\). The dimerization strength is taken as \(\delta=0.5\). The system sizes chosen for this calculation are \(L=754,~{}1220\), and \(1974\).

Note that the effective Hamiltonian, as in Eq. 10, for a uniform drive as above does not necessarily inherit the symmetries of the original Hamiltonian. Moreover, we are interested in a \(\mu\neq 0\) situation, where the chiral symmetry is defined by \(\tilde{C}=\hat{\sigma}_{x}\otimes\hat{\sigma}_{0}\), such that the Hamiltonian belongs to the BDI class. Hence, the computation of bulk invariants under the same classification in a driven scenario requires a pair of symmetric time frames. Following the expressions given in Eqs. 11 and 12, one can obtain the effective Hamiltonians \(H_{\text{eff}}^{m}\) (\(m=1,2\)) corresponding to the symmetric time frames [79; 70; 94]. Hence, the topological phases of the system can be characterized by a pair of winding numbers that satisfy Eq. 17.
Further, the evaluation of the winding numbers in each frame requires the introduction of a unitary operator (\(\hat{U}_{s}\)) constructed using the chiral basis, with the help of which \(H_{\rm eff}^{m}\) can be made off-diagonal in its canonical (chiral) basis representation. For a \(\mu\neq 0\) scenario, the unitary operator \(\hat{U}_{s}\) takes the form,
\[\hat{U}_{s}=\frac{1}{\sqrt{2}}\begin{pmatrix}\hat{I}&\hat{I}\\ -i\hat{I}&i\hat{I}\end{pmatrix}, \tag{30}\]
such that, if \(\hat{U}_{s}\tilde{C}\hat{U}_{s}^{\dagger}={\rm diag}(\hat{I},-\hat{I})\), then,
\[\hat{U}_{s}\hat{H}_{\rm eff}^{m}\hat{U}_{s}^{\dagger}=\begin{pmatrix}0&S(k)\\ S^{\dagger}(k)&0\end{pmatrix} \tag{31}\]
where,
\[S(k)=\begin{pmatrix}-i\mu&i\,(P(k)-Q(k))\\ i\,(P^{*}(k)+Q^{*}(k))&-i\mu\end{pmatrix}. \tag{32}\]
Now, one can define the chiral index by the following expression,
\[\nu^{m}=\frac{1}{2\pi i}\int_{-\pi/a}^{\pi/a}dk\,\partial_{k}\ln\det S(k). \tag{33}\]
One can evaluate the pair of winding numbers \((\nu^{0},\nu^{\pi})\) by following Eq. 17.
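A minimal numerical sketch of Eq. (33) for the static Hamiltonian is given below; it accumulates the phase winding of \(\det S(k)\) across the Brillouin zone and assumes a gapped spectrum so that \(\det S(k)\neq 0\). The driven case would use the \(S(k)\) of \(H_{\rm eff}^{m}\) in each symmetric time frame instead.

```python
import numpy as np

def winding_number(mu, t, Delta, delta, a=1.0, nk=4001):
    """Winding of det S(k) across the BZ, cf. Eq. (33):
    nu = (1 / 2 pi i) \int dk d_k ln det S(k); assumes det S(k) != 0."""
    k = np.linspace(-np.pi / a, np.pi / a, nk, endpoint=False)
    P = -t * ((1 + delta) + (1 - delta) * np.exp(-1j * k * a))      # Eq. (28)
    Q = Delta * ((1 + delta) - (1 - delta) * np.exp(-1j * k * a))   # Eq. (29)
    det_S = -mu**2 + (P - Q) * (np.conj(P) + np.conj(Q))            # det of Eq. (32)
    dphi = np.angle(np.roll(det_S, -1) / det_S)   # branch-safe phase increments
    return int(np.rint(dphi.sum() / (2 * np.pi)))
```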
Fig. 12 shows the topological phase diagrams in terms of \(\nu^{0}\) and \(\nu^{\pi}\), plotted in the \(\mu-\Delta\) plane. The results correctly predict the number of zero and \(\pi\) edge modes in the real-space spectrum shown in Fig. 12(a). Additionally, the line along \(\Delta=0\) signifies the importance of particle-hole symmetry in protecting the topology of the system, as shown in Figs. 12(b) and 12(c).
| The interplay between topology and disorder in non-equilibrium quantum systems is an intriguing subject, and here we look for a suitable platform to understand it more deeply. To this end, we consider a one-dimensional dimerized Kitaev chain subjected to an onsite quasiperiodic potential whose amplitude is modulated periodically in time, and analyze its properties. To probe these properties, we perform real-space winding number calculations corresponding to both the Majorana zero modes and the \(\pi\)-energy modes, and examine scenarios at various driving frequencies. In particular, in an intermediate frequency regime, the phase diagram of the zero modes exhibits two distinct transitions between topologically trivial and non-trivial phases. The study of the \(\pi\) modes, on the other hand, reveals the emergence of a unique topological phase in which both the bulk and the edges are completely localized, namely the Floquet topological Anderson phase.
2303.17818 | Lamb dip of a Quadrupole Transition in H$_2$ | The saturated absorption spectrum of the hyperfine-less S(0) quadrupole line
in the (2-0) band of H$_2$ is measured at $\lambda=1189$ nm, using the
NICE-OHMS technique under cryogenic conditions (72~K). It is for the first time
that a Lamb dip of a molecular quadrupole transition is recorded. At low
(150-200 W) saturation powers a single narrow Lamb dip is observed, ruling out
an underlying recoil doublet of 140 kHz. Studies of Doppler-detuned resonances
show that the red-shifted recoil component can be made visible for low
pressures and powers, and prove that the narrow Lamb dip must be interpreted as
the blue recoil component. A transition frequency of 252\,016\,361\,164\,(8)
kHz is extracted, which is off by -2.6 (1.6) MHz from molecular quantum
electrodynamical calculations therewith providing a challenge to theory. | F. M. J. Cozijn, M. L. Diouf, W. Ubachs | 2023-03-31T06:23:30 | http://arxiv.org/abs/2303.17818v1 | # Lamb dip of a Quadrupole Transition in H\({}_{2}\)
###### Abstract
The saturated absorption spectrum of the hyperfine-less S(0) quadrupole line in the (2-0) band of H\({}_{2}\) is measured at \(\lambda\) = 1189 nm, using the NICE-OHMS technique under cryogenic conditions (72 K). It is for the first time that a Lamb dip of a molecular quadrupole transition is recorded. At low (150-200 W) saturation powers a single narrow Lamb dip is observed, ruling out an underlying recoil doublet of 140 kHz. Studies of Doppler-detuned resonances show that the red-shifted recoil component can be made visible for low pressures and powers, and prove that the narrow Lamb dip must be interpreted as the blue recoil component. A transition frequency of 252 016 361 164 (8) kHz is extracted, which is off by -2.6 (1.6) MHz from molecular quantum electrodynamical calculations therewith providing a challenge to theory.
The hydrogen molecule has been a test ground for the development of molecular quantum mechanics for almost a century [1]. In the recent decade the level of precision has advanced rapidly in benchmark experimental studies focusing on its dissociation and ionization energies [2; 3], now reaching perfect agreement with first-principles calculations based on four-particle variational calculations and including relativistic and quantum electrodynamic (QED) effects [4; 5]. The target of activity has in part shifted to measurements of the vibrational quantum in the hydrogen molecule. In the HD isotopologue, vibrational transitions were detected in saturation: Lamb dip spectroscopy could be performed at high precision due to the weak dipole moment in this heteronuclear species [6; 7]. However, in these studies a problem of extracting rovibrational transition frequencies surfaced. Observed asymmetric lineshapes were interpreted in various ways, in terms of underlying hyperfine structure and cross-over resonances [8], of Fano-type interferences [9], and of effects of standing waves in the optical cavity [10]. This situation, imposing unclarity on the extraction of energy separations between quantum levels, has halted further progress in the precision metrology of HD, although a focused activity remains [11; 12].
In the homonuclear H\({}_{2}\) species selection rules govern that only quadrupole transitions are allowed and those are two orders of magnitude weaker than the dipole absorption transitions in HD [13]. Vibrational transitions in H\({}_{2}\) have been probed in Doppler-broadened spectroscopy [14; 15; 16], through combination differences of Doppler-free electronic transitions [17], and recently via stimulated Raman scattering [18]. While all rovibrational levels in HD are subject to complex hyperfine structure induced by the magnetic moment of both H and D nuclei, H\({}_{2}\) has the advantage that the levels in para-H\({}_{2}\) exhibit no hyperfine substructure. Until today no saturation spectroscopy has been performed on quadrupole transitions, neither in H\({}_{2}\) nor in any other molecule.
For performing saturation spectroscopy of an extremely weak quadrupole transition, a novel setup was built as an upgrade from the setup used for the HD experiments [7; 8]. The optical cavity is redesigned to suppress vibrations and attached to a cryo-cooler to reach temperatures in the range 50-300 K. At the typical operation temperature of 72 K, a larger fraction of the population is condensed into the H\({}_{2}\)\(J=0\) ground level, while the transit-time broadening through the intracavity laser beam is reduced. The laser, an external-cavity diode laser with a tapered amplifier running at 1189 nm, is locked to the cavity for short-term stability and to a frequency-comb laser for sub-kHz accuracy in long-term measurements, thus also providing the absolute frequency scale. This stability allows for long-time averaging over multiple scans. The 371 mm hemispherical resonator is equipped with highly reflective (R \(>\) 0.99999) mirrors, of which the concave mirror has an ROC of 2 m. This yields a finesse of 350,000, an intracavity circulating power of up to 10 kW, and a beam waist of 542 \(\mu\)m. Further details on the cryogenic NICE-OHMS spectrometer will be given in a forthcoming publication [19].
Detection is based on the technique of noise-immune cavity-enhanced optical heterodyne molecular spectroscopy (NICE-OHMS) [20; 21; 22] using sideband modulation of the carrier wave, at frequency \(f_{c}\pm f_{m}\) with \(f_{m}=404\) MHz, matching the free-spectral-range (FSR) of the cavity for generating the heterodyne NICE-OHMS signal. The carrier and the two generated sidebands are locked to the cavity with Pound-Drever-Hall and DeVoe-Brewer [23] stabilization, respectively. Consequently, the three beams counterpropagate inside the cavity and interact with the molecules present, giving rise to various sub-Doppler spectroscopic signals, of which two possible schemes are shown in Fig. 1. In panel (a), where the carrier is set on top of the resonance center, the counterpropagating carrier beams burn a hole in the center of the velocity distribution (at \(v_{z}=0\)) and generate the generic Lamb dip signal. Additionally, saturation conditions are formed by the red/blue sidebands interacting simultaneously with molecules with velocities \(k\cdot v_{z}=\pm f_{m}\) and burning holes at their respective positions [24]. However, this effect is typically negligible in the weakly saturating regime and for conditions of low sideband power.
In panel (b), on the other hand, the laser is detuned from the molecular resonance by \(f_{m}/2\) = 202 MHz (or
FSR/2). Here, one of the sidebands (the red sideband in this example), in combination with the counterpropagating carrier beam, interacts with molecules with velocities \(k\cdot v_{z}=\pm f_{m}/2\). As both beams are in resonance for this velocity class, a pump-probe scheme is formed, resulting in Doppler-detuned saturation signals. Since the required detuning is exactly known and the resulting Doppler shift is equal, this can be seen as an alternative scheme for Doppler-free spectroscopy, as only the known detuning needs to be considered to extract the transition frequency. The novelty of this scheme is that the ordinarily strong on-resonance standing wave, present for the usual carrier-carrier saturation, is now converted to mostly a travelling wave due to the low-intensity sideband. This allows one to mitigate possible effects of the strong on-resonance standing wave.
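As a quick numeric cross-check of the quoted modulation and detuning frequencies (a sketch; only the cavity length enters):

```python
import scipy.constants as const

L_cav = 0.371                       # resonator length [m], from the setup above
fsr = const.c / (2 * L_cav)         # free spectral range of a linear cavity
print(f"FSR       ~ {fsr/1e6:.0f} MHz")    # ~404 MHz, matching f_m
print(f"FSR/2     ~ {fsr/2e6:.0f} MHz")    # ~202 MHz, the panel (b) detuning
```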
In addition to the sideband modulation for the heterodyne signal, slow wavelength modulation of the cavity length is applied at 395 Hz with a peak-to-peak amplitude of 50 kHz. This allows for lock-in detection, where demodulation at the first derivative (\(1f\)) is applied. The \(1f\) profile function is defined as a derivative of a typical dispersive Lorentzian profile [25]
\[f(\nu)_{1f}=\frac{4\,A\,\big{[}\Gamma^{2}-4(\nu-\nu_{0})^{2}\big{]}}{\big{[} \Gamma^{2}+4(\nu-\nu_{0})^{2}\big{]}^{2}}, \tag{1}\]
where the adjustable parameters are the line position \(\nu_{0}\), the line intensity \(A\), and the width \(\Gamma\).
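A sketch of how such a profile can be fitted, assuming `scipy` is available; the constant `offset` is an added baseline parameter not present in Eq. (1), and `freq_kHz`/`signal` are placeholder data arrays.

```python
import numpy as np
from scipy.optimize import curve_fit

def f1f(nu, nu0, A, Gamma, offset):
    """1f demodulated dispersive Lorentzian of Eq. (1), plus a baseline."""
    d = nu - nu0
    return 4 * A * (Gamma**2 - 4 * d**2) / (Gamma**2 + 4 * d**2) ** 2 + offset

# popt, pcov = curve_fit(f1f, freq_kHz, signal, p0=[0.0, 1.0, 230.0, 0.0])
# nu0, A, Gamma = popt[:3]   # Lamb-dip position, intensity, and width
```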
Measurements of the S(0) (2-0) line were performed in the cryo-NICE-OHMS setup under a variety of conditions of intracavity power and pressure. Since the quadrupole transition is extremely weak, with a line strength \(S=1.6\times 10^{-27}\) cm/molecule and Einstein coefficient \(A=1.3\times 10^{-7}\) s\({}^{-1}\)[16], it was anticipated that extreme powers would be required to obtain a saturation signal. At high power the recorded spectra displayed complex lineshapes, as shown in panel (a) of Fig. 2, reminiscent of the dispersive line profiles observed in HD [8; 9; 10; 26]. Surprisingly, by lowering the power, the complex lineshapes at 2.0 kW turn into an asymmetric, dispersive-like profile at 1.0 kW, and finally into a symmetric profile at 150 W. Panels (b) and (c) show the symmetric Lamb dip obtained at the lower powers of 150 W and 200 W, where each individual measurement was obtained after 12 hours of averaging. A \(1f\) dispersive Lorentzian (Eq. 1) fit was then applied to the symmetric profiles to extract relevant parameters such as the Lamb dip position and the linewidth.
Large sets of data were obtained, mainly at 150 W, 200 W, and 300 W, where symmetric lines were observed, but also at higher powers. The extracted positions of the Lamb dip were treated in a multivariate analysis yielding a transition frequency extrapolated to zero-pressure and zero-power of \(f=252\,016\,361\,234.4\,(7.3)\) kHz, which we will refer to as the 'generic Lamb dip' in the following. Some subsets of pressure-dependent (at 150 W) and power-dependent (at 0.25 Pa and 0.10 Pa) curves are shown in Fig. 3.
Extrapolating the extracted widths to zero pressure yields a linewidth limit of 205 kHz (FWHM) for 150 W, which still overestimates the actual limit as dithering effects are not removed. These values are considerably smaller than the calculated 471 kHz transit-time width (for 72 K) [27] and can be attributed to the selection of cold molecules in the weakly saturating regime [28], as observed in our previous work on HD [7]. The observed width corresponds to a most probable velocity of \(v_{\rm mp}=335\) m/s and a temperature of around 13 K.

Figure 1: Possible interactions of the three fields (carrier and two sidebands) inside the cavity with a molecular resonance. Panel (a) represents the generic Lamb dip generated by the counterpropagating carrier beam on resonance. Additional holes are burned in the Doppler broadened profile with the combined interaction of the red/blue sidebands. In panel (b), the carrier is detuned off resonance by \(f_{m}/2\) (or FSR/2). In that condition, one of the sidebands (in this example red) interacts with the counterpropagating carrier and consequently burns holes at \(\pm\) FSR/2.

Figure 2: Recorded spectra of the measured Lamb dip for the S(0) (2-0) quadrupole transition in H\({}_{2}\) at 72 K at (a) 0.10 Pa for different circulating power as indicated. Panels (b) and (c) respectively show the measured spectra at the lower circulating powers of 150 Watt and 200 Watt for different pressure indicated. A \(1f\) dispersive Lorentzian (Eq. 1) was superimposed on the measured spectra. The absolute frequency scale is given via \(f_{0}=252\,016\,360\) MHz.
In order to accurately extract the transition frequency one needs to consider and correct for the known Doppler shifts and the resulting recoil from conservation of momentum. The total energy carried by a photon for making a transition or released from emission is expressed as [29]
\[E_{\rm photon}=h\nu_{0}\pm\frac{h\vec{k}\cdot\vec{v}}{2\pi}\pm\frac{(h\nu_{0})^{ 2}}{2mc^{2}}-\frac{(h\nu_{0})v^{2}}{2c^{2}}. \tag{2}\]
Here, \(h\nu_{0}\) is the true energy difference between quantum levels. The second term is the first order Doppler shift and is equal to zero under conditions of saturation. The third term is the recoil shift, where the plus/minus sign refers to the case of absorption/stimulated emission. The final term represents the second-order (relativistic) Doppler effect, which is as small as 160 Hz (for 13 K).
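The sizes of the last two terms of Eq. (2) can be checked with a few lines (a sketch; the H\({}_{2}\) mass is approximated by twice the atomic hydrogen mass, and the 335 m/s most probable velocity from above is used):

```python
import scipy.constants as const

nu0 = 252_016_361_164e3                 # S(0) (2-0) transition frequency [Hz]
m = 2 * 1.00794 * const.atomic_mass     # approximate H2 mass [kg]

recoil = const.h * nu0**2 / (2 * m * const.c**2)   # third term of Eq. (2)
doppler2 = nu0 * 335.0**2 / (2 * const.c**2)       # last term, v_mp = 335 m/s

print(f"recoil shift      ~ {recoil/1e3:.0f} kHz")  # ~70 kHz (doublet: 140 kHz)
print(f"2nd-order Doppler ~ {doppler2:.0f} Hz")     # ~160 Hz at ~13 K
```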
In saturation spectroscopy two recoil components are associated with each quantum transition due to conservation of momentum [30]. A high-frequency (blue-detuned) component occurs for absorption from ground-state particles, and a low-frequency (red-detuned) component for stimulated emission from excited-state particles (Fig. 4). Both components will form individual Lamb dips as a characteristic recoil doublet, split at twice the recoil shift and centered around the resonance center. Despite the significance of recoil for extracting the transition frequency, it is often neglected in saturation spectroscopy, as the observed linewidths are typically significantly larger than the recoil doublet splitting, making the doublet unresolvable. Nevertheless, there have been studies in which the recoil doublet has been successfully resolved in atoms [31; 32; 33; 34] and in molecules [35; 36; 37; 28].
For the present case of H\({}_{2}\) and the transition frequency of the S(0) line, this recoil amounts to 70 kHz and a total splitting of 140 kHz. As this is only marginally smaller than the observed linewidth of 230 kHz, effects of the recoil splitting are expected to be visible on the observed lineshape. Model calculations are presented for the known recoil doublet splitting and a variety of widths that each component may obtain (Fig. 4). Comparison of simulated profiles with measurements reveals that the observed lineshape cannot be composed from both recoil components. In fact, comparing Figs. 4(b) and (c), a distinction is clearly seen: the measured Lamb dip does not display any hint of an unresolved recoil doublet and is perfectly fitted by a \(1f\) Lorentzian profile composed of a single transition. This leads to the conclusion that the observed generic Lamb dip at low power cannot consist of both recoil components and that suppression of one of the recoil components has occurred.
There have been studies on suppression of both the red-shifted [32] and blue-shifted [33] recoil components. In either case this was achieved by depopulating the upper state or ground state, respectively, through (optical) pumping. Also, in an early observation of the resolved recoil doublet, unequal intensity components were found, in which the red-shifted component was suppressed under some conditions [35]. In the theoretical derivation from Kol'chenko et al. [30] it was found that the ratio of the depths of the recoil components is determined by the lifetimes of the states involved. From these observations and findings it can be reasoned that, due to the two-step process of the stimulated emission scheme and the typically shorter lifetime of the excited state, the red-shifted recoil component is more easily suppressed.

Figure 4: (a) Schematic of the blue (absorption) and red (stimulated emission) recoil components when the two counter-propagating lasers are close to a resonant transition \(|e\rangle\leftarrow|g\rangle\). (b) Modeled profiles of a recoil doublet for the expected recoil splitting of 140 kHz and for varying levels of broadening. (c) A measurement of a ’generic Lamb dip’ at a pressure 0.1 Pa, intracavity power of 0.2 kW and temperature 72 K. The observed symmetric lineshape cannot be modeled by any of the simulated profiles that include the 140 kHz recoil splitting while the simple \(1f\) dispersive Lorentzian fit produces perfect agreement.

Figure 3: Extracted positions of the S(0) Lamb dip at different powers and pressures and extrapolation to zero values. (a) Pressure dependence and shift at \(P=150\) W; (b) Pressure dependent width at 150 W (FWHM); (c) and (d) Power-dependent shifts at \(p=0.25\) and 0.1 Pa. The pressure- and power-dependent slopes are as indicated. The absolute frequency scale is given via \(f_{0}=252\,016\,360\) MHz.
In the case of H\({}_{2}\) the natural lifetimes of the states are non-restrictive, and other relaxation effects must be considered. Collisional effects, finite transit times, and effects of the strong standing wave can all be considered effective mechanisms of depopulation or dephasing. Compared to the previous studies where the recoil doublet was successfully resolved, our present study operates at around one to two orders of magnitude higher pressure. The most striking difference between this study and the previous studies on the CH\({}_{4}\) molecule is the use of extreme laser intensities to saturate the weak quadrupole transition. In our study, up to 10-12 orders of magnitude higher intensities [28; 37] are present, which can lead to significant standing-wave effects in the optical resonator.
Effects of a strong standing wave on neutral molecules had been theoretically explored in the past by Letokhov and Chebotayev [38]. The finite polarizability of molecules leads to an axial striction force due to the strong electric field gradient, imposing axial velocity modulation, or ultimately, even axial trapping of molecules. The resulting velocity modulation can easily lead to effective dephasing of resonant molecules and can be considered as a depopulation effect. For the condition on resonance, which is usually the case for saturation spectroscopy, the striction force can be severely enhanced as the dynamic polarizability changes significantly near the molecular resonance. Recently the effect of standing waves on the vibrational spectrum of HD [10] and H\({}_{2}\)[39] was considered.
In order to mitigate the possible effects of the on-resonance strong standing wave and simultaneously prolong the transit time through the intracavity laser beam, Doppler-detuned measurements were performed at FSR/2 detuning, as shown in Fig. 1(b). A direct overlap of the two different NICE-OHMS signals (the central carrier-carrier resonance and the FSR/2 detuned carrier-sideband) is accomplished by correcting for the detuning frequency (Fig. 5). A comparison between both measurement schemes is made, and blue and red markers for the supposed recoil positions are added, where the blue line indicates the extracted frequency position of the ‘generic Lamb dip’. This shows that under reduced probe (sideband) powers the red recoil component increases in amplitude, from which it can be concluded that the ‘generic Lamb dip’ at low powers is composed of the blue recoil component only. Moreover, at reduced pressure and very low sideband amplitude, the red recoil component is nearly fully recovered. Note that at powers of 1 kW the lineshapes of the individual components become asymmetric. The summation of the unequal intensity recoil components then causes an apparent frequency shift of the weaker (red) component.
From these systematic studies we conclude that the observed generic Lamb dip corresponds to the blue recoil shifted component. Correcting for the recoil shift of 70 kHz, and taking into account the contributions to the overall uncertainty (7.3 kHz statistical, with pressure and power effects included, and calibration 1 kHz) the frequency of the S(0) (2-0) quantum transition in H\({}_{2}\) is determined at 252 016 361 164 (8) kHz, with a relative precision of \(3\times 10^{-11}\) representing the most accurate determination of a vibrational splitting in a hydrogen isotopologue [40].
Comparing with the best theoretical result for the S(0) transition frequency, obtained via the approach of non-adiabatic perturbation theory (NAPT) [41] and computed with the H2SPECTRE code [42] the experimental result is higher by 2.6 MHz. The uncertainty from this NAPT approach is 1.6 MHz, hence the deviation between experiment and theory is at 1.6\(\sigma\), and determined by the \(E^{(5)}\) leading order QED-term. Part of this \(E^{(5)}\) term was recently recomputed [43] but the issue of systematic discrepancies for vibrational splittings in HD, at the level of 1.9\(\sigma\), was not resolved. Now deviations of a similar size are found for the homonuclear H\({}_{2}\) species. For the binding energy of two particular levels in H\({}_{2}\), \(J=0,1\) in \(v=0\), separate and dedicated calculations of relativistic and QED corrections were carried out employing nonadiabatic explicitly correlated Gaussian wave functions, yielding an accuracy of 0.78 MHz [4]. The present experimental results pose a challenging test bench for such advanced theoretical approach.
As an outlook we note that the lifetimes of all rovibrational levels in the H\({}_{2}\) electronic ground state exceed \(10^{5}\) s [44], thus allowing in principle for metrology of 20-digit precision if the natural lifetime limit can be reached. This will push tests of molecular quantum electrodynamics and searches for physics beyond the Standard Model [45] to the extreme. The present experiment signifies a step in that direction.
Figure 5: Comparison of NICE-OHMS signals for Carrier-Carrier and Carrier-Sideband (FSR/2 detuned) schemes at 1 kW intracavity power and at (a) 0.25 Pa and (b) 0.10 Pa. The sideband (probe) has been kept to a low power of 5 W, equivalent to 0.5% of the carrier (pump) power. Note that the blue line corresponds to the measured frequency of the generic Lamb dip as in Fig. 2.

Financial support from the Netherlands Organisation for Scientific Research (NWO), via the Program "The Mysterious Size of the Proton", is gratefully acknowledged. We thank several members of the Quantum Metrology & Laser Applications group at VU Amsterdam (Edcel Salumbides, Max Beyer, Kjeld Eikema, Jeroen Koelemeij, Yuri van der Werf, Hendrick Bethlem) for helpful discussions.
| The saturated absorption spectrum of the hyperfine-less S(0) quadrupole line in the (2-0) band of H\({}_{2}\) is measured at \(\lambda=1189\) nm using the NICE-OHMS technique under cryogenic conditions (72 K). This marks the first time that a Lamb dip of a molecular quadrupole transition has been recorded. At low saturation powers (150-200 W), a single narrow Lamb dip is observed, ruling out an underlying recoil doublet of 140 kHz. Studies of Doppler-detuned resonances show that the red-shifted recoil component becomes visible at low pressures and powers, proving that the narrow Lamb dip must be interpreted as the blue recoil component. A transition frequency of 252 016 361 164 (8) kHz is extracted, which is off by -2.6 (1.6) MHz from molecular quantum electrodynamical calculations, thereby providing a challenge to theory.
2309.14392 | Unveiling Fairness Biases in Deep Learning-Based Brain MRI
Reconstruction | Deep learning (DL) reconstruction particularly of MRI has led to improvements
in image fidelity and reduction of acquisition time. In neuroimaging, DL
methods can reconstruct high-quality images from undersampled data. However, it
is essential to consider fairness in DL algorithms, particularly in terms of
demographic characteristics. This study presents the first fairness analysis in
a DL-based brain MRI reconstruction model. The model utilises the U-Net
architecture for image reconstruction and explores the presence and sources of
unfairness by implementing baseline Empirical Risk Minimisation (ERM) and
rebalancing strategies. Model performance is evaluated using image
reconstruction metrics. Our findings reveal statistically significant
performance biases between the gender and age subgroups. Surprisingly, data
imbalance and training discrimination are not the main sources of bias. This
analysis provides insights of fairness in DL-based image reconstruction and
aims to improve equity in medical AI applications. | Yuning Du, Yuyang Xue, Rohan Dharmakumar, Sotirios A. Tsaftaris | 2023-09-25T11:07:25 | http://arxiv.org/abs/2309.14392v1 | # Unveiling Fairness Biases in Deep Learning-Based Brain MRI Reconstruction
###### Abstract
Deep learning (DL) reconstruction particularly of MRI has led to improvements in image fidelity and reduction of acquisition time. In neuroimaging, DL methods can reconstruct high-quality images from undersampled data. However, it is essential to consider fairness in DL algorithms, particularly in terms of demographic characteristics. This study presents the first fairness analysis in a DL-based brain MRI reconstruction model. The model utilises the U-Net architecture for image reconstruction and explores the presence and sources of unfairness by implementing baseline Empirical Risk Minimisation (ERM) and rebalancing strategies. Model performance is evaluated using image reconstruction metrics. Our findings reveal statistically significant performance biases between the gender and age subgroups. Surprisingly, data imbalance and training discrimination are not the main sources of bias. This analysis provides insights of fairness in DL-based image reconstruction and aims to improve equity in medical AI applications.
Keywords:Fairness Image Reconstruction Algorithm Bias Neuroimaging.
## 1 Introduction
Magnetic resonance imaging (MRI) is routinely used to help diagnose or ascertain pathophysiological states in a noninvasive and harmless manner. However, MRI is characterised by long acquisition times, and there is an interest in improving imaging fidelity whilst reducing acquisition time. A solution is to subsample the frequency domain (k-space). This introduces aliasing artefacts in the image domain due to the violation of the Nyquist sampling theorem, causing difficulties in neuroimaging such as biomarker extraction and interpretation.
Recently, deep learning (DL) methods based on convolutional neural networks (CNNs) have been proposed to reconstruct high-quality images from the undersampled k-space data [5]. By learning complex patterns from large amounts of training data and filling in missing k-space data, these DL models successfully reconstruct images that closely resemble those obtained through fully sampled acquisitions. Advances in DL-based image reconstruction enable both accelerated acquisition and high-quality imaging, providing significant benefits.
Deep learning methods may be subject to biases (e.g., from the training dataset), which can lead to unfairness and lack of equity. For example, recent studies have shown that image segmentation algorithms can be unfair: Puyol-Anton et al. [8] found racial bias can exist in DL-based cine CMR segmentation models when training with a race-imbalanced dataset. This leads us to ask: _Could DL-based image reconstruction algorithms be also unfair?_
To date, such a study, even an empirical one, is lacking, and this article precisely addresses this gap. Our primary objective is to investigate the biases in the algorithm resulting from demographic information present in the training data. To the best of our knowledge, this is the first fairness analysis of a DL-based image reconstruction model. We make the following contributions:
* We identify existing bias in performance between gender and age groups using the publicly available OASIS dataset [6].
* We investigate the origin of these biases by mitigating imbalances in the training set and training paradigm with different bias mitigation strategies.
* We discuss the factors that may impact the fairness of the algorithm, including inherent characteristics and spurious correlations.
## 2 Background
### Fairness Definitions
Amongst the various definitions of fairness, since we study fairness for different demographic subgroups, we consider only group fairness in our analysis.
**Group Fairness**: Group fairness aims to ensure equitable treatment and outcomes for different demographic or subpopulation groups. It recognises the potential for biases and disparities in healthcare delivery and seeks to address them to promote fairness and equity [3]. To ensure fairness, equalised odds [2] is used as a criterion that focuses on mitigating bias, stated as: "The predictor \(\hat{Y}\) satisfies equalised odds with respect to protected attribute A and outcome Y, if \(\hat{Y}\) and A are independent conditional on Y." The criterion can be formulated as
\[\forall y\in\{0,1\}:P(\hat{Y}=1|A=0,Y=y)=P(\hat{Y}=1|A=1,Y=y). \tag{1}\]
**Fairness in Image Reconstruction**: It requires the reconstructed image to faithfully represent the original one without distorting or altering its content based on certain attributes such as race, gender, or other protected attributes.
When applying equalised odds as the fairness criterion, note that the original formulation focuses on fairness of predictive labels, whereas image reconstruction tasks typically involve matching pixel values or image representations. Thus, we reformulate the problem based on probabilistic equalised odds, as proposed by [7]. We let \(P\subset\mathbb{R}^{k}\) be the input space of an image reconstruction task and \((\mathbf{x},\mathbf{y})\sim P\) represent a patient, with \(\mathbf{x}\) representing the undersampled image and \(\mathbf{y}\) representing the fully sampled (ground truth) image. Also, we assume the presence
of two groups \(g_{1},g_{2}\subset P\), which represent the subsets defined by the protected attribute \(\mathbf{A}\). Fairness using probabilistic equalised odds is formulated as:
\[\forall\,\mathbf{y}\in\mathcal{Y}:\ \mathbb{E}_{(\mathbf{x},\mathbf{y})\sim g_{1}}[f(\mathbf{x})\mid\mathbf{Y}=\mathbf{y}]=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim g_{2}}[f(\mathbf{x})\mid\mathbf{Y}=\mathbf{y}]. \tag{2}\]
Here, \(f\) represents the DL-based reconstruction network. With this formulation, we aim to achieve fairness by ensuring that the quality or fidelity of the reconstructed image is consistent across different data distributions irrespective of different demographic characteristics.
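As a concrete reading of Eq. (2), a minimal sketch for auditing a trained model is given below; `metric_fn` is a placeholder for any fidelity measure (e.g., SSIM), `recon`/`target` are paired reconstructed and fully sampled images, and averaging per-case scores within each group is our approximation of the conditional expectations.

```python
import numpy as np

def equalised_odds_gap(metric_fn, recon, target, group):
    """Difference in mean reconstruction fidelity between two groups.

    group: boolean array, True for subjects in g1 and False for g2.
    Fairness in the sense of Eq. (2) corresponds to a gap close to zero.
    """
    scores = np.array([metric_fn(r, t) for r, t in zip(recon, target)])
    return scores[group].mean() - scores[~group].mean()
```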
### Source of Bias
**Data imbalance** can be a significant source of bias in medical scenarios [17]. It can refer to the imbalanced distribution of demographic characteristics, such as gender and ethnicity, within the dataset. For example, the cardiac magnetic resonance imaging dataset provided by the UK Biobank [10] is unbalanced with respect to race: \(>80\%\) of the subjects are of white ethnicity, resulting in unequal representation of features correlated to ethnicity. This imbalance can introduce bias in the analysis and interpretation of the data.
**Training discrimination** is another source of bias, possibly occurring concurrently with data imbalance [4]. An imbalanced dataset can lead to imbalanced minibatches drawn for training. Hence, the model mainly learns features from the dominant subgroup in each batch, perpetuating bias in the training process.
**Spurious correlations** can also contribute to bias [17]. This refers to the presence of misleading or incorrect correlations between the training data and the features learned by the model. For instance, a model can learn how to classify skin diseases by observing markings made by dermatologists near lesions, rather than fully learning the diseases [15]. This is particularly likely to happen in the minority subgroup due to limited presence in training dataset, leading to overfitting during the training process and further exacerbating bias.
**Inherent characteristics** can also play a role in bias, even when the model is trained with a balanced dataset [17]. Certain characteristics may inherently affect the performance of different subgroups. For instance, in skin dermatology images, lesions are often more challenging to recognise in darker skin due to lower contrast compared to lighter skin. As a result, bias based on ethnicity can still exist even if the dataset is well-balanced in terms of proportions.
## 3 Methods
**Our main goal**: We aim to identify bias in image reconstruction models and any potential sources of bias related to demographic characteristics. To investigate fairness in image reconstruction tasks, we systematically design and conduct experiments that eliminate potential origins of bias w.r.t. various demographic characteristics. We start by establishing a baseline model using Empirical Risk Minimisation (ERM) to assess the presence of bias in relation to diverse
demographic subgroups. Then, we employ a subgroup rebalancing strategy with a balanced dataset in terms of demographic attributes, to test the hypothesis that bias is caused by data imbalance. Then, we use the minibatch rebalancing strategy to evaluate the effects of training discrimination for each subgroup.
**Reconstruction Networks**: We use a U-Net [12] as the backbone for the reconstruction network. The reconstruction network is trained using undersampled MRI brain scans, which are simulated by applying a random Cartesian mask to the fully sampled k-space data. Details of the data and the experimental setup of the reconstruction network are provided in Section 4.
**Baseline Network**: We follow the principle of Empirical Risk Minimisation (ERM) [14]. ERM seeks to minimise the overall risk of a model by considering the entire population, instead of the composition of specific groups and hence without controlling for the distribution of protected attributes.
**Subgroup Rebalancing Strategy**: This strategy aims to examine the performance when a perfectly balanced dataset of the protected attributes is used. Instead of randomly selecting data from the entire dataset to define a training set, the training set consists of an equal number of subjects from different subgroups according to demographic characteristics. This approach ensures that all subgroups have equal chances during the training phase, helping us identify if data imbalance is the source of bias.
**Minibatch Rebalancing Strategy**: This strategy examines the performance when balanced minibatches in terms of protected attributes are used to eliminate discrepancy before training [9]. Hence, each minibatch has an equal presence of subjects with different demographic characteristics and all subgroups have an equal opportunity during each iteration to influence the model weights.
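A hedged PyTorch sketch of one possible realisation of this strategy; inverse-frequency weights make every subgroup equally likely per draw, so minibatches are balanced in expectation rather than by exact per-batch counts.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

def balanced_sampler(subgroup_labels):
    """Sampler that draws each demographic subgroup with equal probability.

    subgroup_labels: integer array, one subgroup id per training sample.
    """
    labels = np.asarray(subgroup_labels)
    counts = np.bincount(labels)
    weights = 1.0 / counts[labels]          # inverse-frequency weights
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples=len(labels), replacement=True)

# loader = DataLoader(train_set, batch_size=6, sampler=balanced_sampler(groups))
```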
**Evaluation Metrics**: Although several fairness metrics have been proposed, most current work focuses on image classification and segmentation tasks, which may not be directly applicable to image reconstruction. Therefore, we analyse the fairness of image reconstruction using image reconstruction metrics and statistical analysis. Reconstruction performance is evaluated using the Structural Similarity Index (SSIM, higher is better) and the Peak Signal-to-Noise Ratio (PSNR, higher is better) at the patient level.
To investigate bias between subgroups with different demographic characteristics, we performed the non-parametric Kruskal-Wallis ANOVA test (as available within OriginPro 2023) to test the omnibus hypothesis that there are differences in subgroups with \(p<0.05\) as the threshold for statistical significance. The test will provide Chi-Square value and p-value as results. Higher Chi-Square values indicate the presence of more significant differences between subgroups. This approach allows us to assess the potential bias in the image reconstruction process specifically instead of relying on fairness metrics designed for other tasks.
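A minimal sketch of this test with `scipy.stats.kruskal`; the per-patient SSIM arrays below are synthetic placeholders standing in for the measured subgroup scores.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# placeholder per-patient SSIM scores for the three age subgroups
ssim_young, ssim_middle, ssim_older = (rng.normal(m, 0.01, 10)
                                       for m in (0.876, 0.874, 0.867))
h, p = kruskal(ssim_young, ssim_middle, ssim_older)
print(f"Chi-Square = {h:.2f}, p = {p:.4g}")   # p < 0.05 -> significant difference
```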
## 4 Experimental Analysis
### Dataset and pre-processing
**Dataset**: We select the publicly available Open Access Series of Imaging Studies (OASIS) dataset [6] to evaluate the fairness of the image reconstruction task. The initial dataset consists of a cross-sectional collection of 416 subjects (316 healthy and 100 clinically diagnosed with very mild to moderate Alzheimer's disease), and for each subject three or four individual T1-weighted MRI scans obtained in single imaging sessions are included. To simulate clinical practice with uncertainty about patients' conditions, we use the entire dataset consisting of a mix of patients, including both healthy subjects and patients with Alzheimer's disease (AD), without providing explicit labels for their conditions. To study fairness regarding inherent demographic information, we choose the gender and age information provided in the dataset as the protected attributes. Since the patients are aged 18 to 96, we categorise them into young adults (age below 40), middle-aged adults (ages 40 to 65), and older adults (age above 65) according to the criteria proposed by [13]. The statistics of the subgroups are summarised in Table 1.
According to Table 1, there is a clear imbalance in the distribution of demographic characteristics in the OASIS dataset. In the protected attribute gender, the female is the dominant group with 256 subjects, while the male is the disadvantaged group with only 160 subjects. In terms of age, compared to the middle-aged adults group, the young and older adults groups are dominant groups.
**Data Pre-processing**: To ensure an equal dataset size for the methods in Section 3, the dataset is first categorised into six age-gender subgroups (e.g., middle-aged female adults) and subsampled according to the size of the smallest subgroup (27 middle-aged male adults), keeping both the age and gender distributions balanced in the sampled dataset (162 subjects in total). Then, we sample 5 subjects from each of the six age-gender subgroups to form the test set (30 subjects in total). For the training and validation sets, we sample the remaining 22 subjects from each age-gender subgroup for the rebalancing and minibatch rebalancing strategies (132 subjects in total), while for the baseline network the training and validation sets are randomly sampled with a size of 132 subjects. The train-validation-test splits all follow the proportions \(20:2:5\). For each patient, we select the 122 central slices out of the 208 slices in one volume.
### Implementation Details
\begin{table}
\begin{tabular}{c|c|c c|c c c} \hline
**Category** & **All** & **Female** & **Male** & **Young** & **Middle-aged** & **Older** \\ \hline
**Count** & 416 & 256 & 160 & 156 & 82 & 178 \\
**Proportion (\%)** & 100.0 & 61.5 & 38.5 & 37.5 & 19.7 & 42.8 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of demographic subgroups in OASIS. Patients are categorised into young adult (below 40), middle-aged adult (40 to 65) and older adult (above 65).

We employ a U-Net as backbone. Its first feature map size is 32, with 4 pooling cascades, resulting in a total of 7.8M parameters. We employ the Adam optimiser with a learning rate of \(10^{-4}\) and a step-based scheduler with a decay gamma of 0.1. Both the \(\ell_{1}\) loss and the SSIM loss were incorporated into our experiments. Models were trained for 40 epochs with batch size 6.
5-fold cross-validation is used to mitigate sample bias. Our experimental setup uses the PyTorch Lightning framework, and we trained on NVIDIA A100 Tensor Core GPUs. The implementation of our code is inspired by the fastMRI repository.1 Our code is publicly available at: [https://github.com/ydu0117/ReconFairness](https://github.com/ydu0117/ReconFairness).
Footnote 1: [https://github.com/facebookresearch/fastMRI/](https://github.com/facebookresearch/fastMRI/)
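A hedged sketch of this training configuration; the scheduler step size is not stated in the text, so the value below is an assumption for illustration, and a single convolution stands in for the full U-Net backbone.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)   # placeholder for the 7.8M-parameter U-Net

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# gamma = 0.1 is stated above; the step size is not, so 30 epochs is an
# illustrative assumption.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(40):
    # ... one epoch over minibatches of size 6, combining l1 and SSIM losses ...
    scheduler.step()
```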
### Results
Table 2 reports the SSIM and PSNR results (mean and standard deviation) from 5-fold cross-validation under three different strategies. Figures 1 and 2 demonstrate the reconstruction performance of subgroups defined by demographic characteristics. Table 3 offers the results of Kruskal-Wallis ANOVA test between demographic subgroups, including p-values and Chi-Square values.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Baseline ERM**} & \multicolumn{2}{c|}{**Subgroup Rebalancing**} & \multicolumn{2}{c}{**Minibatch Rebalancing**} \\ \cline{2-7} & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR \\ \hline
**Whole** & 0.872 (0.012) & 7.742 (0.112) & 0.867 (0.011) & 7.529 (0.109) & 0.867 (0.011) & 7.529 (0.109) \\ \hline
**Female** & 0.876 (0.010) & 7.999 (0.099) & 0.876 (0.010) & 8.006 (0.095) & 0.871 (0.010) & 7.767 (0.095) \\
**Male** & 0.868 (0.013) & 7.485 (0.118) & 0.870 (0.013) & 7.509 (0.117) & 0.864 (0.013) & 7.292 (0.117) \\ \hline
**Young Adults** & 0.876 (0.010) & 8.690 (0.092) & 0.877 (0.009) & 8.729 (0.090) & 0.872 (0.009) & 8.496 (0.090) \\
**Middle-aged Adults** & 0.874 (0.011) & 7.859 (0.108) & 0.875 (0.011) & 7.877 (0.106) & 0.869 (0.011) & 7.645 (0.106) \\
**Older Adults** & 0.867 (0.010) & 6.676 (0.102) & 0.867 (0.010) & 6.666 (0.099) & 0.861 (0.010) & 6.448 (0.099) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of Image Reconstruction Performance under three strategies.
Figure 1: Image Reconstruction Performance for Gender Subgroups. In the figure, ‘F’ represents ‘Female’ and ‘M’ represents ‘Male’. This figure indicates performance gap between two gender subgroups in image reconstruction task under different strategies.
**Presence of Bias**: Focusing on the baseline ERM model, our results show that there is a significant performance difference between subgroups categorised by gender and age. Among the gender subgroups, the female group outperforms the male group. This difference is obvious in Figure 1, showing that the baseline model provides better performance for female subjects compared to male subjects. This difference is statistically significant in Table 3.
Among the three age groups, the results demonstrate an obvious performance gap. Referring to Table 2, the young adults group provides better performance in both metrics. Furthermore, the results indicate that as age increases, the reconstruction performance worsens (in both metrics). The trend is visually evident in Figure 2 and is statistically significant in Table 3.
**Is dataset imbalance the source of unfairness?** The performance under the rebalancing strategies shows that data imbalance and training discrimination are not the major causes of bias. Specifically, in Table 2, when comparing the performance of the subgroups under the different training strategies, the performance gaps evidenced before still exist. These biases are also visually illustrated in Figures 1 and 2 and are again statistically significant in Table 3.

\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{**Gender**} & \multicolumn{2}{c}{**Age Group**} \\ \cline{2-5} & SSIM & PSNR & SSIM & PSNR \\ \hline
**Baseline ERM** & 13.44\({}^{***}\) & 6.45\({}^{*}\) & 11.64\({}^{**}\) & 78.91\({}^{***}\) \\
**Subgroup Rebalancing** & 10.90\({}^{**}\) & 6.01\({}^{*}\) & 14.64\({}^{**}\) & 81.08\({}^{***}\) \\
**Minibatch Rebalancing** & 10.30\({}^{**}\) & 5.44\({}^{*}\) & 13.70\({}^{**}\) & 78.62\({}^{***}\) \\ \hline \end{tabular}
\end{table}
Table 3: Kruskal-Wallis ANOVA results for the Three Strategies Testing for the Influence of “Gender” and “Age Group”. The results include Chi-Square values with statistical significance (p-value) indicated as \({}^{***}\)\(p<0.001\), \({}^{**}\)\(p<0.01\), \({}^{*}\)\(p<0.05\).

Figure 2: Image Reconstruction Performance for Age Subgroups under the Three Strategies. In the figure, ‘YA’ represents ‘Young Adults’, ‘MA’ represents ‘Middle-aged Adults’ and ‘OA’ represents ‘Older Adults’. This figure indicates performance gaps between age subgroups in the image reconstruction task under different strategies.
However, the Chi-Square values under the rebalancing strategies in Table 3 are reduced compared to the baseline ERM network for the gender subgroups. This reduction indicates that rebalancing either the training set or the minibatches may mitigate part of the bias, illustrating that dataset imbalance and training discrimination with respect to gender may be sources of bias, but not the main source. However, it is noticeable that the balancing strategies result in a performance reduction for the dominant subgroup.
## 5 Discussion
**What is the source of unfairness**: We find that data imbalance and training discrimination do not significantly contribute to bias. Instead, the bias may stem from spurious correlations and inherent characteristics. Specifically, the model may focus on neuroanatomical features that are associated with demographic factors [11, 1]. In Figure 3, the relations between demographic features and neuroanatomy metrics, including estimated Total Intracranial Volume (eTIV) and normalised Whole Brain Volume (nWBV), are analysed. Our results show that women tend to have smaller eTIV compared to men, and young adults have the highest nWBV among the age subgroups. Thus, these differences in eTIV between genders and in nWBV between age groups may result in spurious correlations that lead to bias, which requires further investigation in future work.
**Clinical Relevance**: It is noticeable that the difference in SSIM among subgroups is in the second or third decimal place in some cases. Although the small difference may not be clinically meaningful in practice, it can lead to additional errors and bias in downstream tasks such as segmentation and classification, ultimately leading to inaccurate diagnoses.
**Limitations**: Previous studies [16] have reported data imbalances among different racial groups due to geographic limitations of the datasets. In our analysis, due to the lack of racial data, the training set may still exhibit an imbalance in terms of race, even if we implement a rebalancing strategy.
Figure 3: Relations between Demographic Features and Neuroanatomy Metrics.
## 6 Conclusion
In this study, we conducted an initial analysis of fairness in DL-based image reconstruction tasks with respect to demographic characteristics, specifically gender and age. We employed three strategies to investigate the bias caused by these characteristics. Through the use of rebalancing strategies, we found that imbalanced training sets and training discrimination were not the major contributors to bias. However, further investigation is needed to identify the sources of bias in image reconstruction tasks. Correspondingly, we need to propose bias mitigation strategies to ensure fairness in DL-based image reconstruction applications.
## Acknowledgements
This work was supported in part by National Institutes of Health (NIH) grant 7R01HL148788-03. Y. Du and Y. Xue thank additional financial support from the School of Engineering, the University of Edinburgh. S.A. Tsaftaris also acknowledges the support of Canon Medical and the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\(\backslash\)8\(\backslash\)25), and the UK's Engineering and Physical Sciences Research Council (EPSRC) support via grant EP/X017680/1. The authors would like to thank Dr. Chen and K. Vilouras for inspirational discussions and assistance. Data used in Sec. 4.1 were provided by OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R, Buckner, J, Csernansky J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382. | 深層学習 (DL) の再構築特にMRIの再構築は、画像の精細化と取得時間を短縮に貢献しています。神経画像診断において、DL方法では、欠sampledデータから高品質な画像を再構築できます。しかし、DLアルゴリズムにおいては、特に人口統計的な特徴に関する公平性という重要な考慮事項を検討する必要があります。この研究では、DL基づいた脳MRI再構築モデルにおける公平性の分析を初めて提示しています。このモデルは、U-Netアーキテクチャを用いて画像再構築を行い、ベースラインのEmpirical Risk Minimisation (ERM)と再配分戦略を実装して、不公平性を探索します。モデルの性能は、画像再構築指標を用いて評価されます。私たちの研究結果では、性別と年齢のサブグループ間の statistically significant performance bias が発見されました。興味深いことに、データ不均衡とトレーニングの差別が、バイアスの主要な原因ではありませんでした。 |